The State of AI in 2023: 6 trends and 6 actions you can take today
In his LinkedIn newsletter, Ahead by Bett Advisory Board member Dominik Lukes recently shared a detailed update on new trends in generative Artificial Intelligence that have emerged over the last three months, roughly since the release of GPT-4. Many of the changes he notes build on a slow accumulation of experience and experimentation from the education community, and while these tools are advancing rapidly, they also point to where the field is heading.
We would highly recommend checking out Dominik’s newsletter here for the full story, but below are some of his key tips and headlines.
Lukes divides his six AI trends into two broad categories:
1. New tools and models - both available and announced
2. Developments in ways of understanding and making use of the tools, covering both the tools themselves and how we relate to them individually and as a society
In each category, Lukes has identified three specific trends which you should know about. These include:
New tools and models
1. No more AI = ChatGPT: From ChatGPT to the Big 4
In his report, Lukes lists four major chatbots available to use: ChatGPT from OpenAI, Claude.ai from Anthropic, Bard from Google and Bing Chat from Microsoft. Whilst he closely explores each of these tools in terms of available features and key differences, a major point Lukes raises is that “the universe of tasks we ask these tools to perform is too vast to be able to comprehensively evaluate them”, particularly when the tools regularly give different outputs for the same question. We’re looking forward to hearing more about how to compare these tools as they become more embedded in the community.
2. Open source models and explosion of apps
Generative AI tools have so far mostly been created and funded by large companies able to absorb the millions of dollars required to research and develop them. However, Lukes praises the attempts that have been made to create open source alternatives, including the creation of open data sets anyone can use, open models and the sharing of techniques. While he notes that it is difficult for these models to keep pace with the “big 4”, Lukes feels they are important for two reasons: first, they give software more options to integrate AI features and/or build new AI products; and second, they expand knowledge and research possibilities.
3. New tools for using large language models
Developing large language models and AI chatbots from scratch is complicated and expensive, but Lukes argues that building products on top of them is surprisingly simple. He awards the prize for the biggest developments to the practice of “tools for making tools”. Writing a simple program that lets you upload a PDF and get a summary is now well within reach of almost any developer, and even of many non-developers who can use ChatGPT to help them write code. Building with AI still requires a lot of knowledge (as shown in Lukes’ full report), skill and often time for trial and error, but it does not demand very deep AI expertise. As Lukes points out, “many of the people behind the 1000s of products we’re seeing are not necessarily experts in AI but rather in all the tools around AI.”
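To make that concrete, here is a minimal sketch of the kind of “upload a PDF and get a summary” tool Lukes describes. It assumes the pypdf and openai Python packages and an OpenAI API key; the model name, prompt and file path are illustrative assumptions, not details taken from Lukes’ report.

```python
# Minimal sketch: summarise a PDF with a large language model.
# Assumes `pip install pypdf openai` and an OPENAI_API_KEY in the environment.
from pypdf import PdfReader
from openai import OpenAI


def summarise_pdf(path: str, model: str = "gpt-4") -> str:
    # Extract plain text from every page of the PDF.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Keep the request small; a real tool would chunk long documents instead.
    text = text[:12000]

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You summarise documents clearly and concisely."},
            {"role": "user", "content": f"Summarise the following document in five bullet points:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_pdf("example.pdf"))  # hypothetical file name
```

The point is less the specific libraries than the shape of the work: almost all of the effort sits in the glue around the model, not in the model itself.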
New knowledge
4. New developments in prompt engineering
Lukes’ report argues that “the single biggest factor in the success of ChatGPT over the previous iteration is the simple chat interface. All of a sudden, anybody is able to simply start “talking” to the tool as they would to a person without having to think about how to formulate the prompt.” However, his section on prompt engineering techniques highlights exactly why this ease of use can also sometimes be a downfall, leading novice users to miss key tricks which help tools like ChatGPT perform more effectively. In his report, Lukes goes into further detail on what is inconsequential in a good prompt (spelling, word order and punctuation) and what really matters (rich meaning and good examples), and he groups prompt engineering techniques into five broad categories to help you learn the basics: personas, examples and models, self-generated examples, self-critique and chain of thought.
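As a flavour of two of those categories, the sketch below combines a persona and a worked example with a follow-up self-critique pass. The prompts and the small helper function are illustrative assumptions rather than examples from Lukes’ report.

```python
# Illustrative sketch of two prompt engineering techniques: a persona plus a
# worked example, followed by a self-critique pass over the first draft.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def ask(messages, model="gpt-4"):
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


# Persona + example: tell the model who it is and show it the style you want.
messages = [
    {"role": "system", "content": "You are a patient secondary-school science teacher."},
    {"role": "user", "content": "Explain photosynthesis in two sentences for a 12-year-old."},
    {"role": "assistant", "content": "Plants catch sunlight with their leaves and use it to turn water and air into food. The leftover part of that process is the oxygen we breathe."},
    {"role": "user", "content": "Now explain gravity in the same style."},
]
draft = ask(messages)

# Self-critique: ask the model to review and improve its own answer.
improved = ask([
    {"role": "user", "content": f"Here is a draft explanation:\n\n{draft}\n\nList any inaccuracies or jargon, then rewrite it more clearly."},
])
print(improved)
```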
5. The rise of a new profession: AI engineer
As Lukes explains it, AI engineers have emerged over the last six months and sit somewhere between the product developer and the large language model builder. While he goes into detail on the kinds of tasks these engineers tend to specialise in (for example, cultivating a deep understanding of how large language models work), he also notes that the speed of development has left no opportunity to undergo formal training in this area. To Lukes, the lack of awareness in all kinds of organisations around the need for AI engineering expertise “shows the gap in knowledge between the highly specialised field of machine learning and AI research and development and traditional software development.” But the good news is that, after all his research, Lukes feels this kind of expertise is well within reach of anyone willing to specialise in it, as the basic skills are less technical than they first appear.
6. Clarity about trusted sources of information about AI
As Lukes notes in his report, we live in a confusing time: “Developments keep coming at us at great speed, and as someone recently said to me, it’s hard to find the signal in the noise.” Not only are there no experts with established experience in this new generative AI field, it’s not even clear what an expert would look like. “It is easy to say a series of true statements about Artificial Intelligence and yet completely miss the mark.” However, Lukes has a plan: he advises readers either to become an ‘AI scout’, i.e. someone who evaluates meaningful discourse across a range of platforms and media, or to find one or two sources of information appropriate to their level of knowledge and interest and keep an eye on them. Above all, Lukes notes the importance of finding sources of knowledge you can trust, and we think we’ve found one in Dominik’s newsletter!
On top of this, Lukes has collated a short summary (with the help of Claude.ai) of his research, which offers the following top tips:
TL;DR (generated by Claude.ai)
- AI no longer just means ChatGPT. There are now 4 major AI chatbots: ChatGPT, Claude, Bard, and Bing Chat. They have different strengths.
- More open source AI models like Llama 2 allow more apps and custom models. This expands knowledge and applications.
- New tools like LangChain make building AI apps easier. Explosion of new AI startups.
- Advances in prompt engineering let users get more from AI. Giving examples and asking the AI to self-critique improves results.
- Rise of a new role: the AI engineer. They have specialized skills to build apps using AI models and APIs.
- Trusted sources of AI info are emerging but still fragmented across blogs, preprints, videos. Building a personal learning network is key.
We strongly recommend any AI enthusiasts in our audience check out Dominik’s full post on recent trends in the sector. Lukes has even created a shared Google Doc with a clickable table of contents and space for readers to suggest edits, additions and improvements.
Happy reading!
This article is created from a recent report by Dominik Lukes, Assistive Technology Officer at the Centre for Teaching and Learning at the University of Oxford. Dominik is leading on integrating assistive technologies into wider academic practice, and started the Reading and Writing Innovation Lab, which collects and reviews various technologies that support academic reading and writing. In the last two years he has delivered training on AI for education and is currently working on a guide to ChatGPT for teaching and learning. You can find out more about Dominik here.