Last June, we presented a recap of the generative AI market and its projected development. In reality, the field evolved even faster than Moore’s Law would predict. OpenAI’s GPT-4 dominance in the first half of 2023 was quickly contested by Meta and Microsoft’s joint release of the open-source Llama 2 and Llama 2 Long models, while Google upgraded its Bard chatbot to run on the more powerful Gemini Pro.
From OpenAI’s C-suite tragicomedy to the EU’s AI Act, let’s review the key advancements and shortfalls of the generative AI space.
Hype is Dead, Long Live the Hype
Since April, excitement over AI and ML on GitHub has decreased by 13%. ChatGPT’s weekly web visits dropped by almost 25%, or roughly 100 million visits, over the summer. In August, Gartner placed generative AI at the ‘Peak of Inflated Expectations’ in its emerging-tech hype cycle. Around the same time, George Soros sold his stake in NVIDIA.
Yet even though AI doesn’t make headlines the way it once did, the fundamentals are still there. By November, ChatGPT’s traffic had fully returned to spring levels, AI and ML repositories on GitHub maintained their share of total commits, and NVIDIA reported 34% revenue growth in Q3 2023.
The demand for AI infrastructure is so high that the supply side is barely keeping pace. Coatue’s AI report estimates that global AI server shipments increased by 38% from 2022 to 2023, while H100 GPUs on the aftermarket traded at an average markup of 40% over the list price in the second half of the year.
Despite supply bottlenecks, generative AI is benefiting from lower running costs. According to OpenAI’s published pricing, the cost per 1,000 tokens fell from $0.02 for GPT-3 in January to $0.002 for GPT-3.5 in June, a 90% decrease. Training costs, however, went up. Not only do new models require larger datasets to fine-tune their exponentially growing parameter counts, but the data itself is increasingly monetized: in July, both Reddit and X (formerly Twitter) put their APIs behind paywalls and restricted scraping.
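The per-token savings above are easy to see with a quick back-of-the-envelope calculation. A minimal sketch, using the prices cited in the text; the token count is a hypothetical workload, not a figure from the article:

```python
def request_cost(tokens: int, price_per_1k: float) -> float:
    """Cost in dollars of processing `tokens` at a given price per 1,000 tokens."""
    return tokens / 1000 * price_per_1k

GPT3_JAN = 0.02    # $ per 1k tokens, GPT-3, January (from the text)
GPT35_JUN = 0.002  # $ per 1k tokens, GPT-3.5, June (from the text)

tokens = 50_000  # hypothetical monthly usage
old = request_cost(tokens, GPT3_JAN)
new = request_cost(tokens, GPT35_JUN)
print(f"January: ${old:.2f}, June: ${new:.2f}")   # January: $1.00, June: $0.10
print(f"Savings: {(old - new) / old:.0%}")        # Savings: 90%
```

At any volume the ratio is the same, which is where the 90% figure comes from.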
Regulators On the Prowl
In December, European regulators agreed on a preliminary deal, positioning the EU as the first major jurisdiction with a comprehensive regulatory framework for AI. Parliament’s vote on the AI Act, expected in 2024, marks a significant move, particularly as 57% of Americans support a six-month pause in AI development.
The AI Act sorts AI systems into four categories based on their potential risk to society, and focuses on transparency, data quality, accountability, and human oversight. Although it is fairly generic, mainly setting up a legal framework to be strengthened by future laws and amendments, the act offers some insight into how future regulations might work.
Generative AI 2.0
Technological advancements alone are no longer enough to fuel industry growth. Sequoia dubs the second half of 2023 “Generative AI Act Two,” highlighting the transition from foundation models to consumer-centric, multimodal applications.
While productivity tools are hitting new benchmarks, generative AI is failing to hold consumer attention. Sequoia reports that incumbent tech companies show a median one-month user retention rate of 63%; none of the AI-first companies in its sample reached that mark, posting a median retention of just 42%. Daily engagement is even weaker: the median ratio of daily to monthly active users is only 14%.
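To make the engagement gap concrete, here is a minimal sketch of the DAU/MAU “stickiness” metric behind that 14% figure. The user counts are hypothetical; only the ratio comes from the text:

```python
def dau_mau_ratio(daily_active: int, monthly_active: int) -> float:
    """Stickiness: the share of monthly users who show up on a typical day."""
    return daily_active / monthly_active

# A hypothetical app with 1M monthly users at the cited median stickiness
# sees only 140,000 of them on any given day.
ratio = dau_mau_ratio(140_000, 1_000_000)
print(f"DAU/MAU: {ratio:.0%}")  # DAU/MAU: 14%
```

For comparison, habit-forming consumer products typically aim for DAU/MAU well above this, which is why Sequoia flags the number as a warning sign.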
One reason current generative AI applications fail to engage users is the lack of vertical specialization. Instead of tailoring products to individual needs and focusing on the application and UI layers, companies offer jack-of-all-trades technological solutions to problems that don’t yet exist.
Well, only time will tell whether these companies can course-correct: stay tuned for our next AI update to find out.