Learning Spanish from a robot could soon get a lot easier.
Language app Duolingo last week launched a new subscription tier, called Duolingo Max, that uses an artificially intelligent chatbot to give learners more personalized feedback. Rather than being served pre-set responses, learners will communicate directly with a bot powered by what's called a large language model (LLM).
Duolingo Max offers two new features: The first, Explain My Answer, uses the chatbot to give in-depth, grammatical explanations when a user makes a mistake. The second, Roleplay, allows users to chat with human-like characters in real-world settings such as an airport or cafe.
Both features use OpenAI's newest large language model, GPT-4, which many are touting as a major step up from the model behind the version of ChatGPT released to the public in late 2022. Duolingo had access to GPT-4 before most of the public even had access to ChatGPT.
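Duolingo has not published implementation details, but at its core a feature like Roleplay is a conversation loop with the model through OpenAI's API. The sketch below is a hypothetical illustration of that pattern; the scenario prompt, model name, and parameters are assumptions, not Duolingo's actual code.

```python
# Hypothetical sketch of a Roleplay-style exchange over OpenAI's chat API.
# The cafe scenario and model name are illustrative only; Duolingo has not
# published how Duolingo Max is built.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly barista in a Madrid cafe. Speak simple Spanish "
            "suitable for a beginner, and gently correct the learner's mistakes."
        ),
    },
    {"role": "user", "content": "Hola, quiero un cafe con leche, por favor."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

Explain My Answer would presumably follow the same pattern, with a prompt that asks the model to explain why a particular answer was marked wrong.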
"We were really just blown away by the capabilities it had, and we immediately spun up a team building out our first features to be powered by GPT-4," Klinton Bicknell, head of AI for Duolingo, told Cheddar News.
But Duolingo is not a newcomer to artificial intelligence. It has been using the technology since it launched in 2011, and OpenAI's GPT models in particular for the last three years.
For much of its history as a company, however, Duolingo developed most of its AI applications in-house using open-source models. It only started using OpenAI's models once it became clear that the scale at which OpenAI was building ever more advanced models was hard to beat.
Bicknell broke it down like this: "In the language space, there has been a trend in AI for a while where the best solutions tend to involve a two-step process. The first part of that process involves scraping the entire internet for text and is prohibitively expensive for smaller companies. The second part of the process is building industry-specific applications."
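To make that split concrete, here is a small, hypothetical illustration using an open-source model as a stand-in. Step one, the pretraining on web-scale text, has already been paid for by whoever released the model; step two is the thin application layer built on top. The model and prompt below are illustrative, not anything Duolingo has described using.

```python
# Illustrative only: the model and prompt are stand-ins, not Duolingo's stack.
from transformers import pipeline

# Step one, already done by someone else: a model pretrained on large amounts
# of web text. Loading it just downloads the finished weights.
generator = pipeline("text-generation", model="gpt2")

# Step two, the application layer: wrap the pretrained model in a
# language-learning task.
prompt = "Explain the grammar mistake in this Spanish sentence: 'Yo es feliz.'\n"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```

A small model like GPT-2 will not produce a useful explanation; the point is the division of labor. The expensive first step is inherited, and the product work happens in the comparatively cheap layer on top, whether that layer sits on an open-source model or, as in Duolingo's case, on OpenAI's API.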
As the technology evolved, it became more cost-effective for Duolingo to pay OpenAI for access to its GPT models than to build its own large language model, a shift helped along by the fact that OpenAI's economies of scale were making that access more affordable.
"At this point, for GPT-3 and now especially for GPT-4, doing that first step is just so expensive that it's just not practical for moderately sized companies," Bicknell said.