Economic Insider

Aligning Data and Models: Lessons Learned from Alexa’s Ineffective Implementation in the Age of Generative AI

Not long ago, every company seemed to want a presence on Alexa. Businesses from major pizza franchises to top ride-sharing startups rushed to launch Alexa Skills all at once. The push generated plenty of attention for the voice assistant, but none of the brands that invested time and money in it saw any substantial return.

Consider how people actually use the device. Few of us ever ask a virtual assistant to "Alexa, book me a spa session." Instead, most of us stick to straightforward requests like "Alexa, what time is it?" and "Alexa, set a timer for 10 minutes." Unfortunately, many of the companies that attempted to capitalize on Amazon's voice assistant discovered that what they had built amounted to little more than "glorified clock radios."

According to Rob LoCascio, generative AI and large language models (LLMs) have surpassed rules-based assistants like Alexa and Siri and are poised to revolutionize how marketers interact with their audiences. Many believe generative AI will have an impact on business comparable to the invention of the PC, a view LoCascio recently shared on CNBC. But to avoid sinking time and resources into fruitless endeavors like the Alexa Skills gold rush, it is essential to consider the lessons of past AI deployments that fell short.

WHAT HAPPENS IF MODELS AND DATA ARE NOT ALIGNED?

The model and the data set are the two most crucial components of any AI application: results are produced by running a model (such as OpenAI's LLMs) against a collection of data. Models are drawing most of the attention right now thanks to high-profile debuts like ChatGPT. However, companies and customers will never achieve the results they want without the proper pairing of model and data set.

The quality of generative AI's output depends on how well the underlying model and the data feeding it match. Imagine you work for a healthcare organization and want to offer personalized COVID safety recommendations ahead of cold and flu season, tailored to each consumer's city or state. Assume you have chosen the best available model and are confident it can generate language in the right tone of voice, avoid bias against any particular group, and even integrate with your backend systems to remind customers to make appointments or help them do so.

Suppose, however, that you feed it discussions scraped from the public internet to generate the COVID information you send your consumers. If your intention was to produce reliable, practical healthcare advice, you have already failed: there is far too much inaccurate material and noise online concerning COVID. The data set simply did not match the model you chose or the audience you meant to reach.
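To make the idea of "pairing a model with the right data" concrete, here is a minimal, hypothetical sketch of the healthcare scenario above. The guidance table and the call_llm() function are assumptions standing in for whatever vetted content and LLM provider an organization actually uses; the point is simply that the model is only ever shown curated data, never raw internet chatter.

```python
# Minimal sketch: grounding an LLM's output in a vetted data set
# rather than raw public-internet text. VETTED_GUIDANCE and
# call_llm() are hypothetical placeholders, not a real API.

VETTED_GUIDANCE = {
    # Curated by a clinical team and keyed by state or region.
    "NY": "Masks are recommended on public transit; boosters are widely available.",
    "TX": "Check county-level risk dashboards before large indoor gatherings.",
}

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM provider you actually use."""
    raise NotImplementedError("Wire this to your model of choice.")

def covid_reminder(customer_name: str, state: str) -> str:
    guidance = VETTED_GUIDANCE.get(state)
    if guidance is None:
        # No vetted data for this region: fail safely instead of
        # letting the model improvise from whatever it saw online.
        return f"Hi {customer_name}, please check your local health department's website."
    prompt = (
        "Write a short, friendly reminder for a healthcare customer.\n"
        f"Customer: {customer_name}\n"
        "Use ONLY the vetted guidance below; do not add outside claims.\n"
        f"Guidance: {guidance}"
    )
    return call_llm(prompt)
```

The detail that matters is the constraint: the prompt restricts the model to vetted guidance, and when no such guidance exists the system falls back to a safe default rather than generating advice on its own.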

The same misalignment doomed Alexa Skills. None of the brands that tried to use them ever found success, because Alexa was built by Amazon, for Amazon. Its main goal has always been to increase your Amazon purchases, not to promote other companies' businesses, and Amazon's own use cases remained the model's primary focus. The objectives that restaurant and transportation brands (and their end users) hoped to accomplish with Alexa were never what the assistant was designed to achieve.


This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of Economic Insider.