Elevate your self-service assistants with new generative AI features in Amazon Lex

In this post, we talk about how generative AI is changing the conversational AI industry by providing new customer and bot builder experiences, and the new features in Amazon Lex that take advantage of these advances.

As the demand for conversational AI continues to grow, developers are seeking ways to enhance their chatbots with human-like interactions and advanced capabilities such as FAQ handling. Recent breakthroughs in generative AI are leading to significant improvements in natural language understanding that make conversational systems more intelligent. By training large neural network models on datasets with trillions of tokens, AI researchers have developed techniques that allow bots to understand more complex questions, provide nuanced and more natural human-sounding responses, and handle a wide range of topics. With these new generative AI innovations, you can create virtual assistants that feel more natural, intuitive, and helpful during text- or voice-based self-service interactions. The rapid progress in generative AI is bringing automated chatbots and virtual assistants significantly closer to the goal of having truly intelligent, free-flowing conversations. With further advances in deep learning and neural network techniques, conversational systems are poised to become even more flexible, relatable, and human-like. This new generation of AI-powered assistants can provide seamless self-service experiences across a multitude of use cases.

How Amazon Bedrock is changing the landscape of conversational AI

Amazon Bedrock is a fully managed service for building and scaling generative AI applications with foundation models (FMs). Amazon Bedrock offers an array of FMs from leading providers, so AWS customers have the flexibility and choice to use the best models for their specific use case.
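
To make this concrete, the following is a minimal sketch of calling a Bedrock FM with the AWS SDK for Python (boto3) and the InvokeModel API. The request and response body formats are model specific; this example follows the Amazon Titan Text format, and the prompt text is our own illustration.

```python
import json
import boto3

# Bedrock runtime client; assumes model access is enabled in the account
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body format is model-specific; this follows Amazon Titan Text
body = json.dumps({
    "inputText": "Summarize the benefits of self-service chatbots in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```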

In today’s fast-paced world, we expect quick and efficient customer service from every business. However, providing excellent customer service can be significantly challenging when the volume of inquiries outpaces the human resources employed to address them. Businesses can overcome this challenge efficiently while also providing personalized customer service by taking advantage of advancements in generative AI powered by large language models (LLMs).

Over the years, AWS has invested in democratizing access to—and amplifying the understanding of—AI, machine learning (ML), and generative AI. LLMs can be highly useful in contact centers by providing automated responses to frequently asked questions, analyzing customer sentiment and intents to route calls appropriately, generating summaries of conversations to help agents, and even automatically generating emails or chat responses to common customer inquiries. By handling repetitive tasks and gaining insights from conversations, LLMs allow contact center agents to focus on delivering higher value through personalized service and resolving complex issues.

Improving the customer experience with conversational FAQs

Generative AI has tremendous potential to provide quick, reliable answers to commonly asked customer questions in a conversational manner. With access to authorized knowledge sources and LLMs, your existing Amazon Lex bot can provide helpful, natural, and accurate responses to FAQs, going beyond task-oriented dialogue. Our Retrieval Augmented Generation (RAG) approach allows Amazon Lex to harness both the breadth of knowledge available in repositories as well as the fluency of LLMs. You can simply ask your question in free-form, conversational language, and receive a natural, tailored response within seconds. The new conversational FAQ feature in Amazon Lex allows bot developers and conversation designers to focus on defining business logic rather than designing exhaustive FAQ-based conversation flows within a bot.

We are introducing a built-in QnAIntent that uses an LLM to query an authorized knowledge source and provide a meaningful and contextual response. In addition, developers can configure the QnAIntent to point to specific knowledge base sections, ensuring that only specific portions of the knowledge content are queried at runtime to fulfill user requests. This capability fulfills the need for highly regulated industries, such as financial services and healthcare, to provide responses only in compliant language. The conversational FAQ feature in Amazon Lex allows organizations to improve containment rates while avoiding the high costs of missed queries and transfers to human representatives.
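
As a sketch of how the built-in intent might be attached to a bot programmatically, the snippet below uses boto3 to create a QnAIntent that points to an Amazon Bedrock knowledge base. The bot ID and ARNs are hypothetical placeholders, and the configuration field names should be verified against the current Lex V2 CreateIntent API.

```python
import boto3

lex = boto3.client("lexv2-models")

# Attach the built-in QnAIntent to an existing draft bot locale.
# All IDs and ARNs below are hypothetical placeholders.
response = lex.create_intent(
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
    intentName="FaqIntent",
    parentIntentSignature="AMAZON.QnAIntent",
    qnAIntentConfiguration={
        # LLM used to generate the answer from retrieved content
        "bedrockModelConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
        },
        # Authorized knowledge source queried at runtime
        "dataSourceConfiguration": {
            "bedrockKnowledgeStoreConfiguration": {
                "bedrockKnowledgeBaseArn": "arn:aws:bedrock:us-east-1:123456789012:knowledge-base/EXAMPLEKB"
            }
        },
    },
)
print(response["intentId"])
```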

Building an Amazon Lex bot using the descriptive bot builder

Building conversational bots from scratch is a time-consuming process that requires deep knowledge of how users interact with bots in order to anticipate potential requests and code appropriate responses. Today, conversation designers and developers spend many days writing code to handle all possible user actions (intents), the various ways users phrase their requests (utterances), and the information needed from the user to complete those actions (slots).

The new descriptive bot building feature in Amazon Lex uses generative AI to accelerate the bot building process. Instead of writing code, conversation designers and bot developers can now describe in plain English what they want the bot to accomplish (for example, “Take reservations for my hotel using name and contact info, travel dates, room type, and payment info”). Using only this simple prompt, Amazon Lex will automatically generate intents, training utterances, slots, prompts, and a conversational flow to bring the described bot to life. By providing a baseline bot design, this feature immensely reduces the time and complexity of building conversational chatbots, allowing the builder to reprioritize effort on fine-tuning the conversational experience.

By tapping into the power of generative AI with LLMs, Amazon Lex enables developers and non-technical users to build bots simply by describing their goal. Rather than meticulously coding intents, utterances, slots, and so on, developers can provide a natural language prompt and Amazon Lex will automatically generate a basic bot flow ready for further refinement. This capability is initially available in English only, and developers can further customize the AI-generated bot as needed before deployment, saving many hours of manual development work.
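
For builders who prefer to script this step, the following is a hedged sketch using what we understand to be the Lex V2 StartBotResourceGeneration and DescribeBotResourceGeneration operations; the bot ID is a placeholder, and the parameter names and status values should be checked against the current SDK.

```python
import time
import boto3

lex = boto3.client("lexv2-models")

# Kick off descriptive bot building from a plain-English prompt.
# botId is a hypothetical placeholder for an existing draft bot.
start = lex.start_bot_resource_generation(
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
    generationInputPrompt=(
        "Take reservations for my hotel using name and contact info, "
        "travel dates, room type, and payment info"
    ),
)

# Poll until the generation job finishes, then inspect the outcome.
while True:
    status = lex.describe_bot_resource_generation(
        botId="EXAMPLEBOTID",
        botVersion="DRAFT",
        localeId="en_US",
        generationId=start["generationId"],
    )
    if status["generationStatus"] in ("Complete", "Failed"):
        break
    time.sleep(5)

print(status["generationStatus"])
```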

Improving the user experience with assisted slot resolution

As consumers become more familiar with chatbots and interactive voice response (IVR) systems, they expect higher levels of intelligence baked into self-service experiences. Disambiguating conversational responses is imperative to success, because users expect natural, human-like experiences. With rising consumer confidence in chatbot capabilities, there is also an expectation of elevated performance from natural language understanding (NLU). In the likely scenario that a semantically simple or complex utterance is not resolved properly to a slot, user confidence can dwindle. In such instances, an LLM can dynamically assist the existing Amazon Lex NLU model and ensure accurate slot resolution even when the user utterance is beyond the bounds of the slot model. In Amazon Lex, the assisted slot resolution feature provides the bot developer yet another tool with which to increase containment.

During runtime, when NLU fails to resolve a slot during a conversational turn, Amazon Lex will call the LLM selected by the bot developer to assist with resolving the slot. If the LLM is able to provide a value upon slot retry, the user can continue with the conversation as normal. For example, if upon slot retry a bot asks “What city does the policy holder reside in?” and the user responds “I live in Springfield,” the LLM will be able to resolve the value to “Springfield.” The supported slot types for this feature include AMAZON.City, AMAZON.Country, AMAZON.Number, AMAZON.Date, AMAZON.AlphaNumeric (without regex), AMAZON.PhoneNumber, and AMAZON.Confirmation. This feature is only available in English at the time of writing.
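
Assisted resolution is opted into per slot. As a hedged sketch, the snippet below uses the Lex V2 UpdateSlot operation with what we understand to be the slotResolutionSetting field to enable LLM assistance for a City slot; all IDs are hypothetical placeholders, and the field names should be verified against the current API.

```python
import boto3

lex = boto3.client("lexv2-models")

# Opt an existing AMAZON.City slot into assisted slot resolution.
# All IDs below are hypothetical placeholders.
lex.update_slot(
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
    intentId="EXAMPLEINTENTID",
    slotId="EXAMPLESLOTID",
    slotName="PolicyHolderCity",
    slotTypeId="AMAZON.City",
    valueElicitationSetting={
        "slotConstraint": "Required",
        "promptSpecification": {
            "messageGroups": [{
                "message": {"plainTextMessage": {
                    "value": "What city does the policy holder reside in?"
                }}
            }],
            "maxRetries": 2,
        },
        # "EnhancedFallback" asks the configured LLM to resolve the slot
        # on retry when the NLU model cannot match the user's utterance.
        "slotResolutionSetting": {"slotResolutionStrategy": "EnhancedFallback"},
    },
)
```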

Improving the builder experience with training utterance generation

One of the pain points that bot builders and conversational designers often encounter is anticipating the variation and diversity of responses when invoking an intent or soliciting slot information. When a bot developer creates a new intent, sample utterances must be provided to train the ML model on the types of responses it can and should accept. It can often be difficult to anticipate the permutations of verbiage and syntax used by customers. With utterance generation, Amazon Lex uses foundation models such as Amazon Titan to generate training utterances with just one click, without the need for any prompt engineering.

Utterance generation uses the intent name, existing utterances, and optionally the intent description to generate new utterances with an LLM. Bot developers and conversational designers can edit or delete the generated utterances before accepting them. This feature works with both new and existing intents.
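
The same capability can be reached programmatically. The following is a sketch using what we understand to be the Lex V2 GenerateBotElement operation to request suggested utterances for an existing intent; the IDs are placeholders, and the response shape should be verified against the current SDK.

```python
import boto3

lex = boto3.client("lexv2-models")

# Ask Lex to generate candidate training utterances for an existing intent.
# All IDs below are hypothetical placeholders.
response = lex.generate_bot_element(
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
    intentId="EXAMPLEINTENTID",
)

# Review the suggestions before adding them to the intent.
for item in response.get("sampleUtterances", []):
    print(item.get("utterance"))
```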

Conclusion

Recent advancements in generative AI have undoubtedly made automated consumer experiences better. With Amazon Lex, we are committed to infusing generative AI into every aspect of the builder and user experience. The features mentioned in this post are just the beginning—and we can’t wait to show you what is to come.

To learn more, refer to Amazon Lex Documentation, and try these features out on the Amazon Lex console.


About the authors

Anuradha Durfee is a Senior Product Manager on the Amazon Lex team and has more than 7 years of experience in conversational AI. She is fascinated by voice user interfaces and making technology more accessible through intuitive design.

Sandeep Srinivasan is a Senior Product Manager on the Amazon Lex team. As a keen observer of human behavior, he is passionate about customer experience. He spends his waking hours at the intersection of people, technology, and the future.
