Lesson 4
Bland Models and Architecture
The Three Bland Phone Models
While multiple LLMs are used in Bland’s standard call flow, users can choose between three main call processing models (base, enhanced, and turbo). Each has pros and cons (Turbo is not always a better choice than Base).
Model Differences
base
The default model; follows scripts and procedures most effectively
Supports all features and capabilities
Best for custom tools
enhanced
Much lower latency, supports complex conversations
Works best with objective-based prompts
Supports all features and capabilities
turbo
The lowest latency available, supports sophisticated and nuanced conversations
Limited capabilities currently (excludes transferring, IVR navigation, custom tools)
Extremely realistic conversation capabilities
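Since the model is chosen per call, a dispatch helper can validate the choice before the request is sent. This is a minimal sketch: the payload field names (`phone_number`, `task`, `model`) are assumptions based on common REST conventions, not a confirmed Bland API schema, so check the official API reference for the exact shape.

```python
import json

# The three call-processing models described above.
VALID_MODELS = {"base", "enhanced", "turbo"}

def build_call_payload(phone_number: str, task: str, model: str = "base") -> dict:
    """Build a hypothetical Send Call request body with an explicit model choice."""
    if model not in VALID_MODELS:
        raise ValueError(f"unknown model {model!r}; expected one of {sorted(VALID_MODELS)}")
    # Note: turbo currently excludes transferring, IVR navigation, and custom
    # tools, so calls needing those features should use base or enhanced.
    return {"phone_number": phone_number, "task": task, "model": model}

payload = build_call_payload("+15555550123", "Book a table for two.", model="enhanced")
print(json.dumps(payload))
```

Validating up front keeps a typo in the model name from silently falling back to a default on the server side.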
LLM Architecture for Bland Calls
During the course of a call, your Agent is continuously processing:
The speech from the user to text
The text through multiple LLMs
The text response from the LLMs to speech
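One turn of the loop above can be sketched as follows. The three functions are stand-ins for illustration only (a real agent runs an ASR model, LLMs, and a TTS engine), not Bland's actual internals:

```python
def speech_to_text(audio: bytes) -> str:
    """Stand-in STT: pretend the audio decodes directly to an utterance."""
    return audio.decode("utf-8")

def conversational_model(transcript: list) -> str:
    """Stand-in conversational LLM: produce a canned reply to the last utterance."""
    last_user_text = transcript[-1][1]
    return f"You said: {last_user_text}"

def text_to_speech(text: str) -> bytes:
    """Stand-in TTS: pretend the reply encodes straight back to audio."""
    return text.encode("utf-8")

def run_call_turn(audio_in: bytes, transcript: list) -> bytes:
    user_text = speech_to_text(audio_in)       # step 1: user speech -> text
    transcript.append(("user", user_text))
    reply = conversational_model(transcript)   # step 2: text through the LLMs
    transcript.append(("agent", reply))
    return text_to_speech(reply)               # step 3: LLM response -> speech

transcript = []
audio_out = run_call_turn(b"I'd like to book a table.", transcript)
print(audio_out.decode())
```

The key point the sketch captures is that the transcript accumulates across turns, which is what the three decision-making LLMs described next operate on.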
During the second step, three different LLMs process the call transcript to help the agent make decisions. These are the Navigational, Conversational, and Data Extraction models.
The Navigational Model
The navigational LLM is constantly trying to determine if the Agent should proceed to the next node and which pathway it should take.
The Conversational Model
This LLM is responsible for the spoken dialogue in your conversations; it determines what’s actually said to the user.
The Data Extraction Model
This LLM is responsible for identifying any specified variables in your Conversational Pathways.
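To make the extraction model's job concrete, here is a toy sketch of its input/output shape: given a transcript and a set of variable names, it returns the values it can identify. A production extraction model is an LLM, not a set of regexes; the patterns below are purely illustrative.

```python
import re

def extract_variables(transcript: str, variables: dict) -> dict:
    """Toy extractor: `variables` maps each variable name to a regex
    with one capture group for its value."""
    found = {}
    for name, pattern in variables.items():
        match = re.search(pattern, transcript)
        if match:
            found[name] = match.group(1)  # only keep variables actually present
    return found

result = extract_variables(
    "Sure, my name is Dana and my zip code is 94103.",
    {"name": r"my name is (\w+)", "zip_code": r"zip code is (\d{5})"},
)
print(result)  # {'name': 'Dana', 'zip_code': '94103'}
```

Variables the caller never mentions are simply absent from the result, mirroring the fact that an extraction model can only return values the user actually provided.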