LLM
Add function calling to your Pipecat bot. Examples exist for each LLM provider supported in Pipecat. View Recipe →
Recording & Logging
Collect audio frames from the user and bot for later processing or storage. View Recipe →
Recording & Logging
Capture user and bot transcripts for later processing or storage. View Recipe →
Audio
Play a background sound in your Pipecat bot. The audio is mixed with the transport audio to create a single integrated audio stream. View Recipe →
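The mixing step described above is, at its core, sample-wise addition of two PCM buffers with clipping. Below is a minimal, framework-free sketch of that idea for 16-bit mono PCM; it is not Pipecat's implementation, and the `bg_gain` parameter is an illustrative assumption for attenuating the background track.

```python
import array

def mix_pcm(voice: bytes, background: bytes, bg_gain: float = 0.3) -> bytes:
    """Mix two 16-bit mono PCM buffers sample by sample.

    The background is scaled by bg_gain, added to the voice signal,
    and the sum is clipped to the int16 range.
    """
    v = array.array("h", voice)
    b = array.array("h", background)
    n = min(len(v), len(b))
    mixed = array.array(
        "h",
        (max(-32768, min(32767, v[i] + int(b[i] * bg_gain))) for i in range(n)),
    )
    return mixed.tobytes()
```

In a real pipeline this would run per audio frame, with the background looped or streamed at the same sample rate as the transport audio.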
User Interaction
Specify a strategy for muting user input, allowing the bot to continue without interruption. View Recipe →
User Interaction
Use a wake phrase to wake up your Pipecat bot. View Recipe →
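The wake-phrase idea above boils down to scanning incoming transcripts for a trigger phrase and only letting the bot respond once it has been heard. A minimal sketch, independent of Pipecat's actual filter classes (the phrase "hey pipecat" is a hypothetical example):

```python
import re

WAKE_PHRASE = "hey pipecat"  # hypothetical wake phrase for illustration

def heard_wake_phrase(transcript: str, phrase: str = WAKE_PHRASE) -> bool:
    """Return True if the wake phrase appears in the transcript.

    Lowercases and strips punctuation so "Hey, Pipecat!" still matches.
    """
    normalized = re.sub(r"[^a-z0-9\s]", "", transcript.lower())
    return phrase in normalized
```

In practice this check would gate transcription frames: frames are dropped until the phrase is detected, after which the conversation is allowed through for some window of time.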
Audio
Play sound effects in your Pipecat bot. View Recipe →
Multilingual
A ParallelPipeline example showing how to dynamically switch languages. View Recipe →
User Interaction
Detect when a user is idle and automatically respond. View Recipe →
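Idle detection amounts to tracking the time of the last user activity and firing once a timeout elapses. A minimal sketch of that bookkeeping, not Pipecat's own processor (the class name and timeout default are assumptions for illustration):

```python
import time

class IdleDetector:
    """Track the last user activity and report idleness after a timeout."""

    def __init__(self, timeout_s: float = 10.0):
        self.timeout_s = timeout_s
        self._last_activity = time.monotonic()

    def on_user_activity(self) -> None:
        """Call whenever the user speaks or otherwise interacts."""
        self._last_activity = time.monotonic()

    def is_idle(self) -> bool:
        """True once timeout_s seconds have passed without activity."""
        return time.monotonic() - self._last_activity >= self.timeout_s
```

A pipeline would typically poll `is_idle()` (or schedule a timer) and, when it returns True, queue a prompt such as "Are you still there?" back to the user.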
Debugging
Learn how to debug your Pipecat bot with an Observer that watches frames flowing through the pipeline. View Recipe →
Debugging
A live graphical debugger for the Pipecat voice and multimodal conversational AI framework. It lets you visualize pipelines and debug frames in real time — so you can see exactly what your bot is thinking and doing. View Recipe →
Recording & Logging
Parse a user's email address from the LLM response. View Recipe →
User Interaction
Detect when a user has finished speaking and automatically respond. Learn more about the smart-turn model. View Recipe →
Integration
Use MCP tools to interact with external services. View Recipe →
User Interaction
Learn how to configure interruption strategies for your Pipecat bot. View Recipe →
Vision
Pass a video frame from a live video stream to a model and get a description. View Recipe →
Events
Handle user and bot end-of-turn events to add custom logic after a turn. View Recipe →
