LLM Integration Examples
Integrate Cancelable with Large Language Model streaming operations.
Find the complete source in examples/06_llm/.
LLM Streaming with Pause/Resume
Stream AI responses with user- and AI-initiated cancelation.
Features
- Keyboard Control - Press SPACE to pause/resume streaming
- LLM-Initiated Pause - AI signals when to pause with special markers
- Context Preservation - Resume from exact position
- CancelationToken Integration - Clean cancelation handling
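The pause/resume and context-preservation features above hinge on one idea: an async generator keeps its iteration state, so blocking between chunks and resuming later continues from the exact position. Here is a minimal, library-independent sketch of that idea using a plain `asyncio.Event` (all names here are illustrative, not the library's API):

```python
import asyncio

async def stream_with_pause(chunks, pause_event):
    # Yield chunks, but block whenever the event is cleared (paused).
    # The generator's state is preserved, so resuming continues from
    # the exact position in the stream.
    for chunk in chunks:
        await pause_event.wait()  # blocks while cleared
        yield chunk

async def main():
    pause = asyncio.Event()
    pause.set()  # start unpaused
    received = []
    async for chunk in stream_with_pause(["Hello", ", ", "world"], pause):
        received.append(chunk)
    return "".join(received)

print(asyncio.run(main()))  # -> Hello, world
```

In a real application a keyboard handler (e.g. on SPACE) would call `pause_event.clear()` to pause and `pause_event.set()` to resume.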
Example
```python
from hother.cancelable import Cancelable, CancelationToken

token = CancelationToken()

async def stream_llm():
    async with Cancelable.with_token(token) as cancel:
        async for chunk in llm_client.stream(prompt):
            # Check for LLM-initiated pause markers
            if '!!CANCEL' in chunk:
                token.cancel_sync(message="LLM initiated pause")
                break
            print(chunk, end='', flush=True)
```
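The example above depends on `hother.cancelable` and a real LLM client. To see the control flow in isolation, here is a hedged, self-contained sketch that mimics the same marker-based pause with a fake streaming client and a plain flag (`FakeLLMClient` and the flag are illustrative stand-ins, not part of the library):

```python
import asyncio

class FakeLLMClient:
    # Illustrative stand-in for a real streaming LLM client.
    async def stream(self, prompt):
        for chunk in ["Step 1... ", "Try it yourself. ", "!!CANCEL", "Step 2"]:
            yield chunk

async def stream_llm():
    client = FakeLLMClient()
    cancelled = False
    collected = []
    async for chunk in client.stream("teach me asyncio"):
        if "!!CANCEL" in chunk:
            cancelled = True  # the model signaled a pause point
            break
        collected.append(chunk)
    return cancelled, "".join(collected)

cancelled, text = asyncio.run(stream_llm())
print(cancelled, text)  # -> True Step 1... Try it yourself. 
```

Everything streamed before the marker is preserved, so a later resume can pick up where the pause occurred.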
Run the full example from examples/06_llm/.
Use Cases
- Interactive Tutorials - LLM pauses for user to try examples
- Content Generation - User controls output generation
- Cost Control - Stop expensive API calls
- Reasoning Steps - AI pauses between thinking steps
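The cost-control use case can be sketched without the library at all: stop consuming (and paying for) a metered stream once a token budget is spent. This is an assumption-laden illustration with a fake stream, not the library's mechanism:

```python
import asyncio

async def fake_stream():
    # Illustrative token stream; a real client would bill per token.
    for i in range(1000):
        yield f"tok{i} "

async def stream_with_budget(max_tokens):
    # Break out of the stream once the budget is exhausted, so no
    # further tokens are requested from the (expensive) API.
    count = 0
    out = []
    async for tok in fake_stream():
        if count >= max_tokens:
            break
        out.append(tok)
        count += 1
    return out

tokens = asyncio.run(stream_with_budget(5))
print(len(tokens))  # -> 5
```

In the library's terms, the budget check would instead call the token's cancel method so that cleanup runs through the `Cancelable` context.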
Requirements
Next Steps
- Review Basic Patterns - Fundamental cancelation
- Explore Web Applications - API integration
- See Stream Processing - Async stream handling