AI Summarization
Generate summaries and extract actionable insights using local LLM models.
How It Works
After transcription, EdgeNote AI uses a local Large Language Model (LLM) to analyze the text and generate structured summaries with actionable insights.
- **Key Points**: Main topics discussed
- **Action Items**: Tasks to complete
- **Decisions**: Choices made
- **Goals**: Objectives set
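The extraction step above can be sketched as a single structured prompt to the local model. This is a minimal sketch: the `run_local_llm` function and the exact prompt wording are assumptions, not EdgeNote AI's actual implementation.

```python
import json

# Hypothetical stand-in for EdgeNote AI's local LLM call (assumption).
def run_local_llm(prompt: str) -> str:
    # A real implementation would invoke the bundled local model here.
    raise NotImplementedError

def build_insight_prompt(transcript: str) -> str:
    """Ask the model for the four insight categories as JSON."""
    return (
        "Analyze the transcript and respond with JSON containing:\n"
        '  "key_points": main topics discussed,\n'
        '  "action_items": tasks to complete,\n'
        '  "decisions": choices made,\n'
        '  "goals": objectives set.\n\n'
        f"Transcript:\n{transcript}"
    )

def summarize(transcript: str) -> dict:
    return json.loads(run_local_llm(build_insight_prompt(transcript)))
```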
LLM Models
Choose a model based on summary quality needs and available RAM.
| Model | Size | RAM Required | Notes |
|---|---|---|---|
| Qwen 3 1.7B | 1.4 GB | 2 GB | Lightweight and fast |
| DeepSeek R1 1.5B | 1.1 GB | 2 GB | Fast with reasoning |
| Gemma 2 2B | 1.6 GB | 4 GB | Compact and efficient |
| Phi-3.5 Mini 3.8B | 2.4 GB | 4 GB | Balanced performance |
| Qwen 3 4B (recommended) | 2.6 GB | 4 GB | Best for most desktops |
| DeepSeek R1 7B | 4.7 GB | 12 GB | Excellent reasoning |
| Qwen 3 8B | 5.2 GB | 16 GB | Superior quality |
| Phi-4 14B | 9.1 GB | 16 GB | State-of-the-art |
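Picking a model programmatically amounts to finding the highest-quality entry that fits your RAM. The RAM figures below come from the table above; the picker itself is an illustrative sketch, not part of EdgeNote AI.

```python
# Models in ascending quality order, with RAM requirements from the table above.
MODELS = [
    ("Qwen 3 1.7B", 2),
    ("DeepSeek R1 1.5B", 2),
    ("Gemma 2 2B", 4),
    ("Phi-3.5 Mini 3.8B", 4),
    ("Qwen 3 4B", 4),
    ("DeepSeek R1 7B", 12),
    ("Qwen 3 8B", 16),
    ("Phi-4 14B", 16),
]

def largest_model_for(ram_gb: int) -> str:
    """Return the highest-quality model whose RAM requirement fits."""
    fitting = [name for name, ram in MODELS if ram <= ram_gb]
    if not fitting:
        raise ValueError("Not enough RAM for any model")
    return fitting[-1]
```

For example, a machine with 4 GB of RAM lands on Qwen 3 4B, matching the recommendation in the table.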
Model Recommendation
Qwen 3 4B is the recommended choice for most desktops, balancing summary quality with a modest 4 GB RAM requirement.
Summary Output
Summaries are structured and organized for quick review. The output includes:
**Overview** (always included)
A concise summary of the entire recording, capturing the main purpose and outcomes.

**Key Points** (always included)
Bullet points highlighting the most important topics, facts, and discussions.

**Action Items** (extracted automatically)
Tasks mentioned during the conversation, with assigned owners when identified.

**Decisions** (extracted automatically)
Choices and agreements made during the meeting, for future reference.

**Goals** (extracted automatically)
Strategic objectives and targets discussed or set during the conversation.
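A summary with these sections could be represented as a simple record. The field names below are assumptions based on the sections listed above, not EdgeNote AI's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Summary:
    # "Always included" sections.
    overview: str
    key_points: list[str]
    # "Extracted automatically" sections; empty when nothing was identified.
    action_items: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    goals: list[str] = field(default_factory=list)

# Example instance for a short meeting recording.
s = Summary(
    overview="Weekly sync covering the release plan.",
    key_points=["Release slipped one week", "QA needs more hardware"],
    action_items=["Alice: order test devices"],
)
```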

Templates
Customize the summary output using templates. Choose from built-in templates or create your own.
- Optimized for meetings with attendees, agenda, decisions, and follow-ups.
- Academic style with topics, definitions, and key takeaways.
- Q&A format with candidate evaluation and highlights.
See Custom Templates to create your own summary formats.
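Built-in templates can be thought of as named prompt presets that get filled with the transcript. The names and strings below are illustrative assumptions based on the descriptions above, not EdgeNote AI's actual template format.

```python
# Hypothetical presets matching the built-in templates described above (assumed format).
TEMPLATES = {
    "meeting": "Summarize with attendees, agenda, decisions, and follow-ups:\n{transcript}",
    "lecture": "Summarize in academic style with topics, definitions, and key takeaways:\n{transcript}",
    "interview": "Summarize in Q&A format with candidate evaluation and highlights:\n{transcript}",
}

def render_template(name: str, transcript: str) -> str:
    """Fill the chosen preset with the transcript text."""
    return TEMPLATES[name].format(transcript=transcript)
```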
Advanced Settings
Temperature
Controls creativity in summaries. Lower values (0.1-0.3) produce more focused, factual output. Higher values (0.7-1.0) allow more creative interpretation.
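Temperature rescales the model's token probabilities before sampling, which is why low values produce focused output. This minimal sketch shows the standard softmax-with-temperature calculation; it is a general illustration, not EdgeNote AI's code.

```python
import math

def softmax_with_temperature(logits: list[float], temp: float) -> list[float]:
    """Convert logits to probabilities; temp < 1 sharpens, temp > 1 flattens."""
    scaled = [x / temp for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
focused = softmax_with_temperature(logits, 0.2)   # near-deterministic
creative = softmax_with_temperature(logits, 1.0)  # broader spread
```

At temperature 0.2 the top token dominates the distribution, while at 1.0 the alternatives keep meaningful probability, so sampling varies more.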
Auto-Summarize
Automatically generate summaries after transcription completes. Disable this setting to trigger summaries manually.
Parallel Inference
Process multiple summary sections simultaneously for faster output. Requires more RAM.
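Running the summary sections concurrently can be sketched with a thread pool, one worker per section; each worker needs its own inference context, which is where the extra RAM goes. The `summarize_section` function is a hypothetical stand-in for the per-section model call.

```python
from concurrent.futures import ThreadPoolExecutor

SECTIONS = ["overview", "key_points", "action_items", "decisions", "goals"]

# Hypothetical per-section call into the local model (assumption).
def summarize_section(section: str, transcript: str) -> str:
    return f"[{section} summary of {len(transcript)} chars]"

def summarize_parallel(transcript: str) -> dict[str, str]:
    """Submit every section prompt at once and collect results as they finish."""
    with ThreadPoolExecutor(max_workers=len(SECTIONS)) as pool:
        futures = {s: pool.submit(summarize_section, s, transcript) for s in SECTIONS}
        return {s: f.result() for s, f in futures.items()}
```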