AI Summarization


Generate summaries and extract actionable insights using local LLM models.

How It Works

After transcription, EdgeNote AI uses a local Large Language Model (LLM) to analyze the text and generate structured summaries with actionable insights.
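EdgeNote AI's internal pipeline is not shown here, but the general idea can be sketched. The following is a hypothetical example of the kind of instruction prompt a local LLM might receive; the section names come from this page, while the wording and function name are assumptions, not EdgeNote AI's actual prompt:

```python
# Hypothetical sketch: assembling a structured summarization prompt for a
# local LLM. The exact prompt EdgeNote AI sends is not documented here.

SECTIONS = ["Overview", "Key Points", "Action Items", "Decisions", "Goals"]

def build_summary_prompt(transcript: str) -> str:
    """Ask the model for each summary section by name."""
    wanted = "\n".join(f"- {name}" for name in SECTIONS)
    return (
        "Summarize the following transcript. "
        "Return these sections, each under its own heading:\n"
        f"{wanted}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_summary_prompt("Alice: let's ship v2 on Friday. Bob: agreed.")
print(prompt.splitlines()[0])  # Summarize the following transcript. ...
```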

- Key Points: main topics discussed
- Action Items: tasks to complete
- Decisions: choices made
- Goals: objectives set
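The four insight categories above could be held in a structure like the following. This is a minimal sketch with hypothetical field names; EdgeNote AI's real internal types are not documented here:

```python
# Hypothetical data structure for the four insight categories; the names
# mirror this page, not EdgeNote AI's actual code.
from dataclasses import dataclass, field

@dataclass
class Insights:
    key_points: list[str] = field(default_factory=list)   # main topics discussed
    action_items: list[str] = field(default_factory=list) # tasks to complete
    decisions: list[str] = field(default_factory=list)    # choices made
    goals: list[str] = field(default_factory=list)        # objectives set

notes = Insights(
    action_items=["Send the draft to the team"],
    decisions=["Ship v2 on Friday"],
)
print(len(notes.action_items))  # 1
```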

LLM Models

Choose a model based on summary quality needs and available RAM.

| Model | Size | RAM Required | Notes |
|---|---|---|---|
| Qwen 3 1.7B | 1.4 GB | 2 GB | Lightweight and fast |
| DeepSeek R1 1.5B | 1.1 GB | 2 GB | Fast with reasoning |
| Gemma 2 2B | 1.6 GB | 4 GB | Compact and efficient |
| Phi-3.5 Mini 3.8B | 2.4 GB | 4 GB | Balanced performance |
| Qwen 3 4B (recommended) | 2.6 GB | 4 GB | Best for most desktops |
| DeepSeek R1 7B | 4.7 GB | 12 GB | Excellent reasoning |
| Qwen 3 8B | 5.2 GB | 16 GB | Superior quality |
| Phi-4 14B | 9.1 GB | 16 GB | State-of-the-art |
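Using the figures from the table above, a simple rule for picking a model is "the largest one whose RAM requirement fits." A sketch (the selection logic is an illustration, not how EdgeNote AI chooses):

```python
# Sketch: pick the largest model (by download size) that fits in the
# available RAM. Names and figures are taken from the table above.
MODELS = [  # (name, download_size_gb, ram_required_gb)
    ("Qwen 3 1.7B", 1.4, 2),
    ("DeepSeek R1 1.5B", 1.1, 2),
    ("Gemma 2 2B", 1.6, 4),
    ("Phi-3.5 Mini 3.8B", 2.4, 4),
    ("Qwen 3 4B", 2.6, 4),
    ("DeepSeek R1 7B", 4.7, 12),
    ("Qwen 3 8B", 5.2, 16),
    ("Phi-4 14B", 9.1, 16),
]

def best_model(available_ram_gb: float) -> str:
    """Return the largest model whose RAM requirement fits."""
    fitting = [m for m in MODELS if m[2] <= available_ram_gb]
    if not fitting:
        raise ValueError("No model fits in the available RAM")
    return max(fitting, key=lambda m: m[1])[0]

print(best_model(8))  # Qwen 3 4B on an 8 GB machine
```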

Summary Output

Summaries are structured and organized for quick review. The output includes:

Overview (always included)

A concise summary of the entire recording, capturing the main purpose and outcomes.

Key Points (always included)

Bullet points highlighting the most important topics, facts, and discussions.

Action Items (extracted automatically)

Tasks mentioned during the conversation, with assigned owners when they can be identified.

Decisions (extracted automatically)

Choices and agreements made during the meeting, recorded for future reference.

Goals (extracted automatically)

Strategic objectives and targets discussed or set during the conversation.
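One way output like this can be split back into named sections is shown below. This is a sketch that assumes the model emits markdown-style `## Heading` lines; EdgeNote AI's actual output format and parser are not documented here:

```python
# Sketch: split a summary into named sections, assuming "## Heading"
# markers (an assumption, not EdgeNote AI's documented format).
def split_sections(summary: str) -> dict[str, str]:
    sections: dict[str, str] = {}
    current = None
    for line in summary.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return {k: v.strip() for k, v in sections.items()}

raw = "## Overview\nWeekly sync.\n## Action Items\n- Send the draft"
parsed = split_sections(raw)
print(sorted(parsed))  # ['Action Items', 'Overview']
```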

[Screenshot: Complete Summary View, showing a complete AI-generated summary with all sections]

Templates

Customize the summary output using templates. Choose from built-in templates or create your own.

Meeting Notes

Optimized for meetings with attendees, agenda, decisions, and follow-ups.

Lecture Notes

Academic style with topics, definitions, and key takeaways.

Interview

Q&A format with candidate evaluation and highlights.

See Custom Templates to create your own summary formats.
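Conceptually, a template is a skeleton with placeholders that the generated content fills in. A minimal sketch using Python's `string.Template`; the placeholder names are hypothetical and EdgeNote AI's template syntax may differ:

```python
# Sketch: a meeting-notes template rendered by filling placeholders.
# Placeholder names are hypothetical, not EdgeNote AI's actual syntax.
from string import Template

MEETING_NOTES = Template(
    "Attendees: $attendees\n"
    "Decisions: $decisions\n"
    "Follow-ups: $follow_ups"
)

rendered = MEETING_NOTES.substitute(
    attendees="Alice, Bob",
    decisions="Ship v2 on Friday",
    follow_ups="Bob sends the release notes",
)
print(rendered.splitlines()[0])  # Attendees: Alice, Bob
```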

Advanced Settings

Temperature

Controls creativity in summaries. Lower values (0.1-0.3) produce more focused, factual output. Higher values (0.7-1.0) allow more creative interpretation.
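The mechanism behind this setting is standard across LLMs: the model's logits are divided by the temperature before softmax, so low values sharpen the next-token distribution and high values flatten it. A self-contained illustration:

```python
# Illustration of temperature scaling: logits are divided by the
# temperature before softmax. Lower temperature -> sharper distribution.
import math

def softmax_with_temperature(logits: list[float], temp: float) -> list[float]:
    scaled = [x / temp for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
focused = softmax_with_temperature(logits, 0.2)   # near-deterministic
creative = softmax_with_temperature(logits, 1.0)  # more spread out
print(focused[0] > creative[0])  # True
```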

Auto-Summarize

Automatically generate summaries as soon as transcription completes. Disable this to trigger summarization manually.

Parallel Inference

Process multiple summary sections simultaneously for faster output. Requires more RAM.
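Since each summary section can be generated independently, the sections can be fanned out to a worker pool. A sketch using Python's `concurrent.futures`; `generate_section` is a stand-in for a real model call, and EdgeNote AI's actual implementation is not documented here:

```python
# Sketch: generate summary sections in parallel. generate_section is a
# placeholder for an LLM call, not EdgeNote AI's real API.
from concurrent.futures import ThreadPoolExecutor

def generate_section(name: str) -> str:
    return f"{name}: ..."  # stand-in for a model invocation

sections = ["Overview", "Key Points", "Action Items", "Decisions", "Goals"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(generate_section, sections))  # order preserved

print(len(results))  # 5
```

Each in-flight section holds its own context in memory, which is why this option needs more RAM.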