
PercaTalk

AI-powered speech training tool that analyzes vocal delivery and provides on-device feedback.

[Image: PercaTalk speech analysis interface]
Role: macOS Developer
Context: Apple Developer Academy
Timeframe: 2025

About the Project

PercaTalk is a native macOS application designed to reduce public speaking anxiety. Using on-device machine learning, it analyzes a user's speech patterns (emotion, pacing, and clarity) without ever uploading audio to the cloud. It provides a safe, private environment for users to practice and improve their delivery.

The Problem

Improving public speaking is difficult without a human coach. Users often lack self-awareness regarding their emotional tone or pacing, and recording themselves feels unstructured and unhelpful without objective metrics.

The Solution

We used CoreML and AVFoundation to build an offline audio-analysis engine. The app records speech, classifies emotional delivery in real time, and generates a scoring report that gives users immediate, actionable feedback.
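As a rough illustration of how such a pipeline fits together, the sketch below taps the microphone with AVAudioEngine and streams buffers into Apple's SoundAnalysis framework, which is the standard way to run a Core ML sound classifier on-device. `EmotionClassifier` is a hypothetical stand-in for the app's trained model; this is a minimal sketch, not the project's actual implementation.

```swift
import AVFoundation
import SoundAnalysis

// Sketch of a real-time, on-device classification pipeline.
// "EmotionClassifier" is a hypothetical stand-in for the app's
// trained Core ML sound-classification model.
final class SpeechEmotionAnalyzer: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)

        // Wrap the Core ML model in a SoundAnalysis request.
        let request = try SNClassifySoundRequest(mlModel: EmotionClassifier().model)
        try analyzer.add(request, withObserver: self)

        // Feed microphone buffers into the analyzer as they arrive.
        input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
            analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
        }
        self.analyzer = analyzer
        try engine.start()
    }

    // Called by SoundAnalysis with each classification result.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Detected \(top.identifier) (confidence \(top.confidence))")
    }
}
```

Because both the audio tap and the classifier run locally, no audio ever needs to leave the machine, which is what makes the privacy guarantee possible.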

Key Contributions

  • Integrated CoreML audio classification models for emotion detection
  • Implemented local data persistence with SwiftData to track user progress
  • Built the audio recording and playback pipeline using AVFoundation
  • Implemented the design for the video recording and scoring interface
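To make the scoring report concrete, here is a minimal, hypothetical sketch of one metric it might include: a pacing score derived from words per minute, judged against a comfortable conversational range of roughly 120 to 160 wpm. The type and function names are illustrative assumptions, not the app's actual API.

```swift
import Foundation

// Hypothetical pacing metric: words per minute over a recording,
// scored against a comfortable speaking range (~120-160 wpm).
struct PacingReport {
    let wordsPerMinute: Double
    let feedback: String
}

func pacingReport(wordCount: Int, duration: TimeInterval) -> PacingReport {
    guard duration > 0 else {
        return PacingReport(wordsPerMinute: 0, feedback: "Recording too short to score.")
    }
    let wpm = Double(wordCount) / (duration / 60.0)
    let feedback: String
    switch wpm {
    case ..<120:
        feedback = "A bit slow; try tightening your pauses."
    case 120...160:
        feedback = "Comfortable conversational pace."
    default:
        feedback = "A bit fast; slow down for clarity."
    }
    return PacingReport(wordsPerMinute: wpm, feedback: feedback)
}
```

For example, `pacingReport(wordCount: 300, duration: 120)` yields 150 wpm and the "comfortable pace" feedback. Metrics like this, persisted with SwiftData per session, are what would let the app chart a user's progress over time.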

Technologies

SwiftUI · CoreML · AVFoundation · SwiftData