LumiCut

Privacy Policy

Effective date: February 21, 2026

LumiCut ("the App") is a video editing application for Apple platforms (macOS, iOS, and iPadOS). Your privacy is fundamental to how we build the App. This policy explains what data we access, how we use it, and what we don't do.

Summary: No Data Leaves Your Device

LumiCut does NOT use any third-party AI services, cloud AI, or external servers. All AI and machine learning features, including subtitle generation (WhisperKit/CoreML), video analysis (Apple Vision), audio classification (Apple SoundAnalysis), and text-to-speech (Piper TTS/ONNX Runtime), run entirely on your device using bundled models. No video, audio, images, text, or analysis results are ever sent to any server or third party. The App works fully offline; no internet connection is required for any AI feature.

No Tracking

The App does not track you. We do not use advertising identifiers, fingerprinting, or any other tracking technology. Our Apple privacy manifest declares NSPrivacyTracking = false with no tracking domains.

No Data Collection

We do not collect, transmit, or store any personal data. Our Apple privacy manifest declares an empty NSPrivacyCollectedDataTypes array. The App contains no analytics, crash-reporting, or telemetry services.
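For reference, the declarations above correspond to an Apple privacy manifest (PrivacyInfo.xcprivacy) of roughly this shape. The key names (NSPrivacyTracking, NSPrivacyTrackingDomains, NSPrivacyCollectedDataTypes) are Apple's documented manifest keys; the snippet is an illustrative sketch, not a copy of the App's actual manifest file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- No tracking, and no tracking domains -->
    <key>NSPrivacyTracking</key>
    <false/>
    <key>NSPrivacyTrackingDomains</key>
    <array/>
    <!-- No collected data types of any kind -->
    <key>NSPrivacyCollectedDataTypes</key>
    <array/>
</dict>
</plist>
```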

On-Device Processing: No Third-Party AI Services

All video editing, compositing, and AI-powered features run entirely on your device. The App does NOT use any third-party AI service, cloud-based AI, or external machine learning API. No user data, including video, audio, images, or text, is ever sent to any external server. The App works fully offline; no internet connection is required for any AI feature.

Every AI/ML model used by the App is bundled within the App binary and executes locally on the device's CPU, GPU, or Neural Engine:

  • Subtitle generation uses WhisperKit (open-source, MIT license), which runs locally via Apple's CoreML framework on the Neural Engine and GPU. The speech recognition model (Whisper tiny.en / base.en) is bundled in the App. No audio is sent to any server.
  • Auto-framing and person detection use Apple's built-in Vision framework (VNDetectFaceRectanglesRequest, VNDetectHumanBodyPoseRequest), running on-device as part of iOS/macOS.
  • Video processing uses Metal GPU acceleration on your device's hardware.
  • AI Video Coach analyzes video quality, engagement, structure, and accessibility using only Apple's built-in on-device frameworks (Vision, SoundAnalysis, NaturalLanguage) and bundled scoring algorithms. No data is sent externally. See the "AI Video Coach" section below for details.
  • Text-to-speech narration uses Piper TTS (open-source, MIT license), running locally via ONNX Runtime with voice models bundled in the App. No text or audio is sent to any server.

AI Video Coach

The AI Video Coach feature analyzes your videos to provide quality scores and recommendations. All analysis is performed entirely on your device using Apple's built-in system frameworks. No third-party AI service is used, and no video, audio, or analysis results are sent to any server.

The specific on-device frameworks and techniques used:

  • Video frames: analyzed using Apple's Vision framework (VNDetectFaceRectanglesRequest, VNRecognizeTextRequest, VNImageRequestHandler) for face detection, text recognition, brightness, sharpness, and stability. All processing runs on the device's Neural Engine/GPU.
  • Audio: analyzed using Apple's SoundAnalysis framework (SNClassifySoundRequest) and AVAudioEngine for speech classification, voice energy, speaking pace, filler words, and audio balance. Speech transcription uses WhisperKit (on-device CoreML). No audio leaves the device.
  • Structure & Language: analyzed using Apple's NaturalLanguage framework and bundled scoring algorithms to evaluate hook, pacing, call-to-action, and outro quality.
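As a rough illustration of what "on-device Vision analysis" means in practice, face detection with the framework named above looks like the sketch below. The request and handler types are Apple's public Vision API; the function itself and its error handling are illustrative assumptions, not the App's source code:

```swift
import Vision
import CoreGraphics

// Illustrative sketch: count faces in a single frame using Apple's
// on-device Vision framework. Nothing here touches the network; the
// request executes locally on the CPU, GPU, or Neural Engine.
func detectFaceCount(in frame: CGImage) throws -> Int {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])          // runs entirely on-device
    return request.results?.count ?? 0      // [VNFaceObservation]
}
```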

Scores and recommendations are generated by automated on-device algorithms. They are stored locally on your device (in UserDefaults and the App's local cache) and are never transmitted externally.

Speech Recognition

Subtitle generation uses WhisperKit, an open-source on-device speech recognition library built on Apple's CoreML framework. All model files are bundled within the app. No download or internet connection is required. All transcription is performed locally on your device's Neural Engine and GPU. No audio data is sent to any server.

On iOS, for non-English languages, the App may use Apple's on-device SFSpeechRecognizer with requiresOnDeviceRecognition enabled, ensuring all transcription remains on your device.
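For readers curious what the requiresOnDeviceRecognition guarantee looks like in code, the sketch below shows the relevant Apple Speech framework calls. The identifiers are Apple's public API; the function, locale handling, and fallback behavior are assumptions for illustration, not the App's actual implementation:

```swift
import Speech

// Illustrative sketch: build a transcription request that is forced to
// stay on-device. With requiresOnDeviceRecognition = true, the system
// never sends audio to Apple's servers; if no on-device model exists
// for the locale, we return nil rather than fall back to the network.
func makeOnDeviceRequest(for audioURL: URL, locale: Locale) -> SFSpeechURLRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.supportsOnDeviceRecognition else {
        return nil  // no on-device model available for this locale
    }
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    request.requiresOnDeviceRecognition = true
    return request
}
```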

Local Storage

The App stores your preferences (such as export settings, onboarding state, and AI Coach report history) in UserDefaults. This data stays on your device and is never transmitted.

File Access

The App accesses video and image files only when you explicitly select them via the system file picker. The App does not access files in the background or scan your file system.

Microphone

The App may request microphone access for voiceover recording. Microphone audio is processed locally and is never transmitted to any server.

Payments

Subscriptions are handled entirely by Apple through the App Store. The App never receives, processes, or stores any payment information, credit card numbers, or billing details.

Third-Party Services

The App does not include any third-party analytics, advertising, crash reporting, or cloud AI SDKs. The App does NOT use any third-party AI service (such as OpenAI, Google AI, or any cloud-based ML API). The App does not make any network requests to process user data. The only third-party open-source libraries used are:

  • WhisperKit (MIT license): on-device speech recognition running via Apple CoreML. All model weights are bundled in the App binary; no data is sent externally, and no download occurs at runtime.
  • Piper TTS (MIT license): on-device text-to-speech synthesis running via ONNX Runtime. All voice model files are bundled in the App binary; no data is sent externally.

Both libraries execute entirely on-device with no network component. All remaining AI features use Apple's built-in system frameworks (Vision, SoundAnalysis, NaturalLanguage, CoreML), which are part of iOS/macOS and run entirely on-device.

Children's Privacy

We do not knowingly collect any data from anyone, including children under 13. Because the App collects no data at all, it complies with COPPA (the Children's Online Privacy Protection Act) by design.

Changes to This Policy

If we update this policy, we will post the revised version on this page with a new effective date. Because the App collects no data, we expect any future changes to be minor.

Contact

If you have questions about this privacy policy, contact us at support@lumicut.ai.

© 2026 LumiCut