Desktop Demo

Download, install, and run AuraCoreCF locally.

This page serves as the distribution and setup surface for the AuraCoreCF desktop demo. It is designed for evaluators, testers, and technical reviewers who need a direct way to install the application, configure inference, and operate the system in a controlled local environment.

AuraCoreCF is a local-first desktop application. It can run with a local model through Ollama or with a cloud model through a supported API provider, depending on how model settings are configured after installation.

Experimental System Notice

This is an evaluation build.

Important notice. The AuraCoreCF desktop package distributed on this page is a demonstration build intended for evaluation, testing, and research review. It exposes the core runtime environment of the system but should not be considered a finalized production release.

Because this build is provided for demonstration purposes, some features may be incomplete, unstable, or behave differently across environments. Certain capabilities are still under active development and may change between versions. Voice interaction is currently experimental and may fail to initialize, disconnect unexpectedly, or behave inconsistently depending on system configuration and audio device support.

Aura is also designed to evolve through interaction. Early sessions may appear less capable because the system has not yet accumulated contextual memory, trust signals, or continuity records. As interaction continues, the system’s internal state grows and responses become more consistent with the user’s patterns and context.

Testers should therefore expect occasional errors, configuration issues, or unexpected behaviors while evaluating the demo.

Download package

Get the Windows demo build.

Download the current desktop package, extract it locally, and launch the application from the unzipped folder.

Version 1.0 Demo · Windows x64 · ZIP package

System support
Supported platform. Windows 10 and Windows 11.
Supported architectures. x64.
Not currently shipped. macOS and Linux demo packages are not provided.
Build scope. The broader project includes cross-platform pieces, but this demo package and setup flow are tuned for Windows.
Windows-specific integration. Some integrations in this build are Windows-specific, including portions of authentication wiring and local service bootstrap behavior.
Known limitations

Current demo constraints

Voice interaction. Voice support is experimental and may not function reliably on all systems. Audio device differences and driver behavior may affect initialization.
Model variability. Behavior may differ significantly depending on the selected model and provider configuration.
Local environment differences. Hardware configuration, OS services, and local model availability may affect runtime behavior.
Feature completeness. Some subsystems present in the architecture may not be fully exposed in this demo build.
Evaluation goals

What testers should evaluate

System behavior. Evaluate how Aura responds across longer interactions rather than single prompt exchanges.
Context continuity. Observe how the system maintains coherence across multiple turns and topics.
Model configuration. Test both local Ollama models and cloud providers to compare runtime behavior.
Stability. Identify configuration issues, runtime errors, or edge cases encountered during extended sessions.

What the demo is

The desktop demo provides a direct way to evaluate AuraCoreCF as an application rather than as a static description. It exposes the runtime surface needed to install the system, configure inference, and operate the application in a local environment.

The demo supports both local and cloud inference paths so the application can be evaluated against the model environment that best matches the user’s setup.

Default model behavior

If model settings are left empty, Aura defaults to a local Ollama workflow using the model deepseek-r1:latest. This allows the demo to fall back to a local-first configuration without requiring a cloud provider.

Users can override this behavior at any time by opening model settings and configuring either a local Ollama model or a cloud API provider.

Install guide

Install the Windows demo

01

Download and extract

Download the demo package and unzip it to a local folder on your Windows system.

02

Run the executable

Launch AuraCoreCF.exe from the extracted package directory.

03

Approve Windows prompt

If Windows SmartScreen appears, select More info and then Run anyway.

04

Sign in and accept terms

Create an account or sign in, then accept the EULA to complete first launch.

Model settings

Open the model configuration panel

After installation, open the model configuration area inside the application to select an inference source, configure model settings, and verify the active provider.

Step 1. Open Preferences.
Step 2. Go to AI Model.
Step 3. Configure provider and model settings.
Step 4. Click Save Model Settings.
Step 5. Click Test Active Provider.
Inference modes
Auto (Recommended). Uses cloud if both API key and model are present; otherwise uses local Ollama.
Local Ollama. Always uses local Ollama at the configured local URL.
Cloud API. Always uses a configured cloud provider.

Local Ollama setup

  • Set Inference source to Local Ollama, or leave it on Auto with no API key configured.
  • In Model, enter your local model name, for example deepseek-r1:latest.
  • In Local Ollama, confirm the URL. The default is http://127.0.0.1:11434.
  • Click Refresh Ollama Models.
  • Click Save Model Settings.
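
As a quick sanity check outside the app, the local Ollama endpoint above can be queried directly. This is an illustrative sketch, not part of the demo; it assumes Ollama's GET /api/tags endpoint, which lists the locally pulled models.

```python
import json
from urllib.request import urlopen

# Default Ollama URL, matching the value shown in the settings panel above.
OLLAMA_URL = "http://127.0.0.1:11434"

def model_names(tags_payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags JSON payload."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Query a running Ollama instance for its pulled models."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))
```

If this call fails, the demo's Refresh Ollama Models button will fail for the same reason: Ollama is not running or is listening on a different URL.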

Cloud provider setup

  • Set Inference source to Cloud API, or use Auto.
  • Select the Cloud provider.
  • In Model, enter the exact provider model ID.
  • Paste your key into API key.
  • Confirm the correct API base URL.
  • Click Save Model Settings.
  • Click Test Active Provider.
Common API base URLs

Supported cloud connection patterns

When using a cloud provider, the API base URL must match the provider’s expected endpoint format. OpenAI-compatible providers must expose an OpenAI-style API surface.

OpenAI. https://api.openai.com/v1
Anthropic. https://api.anthropic.com/v1
OpenAI-compatible. Your provider URL.
Provider selection logic

How Aura chooses the active provider

Provider selection follows a fixed runtime rule. If Inference source is set to Cloud API, Aura uses cloud only. If it is set to Local Ollama, Aura uses local only. If it is set to Auto, Aura uses cloud when both an API key and model are present; otherwise it uses local Ollama.

The bottom-left status bar inside the application shows the active provider and model, for example Ollama: deepseek-r1:latest or OpenAI: gpt-4o.

Cloud API. Cloud only.
Local Ollama. Local only.
Auto. Cloud if configured, else local.
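
The fixed rule above can be sketched as a small function. The names are illustrative, not the demo's actual implementation.

```python
def select_provider(source: str, api_key: str, model: str) -> str:
    """Apply the documented rule: Cloud API -> cloud only, Local Ollama ->
    local only, Auto -> cloud when both an API key and a model are set."""
    if source == "Cloud API":
        return "cloud"
    if source == "Local Ollama":
        return "local"
    # Auto: fall back to local Ollama unless cloud is fully configured.
    return "cloud" if (api_key and model) else "local"
```

Under this rule, a fresh install with empty settings in Auto mode resolves to local Ollama, which matches the default model behavior described earlier.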
Support workflow

Logs, troubleshooting, and support information

Export logs

Export error logs for support

If the demo encounters an issue, open Preferences, go to Privacy, and use Export Logs under the error log export area. Save the JSON file and include it with the support report.

Runtime logs are also written locally to .../AppData/Roaming/AuraCoreCF/logs/aura-errors.jsonl in installed builds. The Electron userData location may vary by packaging mode.
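
The .jsonl extension indicates JSON Lines (one JSON object per line), so the exported log can be inspected with a short script. This is a sketch assuming that format; the field names inside each record are not specified here.

```python
import json
from pathlib import Path

def read_error_log(path: str) -> list[dict]:
    """Parse a JSON Lines error log into a list of records,
    skipping blank lines."""
    records = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            records.append(json.loads(line))
    return records
```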

Quick troubleshooting

Common issues

  • Cannot reach Ollama. Start Ollama and verify http://127.0.0.1:11434 is reachable.
  • Cloud test failed (401/403). The API key is invalid or lacks permission.
  • Cloud test failed (404). The API base URL is incorrect for that provider.
  • Model not found. Verify the exact model ID spelling in Model.
  • Still using wrong provider. Re-open AI Model, confirm Inference source, click Save Model Settings, then test again.
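
The cloud-test failures above can be condensed into a minimal status-code lookup. This is an illustrative sketch, not part of the application; the status codes come from the provider's HTTP response.

```python
# Illustrative mapping from the HTTP status codes above to likely causes.
CLOUD_TEST_HINTS = {
    401: "API key is invalid or lacks permission.",
    403: "API key is invalid or lacks permission.",
    404: "API base URL is incorrect for that provider.",
}

def hint_for(status: int) -> str:
    """Return a troubleshooting hint for a failed cloud-provider test."""
    return CLOUD_TEST_HINTS.get(
        status, "Verify the model ID, provider, and inference source settings."
    )
```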