Download, install, and run AuraCoreCF locally.
This page is the distribution and setup guide for the AuraCoreCF desktop demo. It is intended for evaluators, testers, and technical reviewers who need a direct way to install the application, configure inference, and operate the system in a controlled local environment.
AuraCoreCF is a local-first desktop application. It can run with a local model through Ollama or with a cloud model through a supported API provider, depending on how model settings are configured after installation.
This is an evaluation build.
Get the Windows demo build.
Download the current desktop package, extract it locally, and launch the application from the unzipped folder.
Version 1.0 Demo · Windows x64 · ZIP package
What the demo is
The desktop demo lets you evaluate AuraCoreCF as a working application rather than a static description. It exposes everything needed to install the system, configure inference, and operate the application locally.
The demo supports both local and cloud inference paths so the application can be evaluated against the model environment that best matches the user’s setup.
Default model behavior
If model settings are left empty, Aura defaults to a local Ollama workflow using the model deepseek-r1:latest. This allows the demo to fall back to a local-first configuration without requiring a cloud provider.
Users can override this behavior at any time by opening model settings and configuring either a local Ollama model or a cloud API provider.
Install the Windows demo
Download and extract
Download the demo package and unzip it to a local folder on your Windows system.
Run the executable
Launch AuraCoreCF.exe from the extracted package directory.
Approve Windows prompt
If Windows SmartScreen appears, select More info and then Run anyway.
Sign in and accept terms
Create an account or sign in, then accept the EULA to complete first launch.
Open the model configuration panel
After installation, open the model configuration area inside the application to select an inference source, configure model settings, and verify the active provider.
Local Ollama setup
- Set Inference source to Local Ollama, or leave it on Auto with no API key configured.
- In Model, enter your local model name, for example deepseek-r1:latest.
- In Local Ollama, confirm the URL. The default is http://127.0.0.1:11434.
- Click Refresh Ollama Models.
- Click Save Model Settings.
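The Refresh step above reads the local server's model list, and the same check can be scripted against Ollama's standard `/api/tags` endpoint. This is a sketch for verifying the setup; the function name and defaults are illustrative, not AuraCoreCF internals:

```python
import json
import urllib.request
import urllib.error

DEFAULT_URL = "http://127.0.0.1:11434"      # Ollama's default local endpoint
DEFAULT_MODEL = "deepseek-r1:latest"        # the demo's fallback model

def local_model_available(base_url=DEFAULT_URL, model=DEFAULT_MODEL):
    """Return True if `model` is listed by the local Ollama server,
    False if it is absent, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + "/api/tags",
                                    timeout=2) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return None  # Ollama is not running, or the URL is wrong
    return any(m.get("name") == model for m in tags.get("models", []))
```

If this returns None, start Ollama and re-check the URL; if it returns False, pull the model locally before saving model settings.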
Cloud provider setup
- Set Inference source to Cloud API, or use Auto.
- Select the Cloud provider.
- In Model, enter the exact provider model ID.
- Paste your key into API key.
- Confirm the correct API base URL.
- Click Save Model Settings.
- Click Test Active Provider.
Supported cloud connection patterns
When using a cloud provider, the API base URL must match the provider’s expected endpoint format. Providers advertised as OpenAI-compatible must expose an OpenAI-style API, meaning the standard routes are served under the configured base URL with bearer-token authentication.
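In practice, an OpenAI-style surface means a chat-completions route under the configured base URL. A minimal sketch of that request shape, with placeholder values throughout:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build a POST request for the OpenAI-style /chat/completions route."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example with placeholder values; substitute the settings saved in the app.
req = build_chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o", "hi")
```

Sending this request with `urllib.request.urlopen(req)` gives the same failure signals the in-app test does: a 401/403 response points at the key, a 404 at the base URL.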
How Aura chooses the active provider
Provider selection follows a fixed runtime rule. If Inference source is set to Cloud API, Aura uses cloud only. If it is set to Local Ollama, Aura uses local only. If it is set to Auto, Aura uses cloud when both an API key and model are present; otherwise it uses local Ollama.
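The fixed rule above can be written as a small pure function. This is a sketch of the decision logic only; the names are illustrative, not AuraCoreCF internals:

```python
def choose_provider(inference_source, api_key, model):
    """Return "cloud" or "local" following the fixed runtime rule."""
    if inference_source == "Cloud API":
        return "cloud"
    if inference_source == "Local Ollama":
        return "local"
    # Auto: cloud only when both an API key and a model are configured
    if api_key and model:
        return "cloud"
    return "local"
```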
The bottom-left status bar inside the application shows the active provider and model, for example Ollama: deepseek-r1:latest or OpenAI: gpt-4o.
Logs, troubleshooting, and support information
Export error logs for support
If the demo encounters an issue, open Preferences, go to Privacy, and use Export Logs under the error log export area. Save the JSON file and include it with the support report.
Runtime logs are also written locally to .../AppData/Roaming/AuraCoreCF/logs/aura-errors.jsonl in installed builds. Electron userData location may vary by packaging mode.
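JSONL stores one JSON object per line, so an exported or on-disk log can be inspected with a few lines of Python. A sketch, assuming only the JSONL format; the actual field names inside AuraCoreCF records may differ:

```python
import json

def read_error_log(path):
    """Yield one parsed record per non-empty line of a .jsonl file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```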
Common issues
- Cannot reach Ollama. Start Ollama and verify http://127.0.0.1:11434 is reachable.
- Cloud test failed (401/403). The API key is invalid or lacks permission.
- Cloud test failed (404). The API base URL is incorrect for that provider.
- Model not found. Verify the exact model ID spelling in Model.
- Still using wrong provider. Re-open AI Model, confirm Inference source, click Save Model Settings, then test again.
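For scripted checks, the cloud-side failures above map directly from the HTTP status code. A hypothetical helper mirroring the list:

```python
def diagnose_cloud_error(status):
    """Map an HTTP status from a failed provider test to its likely cause."""
    if status in (401, 403):
        return "API key is invalid or lacks permission"
    if status == 404:
        return "API base URL is incorrect for that provider"
    return "unrecognized status; check the provider's documentation"
```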