Neuronix

API & Node
Documentation

Everything you need to integrate with the Neuronix API or set up a GPU provider node.


GPU Provider Node Setup

The Neuronix Provider Node lets you contribute your GPU to the network and earn money for every task processed. It runs quietly in the background and auto-selects the best AI model for your hardware.

System Requirements

Component | Minimum | Recommended
OS | Windows 10, macOS 12, Ubuntu 20.04 | Latest stable release
RAM | 8 GB | 16 GB+
Storage | 5 GB free (SSD preferred) | 20 GB+ SSD
Internet | 10 Mbps down / 5 Mbps up | 50 Mbps+
GPU | None (CPU mode available) | NVIDIA GTX 1060+ or AMD RX 580+
VRAM | N/A for CPU mode | 6 GB+ for best models
Node.js | v18+ | v20 LTS
No GPU? No problem. The node runs in CPU mode using TinyLlama 1.1B. You earn less per task but can still contribute to the network with any modern computer.

Install via CLI

The fastest way to get started. One command. Requires Node.js 18+ installed on your system.

1. Run the node (no install needed)

npx neuronix-node

This downloads and runs the Neuronix node automatically. No cloning, no setup.

2. Login with your Neuronix account

The node will prompt you for your email and password on first run. Create an account at neuronix-nu.vercel.app/signup if you haven't already.

3. Done — you're earning

The node auto-detects your hardware, downloads the best AI model for your GPU, and connects to the network. Leave it running and you'll earn for every task processed.

Alternative: Global install

If you want to run it without npx every time:

npm install -g neuronix-node
neuronix-node

Skip the login prompt (optional)

Set environment variables to authenticate automatically:

Environment Variables
export NEURONIX_EMAIL="you@company.com"
export NEURONIX_PASSWORD="your-password"
npx neuronix-node

Desktop App (GUI)

Prefer a graphical interface? The Neuronix Desktop App is in development and will ship as a downloadable installer. When available, it will feature a one-click start/stop button, a live earnings dashboard, a hardware info display, and system tray support for running in the background.

Coming soon. For now, use the CLI method above — it takes one command and works on Windows, macOS, and Linux.

Configuration

The node stores its configuration at ~/.neuronix/config.json. You can edit this file directly or use environment variables.

~/.neuronix/config.json
{
  "nodeKey": "node_a1b2c3d4e5f6g7h8",
  "nodeId": "uuid-assigned-by-server",
  "userId": "your-supabase-user-id",
  "authToken": "your-session-token",
  "apiUrl": "https://neuronix-nu.vercel.app",
  "modelsDir": "~/.neuronix/models",
  "pollIntervalMs": 5000,
  "heartbeatIntervalMs": 30000
}
Field | Default | Description
nodeKey | Auto-generated | Unique identifier for this node. Do not change.
apiUrl | https://neuronix-nu.vercel.app | API endpoint. Only change for self-hosted instances.
modelsDir | ~/.neuronix/models | Where AI models are downloaded and cached.
pollIntervalMs | 5000 | How often to check for new tasks (in ms).
heartbeatIntervalMs | 30000 | How often to send a heartbeat to the network (in ms).
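
As a sketch of working with this file programmatically, here is how you might read the config and adjust pollIntervalMs from Python, assuming the JSON layout shown above:

```python
# Read and update the Neuronix node config (layout as documented above).
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".neuronix" / "config.json"

def load_config(path: Path = CONFIG_PATH) -> dict:
    """Return the node config, or an empty dict if none exists yet."""
    if path.exists():
        return json.loads(path.read_text())
    return {}

def set_poll_interval(ms: int, path: Path = CONFIG_PATH) -> dict:
    """Set pollIntervalMs and write the config back to disk."""
    config = load_config(path)
    config["pollIntervalMs"] = ms
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return config
```

Restart the node after editing so the new interval takes effect.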

Supported Models

The node automatically selects the best model your hardware can run. Models are downloaded on first use and cached locally. All models are open-source and free.

Model | Size | Min VRAM | Max Tokens | Best For
TinyLlama 1.1B | 669 MB | CPU OK | 2,048 | Basic tasks, low-end hardware
Phi-2 2.7B | 1.7 GB | 2 GB | 2,048 | Reasoning, code, mid-range GPUs
Mistral 7B | 4.4 GB | 6 GB | 8,192 | General purpose, high quality
Llama 3 8B | 4.9 GB | 8 GB | 8,192 | Best quality, high-end GPUs

Models are stored in ~/.neuronix/models/. To free disk space, delete model files from that directory. They will be re-downloaded when needed.

Troubleshooting

"Registration failed" on startup

Check your internet connection. If the problem persists, verify you have an account at neuronix-nu.vercel.app/signup and that your email/password are correct.

"Model setup failed" error

The model download may have been interrupted. Delete the partial file from ~/.neuronix/models/ and restart the node. It will re-download automatically.

Node shows 'online' but no tasks are coming in

This is normal when the network has more providers than tasks. Tasks are dispatched based on priority and VRAM — higher-end GPUs get tasks first. Leave the node running and tasks will come.

High CPU/GPU usage

This only happens during active task processing (a few seconds at a time). Between tasks, the node is idle. If usage stays high, check that no other process is using your GPU.

How do I update the node?

The node checks for updates on startup. To update manually, run 'npx neuronix-node@latest' or if globally installed, 'npm update -g neuronix-node'. Your config and models are preserved.

Can I run multiple nodes on one machine?

Not recommended — each node binds to the same GPU. Run one node per machine for best results.


Getting Started

The Neuronix API is available at https://api.neuronix.io/v1. All endpoints use HTTPS and return JSON. Authentication uses API keys passed as a Bearer token in the Authorization header.

To get an API key, contact contact@neuronix.io or join the waitlist. Keys are currently in limited access beta for qualified buyers.

Base URL: https://api.neuronix.io/v1
Auth Method: Bearer Token (API Key)
Content Type: application/json
API Version: v1 (stable)

API Keys

Pass your API key as a Bearer token in the Authorization header on every request. Keys are prefixed with az_.

Authorization Header
Authorization: Bearer az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Keep your key private. Never expose it in client-side code or commit it to source control. If a key is compromised, rotate it immediately from the dashboard.
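
Rather than hardcoding the key as in the examples below, you can load it from an environment variable. A minimal Python sketch; the variable name NEURONIX_API_KEY is an assumption for illustration, not an official convention:

```python
# Build the auth headers from an environment variable instead of a
# hardcoded string. NEURONIX_API_KEY is a name chosen for this example.
import os

def auth_headers() -> dict:
    key = os.environ.get("NEURONIX_API_KEY", "")
    if not key.startswith("az_"):
        # Keys are documented to carry an az_ prefix; fail fast otherwise.
        raise ValueError("NEURONIX_API_KEY missing or malformed (expected 'az_' prefix)")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```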

POST /inference

Submit a task to the Neuronix network. The request is routed to an optimal provider and the result is returned once processing is complete. For long tasks, use the async mode and poll with the returned task ID.

POST https://api.neuronix.io/v1/inference

Request Body

Field | Type | Required | Description
model | string | yes | Model identifier. Example: "llama3-8b", "mistral-7b", "phi-3"
prompt | string | yes | The input prompt or instruction for the model.
system | string | no | Optional system prompt to set model behavior and context.
max_tokens | integer | no | Maximum tokens to generate. Defaults to 1024. Max: 8192.
temperature | float | no | Sampling temperature (0.0 to 2.0). Defaults to 0.7.
stream | boolean | no | If true, returns a streaming SSE response. Defaults to false.
async | boolean | no | If true, returns immediately with a task_id for polling. Defaults to false.

Response Schema

200 OK - application/json
{
  "task_id": "tsk_7f3a2b9e4c1d",
  "status": "complete",
  "result": {
    "text": "The generated response text appears here...",
    "finish_reason": "stop",
    "usage": {
      "prompt_tokens": 48,
      "completion_tokens": 312,
      "total_tokens": 360
    }
  },
  "provider": {
    "node_id": "NODE-4a8f3c",
    "region": "us-west",
    "hardware": "RTX 4090"
  },
  "latency_ms": 2840,
  "cost_usd": 0.0028,
  "created_at": "2026-03-27T14:32:01Z"
}
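
In async mode, the response above arrives only after polling with the returned task_id. A Python sketch of that loop; the GET /inference/{task_id} polling path is an assumption (it is not documented on this page), so verify the actual endpoint before relying on it:

```python
# Submit a task with "async": true, then poll until it reaches a
# terminal status. The GET polling path below is assumed, not documented.
import time
import requests

BASE_URL = "https://api.neuronix.io/v1"
HEADERS = {"Authorization": "Bearer az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

TERMINAL_STATUSES = {"complete", "failed"}

def is_done(task: dict) -> bool:
    """A task is finished once its status is terminal."""
    return task.get("status") in TERMINAL_STATUSES

def submit_async(prompt: str, model: str = "llama3-8b") -> str:
    resp = requests.post(
        f"{BASE_URL}/inference",
        headers=HEADERS,
        json={"model": model, "prompt": prompt, "async": True},
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

def poll(task_id: str, interval_s: float = 2.0, timeout_s: float = 300.0) -> dict:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{BASE_URL}/inference/{task_id}", headers=HEADERS)
        resp.raise_for_status()
        task = resp.json()
        if is_done(task):
            return task
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")
```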

Code Examples

Python
import requests

API_KEY = "az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
BASE_URL = "https://api.neuronix.io/v1"

def run_inference(prompt: str, model: str = "llama3-8b") -> dict:
    response = requests.post(
        f"{BASE_URL}/inference",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "prompt": prompt,
            "max_tokens": 1024,
            "temperature": 0.7,
        },
    )
    response.raise_for_status()
    return response.json()

result = run_inference("Summarize the risks of a Class B office investment.")
print(result["result"]["text"])
print(f"Cost: ${result['cost_usd']:.6f}")
print(f"Tokens: {result['result']['usage']['total_tokens']}")
JavaScript / TypeScript
const API_KEY = "az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
const BASE_URL = "https://api.neuronix.io/v1";

async function runInference(prompt: string, model = "llama3-8b") {
  const res = await fetch(`${BASE_URL}/inference`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      prompt,
      max_tokens: 1024,
      temperature: 0.7,
    }),
  });

  if (!res.ok) {
    throw new Error(`Neuronix API error: ${res.status}`);
  }

  return res.json();
}

const result = await runInference(
  "Write a property listing for a 4-bedroom colonial in Westport, CT"
);

console.log(result.result.text);
console.log(`Cost: $${result.cost_usd}`);
console.log(`Tokens: ${result.result.usage.total_tokens}`);
curl
curl -X POST https://api.neuronix.io/v1/inference \
  -H "Authorization: Bearer az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-8b",
    "prompt": "Analyze Q1 2026 multifamily market trends in Phoenix, AZ",
    "max_tokens": 1024,
    "temperature": 0.7
  }'
Streaming Response (curl)
# Add "stream": true to receive Server-Sent Events
curl -X POST https://api.neuronix.io/v1/inference \
  -H "Authorization: Bearer az_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  --no-buffer \
  -d '{
    "model": "llama3-8b",
    "prompt": "Write a detailed market analysis for Austin, TX",
    "max_tokens": 2048,
    "stream": true
  }'

# Each SSE event:
# data: {"delta": "text chunk", "finish_reason": null}
# data: {"delta": "", "finish_reason": "stop", "usage": {...}}
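
To consume this stream outside curl, you can parse the data: lines yourself. A minimal Python sketch matching the event format shown above; the HTTP transport (e.g. requests with stream=True) is left to you:

```python
# Parse SSE "data: {...}" lines into text chunks, stopping at the
# event whose finish_reason is set (per the event format above).
import json
from typing import Optional

def parse_sse_line(line: str) -> Optional[dict]:
    """Return the decoded event for a 'data: ...' line, else None."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

def collect_text(lines) -> str:
    """Concatenate deltas until a terminal event arrives."""
    chunks = []
    for line in lines:
        event = parse_sse_line(line)
        if event is None:
            continue
        chunks.append(event.get("delta", ""))
        if event.get("finish_reason"):
            break
    return "".join(chunks)
```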

Rate Limits

Rate limits are applied per API key. Headers on every response show your current usage.

Tier | Requests / min | Tokens / day | Concurrent
Beta | 60 | 500,000 | 5
Standard | 300 | 5,000,000 | 20
Enterprise | Unlimited | Unlimited | Custom
Rate Limit Response Headers
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 54
X-RateLimit-Reset: 1743089460
X-Tokens-Used-Today: 48200
X-Tokens-Limit-Today: 500000
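
You can read these headers to throttle client-side before the server returns a 429. A small Python sketch; the 10% threshold is an arbitrary choice for illustration:

```python
# Decide when to slow down based on the rate-limit headers above.
import time
from typing import Optional

def should_pause(headers: dict) -> bool:
    """Pause when fewer than 10% of this window's requests remain."""
    limit = int(headers.get("X-RateLimit-Limit", 0))
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    return limit > 0 and remaining < limit * 0.1

def seconds_until_reset(headers: dict, now: Optional[float] = None) -> float:
    """Time until the window resets; X-RateLimit-Reset is a Unix epoch."""
    reset = float(headers.get("X-RateLimit-Reset", 0))
    now = time.time() if now is None else now
    return max(0.0, reset - now)
```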

Error Reference

All errors return a JSON body with a code and message field.

Status | Code | Description
400 | invalid_request | Malformed request body or missing required fields.
401 | unauthorized | Missing or invalid API key.
429 | rate_limit_exceeded | Too many requests. Check rate limit headers.
503 | no_provider_available | No suitable provider is currently online for this request.
500 | internal_error | Unexpected server error. Contact support if it persists.
Error Response Example
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "You have exceeded your rate limit of 60 requests per minute.",
    "retry_after": 14
  }
}
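
A retry wrapper can honor the retry_after field when a 429 comes back. A sketch, not an official client: here `call` stands in for any function that performs the request and returns a (status_code, body) pair.

```python
# Retry on 429, sleeping for the error body's retry_after seconds
# (falling back to exponential backoff when it is absent).
import time

def with_retry(call, max_attempts: int = 3, sleep=time.sleep):
    for attempt in range(max_attempts):
        status, body = call()
        if status != 429:
            return status, body
        wait = (body.get("error") or {}).get("retry_after", 2 ** attempt)
        sleep(wait)
    return status, body  # last 429 after exhausting attempts
```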