GPU Provider Node Setup
The Neuronix Provider Node lets you contribute your GPU to the network and earn money for every task processed. It runs quietly in the background and auto-selects the best AI model for your hardware.
System Requirements
Install via CLI
The fastest way to get started. One command. Requires Node.js 18+ installed on your system.
1. Run the node (no install needed)
This downloads and runs the Neuronix node automatically. No cloning, no setup.
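Assuming the package name `neuronix-node` used elsewhere in this guide, the one-liner is:

```shell
# Downloads and runs the latest node in one step (requires Node.js 18+).
npx neuronix-node
```

`npx` fetches the package from the npm registry on first run, so this step needs an internet connection.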
2. Login with your Neuronix account
The node will prompt you for your email and password on first run. Create an account at neuronix-nu.vercel.app/signup if you haven't already.
3. Done — you're earning
The node auto-detects your hardware, downloads the best AI model for your GPU, and connects to the network. Leave it running and you'll earn for every task processed.
Alternative: Global install
If you want to run it without npx every time:
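A sketch of the global install, assuming the package's binary shares the `neuronix-node` name used elsewhere in this guide:

```shell
# One-time global install...
npm install -g neuronix-node

# ...then run the binary directly from now on.
neuronix-node
```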
Skip the login prompt (optional)
Set environment variables to authenticate automatically:
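The exact variable names are not documented in this guide; the `NEURONIX_EMAIL` / `NEURONIX_PASSWORD` names below are an assumption for illustration:

```shell
# Hypothetical variable names -- substitute the ones the node actually reads.
export NEURONIX_EMAIL="you@example.com"
export NEURONIX_PASSWORD="your-password"
```

With these set before launch, the node can authenticate without an interactive prompt.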
Desktop App (GUI)
Prefer a graphical interface? The Neuronix Desktop App is currently in development and will ship as a downloadable installer. When available, it will feature a one-click start/stop button, a live earnings dashboard, a hardware info display, and system tray support for running in the background.
For now, use the CLI method above — it takes one command and works on Windows, macOS, and Linux.
Configuration
The node stores its configuration at ~/.neuronix/config.json. You can edit this file directly or use environment variables.
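The config schema is not documented here; purely as an illustration, with hypothetical field names drawn from what this guide does mention (email login, model cache location), the file might look like:

```json
{
  "email": "you@example.com",
  "model_dir": "~/.neuronix/models"
}
```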
Supported Models
The node automatically selects the best model your hardware can run. Models are downloaded on first use and cached locally. All models are open-source and free.
Models are stored in ~/.neuronix/models/. To free disk space, delete model files from that directory. They will be re-downloaded when needed.
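A minimal cleanup sketch; removing the whole cache directory is an assumption on my part, but per the note above, anything deleted is re-downloaded when needed:

```shell
# Show how much disk the cached models use (ignore errors if none exist yet)...
du -sh "$HOME/.neuronix/models" 2>/dev/null || true

# ...then delete the model cache to free the space.
rm -rf "$HOME/.neuronix/models"
```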
Troubleshooting
"Registration failed" on startup
Check your internet connection. If the problem persists, verify you have an account at neuronix-nu.vercel.app/signup and that your email/password are correct.
"Model setup failed" error
The model download may have been interrupted. Delete the partial file from ~/.neuronix/models/ and restart the node. It will re-download automatically.
Node shows 'online' but no tasks are coming in
This is normal when the network has more providers than tasks. Tasks are dispatched based on priority and VRAM — higher-end GPUs get tasks first. Leave the node running and tasks will come.
High CPU/GPU usage
This only happens during active task processing (a few seconds at a time). Between tasks, the node is idle. If usage stays high, check that no other process is using your GPU.
How do I update the node?
The node checks for updates on startup. To update manually, run 'npx neuronix-node@latest', or, if globally installed, 'npm update -g neuronix-node'. Your config and models are preserved.
Can I run multiple nodes on one machine?
Not recommended — each node binds to the same GPU. Run one node per machine for best results.
Getting Started
The Neuronix API is available at https://api.neuronix.io/v1. All endpoints use HTTPS and return JSON. Authentication is done with API keys passed in the request header.
To get an API key, contact contact@neuronix.io or join the waitlist. Keys are currently in limited access beta for qualified buyers.
API Keys
Pass your API key as a Bearer token in the Authorization header on every request. Keys are prefixed with az_.
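For example, the header on each request looks like this (the key shown is a placeholder):

```
Authorization: Bearer az_XXXXXXXXXXXX
```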
POST /inference
Submit a task to the Neuronix network. The request is routed to an optimal provider and the result is returned once processing is complete. For long tasks, use the async mode and poll with the returned task ID.
Request Body
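The field names below are illustrative assumptions, not a confirmed schema; this guide only establishes that a task carries an input and an optional async mode:

```json
{
  "input": "Translate this sentence to French.",
  "async": true
}
```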
Response Schema
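Likewise a hedged sketch, not a confirmed schema; the guide only establishes that async submissions return a task ID you can poll:

```json
{
  "task_id": "task_123",
  "status": "completed",
  "output": "..."
}
```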
Code Examples
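A minimal curl sketch against the endpoint above; the body's field names are illustrative assumptions, and the key is a placeholder:

```shell
# Replace az_XXXXXXXXXXXX with your own API key.
curl -X POST https://api.neuronix.io/v1/inference \
  -H "Authorization: Bearer az_XXXXXXXXXXXX" \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello, world", "async": true}'
```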
Rate Limits
Rate limits are applied per API key. Headers on every response show your current usage.
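This guide does not name the headers; the `X-RateLimit-*` names below are a common convention and an assumption here, shown only to illustrate the shape:

```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1735689600
```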
Error Reference
All errors return a JSON body with a code and message field.
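For example, per the `code` and `message` fields described above (the specific code value shown is hypothetical):

```json
{
  "code": "invalid_api_key",
  "message": "The provided API key is not valid."
}
```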