1. Directory Layout
Create a directory called ota-3d-node/ with the following structure:

```text
ota-3d-node/
├── app.py            # FastAPI server
├── models/           # Stores uploaded STL/G-code
├── queue.db          # SQLite print job queue
├── requirements.txt  # Python dependencies
└── sync_jobs.py      # Optional online sync tool
```
✅ 2. requirements.txt

```txt
fastapi
uvicorn
python-multipart
sqlalchemy
requests
```
✅ 3. app.py (local server)
This mimics the api.otawallet.com endpoints:

```python
from fastapi import FastAPI, UploadFile, Form
from fastapi.responses import JSONResponse
from pydantic import BaseModel
import os, shutil, uuid, datetime, sqlite3

app = FastAPI()
os.makedirs("models", exist_ok=True)

conn = sqlite3.connect("queue.db", check_same_thread=False)
conn.execute('''CREATE TABLE IF NOT EXISTS jobs
                (id TEXT PRIMARY KEY, modelCid TEXT, gcodeCid TEXT, createdAt TEXT)''')

class PrintRequest(BaseModel):
    modelCid: str
    gcodeCid: str
    settings: dict
    payment: dict

@app.post("/models/upload")
async def upload(file: UploadFile, name: str = Form(...), description: str = Form(...), tags: str = Form(...)):
    filename = f"models/{uuid.uuid4()}_{file.filename}"
    with open(filename, "wb") as f:
        shutil.copyfileobj(file.file, f)
    # Fake CIDv0-style hash: "Qm" + 44 chars (a single uuid hex is only 32 chars)
    fake_cid = f"Qm{(uuid.uuid4().hex + uuid.uuid4().hex)[:44]}"
    return {"cid": fake_cid, "name": name, "description": description, "tags": tags.split(","), "owner": "local"}

@app.post("/print")
def submit_print(req: PrintRequest):
    job_id = str(uuid.uuid4())
    created = datetime.datetime.now().isoformat()
    conn.execute("INSERT INTO jobs VALUES (?, ?, ?, ?)",
                 (job_id, req.modelCid, req.gcodeCid, created))
    conn.commit()
    return {"jobId": job_id}

@app.get("/status/{job_id}")
def status(job_id: str):
    cur = conn.cursor()
    cur.execute("SELECT * FROM jobs WHERE id = ?", (job_id,))
    row = cur.fetchone()
    if row:
        return {"jobId": job_id, "status": "queued", "createdAt": row[3]}
    return JSONResponse(status_code=404, content={"error": "Not found"})
```
✅ 4. Run it locally

```bash
cd ota-3d-node/
pip install -r requirements.txt
uvicorn app:app --host 0.0.0.0 --port 8000
```

Now it’s live at http://localhost:8000.
✅ 5. Use with Your Python Client
Your original Python client will auto-switch to this endpoint when offline:

```python
API = "http://localhost:8000"
```
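To see what an upload against this local endpoint looks like, here is a sketch that builds the multipart request with `requests` but never opens a connection, so the payload can be inspected without the server running. The filename, field values, and in-memory STL bytes are all illustrative.

```python
import requests

API = "http://localhost:8000"

# Build (but do not send) an upload request matching the /models/upload form fields.
req = requests.Request(
    "POST",
    f"{API}/models/upload",
    files={"file": ("bracket.stl", b"solid bracket\nendsolid", "application/octet-stream")},
    data={"name": "bracket", "description": "Test part", "tags": "bracket,mount"},
)
prepared = req.prepare()

# The prepared request carries the multipart body and boundary header.
print(prepared.headers["Content-Type"])
```

Calling `requests.Session().send(prepared)` would perform the actual upload once the server is up.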
Core Features
- Offline-ready Flask API (ota_3d_api.py) with:
  - /models/upload for STL/G-code uploads
  - /print for job queuing
  - /status/:jobId for job status
- Auto-pinning to IPFS (local + optional remote sync)
- Python SDK (ota3d.py) with:
  - Upload model
  - Submit print
  - Query job status
- Node.js SDK (bridge.js) with:
  - Cura slicer → upload → print flow
- OpenAPI spec (openapi.yaml):
  - Fully documented
  - Ready to import into Swagger, Postman, etc.
- Runs CuraEngine to slice an STL into G-code
- Pins both STL and G-code to IPFS
- Calls API to register the model and submit a print job
Feel free to adapt to Python, Go, or whatever you prefer—this is just one working example.
1. OpenAPI 3.0 Spec (Sample)
```yaml
openapi: 3.0.1
info:
  title: OTA 3D dCloud API
  version: 1.0.0
paths:
  /models/upload:
    post:
      summary: Upload & pin a new STL model
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                file:
                  type: string
                  format: binary
                name:
                  type: string
                description:
                  type: string
                tags:
                  type: array
                  items:
                    type: string
      responses:
        '201':
          description: Model registered
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Model'
  /print:
    post:
      summary: Submit a print job
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                modelCid:
                  type: string
                gcodeCid:
                  type: string
                settings:
                  type: object
                payment:
                  $ref: '#/components/schemas/Payment'
      responses:
        '200':
          description: Job queued
          content:
            application/json:
              schema:
                type: object
                properties:
                  jobId:
                    type: string
components:
  schemas:
    Model:
      type: object
      properties:
        cid:
          type: string
        name:
          type: string
        description:
          type: string
        tags:
          type: array
          items:
            type: string
        owner:
          type: string
        createdAt:
          type: string
          format: date-time
    Payment:
      type: object
      properties:
        token:
          type: string
        amount:
          type: number
```
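As a quick sanity check on the /print request schema above, the same shape can be mirrored with Pydantic models and validated offline. The CID values below are placeholders, not real hashes.

```python
from pydantic import BaseModel

# Mirrors the /print request body and Payment schema from the spec above
class Payment(BaseModel):
    token: str
    amount: float

class PrintRequest(BaseModel):
    modelCid: str
    gcodeCid: str
    settings: dict
    payment: Payment

body = {
    "modelCid": "QmExampleModelCid",   # placeholder CID
    "gcodeCid": "QmExampleGcodeCid",   # placeholder CID
    "settings": {"temperature": 200, "speed": 60},
    "payment": {"token": "OTA", "amount": 5},
}
req = PrintRequest(**body)  # raises ValidationError if the body doesn't match
print(req.payment.token)
```

A body missing a required field (say, `gcodeCid`) fails validation immediately, which is cheaper than discovering the mismatch server-side.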
2. Node.js Bridge Sample
```js
// bridge.js
import { execSync } from 'child_process';
import fs from 'fs';
import path from 'path';
import { create } from 'ipfs-http-client';
import axios from 'axios';

const ipfs = create({ url: 'http://localhost:5001' });
const API_BASE = 'https://api.yourdomain.com';

// 1. Slice STL → G-code via CuraEngine
function sliceWithCura(stlPath, profileJson, outputGcode) {
  execSync([
    'curaengine slice',
    `-j ${profileJson}`,
    `-l ${stlPath}`,
    `-o ${outputGcode}`
  ].join(' '), { stdio: 'inherit' });
}

// 2. Pin file to IPFS, return CID
async function pinToIpfs(filePath) {
  const data = fs.readFileSync(filePath);
  const { cid } = await ipfs.add({ content: data });
  await ipfs.pin.add(cid);
  return cid.toString();
}

// 3. Register model in dCloud API
async function registerModel(cid, name, description, tags) {
  const resp = await axios.post(`${API_BASE}/models/upload`, {
    cid, name, description, tags
  });
  return resp.data;
}

// 4. Submit print job
async function submitPrint(modelCid, gcodeCid, settings, payment) {
  const resp = await axios.post(`${API_BASE}/print`, {
    modelCid, gcodeCid, settings, payment
  });
  return resp.data.jobId;
}

// Full workflow: slice → pin → register → print
async function main() {
  const stl = 'input.stl';
  const profile = 'your-profile.json';
  const gcode = 'output.gcode';

  sliceWithCura(stl, profile, gcode);

  const modelCid = await pinToIpfs(stl);
  const gcodeCid = await pinToIpfs(gcode);

  await registerModel(
    modelCid,
    path.basename(stl),
    'My custom bracket',
    ['bracket', 'mount']
  );

  const jobId = await submitPrint(
    modelCid,
    gcodeCid,
    { temperature: 200, speed: 60 },
    { token: 'OTA', amount: 5 }
  );
  console.log('Print job queued with ID:', jobId);
}

main().catch(console.error);
```
1. Install & run IPFS locally
- Install:

```bash
# Debian/Ubuntu
wget https://dist.ipfs.io/go-ipfs/v0.18.1/go-ipfs_v0.18.1_linux-amd64.tar.gz
tar xzf go-ipfs_v0.18.1_linux-amd64.tar.gz
cd go-ipfs && sudo bash install.sh
```

- Initialize & start the daemon:

```bash
ipfs init
ipfs daemon --offline  # runs without trying to connect to the public DHT
```

By default this listens on 127.0.0.1:5001 (the HTTP API) and 127.0.0.1:4001 (the swarm).
2. Run your API server locally
Package your Node/FastAPI dCloud node as a Docker container or a plain process and run it on localhost. For example, if your server listens on port 3000:

```bash
# in the ota-3d-node/ directory
npm install
npm run start  # or `uvicorn app.main:app --port 3000`
```
3. Point your bridge code at the local endpoints
jsimport IPFS from 'ipfs-http-client';
import axios from 'axios';
// 1. Talk to your local IPFS daemon
const ipfs = IPFS.create({ url: 'http://127.0.0.1:5001' });
// 2. Talk to your local API server instead of the cloud URL
const API_BASE = 'http://127.0.0.1:3000'; // ← changed from https://api.yourdomain.com
// ...rest of your bridge code remains the same...
4. Offline considerations
- IPFS offline mode (ipfs daemon --offline) lets you add & pin files locally; they won’t be discoverable by other peers until you re-enable networking.
- Local LAN sharing: if you have multiple machines on a private network, let them connect via the swarm port (4001) so they exchange blocks in a true mesh.
- API synchronization: when your nodes come back online to the wider Internet, point their IPFS configs at your cluster or pinning-service peers to propagate models.
With that setup, the “bridge” script will work perfectly—adding/pinning the .stl
and G-code to your local IPFS, and calling your local API, all without any external Internet dependence.
1. Add an Offline Queue
Store the job info locally (e.g., in a file or SQLite) when there’s no internet.
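A minimal sketch of such a queue, using a JSON-lines file (the filename and record fields here are illustrative; the SQLite variant is the one app.py already uses):

```python
import json, os, uuid, datetime

QUEUE_FILE = "pending_jobs.jsonl"  # illustrative filename

def queue_job_offline(model_cid, gcode_cid, settings, path=QUEUE_FILE):
    """Append a job record locally so it can be replayed once online."""
    job = {
        "id": str(uuid.uuid4()),
        "modelCid": model_cid,
        "gcodeCid": gcode_cid,
        "settings": settings,
        "createdAt": datetime.datetime.now().isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(job) + "\n")
    return job["id"]

def pending_jobs(path=QUEUE_FILE):
    """Read back every queued job, oldest first."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [json.loads(line) for line in f]
```

Appending one JSON object per line keeps writes atomic enough for a single-process queue and makes the backlog trivially replayable.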
2. Switch to Localhost When Offline
Detect network status and fall back to localhost or a bundled FastAPI server:

```python
import requests

def is_online():
    try:
        requests.get("https://api.otawallet.com/ping", timeout=3)
        return True
    except requests.RequestException:
        return False

API = "https://api.otawallet.com" if is_online() else "http://localhost:8000"
```
3. Bundle a Portable FastAPI Server
Ship a lightweight version of the API running on localhost:8000 (e.g., FastAPI + SQLite). It:
- Saves uploaded STL and G-code to disk.
- Queues print jobs locally.
- Returns a simulated jobId.

Later, when online:
- Jobs are synced to api.otawallet.com.
- Print results/status are uploaded to IPFS.
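A sketch of what the sync_jobs.py tool from the directory layout might look like. It reads the queue.db table created by app.py and replays each job against the remote /print endpoint; a failed POST simply leaves the row queued for the next attempt. The remote URL and deletion-on-success policy are assumptions.

```python
import sqlite3
import requests

REMOTE = "https://api.otawallet.com"

def unsynced_jobs(conn):
    """Return all queued jobs from the local SQLite queue."""
    cur = conn.execute("SELECT id, modelCid, gcodeCid, createdAt FROM jobs")
    cols = ("id", "modelCid", "gcodeCid", "createdAt")
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def sync_jobs(conn, base_url=REMOTE):
    """POST each queued job upstream; drop it locally only on success."""
    synced = 0
    for job in unsynced_jobs(conn):
        try:
            resp = requests.post(f"{base_url}/print", json=job, timeout=5)
            if resp.ok:
                conn.execute("DELETE FROM jobs WHERE id = ?", (job["id"],))
                synced += 1
        except requests.RequestException:
            pass  # still offline; leave the job queued
    conn.commit()
    return synced
```

Run it on a timer (cron, systemd timer) so the backlog drains automatically once connectivity returns.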
🧱 Bonus Offline Enhancements
- Auto-sign modelCid and gcodeCid with a device MAC key.
- Auto-publish IPFS CIDs using the CLI (ipfs add fallback).
- Bundle a local G-code viewer or slicer (e.g., curaengine, prusa-slicer).
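The CID-signing idea above could be sketched with an HMAC keyed off the machine's MAC address. Deriving a key from the MAC is illustrative only (a MAC is guessable, so a real deployment would provision a proper secret); the salt string is also made up.

```python
import hashlib, hmac, uuid

def device_key():
    """Derive a per-device key from the MAC address.
    Illustrative only: MACs are not secret, so treat this as a device
    identifier, not real security."""
    mac = uuid.getnode().to_bytes(6, "big")
    return hashlib.sha256(b"ota-3d-node" + mac).digest()

def sign_cid(cid: str, key: bytes = None) -> str:
    """Return a hex HMAC-SHA256 signature over the CID string."""
    key = key or device_key()
    return hmac.new(key, cid.encode(), hashlib.sha256).hexdigest()

def verify_cid(cid: str, signature: str, key: bytes = None) -> bool:
    """Constant-time check that a signature matches this device's key."""
    return hmac.compare_digest(sign_cid(cid, key), signature)
```

The signature can travel alongside modelCid/gcodeCid in the job record, letting a sync target attribute uploads to a specific node.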