AI on your terms
Building language models from scratch on consumer hardware. Pre-training, research, and open-source AI tools.
Genesis: Pre-Training from Scratch
Building in the open on consumer hardware.
What We Do
From pre-training models from scratch to production deployment.
Pre-Training from Scratch
Training language models from scratch on consumer GPUs. Custom architecture, custom tokenizer, and a 60B-token multilingual dataset - no datacenter required.
Model Fine-Tuning
Custom fine-tuning of open-weight models for your specific use case, brand voice, and domain expertise.
Safety Evaluation
Comprehensive model evaluation aligned with frontier lab methodologies. Red team testing and benchmark analysis.
Application Development
Custom web and mobile applications with AI integration. React, TypeScript, Node.js, and native iOS.
Managed IT Services
Full-stack infrastructure management including cloud hosting, Odoo deployment, DNS & email management, and ongoing IT support. GDPR compliant with EU-based hosting.
AI Agents & Orchestration
Design and deployment of AI agent systems that work alongside your team. Multi-model orchestration, local deployment, and custom integrations.
Featured Projects
From pre-training language models to privacy-respecting AI tools.

Latest from the Blog
Technical deep-dives from the Genesis training run.
Genesis 1B, Run 2: 3x Throughput, Same Hardware
Redesigning Genesis 1B from 20 to 32 layers. Same param count, same GPUs, 3x training throughput.
Run 1 + 2 - Genesis 1B: Training Progress
Model specs, dataset, and training infrastructure across both runs. Includes a live HuggingFace playground.
Run 1 - The Optimizer State Bug: A Silent Failure in DCP Resume
How a silent AdamW optimizer-state bug during Run 1 produced a false recovery on poisoned weights.
Run 1 - Fixing FSDP Checkpoint Deadlocks on 2x RTX 4090
How DCP sharded checkpoints and CPU-offload resume fixed deadlocks on consumer GPUs without NVLink.
The Genesis Manifesto
Data sovereignty, constitutional alignment, and why the future of AI is local, private, and personality-first.
Mapping the Mind of Qwen 3.5 9B
A sparse autoencoder for mechanistic interpretability: zero dead features, 16,384 dimensions.
Let's build together
Tell us about your project and we'll get back to you within 24 hours.
Or reach out directly