Sovereign AI Platform

Build compliant AI. On Danish soil.

Get your own private AI environment on Denmark's most powerful supercomputer. 27 LLMs, orchestration, compliance — the entire stack ready from day one.

1,528 H100 GPUs · 27 LLMs · 0 data egress · 100% DK

The foundation

Sovereign AI. No compromises.

You get sovereignty, performance and compliance — at the same time. One subscription, the whole stack.

Value 01

Data stays in Denmark

Full stack on Danish soil — from GPU to application.

  • Air-gapped DK data centre.
  • CLOUD Act does not apply.
  • Full audit trail + ISAE 3000 ready.

Value 02

The platform, ready

Orchestration, models, compliance — everything out of the box.

  • 27+ LLMs + dynamic routing.
  • 14 days from signing to production.
  • No CAPEX — subscription only.

Value 03

Your AI. Your control.

Multi-model. Custom deployment. Exit-ready.

  • Switch freely between 27+ open-source LLMs.
  • Deploy your own fine-tuned models.
  • Zero-training policy. Your data never trains a model.

The foundation

Seven layers. One stack. Sovereign by design.

From GPU to application. We provide the foundation. You build the value.

Applications you build on top of the stack — clinical tools, research platforms, analytics dashboards.

You get

  • Multi-tenant API for your apps
  • Authentication and role-based access
  • SDKs for Python, Node, Go
  • Observability: latency, tokens, errors per app
Everyone ships apps; no one ships the foundation beneath them.
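As a sketch of what a call through the multi-tenant API could carry — tenant scoping, auth and a task-type hint for the router. The endpoint, header names and payload fields here are illustrative assumptions, not the platform's documented SDK:

```python
# Hypothetical multi-tenant inference request. Endpoint, header names and
# body fields are illustrative assumptions, not the real API.
import json


def build_inference_request(tenant_id: str, api_key: str, prompt: str,
                            task_type: str = "summarisation") -> dict:
    """Assemble the pieces a multi-tenant API call would carry:
    per-app tenancy, bearer auth, and a routing hint."""
    return {
        "url": "https://api.example.invalid/v1/chat",  # placeholder endpoint
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "X-Tenant-Id": tenant_id,  # logical tenancy per app/client
        },
        "body": json.dumps({
            "prompt": prompt,
            "task_type": task_type,  # model is picked server-side
        }),
    }


req = build_inference_request("clinic-42", "sk-placeholder",
                              "Summarise this discharge note.")
```

The same request shape would be emitted by the Python, Node and Go SDKs; only the client wrapper differs.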

Dynamic model selection, queue management and load balancing. You send one prompt — we pick the right model and GPU.

You get

  • Routing engine with rules per task type
  • Load balancer across GPU nodes
  • Model selector based on latency requirements
  • Workload manager with priority queues
  • Fallback strategies on model failure
AWS/Azure don't include this. You build it yourself.

27+ open-source LLMs integrated and GPU-optimised. Deploy your own fine-tuned models alongside.

You get

  • Llama 3 (8B, 70B), Mistral (7B, 8x22B), Mixtral 8x7B
  • Gemma 2, Qwen 2.5, Phi-3, DeepSeek (opt-in)
  • Custom model deployment via vLLM
  • Zero-training policy: your data never leaves the stack
OpenAI = one model family. Azure = one vendor. Bedrock = vendor lock-in.

Vector databases for RAG, object storage for documents, and an immutable audit ledger for every prompt.

You get

  • Milvus or pgvector — you choose
  • S3-compatible object storage (MinIO)
  • SHA-256 signed audit ledger per call
  • K-anonymity on all logs
No audit trail in public cloud.
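One way a per-call, SHA-256-signed audit ledger can be built is as a hash chain, where each entry's hash covers the previous one, so tampering with any record invalidates everything after it. A minimal sketch — the field names and chaining scheme are assumptions; the text only states "SHA-256 signed audit ledger per call":

```python
# Illustrative hash-chained audit ledger: each entry's SHA-256 covers the
# previous entry's hash, so any tampering breaks the chain downstream.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry


def append_entry(ledger: list, record: dict) -> dict:
    """Append one call record, chained to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    entry = {
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    ledger.append(entry)
    return entry


def verify(ledger: list) -> bool:
    """Recompute every hash; False if any entry was altered."""
    prev = GENESIS
    for e in ledger:
        payload = json.dumps({"prev": prev, "record": e["record"]},
                             sort_keys=True)
        if e["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Immutability here comes from verification, not storage: any consumer of the ledger can detect a rewritten record without trusting the writer.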

All runtime infrastructure pre-installed and maintained. No yum, apt or driver hell.

You get

  • Kubernetes with GPU-aware scheduler
  • NVIDIA CUDA + CuDNN in compatible versions
  • vLLM for high-throughput inference
  • PyTorch + TensorFlow ready to use
Otherwise you install and run it all.

Dedicated compute on NVIDIA DGX H100 nodes. InfiniBand between nodes, no overbooking.

You get

  • Dedicated H100 GPUs (not shared)
  • InfiniBand 400 Gb/s between nodes
  • GPU scheduling per partition
  • Multi-node training support
Public cloud shares GPUs. No dedicated allocation.

Physical hardware in a Danish data centre. Air-gapped from public internet. Green power.

You get

  • 191 NVIDIA DGX H100 nodes on GEFION
  • 1,528 H100 Tensor Core GPUs total
  • Air-gapped — no public internet
  • Renewable energy (green power)
  • ISAE 3000-audited data centre
AWS/Azure have nothing in DK with these specs.

Vs. the competition

The choice is simple.

Sovereignty, security, performance, vendor freedom — at once.

People's Lab vs. AWS Bedrock, Azure OpenAI, Claude API and GEFION direct, on ten criteria:

  • EU / DK jurisdiction
  • Multi-model support
  • Data residency guaranteed
  • Custom model deployment
  • Zero-training policy
  • Audit trail built-in
  • Private deployment
  • Danish hosting
  • Full-stack (not just compute)
  • No vendor lock-in

Audiences

Built for those who cannot compromise.

Consultancies

Consultancies and digital advisors

Pains

  • You build solutions you don't want to own infrastructure for
  • Clients require EU sovereignty
  • Vendor lock-in with AWS/Azure hurts your flexibility

Gains

  • Whitelabel partition for clients
  • Focus on the value layer: advisory and integration
  • Premium positioning as the sovereign alternative

A consultancy runs five client solutions on the same partition. Each client has its own logical tenancy. The consultancy owns the relationship; we deliver the stack.

Regulated

Banking, insurance, pharma, defence, public sector

Pains

  • Cannot use public cloud for regulated data
  • Own data centre costs €7M+ CAPEX
  • No compliance-ready AI platform in EU

Gains

  • DK hosting with audit trail from day one
  • ISAE 3000-ready from the start
  • Skip 12 months of implementation

A Danish bank runs credit-risk models on a private partition. Data never leaves DK. EU AI Act compliance documented in the audit ledger.

Vertical SaaS

EHR vendors, banking software, gov-tech

Pains

  • Customers demand sovereignty; you don't want to run compute
  • OpenAI API drains margins
  • Vendor lock-in hurts your exit

Gains

  • Embed multi-LLM inside your product
  • Stable pricing, predictable margins
  • Multi-model freedom without lock-in

An EHR vendor embeds an AI journal assistant in the product. Their clinic customers see People's Lab as the hosting partner — compliance is resolved.

Multi-model

One partition. 27 models. Zero lock-in.

Each task gets the right model. We route. You build.

The right model per task. No manual selection.

Classification, summarisation, code generation and long RAG queries each have a different optimum: a 7B model for classification, a 70B for reasoning, a custom fine-tune for your domain.

Our routing engine reads the task type and picks the model. You write one prompt. We orchestrate compute, model selection and failover.
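In its simplest form, task-type routing is a rules table with a default and an override for a tenant's own fine-tune. The model names below come from the stack list above; the rules themselves are invented for illustration and are not the platform's actual routing policy:

```python
# Toy rule-based router: map task type to a model from the stack.
# The rules are illustrative, not the platform's real routing policy.
ROUTING_RULES = {
    "classification": "mistral-7b",      # small model: cheap, low latency
    "summarisation": "llama3-8b",
    "reasoning": "llama3-70b",           # large model for multi-step reasoning
    "rag-long-context": "mixtral-8x7b",  # long-context retrieval queries
}

DEFAULT_MODEL = "llama3-8b"


def route(task_type: str, custom_model: str = None) -> str:
    """Pick a model for the task; a tenant's own fine-tune wins if given."""
    if custom_model:
        return custom_model
    return ROUTING_RULES.get(task_type, DEFAULT_MODEL)
```

The production engine layers latency requirements, priority queues and failover on top, but the contract is the same: one prompt in, one model picked.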

Partitions

Pick your partition.

All tiers include full access to the multi-LLM stack. The differentiator is compute capacity and isolation.

Shared

From €2,000/mo

For experiments and PoC

  • Shared GPU capacity
  • Access to all 27 LLMs
  • 10M tokens included
  • Audit trail and DK hosting
  • Community support

Private Cloud

From €33,500/mo

For regulated enterprise

  • Dedicated multi-GPU partition
  • Isolated network and VPN
  • ISAE 3000 audit report included
  • 99.9% SLA
  • Named onboarding engineer
  • Custom DPA and compliance review

Enterprise

Custom

Scalable, multi-node

  • Multi-node DGX allocation
  • Multi-region possible
  • 24/7 on-call support
  • Custom SLA
  • Dedicated solutions team
  • Air-gapped on-prem deployment possible

All prices are indicative placeholders. 12-month contract. Upgrade anytime. Contact us for final pricing.

Configure add-ons

Onboarding

From signature to production in two weeks.

1. Sign: sign the DPA and pick your tier online. Same day.

2. Partition provisioned: we allocate your compute. 24 hours.

3. Credentials: API key and dashboard login arrive via email. Same day.

4. First API call: try all 27 LLMs via our Postman collection. Same day.

5. Production: go live with dedicated onboarding support. 2 weeks.

Security

Data stays in Denmark.

Your prompt travels from your app through a VPN tunnel into the air-gapped DK perimeter. GEFION H100 compute routes it to the right LLM, the response is audit-signed, and it returns through the same VPN tunnel.
Outside the perimeter (USA, Ireland, Frankfurt, the public internet): no outbound connections.
  • GDPR-compliant: documented in every audit line
  • ISAE 3000-audited: independent third-party assurance
  • EU AI Act 2027-ready: audit trail and model governance in place
  • Air-gapped: no connection to the public internet

Questions

Typical from technical decision-makers.

What's the difference between Shared and a dedicated partition?

Shared runs on pooled GPU capacity with fair scheduling, good for PoC and development. Private Cloud gives you a guaranteed, dedicated H100 allocation with a 99.9% SLA and predictable latency, for business-critical production workloads.

Can we deploy our own fine-tuned models?

Yes. We host weights in Hugging Face, GGUF and safetensors formats, served via vLLM. Custom model deployment is included in Private Cloud and above. We validate compatibility before go-live.
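The compatibility check that runs before go-live could, in its simplest form, verify that a model directory actually contains weights in a supported format. A minimal sketch, assuming a flat directory layout and the formats named above — the accepted suffixes and layout are assumptions:

```python
# Minimal pre-go-live check: does the model directory contain weight files
# in a format the serving layer can load? Layout and suffixes are assumed.
from pathlib import Path

SUPPORTED_WEIGHT_SUFFIXES = {".safetensors", ".gguf", ".bin"}


def find_weight_files(model_dir: str) -> list:
    """Return the loadable weight files in model_dir, sorted by name."""
    root = Path(model_dir)
    return sorted(p.name for p in root.iterdir()
                  if p.is_file() and p.suffix in SUPPORTED_WEIGHT_SUFFIXES)
```

An empty result would fail validation before any GPU time is allocated.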

Is this CAPEX or OPEX?

Pure OPEX. Monthly subscription to your partition on a 12-month contract. No hardware investment. Token usage above the included volume is billed as overage, transparently and per model.

What data stays in Denmark?

All of it. Prompts, responses, embeddings, vector indexes, audit logs and model weights sit on GEFION in Denmark. Nothing leaves the perimeter, which is air-gapped from the public internet.

Can we run a trial or PoC first?

Yes. The Shared tier has a free trial with 10M tokens included. You get an API key the same day and can run against all 27 LLMs. A PoC runs on Shared and migrates to Private Cloud without code changes.

What do the DPA and audit reports cover?

Our DPA is baseline GDPR with data-processor roles explicitly defined. The ISAE 3000 Type II report is provided to Private Cloud and Enterprise customers; it covers access controls, logging, change management and sub-processors.

Who has access to our data?

Your team, via API keys. Our SRE team has break-glass access, documented in the audit ledger on every use. No third parties. No US parent. Zero-training policy: your prompts never become training data.

Can you deploy on-prem?

Yes. The Enterprise tier includes on-prem deployment: we deliver the same stack as an appliance or an air-gapped installation in your data centre. Defence and public-sector classified workloads typically run here.

When you're ready

Build AI for healthcare, banking or defence, without moving data out of Denmark.