Microsoft Azure

This guide deploys Contract Lucidity on Azure using managed services. Azure Container Apps provides serverless container hosting with built-in scaling, making it an excellent fit for enterprise deployments -- especially for organizations already invested in the Microsoft ecosystem.

Architecture

Contract Lucidity runs as three Container Apps (frontend, backend API, and Celery worker) in a shared Container Apps environment, backed by Azure Database for PostgreSQL Flexible Server (with pgvector), Azure Cache for Redis, Azure Files for document storage, and Azure Container Registry for images. Key Vault holds secrets and Log Analytics collects logs.

Prerequisites

  • Azure subscription with Contributor access
  • Azure CLI v2.60+ installed
  • Docker installed locally
  • A registered domain name

# Login and set subscription
az login
az account set --subscription "<subscription-id>"

# Register required providers
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights

Step 1: Resource Group

az group create \
--name cl-production \
--location eastus2

Step 2: Azure Database for PostgreSQL Flexible Server

# Create the server
az postgres flexible-server create \
--resource-group cl-production \
--name cl-postgres \
--location eastus2 \
--sku-name Standard_B2s \
--tier Burstable \
--version 16 \
--storage-size 64 \
--admin-user cl_user \
--admin-password '<strong-password>' \
--yes

# Allow Azure services to connect
az postgres flexible-server firewall-rule create \
--resource-group cl-production \
--name cl-postgres \
--rule-name AllowAzureServices \
--start-ip-address 0.0.0.0 \
--end-ip-address 0.0.0.0

# Create the database
az postgres flexible-server db create \
--resource-group cl-production \
--server-name cl-postgres \
--database-name contract_lucidity

# Enable pgvector extension (allowlist it first)
az postgres flexible-server parameter set \
--resource-group cl-production \
--server-name cl-postgres \
--name azure.extensions \
--value vector

# Then connect and enable it
psql "host=cl-postgres.postgres.database.azure.com \
dbname=contract_lucidity \
user=cl_user \
password=<password> \
sslmode=require" \
-c "CREATE EXTENSION IF NOT EXISTS vector;"

pgvector on Azure

Azure Database for PostgreSQL Flexible Server supports pgvector natively (v0.7.0+ as of 2025). You must first add vector to the azure.extensions server parameter allowlist before running CREATE EXTENSION. Azure also supports DiskANN for scalable approximate nearest neighbor search on Flexible Server.
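
As a sketch of how vector search could be used once the extension is enabled (the `document_chunks` table, column names, and 3-dimensional vectors are illustrative assumptions, not Contract Lucidity's actual schema):

```sql
-- Hypothetical schema; real embedding dimensions are typically 768-3072.
CREATE TABLE IF NOT EXISTS document_chunks (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(3)
);

-- HNSW index for approximate nearest-neighbor search (pgvector >= 0.5.0).
CREATE INDEX IF NOT EXISTS idx_chunks_embedding
    ON document_chunks USING hnsw (embedding vector_cosine_ops);

-- Ten nearest chunks to a query embedding, by cosine distance.
SELECT id, content
FROM document_chunks
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'
LIMIT 10;
```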

Step 3: Azure Cache for Redis

az redis create \
--resource-group cl-production \
--name cl-redis \
--location eastus2 \
--sku Standard \
--vm-size c1 \
--minimum-tls-version 1.2
# The non-TLS port (6379) is disabled by default, so no flag is needed.

# Get the connection details
az redis show --resource-group cl-production --name cl-redis \
--query '{hostname:hostName, port:sslPort}' -o table

az redis list-keys --resource-group cl-production --name cl-redis \
--query primaryKey -o tsv

Azure Managed Redis Migration

As of March 2026, Microsoft has announced that Azure Cache for Redis will retire on September 30, 2028, with the Enterprise tiers retiring March 30, 2027. For new deployments, consider Azure Managed Redis for improved performance and long-term support. The connection string format is compatible.
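
The rediss:// URLs used in the container app environment variables below are assembled from the hostname, SSL port, and primary key retrieved above; a minimal sketch with placeholder values:

```shell
# Placeholders: substitute the real values from `az redis show` / `list-keys`.
REDIS_HOST="cl-redis.redis.cache.windows.net"
REDIS_PORT="6380"                      # SSL port; plain 6379 is disabled
REDIS_KEY="<redis-primary-key>"

# Database 0 for the cache/broker, database 1 for Celery results.
REDIS_URL="rediss://:${REDIS_KEY}@${REDIS_HOST}:${REDIS_PORT}/0"
CELERY_RESULT_BACKEND="rediss://:${REDIS_KEY}@${REDIS_HOST}:${REDIS_PORT}/1"
echo "$REDIS_URL"
```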

Step 4: Azure Files for Document Storage

# Create storage account
az storage account create \
--resource-group cl-production \
--name clstorageaccount \
--location eastus2 \
--sku Premium_LRS \
--kind FileStorage

# Create file share
az storage share-rm create \
--resource-group cl-production \
--storage-account clstorageaccount \
--name cl-documents \
--quota 100 \
--enabled-protocols SMB

# Get the storage key
az storage account keys list \
--resource-group cl-production \
--account-name clstorageaccount \
--query '[0].value' -o tsv

Step 5: Azure Container Registry

# Create registry
az acr create \
--resource-group cl-production \
--name clregistry \
--sku Basic \
--admin-enabled true

# Login
az acr login --name clregistry

# Build and push images
cd contract-lucidity

# Backend
docker build -t clregistry.azurecr.io/cl-backend:latest ./backend -f ./backend/Dockerfile
docker push clregistry.azurecr.io/cl-backend:latest

# Worker
docker build -t clregistry.azurecr.io/cl-worker:latest ./backend -f ./backend/Dockerfile.worker
docker push clregistry.azurecr.io/cl-worker:latest

# Frontend
docker build -t clregistry.azurecr.io/cl-frontend:latest ./frontend -f ./frontend/Dockerfile
docker push clregistry.azurecr.io/cl-frontend:latest
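
For production, consider pinning each build to an immutable tag rather than overwriting :latest, so deployments can be audited and rolled back. A sketch using a date-based tag (a git SHA works equally well):

```shell
# Derive an immutable tag and the fully qualified image reference.
ACR="clregistry.azurecr.io"
TAG="$(date +%Y%m%d%H%M)"
BACKEND_IMAGE="${ACR}/cl-backend:${TAG}"
echo "$BACKEND_IMAGE"

# Then build and push with the pinned tag:
# docker build -t "$BACKEND_IMAGE" ./backend -f ./backend/Dockerfile
# docker push "$BACKEND_IMAGE"
```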

Step 6: Key Vault for Secrets

az keyvault create \
--resource-group cl-production \
--name cl-keyvault \
--location eastus2

# Store secrets
az keyvault secret set --vault-name cl-keyvault --name postgres-password --value '<strong-password>'
az keyvault secret set --vault-name cl-keyvault --name jwt-secret --value '<generated-secret>'
az keyvault secret set --vault-name cl-keyvault --name redis-key --value '<redis-primary-key>'
az keyvault secret set --vault-name cl-keyvault --name admin-password --value '<strong-password>'
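
The placeholder secret values above can be generated locally before storing them; a sketch, assuming openssl is installed:

```shell
# Generate high-entropy values for the jwt-secret and admin-password
# placeholders (48 and 24 random bytes, base64-encoded).
JWT_SECRET="$(openssl rand -base64 48)"     # 64-character base64 string
ADMIN_PASSWORD="$(openssl rand -base64 24)" # 32-character base64 string
echo "${#JWT_SECRET} ${#ADMIN_PASSWORD}"
```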

Step 7: Azure Container Apps

Create the Environment

# Create Log Analytics workspace
az monitor log-analytics workspace create \
--resource-group cl-production \
--workspace-name cl-logs \
--location eastus2

LOG_ANALYTICS_ID=$(az monitor log-analytics workspace show \
--resource-group cl-production \
--workspace-name cl-logs \
--query customerId -o tsv)

LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
--resource-group cl-production \
--workspace-name cl-logs \
--query primarySharedKey -o tsv)

# Create Container Apps environment
az containerapp env create \
--resource-group cl-production \
--name cl-environment \
--location eastus2 \
--logs-workspace-id $LOG_ANALYTICS_ID \
--logs-workspace-key $LOG_ANALYTICS_KEY

Configure Azure Files storage mount

STORAGE_KEY=$(az storage account keys list \
--resource-group cl-production \
--account-name clstorageaccount \
--query '[0].value' -o tsv)

az containerapp env storage set \
--resource-group cl-production \
--name cl-environment \
--storage-name clstorage \
--azure-file-account-name clstorageaccount \
--azure-file-account-key $STORAGE_KEY \
--azure-file-share-name cl-documents \
--access-mode ReadWrite

Deploy cl-backend

az containerapp create \
--resource-group cl-production \
--name cl-backend \
--environment cl-environment \
--image clregistry.azurecr.io/cl-backend:latest \
--registry-server clregistry.azurecr.io \
--target-port 8000 \
--ingress internal \
--min-replicas 1 \
--max-replicas 5 \
--cpu 1.0 \
--memory 2.0Gi \
--env-vars \
"APP_ENV=production" \
"LOG_LEVEL=INFO" \
"MAX_UPLOAD_SIZE_MB=100" \
"POSTGRES_USER=cl_user" \
"POSTGRES_PASSWORD=secretref:postgres-password" \
"POSTGRES_DB=contract_lucidity" \
"POSTGRES_HOST=cl-postgres.postgres.database.azure.com" \
"POSTGRES_PORT=5432" \
"REDIS_URL=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/0" \
"CELERY_BROKER_URL=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/0" \
"CELERY_RESULT_BACKEND=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/1" \
"JWT_SECRET_KEY=secretref:jwt-secret" \
"JWT_ALGORITHM=HS256" \
"JWT_ACCESS_TOKEN_EXPIRE_MINUTES=60" \
"JWT_REFRESH_TOKEN_EXPIRE_DAYS=7" \
"STORAGE_PATH=/data/storage" \
"CORS_ORIGINS=https://your-domain.com" \
"FRONTEND_URL=https://your-domain.com" \
"DEFAULT_ADMIN_EMAIL=admin@your-domain.com" \
"DEFAULT_ADMIN_PASSWORD=secretref:admin-password" \
--system-assigned \
--secrets \
"postgres-password=keyvaultref:https://cl-keyvault.vault.azure.net/secrets/postgres-password,identityref:system" \
"jwt-secret=keyvaultref:https://cl-keyvault.vault.azure.net/secrets/jwt-secret,identityref:system" \
"admin-password=keyvaultref:https://cl-keyvault.vault.azure.net/secrets/admin-password,identityref:system"

Key Vault references resolve through the app's system-assigned managed identity; grant that identity read access to secrets in cl-keyvault (the Key Vault Secrets User RBAC role, or a get-secrets access policy) before the revision can pull them.

Deploy cl-worker

az containerapp create \
--resource-group cl-production \
--name cl-worker \
--environment cl-environment \
--image clregistry.azurecr.io/cl-worker:latest \
--registry-server clregistry.azurecr.io \
--min-replicas 1 \
--max-replicas 3 \
--cpu 2.0 \
--memory 4.0Gi \
--env-vars \
"APP_ENV=production" \
"LOG_LEVEL=INFO" \
"CELERY_CONCURRENCY=4" \
"POSTGRES_USER=cl_user" \
"POSTGRES_PASSWORD=secretref:postgres-password" \
"POSTGRES_DB=contract_lucidity" \
"POSTGRES_HOST=cl-postgres.postgres.database.azure.com" \
"POSTGRES_PORT=5432" \
"REDIS_URL=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/0" \
"CELERY_BROKER_URL=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/0" \
"CELERY_RESULT_BACKEND=rediss://:<redis-key>@cl-redis.redis.cache.windows.net:6380/1" \
"STORAGE_PATH=/data/storage" \
--system-assigned \
--secrets \
"postgres-password=keyvaultref:https://cl-keyvault.vault.azure.net/secrets/postgres-password,identityref:system"

Deploy cl-frontend

BACKEND_FQDN=$(az containerapp show \
--resource-group cl-production \
--name cl-backend \
--query properties.configuration.ingress.fqdn -o tsv)

az containerapp create \
--resource-group cl-production \
--name cl-frontend \
--environment cl-environment \
--image clregistry.azurecr.io/cl-frontend:latest \
--registry-server clregistry.azurecr.io \
--target-port 3000 \
--ingress external \
--min-replicas 1 \
--max-replicas 10 \
--cpu 1.0 \
--memory 2.0Gi \
--env-vars \
"BACKEND_INTERNAL_URL=https://$BACKEND_FQDN" \
"NEXT_PUBLIC_FRONTEND_URL=https://your-domain.com"

Mount Azure Files to Backend and Worker

# Volume mounts via Azure Files cannot be added through flat CLI flags such as
# --set-env-vars; they must be declared in the app's YAML spec.
# Export the current spec, modify it, and re-apply:
az containerapp show --resource-group cl-production --name cl-backend -o yaml > cl-backend.yaml

Edit cl-backend.yaml to add the volume mount under template:

template:
  volumes:
    - name: cl-storage
      storageName: clstorage
      storageType: AzureFile
  containers:
    - name: cl-backend
      volumeMounts:
        - volumeName: cl-storage
          mountPath: /data/storage

Apply for both backend and worker:

az containerapp update --resource-group cl-production --name cl-backend --yaml cl-backend.yaml
# Repeat for cl-worker with the same volume configuration
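
The cl-worker spec gets the same volume block, with only the container name differing; a sketch of the excerpt:

```yaml
# cl-worker.yaml excerpt (export it with `az containerapp show ... -o yaml` first)
template:
  volumes:
    - name: cl-storage
      storageName: clstorage
      storageType: AzureFile
  containers:
    - name: cl-worker
      volumeMounts:
        - volumeName: cl-storage
          mountPath: /data/storage
```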

Step 8: Custom Domain and SSL

# Add custom domain to frontend
az containerapp hostname add \
--resource-group cl-production \
--name cl-frontend \
--hostname your-domain.com

# Bind managed certificate
az containerapp hostname bind \
--resource-group cl-production \
--name cl-frontend \
--hostname your-domain.com \
--environment cl-environment \
--validation-method CNAME

Configure your DNS provider:

  • CNAME: your-domain.com -> the frontend's FQDN (shown by az containerapp show; apex domains need an A or ALIAS record instead of a CNAME)
  • TXT: asuid.your-domain.com -> the environment's custom domain verification ID (az containerapp env show --query properties.customDomainConfiguration.customDomainVerificationId)
  • Or use Azure Front Door for enterprise-grade traffic management and WAF

Cost Estimate

Estimated monthly costs (as of March 2026) for a production deployment in East US 2:

Service                       Specification                   Estimated Monthly Cost
Container Apps (frontend)     1-5 replicas, 1 vCPU / 2 Gi     ~$60
Container Apps (backend)      1-5 replicas, 1 vCPU / 2 Gi     ~$60
Container Apps (worker)       1-3 replicas, 2 vCPU / 4 Gi     ~$80
PostgreSQL Flexible Server    Standard_B2s, 64 GB             ~$65
Azure Cache for Redis         Standard C1 (1 GB)              ~$80
Azure Files                   Premium, 100 GB                 ~$16
Container Registry            Basic                           ~$5
Key Vault                     Standard                        ~$1
Log Analytics                 Moderate ingestion              ~$10
Total                                                         ~$377/month

Cost Optimization
  • Consumption plan pricing means you only pay when containers are active. Scaling to zero during off-hours can significantly reduce costs.
  • Azure Reservations on PostgreSQL Flexible Server save up to 65% with a 3-year commitment.
  • Azure Hybrid Benefit applies if you have existing Windows Server or SQL Server licenses.
  • Combine with Azure Front Door Standard tier for CDN and WAF at ~$35/month.
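
Scale-to-zero and HTTP scaling are configured in the app's template; a sketch of the relevant excerpt for cl-frontend (the concurrency threshold is an illustrative choice):

```yaml
template:
  scale:
    minReplicas: 0           # allow scale-to-zero when idle
    maxReplicas: 10
    rules:
      - name: http-rule
        http:
          metadata:
            concurrentRequests: "50"   # add replicas above ~50 concurrent requests
```

Note that scaling to zero adds cold-start latency on the first request after an idle period, so it suits staging environments better than latency-sensitive production frontends.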

Verification

# Check container app status
az containerapp show --resource-group cl-production --name cl-frontend \
--query '{status:properties.runningStatus, url:properties.configuration.ingress.fqdn}' -o table

az containerapp show --resource-group cl-production --name cl-backend \
--query '{status:properties.runningStatus}' -o table

az containerapp show --resource-group cl-production --name cl-worker \
--query '{status:properties.runningStatus}' -o table

# View logs
az containerapp logs show --resource-group cl-production --name cl-backend --tail 50

# Test the application
curl -I https://your-domain.com

AKS Alternative

For larger deployments (50+ concurrent users, strict compliance requirements), consider Azure Kubernetes Service (AKS) instead of Container Apps. AKS provides:

  • Full Kubernetes control plane
  • Node pools with GPU support (for self-hosted AI models)
  • Network policies and pod security
  • Helm chart deployment
  • Tighter VNet integration

The trade-off is significantly higher operational complexity. Container Apps is recommended for most Contract Lucidity deployments.

Pricing Disclaimer

Verify current pricing and service availability at azure.microsoft.com/pricing. Azure pricing changes frequently and may vary by region.