# MetaQuantum Platform Architecture & Delivery Plan
Last updated: {{DATE}}
## Execution Status
## 1. Vision Overview
- Objective: Evolve the MetaQuantum network from a secure identity layer into a production-ready infrastructure platform comparable to Arbitrum, Alchemy, AltLayer, Ankr, Caldera, Conduit, Gelato, and Zeeve.
- Guiding Principles:
  - Unified cryptographic identity across every surface.
  - Deterministic Layer2 execution with auditable data availability.
  - Developer-first experience: documented APIs, SDKs, usage insights, and responsive support.
  - Operational excellence: observability, automation, and repeatable delivery pipelines.
## 2. Service Fabric Overview
Status: Ownership across all domains has been defined and the majority of core services are live. Layer2 now submits batches on-chain, persists DA payloads, and tracks forced inclusion queues, while the developer API key lifecycle and telemetry endpoints are exposed. Remaining gaps: enforce signature verification, finalize documentation, and wire dashboards/alerts to the new metrics.
| Domain | Major Components | Responsibilities | Key Owners |
|---|---|---|---|
| Identity & Access | `session.rs`, `quantum_security.rs`, Quantum Keys store | PQC registration, multi-device trust, signature validation | Identity squad |
| Layer2 Execution | `layer2.rs`, `smart_user_contract.rs`, batch store (`data/availability`) | Sequencing, proof aggregation, forced inclusion queue, WASM/EVM execution | Layer2 squad |
| Financial Systems | `mqusdt_ledger.rs`, `mqpy_*`, Stripe integration | MQPY ledger, staking, fiat on/off ramps, cashback | Finance squad |
| Messaging & Communication | `qcom_*`, `qmail.rs`, `notification_engine.rs` | Real-time comms, email bridge, notification streams | Comms squad |
| Gateway & APIs | `service.rs`, `api_gateway_routes.rs`, `api.rs` | HTTP/TLS server, REST/RPC endpoints, reverse proxying | Platform squad |
| Observability & Ops | `metrics.rs`, `telemetry.rs`, `/metrics/performance`, tracing hooks | Telemetry export, runtime checks, admin endpoints | SRE |
### Integration Map
- Identity → Layer2: newly registered users receive deterministic smart contracts. `SessionManager::register_user` calls `CONTRACT_REGISTRY` and seeds WASM artefacts.
- Layer2 → Gateway: sequencer updates, forced inclusion queue state, and batch metadata are exposed via `service.rs` (`/layer2/status`, `/layer2/metrics`, `/layer2/batches`, `/layer2/data/{hash}`) for UI/API consumption.
- Layer2 ↔ Financial: MQPY ledger operations are invoked from smart contracts (`apply_user_smart_contract`); rewards and burns feed into Layer2 state deltas.
- Gateway → Frontend Consoles: static consoles (discover/devhub/consensus/contracts) consume the shared dashboard bundle (`dashboard.bundle.js`) plus the new developer usage APIs to render API key metrics and Layer2 health.
- Observability → Ops: the Prometheus registry in `telemetry.rs` exports Layer2/bridge counters and health snapshots; dashboards/Alertmanager wiring remains pending, and logs are still slated to move to structured JSON. Health checks are consumed by CI/CD smoke tests.
### External Integrations
- Stripe (fiat onboarding) — webhook handling, signature verification (`stripe_webhook`).
- IPFS/Hypercore/Tor — data transport and content storage for identity proofs.
- Planned: Ethereum L1 via `ethers-rs` for rollup commit & forced inclusion transactions.
### Recent Platform Updates
- Layer2 batches (`src/layer2.rs`): commits now submit to L1 when sequencer secrets are present, persist data-availability snapshots (`DATA_AVAILABILITY_DIR`), and queue forced inclusion requests with ready timestamps.
- Developer + Layer2 APIs (`src/service.rs`): an Actix scope exposes `/layer2/status`, `/layer2/metrics`, `/layer2/batches`, `/layer2/data/{hash}`, plus the `/developer/api-keys` and `/developer/usage` endpoints for dashboards and tooling.
- Persistent API keys (`src/accounts_db.rs`): user records store API key metadata (`ApiKeyMetadata`) for issuance, revocation, and the usage tracking used by the new endpoints.
- Secret bootstrap (`src/secret_store.rs`): `SecretManager` now loads sequencer RPC/private key/inbox data, enabling on-chain batch submission, with helpful warnings when unset.
- Telemetry registry (`src/telemetry.rs`): Prometheus counters/gauges track Layer2 commits, forced inclusion requests, bridge health, and health snapshots for `/metrics` exposure (a registration sketch follows this list).
- Developer consoles (`static/js/dashboard.bundle.js`): the UI still needs to surface the new API key metrics and Layer2 telemetry cards in the usage view (in progress).
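To make the telemetry item concrete, here is a minimal sketch of registering and rendering a Prometheus counter with the `prometheus` crate; the registry and counter names are hypothetical stand-ins for whatever `src/telemetry.rs` actually defines.

```rust
use once_cell::sync::Lazy;
use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

// Hypothetical names; the real identifiers live in src/telemetry.rs.
static REGISTRY: Lazy<Registry> = Lazy::new(Registry::new);
static LAYER2_COMMITS_TOTAL: Lazy<IntCounter> = Lazy::new(|| {
    let c = IntCounter::new("layer2_commits_total", "Layer2 batches committed to L1").unwrap();
    REGISTRY.register(Box::new(c.clone())).unwrap();
    c
});

/// Render the registry in Prometheus text format for the /metrics endpoint.
pub fn render_metrics() -> String {
    let mut buf = Vec::new();
    TextEncoder::new().encode(&REGISTRY.gather(), &mut buf).unwrap();
    String::from_utf8(buf).unwrap()
}

fn main() {
    LAYER2_COMMITS_TOTAL.inc(); // e.g., bump on each successful batch commit
    println!("{}", render_metrics());
}
```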
| Endpoint | Method(s) | Description | Backing Module(s) |
|---|---|---|---|
| `/layer2/status` | GET | Returns `Layer2StatusSnapshot` with queue, batch, and L1 linkage details. | `service.rs`, `layer2.rs` |
| `/layer2/metrics` | GET | Provides `Layer2MonitoringSnapshot` for dashboards. | `service.rs`, `layer2.rs`, `telemetry.rs` |
| `/layer2/batches` | GET | Lists batch metadata with explorer URLs and proof availability. | `service.rs`, `layer2.rs` |
| `/layer2/data/{hash}` | GET | Streams persisted data-availability payloads by hash. | `service.rs`, `layer2.rs` |
| `/layer2/forced_inclusion` | POST | Enqueues a forced inclusion payload, relaying to `SequencerInbox` when configured. | `service.rs`, `layer2.rs`, `secret_store.rs` |
| `/developer/api-keys` | GET / POST | Lists or creates user-scoped API keys with labels. | `service.rs`, `accounts_db.rs` |
| `/developer/api-keys/{key_id}` | DELETE | Revokes an existing API key. | `service.rs`, `accounts_db.rs` |
| `/developer/usage` | GET | Summarizes API key usage counts and embeds Layer2 metrics for the UI. | `service.rs`, `accounts_db.rs`, `layer2.rs` |
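To exercise these routes end to end, here is a hedged client sketch using `reqwest` and `tokio`; the `label` request field, the absence of auth headers, and the port (borrowed from the `/metrics/performance` example later in these docs) are assumptions to verify against `service.rs`.

```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let base = "http://localhost:8787"; // port assumed; match your node config

    // Create a labelled API key (POST /developer/api-keys).
    let created: serde_json::Value = client
        .post(format!("{base}/developer/api-keys"))
        .json(&json!({ "label": "ci-dashboard" })) // field name assumed
        .send()
        .await?
        .json()
        .await?;
    println!("created: {created}");

    // Pull the usage summary that embeds Layer2 metrics (GET /developer/usage).
    let usage: serde_json::Value = client
        .get(format!("{base}/developer/usage"))
        .send()
        .await?
        .json()
        .await?;
    println!("usage: {usage}");
    Ok(())
}
```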
## 3. Success Metrics
Status: Baseline targets are defined and Prometheus metrics are emitting; the remaining gap is wiring Grafana dashboards and scrape configs (Prometheus/Alertmanager) to surface them.
| Track | Metric | Target (Pilot) | Target (GA) |
| --- | --- | --- | --- |
| Service Availability | RPC uptime | ≥ 99.5% | ≥ 99.9% |
| Sequencer Performance | Batch finality SLA | ≤ 2 min average | ≤ 60 sec average |
| Data Availability | Batch DA retrieval success | ≥ 99% | ≥ 99.9% |
| Developer Adoption | Active API keys (monthly) | 25 | 150+ |
| Developer Experience | Time-to-first successful tx (guided flow) | ≤ 30 min | ≤ 10 min |
| Security | PQ signature validation coverage | 100% of new sessions | 100% of all sessions |
| CI/CD Health | Pipeline success rate | ≥ 90% | ≥ 95% |
Telemetry Note: Metrics already flow from `telemetry.rs` across Layer2, bridge, and developer usage. The next step is publishing Grafana dashboards, alerts, and per-team ownership around the emitted series.
## 4. Layer2 + Developer Portal Sprint (2 Weeks)
Status: Sprint backlog drafted; execution partially underway (Layer2 endpoints + telemetry live, signature enforcement and UI wiring still pending).
### Sprint Goals
- Stabilize Layer2 execution with authenticated APIs.
- Ship Developer Portal prototype integrated with live metrics and batch explorer feeds.
### Workstreams & Tasks
#### A. Layer2 Stabilization
- Harden batch ingestion & commit flow:
  - [x] Implement signature bypass guard (`ALLOW_INSECURE_LAYER2`) with clear error messaging (see the sketch after this workstream).
  - [x] Persist forced inclusion requests with ready timestamps and expose `/layer2/forced_inclusion`; integration tests still pending.
  - [x] Persist DA payload snapshots (keccak + JSON) and expose `/layer2/data/{hash}`.
  - [ ] Define retention/rotation policy for DA files and add checksum validation jobs.
- RPC Enhancements:
  - [x] Ship `/layer2/status`, `/layer2/metrics`, and `/layer2/batches` endpoints (JSON + explorer metadata).
  - [ ] Add authentication middleware (API keys) and per-route rate limiting.
- Reliability:
  - [ ] Bench tests for WASM execution (gauge average latency, memory usage).
  - [x] Add Prometheus metrics: batch queue length, DA write failure count, proof lag.
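For reference, the bypass guard from the first checklist item could be structured like this sketch; only `ALLOW_INSECURE_LAYER2` and the idea of a signature check come from the plan, while every other name (error type, function names) is illustrative.

```rust
use std::env;

/// Illustrative error type; layer2.rs defines its own.
#[derive(Debug)]
pub enum BatchError {
    InvalidSignature(String),
}

/// Env-gated signature check. A sketch, not the actual implementation.
pub fn check_batch_signature(payload: &[u8], signature: &[u8]) -> Result<(), BatchError> {
    let bypass = env::var("ALLOW_INSECURE_LAYER2")
        .map(|v| v == "1" || v.eq_ignore_ascii_case("true"))
        .unwrap_or(false);
    if bypass {
        eprintln!(
            "WARNING: ALLOW_INSECURE_LAYER2 is set; skipping aggregator \
             signature verification. Never enable this outside dev/staging."
        );
        return Ok(());
    }
    if !verify_signature(payload, signature) {
        return Err(BatchError::InvalidSignature(
            "aggregator signature rejected; set ALLOW_INSECURE_LAYER2=1 only in non-prod".into(),
        ));
    }
    Ok(())
}

// Stand-in for the real verifier (Layer2State::verify_signature).
fn verify_signature(_payload: &[u8], _signature: &[u8]) -> bool {
    false
}
```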
#### B. Developer Portal Prototype
- UI:
  - [x] Extend `dashboard.bundle.js` with tab navigation & copy helpers.
  - [ ] Wire the new API usage and Layer2 telemetry cards into the developer console.
- API Adapter Layer:
  - [x] Expose `/developer/api-keys` (list/create/delete) and `/developer/usage` endpoints in `service.rs`.
  - [ ] Hook API key issuance into quota + rotation policy (post-auth middleware).
  - [ ] Implement anonymized logging for key operations (to feed usage dashboards).
- Documentation:
  - [ ] Auto-generate API reference (OpenAPI) and serve via Developer Portal.
  - [ ] Publish onboarding guide (registration → contract execution → analytics).
- Testing:
  - [ ] Playwright smoke tests covering sheet open, tab navigation, API fetch.
### Deliverables
- Layer2 API spec & Postman collection.
- Developer Portal prototype hosted in the `/discover.html` sheet (integrated).
- Sprint summary covering metrics, issues, and next steps.
## 5. CI/CD & Staging Strategy
Status: Automated via `.github/workflows/ci.yml` — sequential jobs lint, test, build, and run the staging bundler (`scripts/deploy_staging.sh`).
### Pipeline Stages
- Lint & Format: `cargo fmt --check`, `cargo clippy -- -D warnings`, frontend lint (ESLint).
- Unit Tests: `cargo test --all`, JS unit tests (Jest/Vitest where applicable).
- Integration Tests:
  - Spin up an ephemeral environment (Docker Compose or a cargo test harness) for Layer2 & API.
  - Run e2e scenarios: user registration → contract execution → Layer2 batch retrieval.
- Security Gates:
  - `cargo audit`, `npm audit --production`.
  - Optional: Trivy scan for container images.
- Build & Artifact:
  - Produce Rust binary tarballs, WASM bundles, static assets.
  - Attach version metadata (git SHA, build time).
- Deploy:
  - Staging: handled by the CI workflow (`staging` job), which packages artifacts through `scripts/deploy_staging.sh` and publishes them as staging bundles.
  - Production: manual approval, blue/green or canary rollout with health checks.
### Environments
| Env | Purpose | Data | Deployment cadence |
| --- | --- | --- | --- |
| Dev | Local development | Mocked | On-demand |
| Staging (Preview) | Integration & QA | Synthetic + scrubbed | On every merge to main |
| Production | Live traffic | Real | Manual promotion |
### Staging Features
- Simulated Layer2 chain with deterministic seeds.
- Replayable fixtures for Layer2 batches and ledger events.
- Toggle to enable API key authentication and throttling.
- Observability parity: metrics, logs, and alerts configured identically to prod.
- CI job uploads staging bundles under `staging/` to simulate deployment hand-off.
### Tooling & Infrastructure
- CI Engine: GitHub Actions/GitLab CI with caching for Cargo & npm.
- Secrets Management: GitHub OIDC + HashiCorp Vault (or AWS Secrets Manager).
- Container Registry: GHCR with signed images (cosign).
- Infrastructure as Code: Terraform or Pulumi modules for staging/prod resources.
- Monitoring: Prometheus + Grafana dashboards, Alertmanager/PagerDuty integration.
### Rollback & Incident Response
- Automated rollback trigger on failed health checks.
- Runbook outlining how to revert to the previous deployment, invalidate caches, and notify stakeholders.
- Post-incident review template stored in `/docs/ops/runbooks`.
## 6. Immediate Next Steps
Status: Architecture document sign-off pending; subsequent actions depend on the consolidated execution status table.
- Sign-off on the architecture document with Domain Leads (Identity, Layer2, Platform, SRE).
- Allocate sprint teams and create Jira/Linear tickets per task above.
- Bootstrap the CI/CD pipeline: create pipeline config, integrate with the repo, set up secret management.
- Wire the developer portal UI to the `/developer/usage` and `/layer2/*` endpoints, including API key issuance & revocation flows.
- Build the staging cluster: provision infrastructure, configure environment variables, seed baseline data.
- Schedule check-ins: daily standup for the sprint, weekly platform sync, dedicated SRE handoff session.
## 7. Risks & Mitigations
Status: Risk register recorded; mitigation tasks require owner assignment and tracking on the delivery board.
| Risk | Impact | Mitigation |
|---|---|---|
| Layer2 proof generation lag | Delayed finality & developer trust | Use new Layer2 metrics to detect lag, auto-scale proof workers, and publish SLA dashboards. |
| Aggregator signature enforcement pending | Potentially invalid state transitions reaching production | Implement signature verification in `Layer2State::verify_signature`, add integration tests, and gate `ALLOW_INSECURE_LAYER2` to non-prod. |
| Telemetry dashboards incomplete | Slow incident detection despite emitted metrics | Stand up Grafana/Alertmanager, map metrics owners, and define alert thresholds per squad. |
| API abuse / DDoS | Outage, developer dissatisfaction | Enforce API key quotas + rate limits, extend gateway ML heuristics, and surface abuse dashboards. |
| CI pipeline flakiness | Slow releases | Use deterministic test data, cache dependencies, parallelize jobs. |
| Staging drift from production | Ineffective testing | Automate infrastructure provisioning, config-as-code, daily parity checks. |
| Documentation rot | Slow onboarding | Tie docs generation and validation into CI, assign doc owners. |
## 8. References
- `docs/layer1_integration_plan.md`
- `src/layer2.rs`, `src/service.rs`, `src/accounts_db.rs`, `src/secret_store.rs`, `src/telemetry.rs`, `src/session.rs`
- `static/discover.html`, `static/js/dashboard.bundle.js`, `static/js/home_portal.js`
---
Prepared by: MetaQuantum Platform Team
---
## 9. Module Inventory (Living)
> Scope: This section is maintained as a living inventory and updated each release. Starting with the foundational groups helps track ownership and work streams before covering the remaining modules.
### 9.1 Core Processing & Protocols
Status: Ownership identified for every module; consolidation work scheduled for Hypercore and MQN bridge components.
| Module | Path | Responsibility | Current Status | Owner / Notes |
|---|---|---|---|---|
| `core` | `src/core.rs` | Core request/task orchestration inside the node | ✅ Stable | Platform — Defines message flow across subsystems. |
| `hypercore` | `src/hypercore.rs` | Hypercore integration for identity/data storage | 🟡 Needs tidy-up | Identity — Align write/read patterns with layer0. |
| `kademlia` | `src/kademlia.rs` | Peer discovery via Kademlia DHT | 🟡 In progress | Networking — Expose metrics and tie into monitoring. |
| `mqn_bridge` | `src/mqn_bridge.rs` | MQ Network bridging to external networks | ⚠️ Under maintenance | Bridge squad — Document message flows and retry logic. |
| `mqp` | `src/mqp/mod.rs` | MQP core message processor | ✅ Stable | Platform — Integrated with `qcom_mqp`. |
| `mqusdt_ledger` | `src/mqusdt_ledger.rs` | mqUSDT ledger reconciliation | ✅ Stable | Finance — Track ledger activity in developer dashboards. |
| `network_integration` | `src/network_integration.rs` | Glue between services and networking stack | 🟡 Needs organization | Platform — Review dependencies and document entry points. |
| `node_key` | `src/node_key.rs` | Node key generation and management | ✅ Complete | Identity — Ensure secure storage practices. |
| `quantum_gateway` | `src/quantum_gateway.rs` | Gateway between quantum apps and core services | 🟡 Consolidating | Quantum — Link outputs into discover/devhub consoles. |
| `telemetry` | `src/telemetry.rs` | Prometheus registry + health snapshots for bridge/Layer2 | 🟡 Instrumented | Platform — Wire dashboards/Alertmanager to emitted metrics. |
| `route` | `src/route.rs` | libp2p Swarm behaviour (`MyBehaviour`) | ✅ Stable | Networking — Surface events inside SRE dashboards. |
### 9.2 Layered Architecture
Status: Layer0 is fully wired; Layer1 scaffolding exists; Layer2 needs the consolidation tasks tracked in the sprint.
| Module | Path | Responsibility | Current Status | Owner / Notes |
|---|---|---|---|---|
| `layer0` | `src/layer0.rs` | Verifiable identity storage layer | ✅ Stable | Identity — Integrated with SessionManager and Hypercore. |
| `layer1` | `src/layer1.rs` | Hooks for future L1 contract integration | 🟡 Scaffolding | Layer2/Bridge — Awaiting L1 deployment and `ethers-rs` setup. |
| `layer2` | `src/layer2.rs` | Rollup engine, batch management, proof aggregation | ⚠️ Consolidation required | Layer2 — Align APIs, add monitoring, document forced inclusion workflows. |
The remaining module groups (Ledger & ZK, Data Access, Sessions & Security, etc.) will be documented in upcoming revisions. Use these tables as the authoritative reference when triaging work or assigning owners.
### 9.3 Developer & Surface APIs
Status: Public-facing endpoints now expose Layer2 status and the developer API key lifecycle; quotas, dashboards, and UI integration are still on the roadmap.
| Module | Path | Responsibility | Current Status | Owner / Notes |
|---|---|---|---|---|
| `service` | `src/service.rs` | Actix server wiring Layer2 + developer endpoints | 🟡 In progress | Platform — Add rate limiting/quota enforcement and document new routes. |
| `accounts_db` | `src/accounts_db.rs` | Persistent user store & API key metadata | 🟡 Updated | Platform — Ensure rotation/expiry policies land with quota work. |
| `secret_store` | `src/secret_store.rs` | Bootstrap sequencer/bridge secrets for runtime services | 🟡 Ready | SRE — Provide prod/staging secret configs and monitor warnings for missing keys. |
| `dashboard.bundle.js` | `static/js/dashboard.bundle.js` | Developer console bundle consuming new telemetry | 🟡 UI wiring | Frontend — Hook API usage & Layer2 cards into the refreshed dashboard view. |
# MetaQuantum Consensus Wiring (Draft)
## PubSub Topics
- `mq_consensus_proposal`: broadcasts full block proposals as JSON-encoded `ConsensusMessage::Proposal`.
- `mq_consensus_vote`: carries quorum votes with the validator id, block hash, and target height.
## Runtime Hooks (current)
- `run_swarm` subscribes to the consensus topics and forwards every received message to `ConsensusState::handle_message`.
- `ConsensusState::handle_message` currently only logs the event and stores proposed blocks in Hypercore for monitoring purposes.
- The `publish_consensus_message` function is available for sending messages over Gossipsub; it is currently invoked when the `VALIDATOR_ID` environment variable matches the expected proposer, in which case the node broadcasts a block signed with the keys from `config/validator_keys.toml`.
## Status
- ✅ Built a proposer schedule that links `expected_proposer` to the local node identity via a rotating store of stakes and upcoming heights.
- ✅ Enforced signatures on both proposals and votes, rejecting unsigned or mismatched messages.
- ✅ Managed consensus votes through `register_vote`, with a queue for late messages instead of the old direct path.
- ✅ Used `last_local_proposal` to re-seed proposals after a restart and trigger re-vote attempts when needed.
- ✅ Added three-node integration tests verifying proposal broadcast, vote collection, and quorum.
> Status Note: this document is preliminary, describing the current wiring; it will be updated as the consensus layer is completed.
## Current Message Flow
| Stage | Message | Sender | Recipients | Current behaviour |
|---|---|---|---|---|
| 1 | `ConsensusMessage::Proposal` | The expected proposer | All nodes via `mq_consensus_proposal` | Stores the block in Hypercore and updates `last_local_proposal`. |
| 2 | `ConsensusMessage::Vote` | Every node that verifies the proposal | All nodes via `mq_consensus_vote` | The signature is verified, then the vote is passed to `register_vote`. |
| 3 | Commit (in progress) | The node reaching quorum | Storage layer / Layer2 | The commit flow is being designed; currently events are only recorded in the logs. |

Stages 1 and 2 are stable, while stage 3 still needs a clear translation into state-root updates and publishing blocks to the ledger.
## Completion Plan
- Commit Hook: implement a `finalize_block` function that updates the `ConsensusState` and writes the finalized block data to storage (a hedged sketch follows this list).
- Retries: document the policy for re-sending proposals/votes across outages and waiting periods.
- Monitoring: add Prometheus counters and `/metrics/performance` entries for proposal counts and quorum failures.
- Documentation: update this document as each step completes and add a sequence diagram showing node state (Idle → Proposal → Voting → Commit).
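A minimal sketch of the commit hook described above, assuming `ConsensusState` tracks the last finalized height and blocks are persisted as flat files; apart from `finalize_block` and `ConsensusState`, every name here is illustrative.

```rust
use std::fs;

/// Illustrative block shape; the real definition lives in the consensus module.
pub struct Block {
    pub height: u64,
    pub payload: Vec<u8>,
}

pub struct ConsensusState {
    pub last_finalized_height: u64,
}

impl ConsensusState {
    /// Commit hook: persist the block that reached quorum and advance the
    /// local finalized height. Storage path is a hypothetical stand-in.
    pub fn finalize_block(&mut self, block: &Block) -> std::io::Result<()> {
        assert_eq!(
            block.height,
            self.last_finalized_height + 1,
            "blocks must finalize in order"
        );
        fs::create_dir_all("data/blocks")?;
        fs::write(format!("data/blocks/{:016x}.blk", block.height), &block.payload)?;
        self.last_finalized_height = block.height;
        Ok(())
    }
}
```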
# Layer1 Deployment & Testing Guide
This guide documents compiling, testing, and deploying the contract suite (SequencerInbox, ChallengeOutbox, BridgeEscrow) using the Hardhat workspace bundled with the repository.
## 1. Prerequisites
- Install dependencies in `qdeepproject/hardhat`:

```bash
cd hardhat
npm install
```

- Copy `.env.example` at the repository root to `.env` and fill in the values:

```bash
cp ../.env.example ../.env
```

Required configuration fields:

- `MQPY_TOKEN_ADDRESS` – ERC20 backing the bridge (e.g., the deployed `MQPYToken`).
- `SEQUENCER_RPC_URL`, `SEQUENCER_PRIVATE_KEY`, `SEQUENCER_INBOX_ADDRESS`, etc.
- `BRIDGE_ESCROW_ADDRESS` / `BRIDGE_ESCROW_RPC_URL` for the watcher.
## 2. Compile & Test
Compile the contracts (sources are taken from `../contracts`):

```bash
npm run compile
```

Run the unit tests (Hardhat network by default):

```bash
npm test
```
Test scope:
- Batch submission / forced inclusion workflow (`SequencerInbox`).
- Queueing, challenging, and resolving withdrawals (`ChallengeOutbox`).
- Deposit / withdrawal events (`BridgeEscrow`).
## 3. Deploy to a Network
Ensure the deployer account has sufficient funds on the target network (e.g., Sepolia). Execute:

```bash
npm run deploy:sepolia
```
Deployment script summary:
- Deploys `SequencerInbox` with challenge/forced inclusion parameters.
- Deploys `ChallengeOutbox`, linked to the inbox.
- Deploys `BridgeEscrow`, linked to `MQPYToken` and the outbox.
- Writes a summary JSON file to `hardhat/artifacts/layer1-deployment.json`.

For alternative networks, add the configuration to `hardhat.config.js` (network section) and create a matching npm script.
## 4. Wiring Back Into Rust
Post-deployment steps:
- Update `.env` with the live addresses (`SEQUENCER_INBOX_ADDRESS`, `BRIDGE_ESCROW_ADDRESS`, etc.).
- Restart the Rust node so `Layer2State` and the `bridge_watcher` pick up the new configuration.
- Visit `/layer2/batches` and the developer hub (`static/devhub.html`) to confirm batches are published and DA payloads are accessible.
- Trigger a sample deposit on `BridgeEscrow` and verify `bridge_reserve.rs` reflects the event.

This flow validates the Layer1 contracts end-to-end before moving on to validity/fraud-proof integration.
# Layer1 Integration Plan
This plan defines the high-level wiring required to connect the new Layer1 contracts to the existing Rust services.
## Rust Modules To Touch
### `src/layer2.rs`
- Extend the `AggregatedProof` struct with (a field sketch follows this list):
  - `batch_id` (matches the SequencerInbox ID).
  - `da_hash` (data availability commitment).
  - An optional `proof_commitment`.
- When committing a rollup, call an async helper that:
  - Signs a transaction to `SequencerInbox.submitBatch`.
  - Passes the `batch_id`, timestamp, and `finalization_deadline`.
- Track forced-inclusion requests and update the queue when `ForcedInclusionExecuted` is observed.
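As a sketch, the extended struct might look like this; the existing `proof_bytes` field and the concrete types are assumptions, and only the three new fields come from the plan.

```rust
/// Sketch of the extended AggregatedProof; not the canonical definition.
pub struct AggregatedProof {
    /// Existing proof payload (assumed).
    pub proof_bytes: Vec<u8>,
    /// Matches the SequencerInbox batch ID.
    pub batch_id: u64,
    /// keccak256 data-availability commitment for the batch blob.
    pub da_hash: [u8; 32],
    /// Optional proof commitment, populated once validity proofs land.
    pub proof_commitment: Option<[u8; 32]>,
}
```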
### `src/bridge_reserve.rs`
- Map the existing ledger events to on-chain deposits/withdrawals:
  - Emit `BridgeEscrow.deposit` when Layer2 records a user deposit.
  - Call `BridgeEscrow.completeWithdrawal` when a withdrawal settles.
- Maintain a bijection between on-chain IDs (`depositId` / `withdrawalId`) and current reserve entries.
### `src/service.rs` / RPC layer
- Expose endpoints to:
  - Request forced inclusion (calls `SequencerInbox.requestForcedInclusion`).
- Subscribe to contract events via WebSocket (e.g., `ethers-rs` or `alloy`) to update local state.
### `src/quantum_mask.rs` and user-facing logic
- Surface the `depositId` / `withdrawalId` to the UI so users can verify settlement on Layer1.
## Helper Crates / Libraries
Use `ethers-rs` (already in the dependency graph) or add it:

```toml
ethers = { version = "2", default-features = false, features = ["abigen", "contract", "ethers-solc"] }
```

Generate bindings for the new contracts with `abigen!` macros to streamline calls.
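A hedged sketch of what the generated bindings and a commit helper could look like; the artifact path, method name/arity, and signer wiring are assumptions to check against the actual Solidity sources.

```rust
use ethers::prelude::*;
use std::sync::Arc;

// Typed bindings from the compiled ABI; the artifact path is an assumption —
// point it at the Hardhat output for SequencerInbox.
abigen!(
    SequencerInbox,
    "./hardhat/artifacts/contracts/SequencerInbox.sol/SequencerInbox.json"
);

async fn submit_batch(
    rpc_url: &str,
    private_key: &str,
    inbox_addr: Address,
    batch_id: u64,
    da_hash: [u8; 32],
) -> Result<(), Box<dyn std::error::Error>> {
    let provider = Provider::<Http>::try_from(rpc_url)?;
    let wallet: LocalWallet = private_key.parse()?;
    let chain_id = provider.get_chainid().await?.as_u64();
    let client = Arc::new(SignerMiddleware::new(provider, wallet.with_chain_id(chain_id)));

    let inbox = SequencerInbox::new(inbox_addr, client);
    // Method name/arguments assumed from the plan ("submitBatch" with a batch
    // id and data-availability hash); verify against the real contract.
    let pending = inbox.submit_batch(U256::from(batch_id), da_hash).send().await?;
    println!("submitted batch in tx {:?}", pending.tx_hash());
    Ok(())
}
```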
## Environment Variables
Set the following variables for the sequencer node (e.g., in `.env` or the process runner):

| Variable | Description |
| --- | --- |
| `SEQUENCER_RPC_URL` | HTTPS endpoint of the Ethereum JSON-RPC provider (Sepolia recommended for testing). |
| `SEQUENCER_PRIVATE_KEY` | Hex-encoded private key of the sequencer signer (without `0x`). |
| `SEQUENCER_CHAIN_ID` | Numeric chain ID (optional, defaults to 11155111 for Sepolia). |
| `SEQUENCER_INBOX_ADDRESS` | Deployed address of `SequencerInbox`. |
| `SEQUENCER_CHALLENGE_WINDOW_SECS` | Challenge period for batches (optional). |
| `SEQUENCER_FORCE_DELAY_SECS` | Minimum delay before forced inclusion can be executed (optional). |
## Data Availability Hook
At batch creation time (sketched below):
- Serialize the Layer2 frame (transactions + per-user contract state changes).
- Store the blob under `DATA_AVAILABILITY_DIR` (default `data/availability`).
- Compute `keccak256` of the blob and pass it as `dataAvailabilityHash` to `submitBatch`.
- Persist the mapping `{batch_id -> data-store pointer}` for auditors and expose it through `/layer2/batches` + `/layer2/data/{hash}`.
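A minimal sketch of this hook, assuming the `tiny-keccak` and `hex` crates; only `DATA_AVAILABILITY_DIR` and the keccak256-over-blob scheme come from the plan above.

```rust
use std::{env, fs, path::PathBuf};
use tiny_keccak::{Hasher, Keccak};

/// Persist a serialized Layer2 frame and return its keccak256 commitment.
/// The hash doubles as the filename so /layer2/data/{hash} can stream it back.
fn persist_da_blob(frame: &[u8]) -> std::io::Result<[u8; 32]> {
    let dir = PathBuf::from(
        env::var("DATA_AVAILABILITY_DIR").unwrap_or_else(|_| "data/availability".into()),
    );
    fs::create_dir_all(&dir)?;

    let mut hash = [0u8; 32];
    let mut keccak = Keccak::v256();
    keccak.update(frame);
    keccak.finalize(&mut hash);

    // Filename-by-hash is an illustrative layout choice, not the documented one.
    fs::write(dir.join(hex::encode(hash)), frame)?;
    Ok(hash)
}
```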
## Challenge Flow Stub
Until fraud/validity proofs land:
- Implement a guardian service that can call `flagBatch` manually when inconsistencies are detected.
- Add metrics to detect not-yet-finalized batches that exceed `challengeWindow + grace`.
- Bridge watchers can be enabled by setting `BRIDGE_ESCROW_ADDRESS` (and the optional `BRIDGE_ESCROW_RPC_URL`) to sync on-chain deposits with `bridge_reserve.rs`.
This integration plan should be executed after writing deployment & ABI scripts for the Solidity contracts above.
## Per-User Smart Contract Migration
- Solidity Template: create a base contract (`UserProfileContract`) that embodies the current burn, routing, and trading-limit logic. It must support JSON configuration similar to `SmartUserContract`.
- Deployment Mechanics: use a factory or cloneable contracts to guarantee that a new contract is deployed when each user registers, linking the address to their existing accounts.
- Rust Flow: modify `SessionManager::register_user` and `apply_user_smart_contract` to initialize the contract on Layer1 and call its interfaces instead of executing the logic locally.
- Governance & Security: add Solidity tests and verify permission settings (owner/governance) so that a user's rules cannot be modified without their consent.
- Audit & Documentation: once the migration completes, update the front-end and back-end documentation to clarify that the logic now lives on-chain, and provide the ABI plus verification links.
# Security Checks
Use `scripts/run_security_checks.sh` to run static linting and dependency audits before publishing new builds or Testnet releases.

```bash
scripts/run_security_checks.sh
```

The script runs:

- `cargo clippy --all-targets --all-features -- -D warnings`
  - Requires the Clippy component; install once via `rustup component add clippy`.
- `cargo fmt --all -- --check`
- `cargo audit`
  - Install with `cargo install cargo-audit` if missing.
- `cargo deny check`
  - Install with `cargo install cargo-deny` if missing.
All steps must pass without warnings before approving Testnet deployments.
## Complementary Controls
- The direct emergency-key retrieval path (the `/getemergencykeyvalue` endpoint) has been disabled; rely on authenticated recovery flows only.
- The page-preview service now rejects any local or private ranges by default; adjust `RENDER_ALLOWED_HOSTS` with care and review `docs/renderpreview.md`.
- The external fetch client in Rust (`RequestManager::fetch_quic`) relies on `reqwest` with a 15-second timeout and a 2 MB response cap, reducing exhaustion and blocking risks.

Add these points to the pre-launch checklist: confirm that environment settings reflect the new policies and that access logs are reviewed whenever additional domains are enabled.
# Performance Metrics API
## Overview
The MQPYChain node now exposes runtime performance statistics that are collected directly from the ledger-backed transaction pipeline and consensus commit path.
- Endpoint: `GET /metrics/performance`
- Format: JSON
- Source: the `PerformanceMetrics` aggregator shared by Layer1, `ConsensusState`, and the HTTP layer.
## Fields
| Field | Description |
|-------|-------------|
| `total_transactions` | Cumulative count of processed Layer1 transactions. |
| `total_blocks` | Total finalized blocks persisted through consensus or standalone mode. |
| `total_volume_mqpy` | Sum of transferred MQPY volume (tokens) recorded in ledger micro-units. |
| `total_fees_mqpy` | Total MQPY fees collected since startup. |
| `avg_tx_latency_ms` | Moving average latency (milliseconds) from transaction intake to confirmation. |
| `max_tx_latency_ms` | Peak observed latency for any individual transaction. |
| `avg_block_time_ms` | Average inter-block interval derived from recent consensus commits. |
| `rolling_tps` | Rolling transactions-per-second computed from the recent block sample window. |
| `rolling_block_time_ms` | Rolling block time calculated across the same sample window. |
| `last_tx_timestamp` | Wall-clock timestamp of the last confirmed transaction. |
| `last_block_timestamp` | Wall-clock timestamp of the most recent finalized block. |
| `last_block_height` | Height of the last finalized block. |
| `storage_read_ops` | Total ledger read operations captured since startup. |
| `storage_write_ops` | Total ledger write operations captured since startup. |
| `storage_read_mb` | Aggregate volume of data read from ledger storage (megabytes). |
| `storage_write_mb` | Aggregate volume of data written to ledger storage (megabytes). |
| `avg_storage_read_ms` | Average latency per ledger read operation. |
| `avg_storage_write_ms` | Average latency per ledger write operation. |
## Usage

```bash
curl -s http://localhost:8787/metrics/performance | jq
```

- Use `rolling_tps` for short-term throughput monitoring.
- `avg_tx_latency_ms` helps catch congestion and ledger backpressure.
- Combine `total_volume_mqpy` and `total_fees_mqpy` with explorer data for economic dashboards.
## Integration Notes
- Metrics are stored in-memory only; a restart resets the counters.
- Increase the block sample window in `metrics.rs` (`MAX_BLOCK_SAMPLES`) if you need longer smoothing.
- The endpoint is lightweight and safe to poll frequently from dashboards or alerting systems.
# mQUSDT Bridge Ledger & Messaging
## Ledger Overview
- Source: `src/mqusdt_ledger.rs`
- Storage layout: `data/accounts/mqpy/mqusdt_balances/<user>.acc` (u128 micro-units) and `mqusdt_total_supply.json`.
- API surface (a usage sketch follows this list):
  - `MqusdtLedger::balance_of(user)` → read balance.
  - `credit` / `debit` → increase or decrease a balance with supply tracking.
  - `transfer` → convenience helper built on credit/debit.
- Metrics: read/write calls report byte counts and latency to `PerformanceMetrics` (see `/metrics/performance`).
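To make the API surface concrete, the following self-contained sketch mirrors the documented methods with an in-memory stand-in; the real implementation persists balances under `data/accounts/mqpy/` and tracks total supply.

```rust
use std::collections::HashMap;

/// Minimal stand-in for the documented MqusdtLedger API so the flow below
/// compiles; the real type lives in src/mqusdt_ledger.rs.
struct MqusdtLedger {
    balances: HashMap<String, u128>, // micro-units: 1 mqUSDT = 1_000_000
}

impl MqusdtLedger {
    fn balance_of(&self, user: &str) -> u128 {
        *self.balances.get(user).unwrap_or(&0)
    }
    fn credit(&mut self, user: &str, amount: u128) {
        *self.balances.entry(user.to_string()).or_insert(0) += amount;
    }
    fn debit(&mut self, user: &str, amount: u128) -> Result<(), String> {
        let bal = self.balances.entry(user.to_string()).or_insert(0);
        (*bal >= amount).then(|| *bal -= amount).ok_or("insufficient balance")?;
        Ok(())
    }
    fn transfer(&mut self, from: &str, to: &str, amount: u128) -> Result<(), String> {
        self.debit(from, amount)?;
        self.credit(to, amount);
        Ok(())
    }
}

fn main() {
    let mut ledger = MqusdtLedger { balances: HashMap::new() };
    ledger.credit("user123", 5_000_000);                        // +5 mqUSDT
    ledger.transfer("user123", "treasury", 1_000_000).unwrap(); // move 1 mqUSDT
    assert_eq!(ledger.balance_of("user123"), 4_000_000);
}
```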
## Bridge Messaging
- `OverledgerBridge::send_mqusdt_event` wraps `send_transaction` with structured payloads.
- Envelope format (`BridgeEnvelope`):

```json
{
  "asset": "MQUSDT",
  "action": "Deposit|WithdrawRequest|WithdrawConfirm|WithdrawReject",
  "user_id": "<id>",
  "amount_micro": 1000000,
  "from_chain": "mq.mainnet",
  "to_chain": "external.eth",
  "external_tx": "0x...",
  "metadata": { ... },
  "timestamp": 1730000000,
  "reserve_id": "92b0c35f-...",
  "contract_id": "user123_default"
}
```
- The helper payload (`MqusdtBridgePayload`) ensures all fields are captured before serialisation.
- Actions are validated, and the message is serialized via `serde_json` to reuse the existing Kademlia transport.
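For reference, a hedged Rust mirror of the envelope; field names follow the JSON sample above and the enum variants follow the `action` values, but the canonical definitions live in the bridge module.

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// Sketch mirroring the envelope JSON; not the canonical definition.
#[derive(Serialize, Deserialize, Debug)]
pub struct BridgeEnvelope {
    pub asset: String,
    pub action: BridgeAction,
    pub user_id: String,
    pub amount_micro: u128,
    pub from_chain: String,
    pub to_chain: String,
    pub external_tx: Option<String>,
    pub metadata: Value,
    pub timestamp: u64,
    pub reserve_id: String,
    pub contract_id: String,
}

#[derive(Serialize, Deserialize, Debug)]
pub enum BridgeAction {
    Deposit,
    WithdrawRequest,
    WithdrawConfirm,
    WithdrawReject,
}
```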
## Reserve Tracking
- Global tracker: `src/bridge_reserve.rs` maintains `data/bridge_reserves.json` with totals per asset (deposited, withdrawn, pending, external outstanding).
- The tracker deduplicates `reserve_id` values to prevent reprocessing and feeds the `/api/bridge/reserves` endpoint for Proof-of-Reserves reporting.
- Withdrawals increase `external_outstanding_micro`, while deposits decrease it and clear pending amounts, keeping wrapped supply aligned with internal balances.
- External relayers update reserves by calling `POST /api/bridge/relayer` (see `src/relayer.rs`), passing the `reserve_id`, optional `contract_id`, amount, and event type (`withdraw_confirm`, `withdraw_reject`, or `burn`).
- `src/contract_registry.rs` persists `UserContractMetadata` (user → contract id/internal address) so every bridge event can be attributed to the correct per-user smart contract.
## Next Integration Steps
- Wire `QuantumMask` deposit/withdraw flows to `MqusdtLedger` + `send_mqusdt_event`.
- Extend the MQP processor to listen for incoming bridge envelopes and confirm operations.
- Update the Portfolio UI to reflect `mqusdt_balance`, pending bridge operations, and external tx links.
# MQPY ERC-20 Listing Package
This document covers the exchange-ready representation of MetaQuantum Pay (MQPY) on EVM chains. Use it when preparing listings or audits.
## Contract Overview
- Source: `contracts/MQPYToken.sol`
- Standard: ERC-20 with 6 decimal places (micro-units align with the on-ledger MQPY accounting).
- Core features:
  - Supply cap configurable at deployment.
  - Role-separated governance using `AccessControl`.
  - `bridgeMint` / `bridgeBurn` flows with deposit and withdrawal identifiers for reconciliation with the MetaQuantum ledger.
  - Pausable transfers and EIP-2612 permit support to simplify custodian integrations.
- Roles:
  - `DEFAULT_ADMIN_ROLE`: manages other roles, treasury management, and emergency updates.
  - `MINTER_ROLE`: controlled monetary expansion (inflation events, incentives).
  - `BRIDGE_ROLE`: mint/burn counterpart operations linked to the MetaQuantum network bridge.
  - `PAUSER_ROLE`: pause/unpause transfers in emergencies.
## Deployment Checklist
- Prerequisites
  - Configure a Hardhat or Foundry workspace with OpenZeppelin Contracts `^5.0.0`.
  - Secure hardware wallets for admin/bridge keys; record ENS or registry entries for public transparency.
- Parameters
  - `admin`: cold wallet or multi-sig that owns role assignments.
  - `treasury`: warm wallet that receives the initial supply and fee flows.
  - `initialSupply`: amount in micro-units (1 MQPY = `1000000` units) minted immediately to the treasury.
  - `cap`: hard upper bound; set equal to the fully diluted supply you intend to mirror on the target chain.
- Deployment script outline:

```js
const MQPY = await ethers.getContractFactory("MQPYToken");
const token = await MQPY.deploy(
  adminAddress,
  treasuryAddress,
  initialSupplyMicro,
  capMicro
);
await token.waitForDeployment();
```

- Post-deployment actions
  - Transfer `BRIDGE_ROLE` and `MINTER_ROLE` to operational multi-sigs; remove them from deployer EOAs.
  - Verify the contract on the chain explorer (Etherscan, Basescan, etc.).
  - Publish the ABI and source references for public scrutiny.
  - Fund multisigs with native tokens for gas and set up monitoring alerts (e.g., Tenderly, OpenZeppelin Defender).
## Bridge & Treasury Operations
- Synchronize bridge events using the `depositId` / `withdrawalId` fields to map on-ledger actions.
- Require user approvals to the bridge contract before invoking `bridgeBurn`.
- Maintain a real-time reserve report showing:
  - MQPY total supply on-chain vs. treasury backing on the MetaQuantum ledger.
  - Outstanding withdrawal requests and pending confirmations.
- For additional safety, run bridge logic through a multi-sig or threshold-signature service before calling `bridgeMint` or `bridgeBurn`.
## Exchange Listing Essentials
- Security Audits: contract review covering role misuse, cap enforcement, pause logic, and bridging functions.
- Compliance Artifacts:
  - KYC/AML policies for bridge operators.
  - Proof-of-reserves methodology syncing on-chain vs. MetaQuantum ledger balances.
  - Incident response playbook referencing pause controls.
- Liquidity Plan:
  - Market-making agreement or in-house AMM provisioning.
  - Initial allocation for exchange wallets and incentives.
  - Treasury replenishment and bridge fee policy.
- Communication Package:
  - Whitepaper excerpt summarizing tokenomics.
  - Branding assets and symbol/ticker confirmations.
  - API endpoints for live supply and reserve data (`/metrics/performance`, ledger exports).
## Next Steps
- Implement automated relayer services in Rust that listen for `BridgeEnvelope` events and call the ERC-20 `bridgeMint` / `bridgeBurn` functions.
- Map bridge operations into your incident logging and compliance dashboards for exchange reporting.
- Coordinate with partner exchanges on integration tests (deposit/withdrawal sandbox drills) before announcing the listing.
# Node Setup Modes
## Full Node
- Set `NODE_MODE=full` (default).
- No validator keys are required.
- The node subscribes to consensus topics but does not propose or sign blocks.
## Validator Node
- Set `NODE_MODE=validator`.
- Provide `VALIDATOR_ID=<id>` matching an entry in `config/validators.toml`.
- Provide `VALIDATOR_KEYS_PATH` (defaults to `config/validator_keys.toml`).
- Generate keys via `cargo run --bin generate_validator_keys <id>` and place the output in the keys file.
- Validator-only endpoints or behaviours may be disabled on full nodes.
- REST differences: validator nodes do not expose some full-node interfaces such as `purchase_mqpy`, `withdraw_usd`, and `browser_socket`, while full nodes offer the complete set of user-facing interfaces.
## Environment Summary

```bash
NODE_MODE=validator|full
VALIDATOR_ID=<id from validators.toml>
VALIDATOR_KEYS_PATH=config/validator_keys.toml
```
## Outbound Network Notes
- The internal `RequestManager` client uses the `reqwest` library with a 15-second timeout and a 2 MB cap on response size. If you need to reach larger resources, use an external connector or a dedicated proxy service.
- For safety, review the `RENDER_ALLOWED_HOSTS` variable and the SSRF restrictions documented in `docs/renderpreview.md` before opening access to new destinations.
# MQPYChain Interface Roadmap
## Current Structure
- `staticlegacy/`: snapshot of earlier static pages kept as content and styling references.
- `static/`: minimal HTML templates that remain available for quick previews or fallback builds.
- `frontend/`: Vite + React workspace where the new component-driven UI is developed. The app is currently scaffolded with a hero module, value pillars, and feature-grid components inspired by the fresh landing design.
## Workflow Guidance
- Design & Component Iteration
  - Build reusable React components inside `frontend/src/` (e.g., hero blocks, cards, CTAs) and keep styling centralized in CSS modules or design-token files.
  - Pull copy or assets from `staticlegacy` as needed, then refine them to match the updated tone and visuals.
- Asset Migration
  - When an image or media file is confirmed for the new UI, move it from `staticlegacy/assets` into `frontend/public` (or a dedicated asset pipeline) and update references accordingly.
  - Remove unused artifacts from `staticlegacy` once they are re-homed to keep the reference set lean.
- Preparing for Tauri
  - During development, run `yarn dev` inside `frontend` for rapid iteration.
  - When the React build is production-ready, use `yarn build` to emit assets under `frontend/dist`. Point the Tauri configuration (`src-tauri/tauri.conf.json`) at that directory as the `distDir`.
  - Ensure navigation between native screens and web views is clearly defined so Tauri shell components can embed the React build without friction.
## Next Experiments
- Layer in application state management (React Context, Zustand, or Redux) if dashboards or multi-step flows require shared state.
- Prototype wallet connect dialogs or quantum identity verification modals as isolated components for easy reuse in the Tauri shell.
- Expand the component library with tokens and variants that can be exported to design tools or documentation sites.
- Hook the landing dashboards to the new `/metrics/performance` API for live TPS, block time, and fee telemetry.
## UX Simplification Roadmap
- Onboarding Flow Refresh
  - Replace the current multi-step registration with a guided wizard (identity → wallet setup → trusted devices).
  - Provide inline validation, recovery tips, and clear next-step indicators.
- Unified Wallet Console
  - Merge MQPY balance, staking, and the fiat bridge into a single "Portfolio" view.
  - Surface live `/metrics/performance` snapshots (TPS, block time, fee rate) to reinforce network health.
  - Add contextual tooltips explaining burns, fees, and pending states.
- Integration Playbook
  - Document React/Tauri embedding steps with sample code and design tokens.
  - Ship a Postman collection / `curl` snippets covering wallet, staking, and metrics endpoints.
  - Publish UX guidelines to align partner portals with the core experience.
# Ledger Storage Upgrade Roadmap
## 1. Baseline Instrumentation (Sprint N)
- ✅ Hooked `AccountsDbLedger` reads/writes into the shared `PerformanceMetrics` service.
- ✅ `/metrics/performance` now surfaces I/O counts, volume, and average latencies.
- [ ] Capture sample baselines under typical workloads (100 TPS synthetic load).
- [ ] Compare ledger latency with Hypercore writes to spot contention.
## 2. Short-Term Optimisations (Sprint N+1)
- Batch balance updates per block to reduce fsync frequency.
- Introduce a write-back cache for hot accounts with periodic flush (see the sketch after this list).
- Parallelise supply/balance writes via a task queue to avoid blocking the L1 flow.
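The write-back item is sketched below under the assumption that balances are plain `.acc` files keyed by account; names and layout are illustrative, not the actual `AccountsDbLedger` API.

```rust
use std::collections::HashMap;

/// Hypothetical write-back cache for hot accounts: balances accumulate in
/// memory and flush to disk periodically, reducing fsync frequency.
struct WriteBackCache {
    dirty: HashMap<String, u128>,
    flush_threshold: usize,
}

impl WriteBackCache {
    fn update(&mut self, account: &str, new_balance: u128) {
        self.dirty.insert(account.to_string(), new_balance);
        if self.dirty.len() >= self.flush_threshold {
            self.flush();
        }
    }

    /// One write per dirty account per flush, instead of one per update.
    fn flush(&mut self) {
        for (account, balance) in self.dirty.drain() {
            // Stand-in for the real .acc file write in the ledger backend.
            let _ = std::fs::write(
                format!("data/accounts/{account}.acc"),
                balance.to_le_bytes(),
            );
        }
    }
}
```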
## 3. Medium-Term Transition (Sprint N+2)
- Evaluate an SQLite/`rusqlite` backend using WAL mode for transactional safety.
- Prototype a RocksDB-backed ledger with column families for balances, logs, and supply.
- Design a migration tool to port existing `.acc` files into the chosen backend.
## 4. Long-Term Targets (Sprint N+3)
- Replicated ledger service (Raft/Etcd) for validator clusters.
- Adaptive sharding: partition accounts by hash range to scale horizontally.
- Continuous benchmarking suite integrated into CI for regression detection.
## Next Actions
- Run the load-test script and capture a `/metrics/performance` snapshot.
- Decide between SQLite and RocksDB based on latency/maintenance requirements.
- Draft a migration playbook, including backup and rollback procedures.
# Render Preview API
Endpoint: `POST /render/preview`
Purpose: Generates a sanitized HTML preview and a WebGPU fallback snapshot for the browser rendering pipeline.
Configuration:
- Set `RENDER_ALLOWED_HOSTS` (comma-separated) to control which remote domains may be fetched. Defaults to `example.com,mq.network`. The service rejects local addresses (such as `localhost`, `127.0.0.1`) and private networks even if they are added to the list.
Request body:

```json
{
  "url": "https://example.com",
  "css_urls": ["https://mq.network/styles.css"],
  "html": "<optional direct HTML if no URL>",
  "css": "optional inline CSS"
}
```
Usage example (CLI):

```bash
curl -s -X POST \
  -H "Content-Type: application/json" \
  --data '{"url":"https://example.com"}' \
  http://localhost:80/render/preview | jq
```
Response fields:
- `sanitized_html`: absolutely positioned HTML ready for embedding in a debugging UI.
- `snapshot_base64`: a `data:image/png;base64,` URI for the rendered frame (800x600 default).
- `width` / `height`: dimensions of the generated frame.
Security notes:
- Requests are rejected if the host is outside the allowlist, uses a non-HTTP(S) protocol, or resolves to local/private addresses.
- HTML/CSS attributes are sanitized to strip inline JavaScript before serialization.
# PQC Login Migration Guide
This document tracks the changes required to complete the transition of the login/signature flow to post-quantum cryptography.
## Server-side status
- The registration pipeline now issues both Ed25519 and Dilithium3 signing keys, stores them securely in the key vault, and exposes the Dilithium public key in the API response (`/quantumregister`).
- Login verification accepts a client-provided `algorithm` field (`"ed25519"` or `"dilithium3"`). When unspecified it still defaults to Ed25519 for backwards compatibility. Administrators can set `MQ_DEFAULT_LOGIN_ALG=dilithium3` to opt in once clients are ready.
- Challenge responses advertise the supported-algorithms list so clients can negotiate progressively.
## Front-end/client work
- Integrate a Dilithium3 signer (e.g., via WebAssembly) capable of producing detached signatures that match the server-side verifier (see the sketch after this list).
- Prefer the PQC key when available: request challenges, sign them with Dilithium3, and send `{ "algorithm": "dilithium3", "signature": "<base64>" }` in `/quantumlogin`.
- Persist the Dilithium private key with the same level of protection as the existing Ed25519 key, including export/backup UX.
- Ensure recovery / trusted-device flows can restore both signing schemes.
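For orientation, here is a sketch of producing such a detached signature with the `pqcrypto-dilithium` crate; this is one candidate implementation, as the guide does not mandate a specific library.

```rust
use base64::Engine as _;
use pqcrypto_dilithium::dilithium3::{detached_sign, keypair, verify_detached_signature};
use pqcrypto_traits::sign::DetachedSignature as _;

fn main() {
    // Key generation normally happens at registration; store the secret key
    // with the same care as the Ed25519 key (see above).
    let (pk, sk) = keypair();

    // The challenge bytes come from the server's challenge response.
    let challenge = b"challenge-issued-by-server";

    // Detached signature, matching what the server-side verifier expects.
    let sig = detached_sign(challenge, &sk);
    assert!(verify_detached_signature(&sig, challenge, &pk).is_ok());

    // The /quantumlogin payload would then carry the base64-encoded bytes:
    // { "algorithm": "dilithium3", "signature": "<base64>" }
    let encoded = base64::engine::general_purpose::STANDARD.encode(sig.as_bytes());
    println!("signature: {encoded}");
}
```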
## Rollout checklist
- Feature flagging – keep `MQ_DEFAULT_LOGIN_ALG` at `ed25519` until all critical clients ship the Dilithium signer.
- Client telemetry – instrument the UI to report which algorithm is in use; watch adoption before flipping the default.
- Documentation – update user-facing guides to highlight the new key pair and the need for secure storage of the Dilithium secret.
- Sunset plan – once confident, flip the default to `dilithium3` and set a cutoff date for removing Ed25519 support from the login path.
> Owners: Platform security + Front-end teams
> Target milestone: PQC GA launch