Encifher

Processor

Processor service responsibilities, endpoints, data formats, and integrations

Overview

The processor is the main service orchestrating the secure decrypt → compute → encrypt pipeline using the Core crate. It exposes HTTP endpoints to process batches of operations, re-encrypt results for clients, and accept new client-encrypted ciphertexts for indexing, along with operational routes for health checks and database maintenance.

Key responsibilities

  • Execute compute operations by coordinating with Core primitives and Threshold KMS for decryption.
  • Build merkle-committed BatchTrees from processed results and push them to the Indexer.
  • Accumulate multiple BatchTree merkle roots into a BN254 accumulator root and periodically submit the signed commitment to the Submitter (DA + on-chain).
  • Provide re-encryption service: after verifying client session and signature, batch-decrypt input ciphertexts and re-encrypt the plaintexts using a client-provided ephemeral public key (ECIES).
  • Maintain durable Sled databases for queueing and key-value data, with explicit periodic flushing and snapshot/restore utilities.

Services

  • API service: process-batch, re-encrypt, timestamp, health.
  • Encryption service: store-ciphertext, health.

Attestation & signing

  • Uses tee::sign::tee_sign to produce ECDSA signatures for batch hashes, merkle roots, and accumulator roots. See official docs for deployment via Marlin TEE and init-params usage.

ACL Workflow

The Processor enforces offchain access control for encrypted handles using the KvDB (ACL database). The three main ACL operations are listed below, followed by a minimal sketch of how they map onto the KvDB:

ACL Operation Details:

  1. SetAccess (Grant Permission)

    • Input: accessor pubkey, handle value, allow flag
    • If allow=true: stores permission mapping in KvDB
    • If allow=false: no-op (permission not granted)
  2. Compute (Verify & Auto-grant)

    • Verifies caller has permission via get_handle_permission(signer, value)
    • Executes computation if permission exists
    • Automatically grants ownership of result_handle to signer
  3. IsAllowed (Permission Check)

    • Query-only operation to check if accessor has permission for handle
    • Returns boolean without modifying state
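
A minimal sketch of how these operations might map onto the Sled-backed KvDB follows. Apart from get_handle_permission, the helper names and the key layout are illustrative assumptions, not the exact processor internals; handles are shown as u128 per the derivation described under store-ciphertext.

use sled::Db;

// Assumed key layout: accessor pubkey bytes followed by the handle.
fn acl_key(accessor: &[u8], handle: u128) -> Vec<u8> {
    let mut key = accessor.to_vec();
    key.extend_from_slice(&handle.to_le_bytes());
    key
}

// SetAccess: store the permission only when allow == true (allow == false is a no-op).
fn set_access(kv_db: &Db, accessor: &[u8], handle: u128, allow: bool) -> sled::Result<()> {
    if allow {
        let _prev = kv_db.insert(acl_key(accessor, handle), vec![1u8])?;
    }
    Ok(())
}

// IsAllowed: query-only permission check; never modifies state.
fn get_handle_permission(kv_db: &Db, accessor: &[u8], handle: u128) -> sled::Result<bool> {
    Ok(kv_db.get(acl_key(accessor, handle))?.is_some())
}

// Compute: after the computation succeeds, ownership of the result handle
// is automatically granted to the signer.
fn grant_result_handle(kv_db: &Db, signer: &[u8], result_handle: u128) -> sled::Result<()> {
    set_access(kv_db, signer, result_handle, true)
}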

Endpoints

POST /v1/process-batch

  • Request body: BatchData (see Data Formats).
  • Processing steps:
    1. Sign batch_hash with tee_sign by converting the U256 batch hash to string representation; the response includes the signature as ack.
    2. For each Group in order:
      • Tokenize the group expression and map to an Operation/OperationType.
      • Construct OperationParams for the specific operation (arithmetic/relational/control/bitwise/storage, or specialized operations like anon-transfer, confidential↔anon, decrypt).
      • Acquire a cipher client:
        • Local mode (local-cipher feature): in-process Threshold ElGamal client via get_kms_client_local().
        • Production mode: pooled KMS client (get_pooled_kms_client) using threshold_kms.production_peers from config for multi-node threshold decryption.
      • Call Core ops to decrypt, compute, and re-encrypt the result. Grant ACL for the resultant handle to the group signer.
      • Propagate newly produced ciphertext to subsequent groups in the same batch when they reference the resultant handle.
    3. Build a BatchTree from the processed results (leaves are RequestAndCiphertext), compute its merkle root using a fixed 4-level merkle tree, and push the tree to batch_tree_db (QueueDB).
    4. Immediately call the internal Indexer update to push merkle data and leaves. In debug builds, this writes to a local Sled dev_db; in production, it calls Indexer HTTP.
  • Response (200):
    {
      "status": "success",
      "message": "Batch processed successfully",
      "ack": "<hex tee signature over batch_hash>",
      "batch_hash": "<string>",
      "batch_tree_root": "<hex>",
      "groups_processed": <number>
    }

Notes

  • The body size limit for this endpoint is fixed at 100 MB.
  • ack is the TEE signature over the stringified batch_hash (U256 converted to string).
  • ACL enforcement: when an operand corresponds to a ciphertext with non-zero Big-R, the processor verifies get_handle_permission(kv_db, handle, signer) before using it; missing permission short-circuits the group.
  • Cache: Intermediate plaintext results are cached in memory (HashMap) during batch processing to avoid repeated decryption within the same batch. The cache is backed by a Sled database named cache_db for persistence.
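
As a rough illustration of how the ack could be produced, the sketch below converts the U256 batch hash to its string form and signs it. tee_sign is stubbed here; its documented signature appears under "TEE signing" later on this page.

use primitive_types::U256;

// Stand-in for tee::sign::tee_sign(&str) -> (Vec<u8>, u8).
fn tee_sign(_msg: &str) -> (Vec<u8>, u8) { unimplemented!() }

fn ack_for_batch(batch_hash: U256) -> String {
    // Convert the U256 batch hash to its string representation...
    let hash_str = batch_hash.to_string();
    // ...sign it inside the TEE, and hex-encode the signature as `ack`.
    let (signature, _recovery_id) = tee_sign(&hash_str);
    hex::encode(signature)
}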

Batch Processing Workflow

The complete flow from receiving BatchData to submitting results orchestrates several services:

Processing Steps:

  1. Receive BatchData with groups from Batcher
  2. Sign batch_hash with TEE attestation key
  3. For each group: decrypt operands → compute → re-encrypt result
  4. Build fixed 4-level Merkle tree from processed results
  5. Queue BatchTree for accumulator submission
  6. Push tree and leaves to Indexer for ciphertext storage
  7. Periodic accumulator job submits batches to Submitter when queue ≥ 300 trees
  8. Return acknowledgement with TEE signature

Input: BatchData (from Batcher)
Output: Signed batch acknowledgement + BatchTree (queued for submission)
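
The Merkle construction in step 4 can be pictured with the minimal sketch below, assuming 4 hashing levels above the leaves (16 leaf slots); the node hash function and zero-hash padding here are illustrative assumptions and may differ from get_batch_tree_from_batch.

use sha2::{Digest, Sha256};

fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(left);
    hasher.update(right);
    hasher.finalize().into()
}

fn merkle_root_depth4(leaf_hashes: &[[u8; 32]]) -> [u8; 32] {
    // Pad (or truncate) to exactly 2^4 = 16 leaves using the zero hash.
    let mut level: Vec<[u8; 32]> = leaf_hashes.to_vec();
    level.resize(16, [0u8; 32]);
    // Fold four levels of pairwise hashing down to a single root.
    for _ in 0..4 {
        level = level.chunks(2).map(|pair| hash_pair(&pair[0], &pair[1])).collect();
    }
    level[0]
}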

POST /v1/re-encrypt

  • Request body: BatchDecryptRequest.
  • Validation:
    • Ed25519 signature over a deterministic JSON payload hash (ReEncryptPayload), verified with user_pk.
    • Session window checks: start_time_stamp < now < end_time_stamp and max duration within SESSION_VALIDITY_THRESHOLD (3600 seconds = 1 hour).
    • Payload hash construction mirrors SDK JSON.stringify semantics and uses camelCase key names (e.g., ephemeralPubKey).
  • Processing steps:
    1. Fetch the ciphertexts for all handles in batch:
      • Local test mode (checked via config_loader::is_local_test_mode()): read from Sled dev_db.
      • Production: call Indexer's get-ciphertext.
    2. Perform a single batch threshold decryption for all ciphertexts (one threshold session via decrypt_ciphertext_batch).
    3. For each plaintext, perform individual ECIES re-encryption using the client's ephemeral_pub_key (non-batch by design via encrypt_with_pubkey).
  • Response (200):
    {
      "results": [
        {"handle": "<string>", "result": "<hex>", "status": "success"} | {"status": "error", ...}
      ],
      "user_pk": "<hex>",
      "ephemeral_pub_key": "<hex>",
      "session_info": {
        "start_time": <u64>,
        "end_time": <u64>,
        "decryption_type": <0|1>
      }
    }
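
The session-window validation described above can be sketched as follows; the error handling is simplified, and only the documented checks (start_time_stamp < now < end_time_stamp, maximum duration of SESSION_VALIDITY_THRESHOLD seconds) are reflected.

use std::time::{SystemTime, UNIX_EPOCH};

const SESSION_VALIDITY_THRESHOLD: u64 = 3600; // seconds (1 hour)

fn validate_session(start_time_stamp: u64, end_time_stamp: u64) -> Result<(), String> {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map_err(|e| e.to_string())?
        .as_secs();
    // The session window must currently be open...
    if !(start_time_stamp < now && now < end_time_stamp) {
        return Err("session not active".into());
    }
    // ...and must not exceed the maximum allowed duration.
    if end_time_stamp - start_time_stamp > SESSION_VALIDITY_THRESHOLD {
        return Err("session duration exceeds SESSION_VALIDITY_THRESHOLD".into());
    }
    Ok(())
}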

POST /v1/store-ciphertext (encryption service)

  • Request body: CiphertextStorageRequest.
  • Processing steps:
    1. Parse big_r (33-byte point) and cts (hex) and construct CiphertextWithCts.
    2. Compute a deterministic handle: handle = append_type(first_16_bytes(SHA256(cts || big_r)), data_type).
      • SHA256 hash is computed by updating hasher with cts string then big_r string.
      • First 16 bytes converted to u128 via little-endian interpretation.
      • Data type appended via append_type to produce final handle.
    3. Sign the CiphertextWithCts hash via tee_sign and forward to Indexer set-ciphertext.
    4. Return the stringified handle.
  • Response (200): handle as JSON string.

Notes

  • big_r must be exactly 33 bytes; invalid sizes are rejected.
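
A sketch of the deterministic handle derivation follows. The SHA-256 and little-endian u128 steps match the description above; the append_type encoding shown is a placeholder, since its exact layout is not specified here.

use sha2::{Digest, Sha256};

fn derive_handle(cts: &str, big_r: &str, data_type: u8) -> u128 {
    // Hash the cts string followed by the big_r string.
    let mut hasher = Sha256::new();
    hasher.update(cts.as_bytes());
    hasher.update(big_r.as_bytes());
    let digest = hasher.finalize();

    // Interpret the first 16 bytes as a little-endian u128.
    let mut first_16 = [0u8; 16];
    first_16.copy_from_slice(&digest[..16]);
    let base = u128::from_le_bytes(first_16);

    // Tag the handle with its data type.
    append_type(base, data_type)
}

// Hypothetical encoding: the real append_type may lay out the type differently.
fn append_type(handle: u128, data_type: u8) -> u128 {
    (handle & !0xff) | data_type as u128
}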

GET /v1/timestamp

  • Returns the current processor time in milliseconds and an ISO string.

GET /v1/health

  • Health document with uptime, features (local-cipher vs KMS), capabilities, and build system info.

Batch Accumulation & Submission

  • Staging: each BatchTree is pushed into batch_tree_db after processing.
  • Periodic job: accumulate_and_send runs in a continuous loop, waiting processor.submitter_submission_interval seconds between iterations (a condensed sketch follows the notes below):
    • Attempts to pop up to BATCH_TREE_LENGTH (constant value 300) trees from batch_tree_db. The loop stops early if the queue is empty before reaching 300 trees.
    • Adds each merkle root into a BN254 accumulator via accumulator.add_member.
    • Produces a 32-byte accumulator root by SHA-256 hashing the textual accumulator field (converted to string via to_string()).
    • Signs the root with tee_sign (signature + recovery id).
    • Builds BatchTreeSubmission { acc, signature, recovery_id, batch_trees, da_submission_id: None } and POSTs to Submitter submit_batch.
    • On success, updates in-memory AccumulatorState by writing the new accumulator.

Notes

  • The da_submission_id field is set to None in the processor; the submitter populates this field after DA submission.
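
In the condensed sketch below, the queue, accumulator, TEE signing, and Submitter client are stubbed out; only the documented steps are reflected (pop up to 300 trees, add each merkle root, SHA-256 the accumulator's string form, sign the root, POST the submission), and the exact message passed to tee_sign is an assumption.

use sha2::{Digest, Sha256};
use std::time::Duration;

const BATCH_TREE_LENGTH: usize = 300;

struct BatchTree { merkle_root: [u8; 32] }

// Stand-ins for QueueDB, the BN254 accumulator, TEE signing, and the Submitter client.
fn pop_batch_tree() -> Option<BatchTree> { unimplemented!() }
fn accumulator_add_member(_root: [u8; 32]) {}
fn accumulator_to_string() -> String { unimplemented!() }
fn tee_sign(_msg: &str) -> (Vec<u8>, u8) { unimplemented!() }
async fn submit_batch(_acc: [u8; 32], _sig: Vec<u8>, _rid: u8, _trees: Vec<BatchTree>) {}

async fn accumulate_and_send(interval_secs: u64) {
    loop {
        // Pop up to BATCH_TREE_LENGTH staged trees; stop early if the queue drains.
        let mut trees = Vec::new();
        while trees.len() < BATCH_TREE_LENGTH {
            match pop_batch_tree() {
                Some(t) => trees.push(t),
                None => break,
            }
        }
        if !trees.is_empty() {
            // Fold each merkle root into the accumulator.
            for t in &trees {
                accumulator_add_member(t.merkle_root);
            }
            // 32-byte accumulator root = SHA-256 of the accumulator's string form.
            let acc: [u8; 32] = Sha256::digest(accumulator_to_string().as_bytes()).into();
            let (signature, recovery_id) = tee_sign(&hex::encode(acc));
            submit_batch(acc, signature, recovery_id, trees).await;
        }
        tokio::time::sleep(Duration::from_secs(interval_secs)).await;
    }
}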

Indexer Update

  • Immediate update: after constructing a BatchTree, the processor invokes an Indexer update.
  • Payload: SetTreeAndLeavesRequest containing leaf_hashes, merkle_root, signature (TEE signature over root), and batch_requests with each PreComputedCiphertextRequest (ciphertext, hash, cts, resultant handle, leaf hash, index, original request).
  • Mode:
    • Debug/dev: write each resultant handle → ciphertext to local Sled dev_db.
    • Production: POST to Indexer’s set-batch-tree route.
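
The payload shape described above might look roughly like the following; the field types are assumptions for illustration, and the real structs live in the shared types.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct SetTreeAndLeavesRequest {
    leaf_hashes: Vec<String>,
    merkle_root: String,
    signature: String, // TEE signature over the root
    batch_requests: Vec<PreComputedCiphertextRequest>,
}

#[derive(Serialize, Deserialize)]
struct PreComputedCiphertextRequest {
    ciphertext: String,
    hash: String,
    cts: String,
    resultant_handle: String,
    leaf_hash: String,
    index: u64,
    original_request: serde_json::Value, // the request the result came from
}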

Server

  • Framework: warp.
  • Two services on separate ports from config:
    • API service: process-batch, re-encrypt, timestamp, health, DB utilities.
    • Encryption service: store-ciphertext, health.
  • Binding: listens on 127.0.0.1:<processor.processor_service_port> and 127.0.0.1:<processor.encryption_service_port>.
  • Durability: periodic DB flush task runs every 30 seconds across batch_tree_db, indexer_db, and kv_db.
  • Graceful shutdown: flushes all databases before exit.
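
The durability flush can be sketched as a simple periodic task; the sketch assumes the three Sled handles are cloned into the task and that flush errors are logged and skipped, as described under Error Handling.

use sled::Db;
use std::time::Duration;

async fn periodic_flush(batch_tree_db: Db, indexer_db: Db, kv_db: Db) {
    let mut ticker = tokio::time::interval(Duration::from_secs(30));
    loop {
        ticker.tick().await;
        // Log and continue on per-DB errors rather than halting the service.
        for (name, db) in [("batch_tree_db", &batch_tree_db), ("indexer_db", &indexer_db), ("kv_db", &kv_db)] {
            if let Err(e) = db.flush() {
                eprintln!("flush failed for {name}: {e}");
            }
        }
    }
}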

Public Interfaces

HTTP

  • API service routes:
    • POST /v1/process-batch — body: JSON → BatchData.
    • POST /v1/re-encrypt — body: JSON → BatchDecryptRequest.
    • GET /v1/timestamp, GET /v1/health.
  • Encryption service routes:
    • POST /v1/store-ciphertext — body: JSON → CiphertextStorageRequest.
    • GET /v1/health.

KMS/cipher

  • Trait: Cipher (encrypt, decrypt, decrypt_batch) implemented by local Threshold ElGamal (test) and distributed KMS client.
  • Pooling: PooledKMSClient is a smart pointer that returns its client to the pool via RAII, reducing per-request overhead (default pool size 5).
    • Pool initialized once via lazy_static! and OnceLock pattern.
    • PooledKMSClient implements Cipher trait via delegation to inner KMSClient.
    • Automatic cleanup on drop returns client to pool or frees it if pool is full.
  • Mode detection:
    • Local test mode: uses get_kms_client_local() which returns a static ThresholdElgamalCipher stored in OnceLock.
    • Production mode: uses get_pooled_kms_client with configured production_peers.
  • Legacy fallback: if production peers are not found in config, the processor falls back to loading a legacy client JSON from CONFIG_FILE (default config.json).
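
A rough sketch of the Cipher abstraction and the RAII pool return is shown below. Method signatures, the pool type, and the hard-coded capacity check are illustrative assumptions; the real PooledKMSClient also implements Cipher by delegating to its inner KMSClient.

trait Cipher {
    fn encrypt(&self, plaintext: &[u8]) -> Vec<u8>;
    fn decrypt(&self, ciphertext: &[u8]) -> Vec<u8>;
    fn decrypt_batch(&self, ciphertexts: &[Vec<u8>]) -> Vec<Vec<u8>>;
}

struct KMSClient;

struct PooledKMSClient {
    inner: Option<KMSClient>,
    pool: std::sync::Arc<std::sync::Mutex<Vec<KMSClient>>>, // assumed capacity 5
}

impl Drop for PooledKMSClient {
    // On drop, return the client to the pool, or discard it if the pool is full.
    fn drop(&mut self) {
        if let Some(client) = self.inner.take() {
            let mut pool = self.pool.lock().unwrap();
            if pool.len() < 5 {
                pool.push(client);
            }
        }
    }
}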

Indexer integration

  • Local utilities: local_fetch_ciphertext, local_fetch_multiple_ciphertexts are internal helper functions for debug mode that read from dev_db (not exposed API methods).
  • Push: set-ciphertext and set-batch-tree via send_to_indexer module.

Submitter integration

  • Accumulator job posts BatchTreeSubmission to Submitter /v1/submit_batch; retries are handled by the Submitter.

TEE signing

  • tee::sign::tee_sign(&str) -> (Vec<u8>, u8) returns signature bytes and recovery id. Used for batch hashes, merkle roots, and accumulator roots.

Storage

  • QueueDB (Sled) under batch_tree_db and indexer_db.
  • KvDB (Sled) under kv_db for ACL and app state.
  • cache_db (Sled): opened by the Cache struct; the primary cache is an in-memory HashMap of handle → plaintext mappings that avoids repeated decrypts during processing, with Sled providing persistence.
  • DB directories are created under the working directory: batch_tree_db/, indexer_db/, kv_db/, and cache_db/.
  • Debug mode: additional dev_db (Sled) stores ciphertexts for local testing without running a separate indexer service.

Configuration

Processor (config.toml)

[processor]
processor_service_port = 8081
encryption_service_port = 8083
host = "127.0.0.1"
submitter_submission_interval = 300
indexer_submission_interval = 300
indexer_url = "http://127.0.0.1:8080"
submitter_url = "http://127.0.0.1:8082"

  [processor.backup]
  enabled = false
  interval_seconds = 0
  prefix = ""
  restore_on_boot = false

Threshold KMS

[threshold_kms]
mode = "local-test" # or "production"

  # Only for production
  [[threshold_kms.production_peers]]
  id = 1
  ip = "10.0.0.2"
  rpc_port = 7000

  [[threshold_kms.production_peers]]
  id = 2
  ip = "10.0.0.3"
  rpc_port = 7000

Local test short‑circuits to an in‑process cipher. Production creates pooled KMS clients using the configured peers.

Indexer

[indexer]
url = "http://127.0.0.1:8080"
port = 8080

Submitter

[submitter]
port = 8082

The processor uses processor.submitter_url for HTTP calls to submitter.

Chain context

[chain]
chain_id = 84532
acl_address = "0x..."

TEE & Marlin

[tee]
mock_signature = true
# app_id, image_variant, domain_string, build_hash, image_id, kms_url are optional

Official deployment mounts init‑params at /init-params; see officialDocs/notionDocs.

Environment

  • Feature local-cipher switches to local encryption/decryption for development and tests.
    • When enabled: uses in-process ThresholdElgamalCipher initialized once via OnceLock.
    • When disabled: uses distributed KMS clients from pool with configured production peers.
  • Configuration loading: CLI arguments for chainid and acl_address are parsed but overridden by values from config.toml.

Error Handling

  • HTTP returns warp::reject::custom(..) with a concise message. Custom error types: ApiCustomError, HexDecodeError, plus structured validation errors for the re-encryption route.
  • Re-encryption returns 400 with JSON error details for signature/session failures.
  • DB snapshot/restore validates payloads, uses tar.gz with safe path handling and atomic rename.
  • Periodic DB flush logs errors per DB and continues without halting the service.
  • Invalid inputs are rejected early (e.g., big_r byte length validation, signature verification failures).
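
For illustration, custom rejections under warp can be defined as below; the error type names match those listed above, while the payloads and the helper are assumptions.

use warp::reject::Reject;

#[derive(Debug)]
struct ApiCustomError(String);
impl Reject for ApiCustomError {}

#[derive(Debug)]
struct HexDecodeError(String);
impl Reject for HexDecodeError {}

// Build a rejection carrying a concise message for the client.
fn bail(msg: &str) -> warp::Rejection {
    warp::reject::custom(ApiCustomError(msg.to_string()))
}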

Tests

  • Unit tests (processor/tests)
    • Arithmetic, comparison, bitwise, complex ops
    • Anon transfer, confidential↔anon flows, decrypt ops
    • Verify ciphertext and set access
  • E2E test (final_e2e_test.rs, local-cipher feature)
    • Verifies services up, TEE signing, Indexer store/fetch, process-batch, timestamp, and mocked DA + Solana submission via chain client.
  • Re‑encryption validation tests
    • Payload structure, Ed25519 signature, hash determinism, end‑to‑end validation pipeline.

Notes

  • No WebSocket interface; all routes are HTTP under /v1.
  • Public keys and request encodings
    • scalar_byte usage follows Core's conventions; batch decryption uses scalar bytes of zero by default.
  • Attestation
    • The processor does not perform attestation verification itself; signing and enclave-bound secrets are managed by the TEE runtime.
  • Merkle tree construction
    • All batch trees use a fixed depth of 4 levels (hardcoded in get_batch_tree_from_batch).
  • Cache architecture
    • In-memory HashMap stores intermediate plaintext results during batch processing.
    • Sled cache_db provides persistence backing for the cache.
  • Debug mode differences
    • Uses local dev_db for ciphertext storage instead of calling indexer HTTP endpoints.
    • Enables immediate indexer updates after batch tree construction.