Verification guide

From Exchange API to Public Registry — The NakedPnL Data Pipeline

Walkthrough of the daily snapshot pipeline: cron schedule, venue adapters, canonicalization, hash chain writes, and Bitcoin anchoring via OpenTimestamps.

By NakedPnL Research · May 7, 2026 · 17 min read
TL;DR
  • The NakedPnL pipeline runs on a strict UTC schedule: 23:55 daily snapshot, 00:05 OTS submit, 00:30 OTS upgrade.
  • Each row in the public registry is the output of a deterministic chain: fetch raw response, canonicalize, contentHash, chainHash, persist.
  • A daily Merkle tree is built over every public entity's chain head and the root is submitted to OpenTimestamps calendar servers.
  • Once Bitcoin includes the calendar's commitment in a block, the OTS proof transitions from PENDING to UPGRADED and btcBlockHeight is populated.
  • Every step is reproducible from raw inputs. There is no proprietary state that could not be reconstructed by an external auditor.
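The deterministic chain in the second bullet can be reproduced in a few lines. This is a standalone sketch, not the production module — the minimal key-sorting canonicalizer here stands in for the full RFC 8785 implementation described later:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) =>
  createHash("sha256").update(s, "utf8").digest("hex");

// Minimal canonicalizer: sorted keys, no whitespace. A stand-in for the
// RFC 8785 implementation in lib/calculation/audit-hash.ts.
function canonicalize(v: unknown): string {
  if (v === null || typeof v !== "object") return JSON.stringify(v);
  if (Array.isArray(v)) return "[" + v.map(canonicalize).join(",") + "]";
  const o = v as Record<string, unknown>;
  return (
    "{" +
    Object.keys(o)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize(o[k]))
      .join(",") +
    "}"
  );
}

// One chain link, exactly as defined in this guide:
// contentHash = SHA-256(canonical JSON); chainHash = SHA-256(prev || contentHash).
const raw = { balances: [{ asset: "BTC", free: "0.5", locked: "0" }] };
const contentHash = sha256(canonicalize(raw));
const chainHash = sha256("genesis" + contentHash); // first row: previous = 'genesis'
console.log(contentHash, chainHash);
```

Running the same two hashes on any machine against the same raw object yields identical hex strings, which is the entire reproducibility claim in miniature.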
On this page
  1. Pipeline at a glance
  2. Step 1 — The 23:55 UTC snapshot cron
  3. Step 2 — Fetch the raw venue response
  4. Step 3 — Compute NAV in USD with Decimal.js
  5. Step 4 — Canonicalize and compute contentHash
  6. Step 5 — Read the previous chain head, compute chainHash
  7. Step 6 — Build the daily Merkle tree at 00:05 UTC
  8. Step 7 — Submit the Merkle root to OpenTimestamps
  9. Step 8 — Upgrade to a Bitcoin-anchored proof at 00:30 UTC
  10. Step 9 — Public exposure
  11. End-to-end timing example
  12. Why each step is the way it is
  13. What an external auditor needs to reproduce a single day
  14. Frequently asked questions

Every row that appears on a NakedPnL trader profile is the output of a deterministic data pipeline. The pipeline begins with a venue API response and ends with a row in NavSnapshot, a leaf in a daily Merkle tree, and an OpenTimestamps receipt that anchors that day's chain heads to Bitcoin. This guide walks through every step in execution order, explains the choices that make each step independently verifiable, and shows the actual schedule the production system runs.

The design goal is that an external auditor — given only the published chain JSON, the public OTS bundle, and the venue-specific replay credentials — can reproduce every contentHash and chainHash from scratch and confirm every Bitcoin attestation. Nothing in the pipeline depends on proprietary state that NakedPnL alone holds.

Pipeline at a glance

UTC time   | Step                  | Outcome
23:55      | Daily snapshot cron   | One NavSnapshot per active trader, contentHash + chainHash committed.
00:00      | Day-rollover boundary | Snapshots written before this time count as the previous day.
00:05      | OTS anchor cron       | Merkle tree built over previous-day chain heads; root submitted to OpenTimestamps.
00:30      | OTS upgrade cron      | Attempt Bitcoin attestation upgrade for any PENDING proofs older than ~6 hours.
Continuous | Hourly retry workers  | Snapshot retries for venue failures; OTS upgrade retries until success or the 7-day failure cutoff.

Daily UTC schedule of the NakedPnL data pipeline. Each step is idempotent and replay-safe.

Step 1 — The 23:55 UTC snapshot cron

Vercel cron triggers app/api/cron/daily-snapshot at 23:55 UTC. The route iterates active CryptoTrader rows whose user.isPublic flag is true, partitions them by venue, and dispatches venue-specific worker tasks. Each worker decrypts the trader's credential, calls the appropriate venue adapter, and produces a single NavSnapshot record.

The cron is region-pinned in vercel.json. Binance-touching workers run from fra1 (Frankfurt) because Binance blocks US IPs; IBKR Flex Web Service workers run from the default Vercel region because IBKR has no geofence. Every worker runs with a 60-second timeout and exponential backoff. If a worker fails three times, the failure is recorded as a SnapshotFailure row and the trader is flagged for the STALE state machine described in the API key rotation guide.
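Declaratively, the schedule lives in vercel.json. A hedged sketch of what the cron section might look like — the paths and times come from this guide, while the top-level regions key is an assumption about how the Frankfurt pinning is expressed:

```json
{
  "crons": [
    { "path": "/api/cron/daily-snapshot", "schedule": "55 23 * * *" },
    { "path": "/api/cron/ots-anchor",     "schedule": "5 0 * * *" },
    { "path": "/api/cron/ots-upgrade",    "schedule": "30 0 * * *" }
  ],
  "regions": ["fra1"]
}
```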

// app/api/cron/daily-snapshot/route.ts (simplified)
export async function GET(req: Request) {
  await assertVercelCron(req);
  const traders = await prisma.cryptoTrader.findMany({
    where: { status: "ACTIVE", user: { isPublic: true } },
    include: { credential: true },
  });

  const results = await Promise.allSettled(
    traders.map((t) =>
      withRetry({ attempts: 3, backoffMs: [1000, 5000, 30000] }, () =>
        snapshotForTrader(t),
      ),
    ),
  );

  const failures = results.filter((r) => r.status === "rejected");
  if (failures.length > 0) await notifyOps({ failures, when: new Date() });
  return Response.json({ ok: true, snapshots: results.length - failures.length });
}
Simplified daily-snapshot cron handler.

Step 2 — Fetch the raw venue response

The venue adapter is the only piece of code that knows venue-specific authentication and endpoint structure. There is one adapter per venue: lib/adapters/binance.ts, lib/adapters/bybit.ts, lib/adapters/okx.ts, lib/adapters/ibkr.ts, plus the prediction-market clients in lib/prediction-markets/. Each adapter exposes a uniform fetchBalance method that returns a venue-specific raw response object plus the request fingerprint (method, URL, signed nonce or timestamp).

The raw response is the unit of verification. Every contentHash in the registry is computed over a canonicalization of one of these objects. The adapter never strips fields, never reorders arrays, and never normalizes numeric formats. The only transformation is decoding the wire format (JSON for Binance/Bybit/OKX, XML for IBKR Flex, GraphQL response for Polymarket subgraph) into a canonical in-memory object.

// lib/adapters/binance.ts
export async function fetchBalance(cred: BinanceCredential) {
  const ts = Date.now();
  const query = `omitZeroBalances=true&recvWindow=5000&timestamp=${ts}`;
  const sig = hmacSha256(cred.secretKey, query);
  const url = `https://api.binance.com/api/v3/account?${query}&signature=${sig}`;

  const res = await fetch(url, {
    headers: { "X-MBX-APIKEY": cred.apiKey },
    cache: "no-store",
  });
  if (!res.ok) throw classifyBinanceError(res);

  const raw = await res.json();   // <-- decoded object that gets canonicalized and hashed
  return {
    raw,
    fingerprint: { method: "GET", host: "api.binance.com", ts, recvWindow: 5000 },
  };
}
Binance adapter returns the raw account snapshot plus the request fingerprint.

Step 3 — Compute NAV in USD with Decimal.js

The raw response contains balances in venue-native units — BTC, USDT, USD, EUR, GBP, multiple coin lots, optionally derivative margin balances. The TWR engine in lib/calculation/twr-engine.ts converts these to a single USD-denominated NAV using the same midnight-UTC reference price snapshot for every trader on a given date. Reference prices come from a venue-independent source (Coinbase Prime spot for crypto, IBKR market-data for traditional FX/equities) so a single trader's USD NAV does not depend on the venue's own quote.

All arithmetic in this step uses Decimal.js, not JavaScript Number. Floating-point errors in NAV computation would create non-deterministic chain hashes — two runs of the cron from different machines could produce different bit patterns for the same input. Decimal.js gives byte-stable results across runtimes, which is required for the chain to reproduce on a verifier's laptop.

import Decimal from "decimal.js";
Decimal.set({ precision: 40, rounding: Decimal.ROUND_HALF_EVEN });

function deriveNavUsd(raw: BinanceAccount, fx: PriceMap): Decimal {
  const components = raw.balances.flatMap((b) => {
    const amount = new Decimal(b.free).plus(b.locked);
    if (amount.isZero()) return [];
    const price = fx[b.asset]; // venue-independent USD reference price (Decimal)
    if (price === undefined) throw new Error(`no reference price for ${b.asset}`);
    return [{ asset: b.asset, amount, usd: amount.times(price) }];
  });
  const navUsd = components.reduce(
    (sum, c) => sum.plus(c.usd),
    new Decimal(0),
  );
  return navUsd; // 40-digit precision, deterministic
}
Deterministic NAV computation with Decimal.js precision.

Step 4 — Canonicalize and compute contentHash

Canonicalization is the reason any two systems looking at the same raw response can compute the same contentHash. JavaScript's JSON.stringify does not guarantee key order, does not have a stable representation for floats, and does not specify whitespace. Direct hashing of JSON.stringify output would be non-reproducible.

The canonicalize function in lib/calculation/audit-hash.ts implements RFC 8785 (JSON Canonicalization Scheme): sort object keys lexicographically by code-point, format numbers with the JSON-canonical IEEE 754 form, escape strings minimally, and produce UTF-8 encoded bytes with no insignificant whitespace. The contentHash is SHA-256 of those bytes, hex-encoded.

// lib/calculation/audit-hash.ts (simplified)
import { createHash } from "node:crypto";

export function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return "[" + value.map(canonicalize).join(",") + "]";
  }
  const keys = Object.keys(value as object).sort();
  return (
    "{" +
    keys
      .map((k) => JSON.stringify(k) + ":" + canonicalize((value as any)[k]))
      .join(",") +
    "}"
  );
}

export function contentHash(raw: unknown): string {
  const canonical = canonicalize(raw);
  return createHash("sha256").update(canonical, "utf8").digest("hex");
}
Canonicalization and content hashing.

Step 5 — Read the previous chain head, compute chainHash

The previous chain head is the most recent NavSnapshot row for the same trader, ordered by sequence descending. The cron reads it inside the same Postgres transaction that will write the new row, with the trader's chain locked at SERIALIZABLE isolation. This serialization is necessary because two simultaneous cron retries could otherwise both read the same head and both attempt to write sequence N+1 — exactly the race condition the founding-seat allocator solves at the venue-tier level.

If no previous row exists the trader is at genesis. The previous chain hash is the literal ASCII string 'genesis'. The chainHash for the new row is SHA-256 of (previousChainHash || contentHash) where || is byte concatenation.

await prisma.$transaction(async (tx) => {
  const previous = await tx.navSnapshot.findFirst({
    where: { traderId: trader.id },
    orderBy: { sequence: "desc" },
    select: { sequence: true, chainHash: true },
  });

  const previousChainHash = previous?.chainHash ?? "genesis";
  const sequence = (previous?.sequence ?? -1) + 1;
  const chainHash = sha256(previousChainHash + contentHash);

  await tx.navSnapshot.create({
    data: {
      traderId: trader.id,
      sequence,
      snapshotDate: when,
      navUsd: navUsd.toFixed(8),
      contentHash,
      chainHash,
      previousHash: previousChainHash,
      rawResponseDigest: sha256(JSON.stringify(raw.fingerprint)),
      credentialFingerprint: cred.keyFingerprint,
    },
  });
}, { isolationLevel: "Serializable" });
Atomic write of the next chain row.

Step 6 — Build the daily Merkle tree at 00:05 UTC

Once the previous calendar day is closed, the cron at app/api/cron/ots-anchor reads the chainHash of every public entity's most recent snapshot from that day. The leaf set is sorted by traderId for determinism. The Merkle tree is binary, balanced (with duplicated last leaf if odd count), and uses SHA-256 as the inner hash function. The root is a single 32-byte hash that commits to every chain head as it existed at the day boundary.

// lib/ots/merkle.ts (simplified)
function pairwiseHash(layer: string[]): string[] {
  const out: string[] = [];
  for (let i = 0; i < layer.length; i += 2) {
    const left = layer[i];
    const right = layer[i + 1] ?? left; // duplicate last if odd
    out.push(sha256(left + right));
  }
  return out;
}

export function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("empty leaf set");
  let layer = [...leaves].sort(); // deterministic order
  while (layer.length > 1) layer = pairwiseHash(layer);
  return layer[0];
}
Merkle root from a sorted list of leaves.

Step 7 — Submit the Merkle root to OpenTimestamps

The Merkle root is a 32-byte hash. lib/ots/anchor.ts submits it to multiple OpenTimestamps calendar servers (alice.btc.calendar.opentimestamps.org, bob.btc.calendar.opentimestamps.org, finney.calendar.eternitywall.com). Each calendar acknowledges the submission and returns a partial proof. NakedPnL persists all partial proofs as an OpenTimestampsAnchor row with status=PENDING, dailyMerkleRoot, and the array of calendar server URLs and partial proofs.

PENDING means the calendar has accepted the commitment but Bitcoin has not yet confirmed it. Calendars batch submissions and write to Bitcoin every few hours. The partial proof in this state contains the calendar's local commitment but no Bitcoin block reference yet.
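The submission itself is small. Below is a hedged sketch of what lib/ots/anchor.ts might do, assuming the POST /digest convention used by the reference OpenTimestamps clients — the endpoint path, header, and response handling here are assumptions, not the production code:

```typescript
// Assumed calendar endpoints, as listed in this guide.
const CALENDARS = [
  "https://alice.btc.calendar.opentimestamps.org",
  "https://bob.btc.calendar.opentimestamps.org",
  "https://finney.calendar.eternitywall.com",
];

// Submit a 32-byte Merkle root to every calendar; collect the partial proofs.
async function submitRoot(merkleRootHex: string) {
  const digest = Buffer.from(merkleRootHex, "hex"); // 32 raw bytes
  const results = await Promise.allSettled(
    CALENDARS.map(async (base) => {
      const res = await fetch(`${base}/digest`, {
        method: "POST",
        headers: { "Content-Type": "application/octet-stream" },
        body: digest,
      });
      if (!res.ok) throw new Error(`${base}: HTTP ${res.status}`);
      // Partial proof: the calendar's local commitment, no Bitcoin block yet.
      return { calendar: base, proof: Buffer.from(await res.arrayBuffer()) };
    }),
  );
  // Any subset of successful calendars is enough; failures are tolerated.
  return results.flatMap((r) => (r.status === "fulfilled" ? [r.value] : []));
}
```

The Promise.allSettled shape matters: a single unreachable calendar must not block the anchor, because any one surviving partial proof can later be upgraded.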

Step 8 — Upgrade to a Bitcoin-anchored proof at 00:30 UTC

The 00:30 UTC cron at app/api/cron/ots-upgrade iterates every PENDING anchor and asks each calendar server whether the commitment has been included in a Bitcoin block yet. When at least one calendar returns a complete proof — a Merkle path from the partial commitment down through the calendar's aggregation tree, into a Bitcoin transaction, into a Bitcoin block header — the anchor row is updated with status=UPGRADED, btcBlockHeight, btcBlockHash, and the serialized full proof.

Bitcoin attestations are not instant. Calendars typically write to Bitcoin every one to six hours, and the resulting transactions then need a few confirmations before the calendar exposes the upgraded proof. In practice the median time from PENDING to UPGRADED is about four hours; the 95th percentile is around twelve hours. The cron retries every hour for seven days. After seven days a still-PENDING anchor is marked FAILED and the date is re-anchored with a new Merkle root submission to a fresh batch of calendars.
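A hedged sketch of the per-calendar poll, assuming the GET /timestamp/&lt;commitment-hex&gt; convention of the reference OpenTimestamps clients — the 404-means-pending mapping and the function shape are assumptions about the production handler:

```typescript
// Poll one calendar for a completed proof of a previously submitted commitment.
async function tryUpgrade(calendar: string, commitmentHex: string) {
  const res = await fetch(`${calendar}/timestamp/${commitmentHex}`);
  if (res.status === 404) return { status: "PENDING" as const }; // not on Bitcoin yet
  if (!res.ok) throw new Error(`${calendar}: HTTP ${res.status}`);
  const fullProof = Buffer.from(await res.arrayBuffer());
  // Caller parses the proof, extracts the Bitcoin block reference,
  // cross-checks it against the internal node, then flips the row to UPGRADED.
  return { status: "UPGRADED" as const, fullProof };
}
```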

Once UPGRADED, the date is anchored forever
An UPGRADED proof contains a complete Merkle path from the daily root down to a specific Bitcoin block header. Anyone holding the proof and a Bitcoin node (or a public block explorer) can verify that the daily Merkle root existed before that block was mined. This evidence does not depend on NakedPnL continuing to operate — the proof is self-contained.

Step 9 — Public exposure

Two endpoints make the pipeline output publicly verifiable. /verify/chain/[handle].json returns the full ordered chain for a single trader; the verification tutorial elsewhere on this site walks through how to consume it. /api/verify/[date] returns the daily anchor bundle: Merkle root, status (PENDING / UPGRADED / FAILED), Bitcoin block height when upgraded, the array of public entity heads covered, and the serialized .ots receipt.

The receipt encoded in the response is the same .ots file an OpenTimestamps client expects. A verifier can pipe it through the official ots verify command, point to a Bitcoin node, and confirm the attestation entirely independently of NakedPnL. There is no proprietary endpoint required.
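As a concrete illustration, an external verifier can replay the chain rule entirely offline. A sketch assuming /verify/chain/[handle].json yields an ordered array of rows carrying the sequence, contentHash, chainHash, and previousHash fields described in this guide (the exact JSON field names are an assumption):

```typescript
import { createHash } from "node:crypto";

type ChainRow = {
  sequence: number;
  contentHash: string;
  chainHash: string;
  previousHash: string;
};

const sha256 = (s: string) =>
  createHash("sha256").update(s, "utf8").digest("hex");

// Walk the chain from genesis, re-deriving every chainHash.
function verifyChain(rows: ChainRow[]): boolean {
  let previous = "genesis";
  for (let i = 0; i < rows.length; i++) {
    const row = rows[i];
    if (row.sequence !== i) return false;            // gap or reorder
    if (row.previousHash !== previous) return false; // broken linkage
    if (row.chainHash !== sha256(previous + row.contentHash)) return false;
    previous = row.chainHash;
  }
  return true;
}
```

Tampering with any historical contentHash invalidates every subsequent chainHash, which is what makes the published chain self-auditing.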

End-to-end timing example

Time (UTC)          | Event                                                 | State
2026-05-06 23:55:02 | Cron fetches Binance balance for trader cpx-trader    | Raw response collected
2026-05-06 23:55:03 | Canonicalize, compute contentHash                     | 9f4c...e2a1
2026-05-06 23:55:04 | Read previous chain head, compute chainHash           | 1aa9...b7d0
2026-05-06 23:55:04 | INSERT NavSnapshot at sequence 365                    | Row committed
2026-05-07 00:05:18 | Build daily Merkle tree over 1,247 entity heads       | Root 5fe1...c9b2
2026-05-07 00:05:21 | Submit root to 3 OpenTimestamps calendars             | Anchor row PENDING
2026-05-07 04:33:42 | ots-upgrade cron retrieves complete proof             | Anchor row UPGRADED, btcBlockHeight=893421
2026-05-07 04:34:00 | Public /api/verify/2026-05-06 returns UPGRADED bundle | Verifiable forever

Calendar trace of a single trader's snapshot from collection to Bitcoin attestation.

Why each step is the way it is

  • Decimal.js precision: floating-point arithmetic is not deterministic across runtimes; the chain hashes must reproduce bit-for-bit on a verifier's laptop.
  • RFC 8785 canonicalization: JSON has no canonical form by default; without RFC 8785 two correct serializations of the same object can produce different hashes.
  • Serializable Postgres isolation: two concurrent cron retries could otherwise both attempt to write sequence N+1, breaking chain ordering.
  • Merkle tree per day rather than per-trader: a single OpenTimestamps submission per day amortizes the Bitcoin attestation cost across all public entities at no loss of provability.
  • Multiple OpenTimestamps calendars: redundancy. If one calendar disappears, the proof is still upgradeable through the others.
  • Seven-day failure window with re-anchor: balances forensic patience (Bitcoin can be slow during mempool congestion) with a hard upper bound on how long a date can stay un-anchored.
Verification, not investment recommendation
The pipeline produces verified data. It does not characterize risk, predict future returns, or substitute for a regulated adviser. NakedPnL is a publisher; the registry is a due-diligence input alongside operational, regulatory, and counterparty diligence.

What an external auditor needs to reproduce a single day

  1. The trader's read-only API credential (or a snapshot of the raw venue response from that date — the trader's choice of evidence).
  2. The venue adapter source code (open in lib/adapters/* and lib/prediction-markets/*).
  3. The canonicalization function (open in lib/calculation/audit-hash.ts).
  4. The previous chain hash for that trader (read from /verify/chain/[handle].json).
  5. The .ots receipt for the relevant date (read from /api/verify/[date]).

With those five inputs an auditor can re-run steps 2 through 5, recompute the contentHash and chainHash, and confirm the OTS attestation through any standard OpenTimestamps client. No piece of the pipeline depends on NakedPnL-only state.
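For the Merkle side, the auditor also needs an inclusion check from their chain head up to the published daily root. A sketch under the assumption that a sibling path (one hash plus a side per level) can be obtained from the anchor bundle or recomputed from the full leaf set — the ProofStep shape is hypothetical, but the hashing rule matches the tree construction earlier in this guide (parent = SHA-256 of left concatenated with right, over hex strings):

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) =>
  createHash("sha256").update(s, "utf8").digest("hex");

// Hypothetical proof format: sibling hash plus which side it sits on.
type ProofStep = { sibling: string; side: "left" | "right" };

// Fold the leaf up through its sibling path and compare against the root.
function verifyInclusion(
  leaf: string,       // the trader's chainHash for the day
  proof: ProofStep[], // sibling path from leaf to root
  root: string,       // dailyMerkleRoot from /api/verify/[date]
): boolean {
  let node = leaf;
  for (const { sibling, side } of proof) {
    node = side === "left" ? sha256(sibling + node) : sha256(node + sibling);
  }
  return node === root;
}
```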

Frequently asked questions

What happens if a venue API is down at 23:55 UTC?
The snapshot worker retries with exponential backoff up to three times. If all three attempts fail, a SnapshotFailure row is recorded and the snapshot is retried hourly by the retry workers. After 72 hours of consecutive failures the trader transitions to the STALE state and its registry rank is suspended. No synthetic NAV is ever written.
Why are there two crons (00:05 anchor and 00:30 upgrade) instead of one?
OpenTimestamps calendars batch submissions to Bitcoin. A submission at 00:05 UTC is not yet anchored to a block — the calendar holds it for a few hours before writing to Bitcoin. The upgrade cron polls calendars to retrieve the complete proof once Bitcoin confirms it. Splitting the two steps lets the system anchor immediately at day rollover and upgrade asynchronously when Bitcoin is ready.
Does NakedPnL run its own Bitcoin node to verify upgrades?
Yes. The OTS upgrade cron cross-checks the Bitcoin block referenced in each calendar's proof against an internal Bitcoin Core node. If the calendar's claimed block header does not match the node's view, the upgrade is rejected and the proof stays PENDING. Verifiers running their own ots verify command perform the same check independently.
What if two cron instances run simultaneously by mistake?
Each snapshot write happens inside a SERIALIZABLE Postgres transaction that locks the trader's chain head. The losing transaction sees a serialization error and aborts. At the OTS submission layer, the daily Merkle root is keyed by snapshot date, and the upsert is idempotent — two simultaneous submissions produce one anchor row, not two.
How does the pipeline handle a venue's response format changing?
The venue adapter is the single point of contact with venue-specific format. When a venue ships a breaking change (Binance moving from HMAC to Ed25519, for example), only the adapter changes. Existing chain rows are unaffected because each row's contentHash was computed over the response shape that existed at the time of writing.
Is the pipeline open source?
The cryptographic core — the canonicalization rules in lib/calculation/audit-hash.ts and the Merkle/OTS code in lib/ots/* — is documented in detail at /docs/verification with reference Python and JavaScript implementations. An external party can build a complete independent verifier from that documentation alone, which is the correct definition of openness for a verification system.
What if OpenTimestamps shut down tomorrow?
Existing UPGRADED proofs already contain the full Merkle path to a Bitcoin block header. They are self-contained and remain verifiable as long as the Bitcoin ledger exists. NakedPnL would re-implement the calendar interaction layer against a successor service for new dates, but no historical date would lose its attestation.

References

  • RFC 8785 — JSON Canonicalization Scheme (JCS)
  • OpenTimestamps protocol specification
  • Bitcoin Core — block header verification
  • Decimal.js — arbitrary-precision arithmetic for JavaScript
  • NakedPnL verification algorithm reference
NakedPnL is a publisher of verified investment performance data. We are not an investment adviser, broker, dealer, or asset manager, and nothing on this page constitutes investment advice or a recommendation. See the compliance page for our full regulatory posture.