Architecture Overview

This is the high-level system map of TraderTape. It explains what the major components are, how they communicate, and where data lives.

This document is for users who want to understand what's going on under the hood -- not for building your own version. If you're looking for deployment instructions, refer to the quickstart guides.

Components

TraderTape has four main components:

  1. The cloud backend -- a FastAPI service hosting the database, REST API, scanner, multi-broker proxy, ticker, and Telegram bot. Always-on, and the single source of truth for strategies, signals, and audit history.
  2. The cloud frontend -- a Next.js application serving the UI. Talks to the backend via REST. Renders dashboards with per-broker tabs, the strategy editor, the backtest UI, and the docs site. Also hosts the Browser Login flow, where broker API secrets are hashed locally via the Web Crypto API.
  3. The local agent -- an optional Python program for advanced users who need trade data obfuscation, headless operation, or SEBI-compliant immediate orders. Most users don't need it.
  4. The Telegram bot -- handled by the cloud backend, it exposes a phone-friendly interface for approving signals, checking positions, and re-logging into brokers.

Multi-broker support

The cloud backend supports three brokers -- Zerodha Kite, Upstox, and Groww -- all via Browser Login. The browser holds the API secret and computes a one-way checksum; the cloud forwards that checksum to the broker and stores only the resulting short-lived access token (~14 hours).
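In Python terms, the checksum step looks roughly like this. Kite Connect's token exchange, for instance, uses SHA-256 over api_key + request_token + api_secret; in the Browser Login flow the browser computes the same digest with the Web Crypto API, so the secret never leaves the user's machine. The function name is illustrative:

```python
import hashlib

def login_checksum(api_key: str, request_token: str, api_secret: str) -> str:
    """One-way checksum sent to the broker in place of the raw secret.

    Kite Connect's token exchange, for example, expects the hex digest of
    SHA-256(api_key + request_token + api_secret). Only the digest -- not
    the secret -- ever reaches the cloud.
    """
    payload = (api_key + request_token + api_secret).encode()
    return hashlib.sha256(payload).hexdigest()
```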

The local agent is optional for all three brokers. No install is required for basic use.

Data flow: a typical scan

┌────────────────────────────────────────────────────────────┐
│                      MARKET CLOSE                          │
│                      (15:30 IST)                           │
└────────────────────────────────────────────────────────────┘
                          │
                          ▼
┌────────────────────────────────────────────────────────────┐
│  Cloud backend: scheduled scan job (15:35 IST)             │
│    1. Refresh daily candles (Kite or Yahoo)                │
│    2. Re-validate approved GTTs                            │
│    3. Run DSL strategy on every active portfolio           │
│    4. Generate entry / exit / addon signals                │
│    5. Score signals by conviction                          │
│    6. For paper portfolios: fill virtually, mark executed  │
│    7. For live + auto-place: queue for GTT placement       │
│    8. For live + manual: leave in pending_approval         │
└────────────────────────────────────────────────────────────┘
                          │
                          ├──────────────────────────────────┐
                          ▼                                  ▼
┌─────────────────────────────────┐    ┌─────────────────────────────────┐
│  Telegram bot                   │    │  Frontend dashboard             │
│  Notify user of new signals     │    │  Render pending signals         │
│  Inline approve/reject buttons  │    │  User reviews, approves, places │
└─────────────────────────────────┘    └─────────────────────────────────┘
                          │                                  │
                          └──────────────┬───────────────────┘
                                         ▼
┌────────────────────────────────────────────────────────────┐
│  GTT placement                                             │
│    Cloud → Kite API directly (Path A)                      │
│    OR                                                      │
│    Cloud queues signal → Local agent polls →               │
│    Agent places GTT on broker (Path B)                     │
└────────────────────────────────────────────────────────────┘
                          │
                          ▼
┌────────────────────────────────────────────────────────────┐
│  Fill monitor (every 60s during market hours)              │
│    1. Check pending GTTs                                   │
│    2. Detect triggered → fetch resulting order             │
│    3. On COMPLETE: mark position open, place stop GTT      │
│    4. Audit log + Telegram notification                    │
└────────────────────────────────────────────────────────────┘

The flow is the same for entries, exits, and add-ons -- just with different signal types and order sides.

Backend subsystems

The cloud backend is built around several major subsystems:

REST API surface

  • Auth -- magic link / OTP / password login, broker credential storage, session management
  • Model trading -- portfolio CRUD, signals, positions, scan trigger, dashboard data
  • Strategies -- user strategy CRUD, fork, share, version
  • Backtest -- backtest execution, history, comparison
  • Analysis -- historical P&L, FIFO pairing, corporate actions
  • Uploads -- tradebook, ledger, CAS PDF parsing
  • Portfolio -- current holdings/positions
  • Mutual funds -- folios, schemes, transactions, NAVs
  • Risk -- rule definitions and evaluation
  • Family -- account linking
  • Agent -- local agent registration, credential exchange, signal polling
  • WebSocket -- live ticker streaming

Internal services

  • Scanner -- entry/exit/addon scan loop, conviction scoring, capital allocation
  • DSL interpreter -- rule evaluation, indicator computation
  • Strategy converters -- built-in model definitions (V0–V4) expressed as DSL
  • Backtest engine -- day-by-day event loop simulator
  • Paper executor -- virtual fills for paper portfolios
  • Broker clients -- wrappers around the Kite/Upstox/Groww SDKs with per-user session isolation
  • Ticker -- singleton WebSocket consumer broadcasting ticks to connected frontends
  • Telegram bot -- command handlers and notification sending
  • Risk engine -- order pre-checks against active risk rules
  • Indicators -- RSI, SMA, EMA, ATR, Bollinger Bands, MACD, etc.
  • Tradebook / ledger / CAS parsers -- file ingestion
  • Reconciliation -- drift detection between cloud expectations and broker reality
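As an illustration of the indicator layer, here are two of the listed indicators sketched in plain Python. Function names and signatures are illustrative, not the backend's actual API:

```python
def sma(closes, period):
    """Simple moving average of the last `period` closes."""
    if len(closes) < period:
        return None
    return sum(closes[-period:]) / period

def ema(closes, period):
    """Exponential moving average, seeded with the SMA of the first window."""
    if len(closes) < period:
        return None
    k = 2 / (period + 1)                       # standard smoothing factor
    value = sum(closes[:period]) / period      # seed with SMA
    for close in closes[period:]:
        value = close * k + value * (1 - k)
    return value
```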

Data model

The cloud database stores three logical kinds of data:

  • Per-user data -- user accounts, broker credentials, deployed portfolios, generated signals, open and closed positions, custom strategies, backtest history, risk rules, watchlists, mutual fund folios, family group links, and the audit log of every state change.
  • Shared market data -- instruments, daily/intraday candles, historical price backups, corporate actions (splits, mergers, demergers, buybacks). All users read from the same copy.
  • Operational state -- session tokens, magic link tokens, scan locks, queue state.

Per-user data is strictly scoped: every query filters by the authenticated user's ID before returning results. Shared market data is global by design -- there's no point storing one copy of NIFTY 50 candles per user.
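A minimal sketch of that scoping rule, using an illustrative schema (the real tables and query layer are not shown in this doc):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signals (id INTEGER, user_id INTEGER, symbol TEXT)")
conn.executemany(
    "INSERT INTO signals VALUES (?, ?, ?)",
    [(1, 1, "INFY"), (2, 2, "TCS"), (3, 1, "SBIN")],
)

def signals_for(user_id):
    # The user_id filter is applied unconditionally: there is no codepath
    # that builds this query without it.
    return conn.execute(
        "SELECT id, symbol FROM signals WHERE user_id = ? ORDER BY id",
        (user_id,),
    ).fetchall()
```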

Broker credentials, when stored on the cloud at all, are encrypted at rest with a symmetric key held only in the backend's environment.

Frontend modules

The frontend is a Next.js 14 app using the App Router.

Pages (routes)

  • / -- landing page (marketing)
  • /dashboard -- main user dashboard
  • /model -- model portfolios and signals
  • /strategies -- strategy builder and library
  • /backtest -- backtest runner and history
  • /analysis, /edge, /patterns, /trend, /benchmark, /campaigns -- historical analysis pages
  • /portfolio -- holdings and positions
  • /positions, /orders -- live broker data
  • /option-chain -- live option chains
  • /settings -- broker, telegram, risk, agent, family
  • /admin -- admin pages (user management, system status)
  • /auth/login, /auth/verify -- magic link auth flow
  • /docs -- public documentation site (this doc lives there)

Components

  • PrivacyContext -- wraps the app, provides display obfuscation
  • OrderModalContext -- opens the order placement modal from anywhere
  • HoldingsTable, PositionsTable, PnLCard -- dashboard widgets
  • AuthBanner -- top-of-page banner showing broker connection status
  • Navbar, FamilySwitcher -- navigation
  • ConnectionNeeded -- empty state for pages that require a broker
  • RiskRuleBuilder -- visual editor for risk rules

State management

SWR handles server-state caching; React Context holds global UI state (privacy mode, family selection, order modal). No Redux, no Zustand -- the app's state needs are simple enough that hooks suffice.

Communication patterns

  • Frontend ↔ Backend: REST + JSON. Authentication via an HTTP-only session cookie. CORS configured for the deployment URL.
  • Backend ↔ Kite/Upstox/Groww: REST with the broker SDKs (kiteconnect; urllib for Upstox/Groww). Per-user session caching avoids re-authentication.
  • Backend ↔ Telegram: long polling via the python-telegram-bot library.
  • Backend ↔ Frontend (real-time prices): WebSocket on /api/ws/ticks. The backend multiplexes the broker's WebSocket to all connected clients.
  • Local agent ↔ Backend: REST polling. The agent polls /api/agent/pending-signals every 30s and pushes results back via /api/agent/signal-result.
  • Local agent ↔ Broker: the same SDKs as the backend, but running on the user's machine.
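The agent's poll cycle can be sketched as follows. The I/O calls are passed in as functions so the loop itself is visible; in the real agent they map onto the endpoints above (all names here are illustrative):

```python
import time

POLL_INTERVAL = 30  # seconds, matching the polling protocol described above

def poll_cycle(fetch_pending, place_gtt, report_result):
    """One agent poll cycle, with I/O injected so the loop is testable.

    In the real agent (illustrative mapping):
      fetch_pending  -> GET  /api/agent/pending-signals
      place_gtt      -> broker SDK call on the user's machine
      report_result  -> POST /api/agent/signal-result
    """
    placed = 0
    for signal in fetch_pending():
        result = place_gtt(signal)
        report_result(signal["id"], result)
        placed += 1
    return placed

def run_forever(fetch_pending, place_gtt, report_result):
    """The agent's main loop: poll, report, sleep, repeat."""
    while True:
        poll_cycle(fetch_pending, place_gtt, report_result)
        time.sleep(POLL_INTERVAL)
```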

Authentication and authorization

User authentication:

  1. Magic link (primary) -- email a one-click verification link. 15-minute TTL, single-use.
  2. OTP (fallback) -- for users whose email scanners pre-fetch the magic link (which would invalidate it). The OTP is derived deterministically from the same token, so even a "consumed" magic link can complete the flow via OTP.
  3. Password (optional) -- set a password in Settings → Account.
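A sketch of how an OTP can be derived deterministically from the magic-link token. The exact derivation used by the backend isn't documented here; this is one plausible shape, using an HMAC over the token with a server-side secret:

```python
import hashlib
import hmac

def otp_from_token(token: str, server_secret: bytes, digits: int = 6) -> str:
    """Derive a short numeric OTP as a pure function of the magic-link token.

    Because the OTP depends only on the token and a server-side secret,
    the server can re-derive it at verification time -- even after an
    email scanner has "consumed" the link itself. Illustrative, not the
    backend's actual scheme.
    """
    digest = hmac.new(server_secret, token.encode(), hashlib.sha256).digest()
    code = int.from_bytes(digest[:8], "big") % (10 ** digits)
    return str(code).zfill(digits)
```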

After auth, the backend issues a session token (7-day TTL) and sets it as an HTTP-only secure cookie. Every API request reads this cookie at the request boundary to identify the user.
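A minimal sketch of that session lifecycle, with an in-memory store for illustration (the real backend persists sessions in the database; names are illustrative):

```python
import secrets
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(days=7)          # matches the 7-day TTL above
_sessions = {}                           # token -> {"user_id", "expires"}

def issue_session(user_id):
    """Create a session token; the caller sets it as an HTTP-only cookie."""
    token = secrets.token_urlsafe(32)
    _sessions[token] = {
        "user_id": user_id,
        "expires": datetime.now(timezone.utc) + SESSION_TTL,
    }
    return token

def user_for_cookie(token):
    """Resolve the cookie value back to a user at the request boundary."""
    session = _sessions.get(token)
    if session is None or session["expires"] < datetime.now(timezone.utc):
        return None
    return session["user_id"]
```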

Authorization:

  • Per-user data isolation -- every query filters by the authenticated user's identity at the request boundary. There is no codepath that returns one user's data to another.
  • Admin role -- admin users can access user-management endpoints, view system stats, and grant cloud credential storage to specific users.
  • Tier-based limits -- your tier (free / pro / admin) determines strategy, backtest, and position quotas.

Broker authentication:

  • Kite OAuth -- browser flow with per-user callback token routing. See Brokers.
  • Upstox OAuth -- browser flow on the local agent only.
  • Groww checksum auth -- no browser; runs entirely on the local agent.

The model scanner in detail

The scanner is the most operationally important subsystem. It runs:

  • Once a day automatically at ~15:35 IST as a background task
  • On demand via POST /api/model/scan?portfolio_id=N

For each active portfolio:

  1. Load the strategy (built-in or user DSL)
  2. Build the universe (NIFTY 100 by default, configurable)
  3. For each open position, evaluate exit rules → generate exit signals
  4. For each open position, evaluate addon rules → generate addon signals
  5. For each universe symbol, evaluate entry rules → generate entry signals
  6. Score new entry signals
  7. For paper portfolios, immediately fill via paper_executor.paper_fill_signal
  8. For live portfolios with auto_place_signals = True, queue for GTT placement
  9. For live portfolios with manual approval, leave in pending_approval status
  10. Send Telegram notifications

The scanner is the entry point for everything. If you're debugging a missing signal, start here.
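The per-portfolio pass above can be sketched as follows. All shapes, field names, and the rule callables are illustrative, not the backend's real signatures:

```python
def scan_portfolio(portfolio, universe, strategy, candles):
    """Simplified single-portfolio pass over the scan steps above."""
    signals = []
    # Steps 3-4: exits and add-ons for open positions.
    for pos in portfolio["open_positions"]:
        if strategy["exit_rule"](pos, candles):
            signals.append({"type": "exit", "symbol": pos["symbol"]})
        elif strategy["addon_rule"](pos, candles):
            signals.append({"type": "addon", "symbol": pos["symbol"]})
    # Steps 5-6: entries over the universe, scored by conviction.
    held = {p["symbol"] for p in portfolio["open_positions"]}
    for symbol in universe:
        if symbol not in held and strategy["entry_rule"](symbol, candles):
            signals.append({"type": "entry", "symbol": symbol,
                            "score": strategy["score"](symbol, candles)})
    # Steps 7-9: route by portfolio mode.
    if portfolio["mode"] == "paper":
        status = "executed"            # filled virtually
    elif portfolio.get("auto_place_signals"):
        status = "queued"              # queued for GTT placement
    else:
        status = "pending_approval"    # waits for manual approval
    for sig in signals:
        sig["status"] = status
    return signals
```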

The fill monitor

Runs every 60 seconds during market hours. For each user with active portfolios:

  1. Fetch all GTTs from the broker
  2. Fetch all orders from the broker
  3. Check pending entry positions: if their GTT triggered, find the resulting order; on COMPLETE, mark the position open and place the stop GTT
  4. Check open positions: if their stop GTT or exit GTT triggered, close the position
  5. Check pending exit signals: if their GTT triggered, close the position
  6. Audit and notify

The monitor is what bridges "the cloud thinks the GTT is placed" to "the position is actually open in the database".
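Step 3 of the monitor, the GTT-to-order-to-position bridge, sketched with illustrative data shapes (a position opens only when its GTT shows as triggered and the resulting order reached COMPLETE):

```python
def process_triggered_entries(pending_positions, gtts, orders):
    """Bridge 'GTT placed' to 'position open'. Shapes are illustrative."""
    opened = []
    orders_by_id = {o["order_id"]: o for o in orders}
    gtts_by_id = {g["id"]: g for g in gtts}
    for pos in pending_positions:
        gtt = gtts_by_id.get(pos["gtt_id"])
        if gtt is None or gtt["status"] != "triggered":
            continue  # GTT still pending, or unknown to the broker
        order = orders_by_id.get(gtt.get("order_id"))
        if order and order["status"] == "COMPLETE":
            pos["state"] = "open"
            pos["entry_price"] = order["average_price"]
            opened.append(pos)  # caller places the stop GTT and audits
    return opened
```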

The ticker singleton

A single upstream connection to Kite's streaming endpoint feeds all connected frontends. The singleton multiplexes ticks based on which symbols each client has subscribed to.

If the admin user has a paid Kite plan with WebSocket access, the admin's session feeds the ticker by default. Free-plan users can still consume from the singleton -- it's just shared market data, not account-specific.

If no admin session is available, the ticker falls back to the requesting user's session (paid plan required for that to work). If neither is available, the dashboard falls back to REST polling for LTP every 10-15 seconds.
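The multiplexing itself is simple bookkeeping. A sketch, synchronous and in-memory for clarity (the real ticker is async over WebSockets; names are illustrative):

```python
from collections import defaultdict

class TickerSingleton:
    """One upstream feed, fanned out. Each tick is delivered only to
    clients subscribed to that symbol."""

    def __init__(self):
        self.subscribers = defaultdict(set)   # symbol -> set of client ids
        self.outboxes = defaultdict(list)     # client id -> delivered ticks

    def subscribe(self, client_id, symbols):
        for sym in symbols:
            self.subscribers[sym].add(client_id)

    def on_tick(self, symbol, price):
        # Called once per upstream tick, regardless of how many clients
        # are connected -- the whole point of the singleton.
        for client_id in self.subscribers.get(symbol, ()):
            self.outboxes[client_id].append((symbol, price))
```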

The local agent in summary

The local agent is conceptually a distributed executor for the cloud's signal generator. It:

  • Logs into the broker on the user's machine (cloud never sees the credentials)
  • Polls the cloud for signals belonging to the user's portfolios
  • Places GTTs / orders against the broker
  • Reports results back to the cloud
  • Reconciles broker holdings against cloud expectations every poll cycle
  • Hosts a local web UI for manual control

This decoupling lets the cloud focus on strategy + decision-making while keeping execution on a machine the user controls. The privacy story and the SEBI compliance story both rely on this split.

Why so many things are split out

Why the cloud handles strategy and the agent handles execution. Strategy logic (DSL evaluation, conviction scoring, the backtest engine) is computationally cheap and benefits from being centralized -- one source of truth for every user. Execution logic (broker authentication, order placement, fill monitoring) needs to be on a machine with a stable IP and direct broker access, for SEBI compliance and privacy reasons.

Why the ticker is a singleton. Kite's WebSocket connection limit is per-session. If every user opened their own ticker, the API would rate-limit. One singleton fed by an admin session serves everyone with no rate-limit pressure.

Why models are stored as DSL JSON instead of code. Users can edit, fork, share, and version strategies without touching Python. The DSL interpreter is small enough that adding rule types is cheap.
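To make the "adding rule types is cheap" claim concrete, here is a toy interpreter in the same spirit: rules are JSON-friendly nested dicts, and evaluation is one recursive dispatch. Node shapes are illustrative, not TraderTape's real schema:

```python
# Comparison operators; adding a rule type is one entry here.
OPS = {
    "gt": lambda a, b: a > b,
    "lt": lambda a, b: a < b,
}

def evaluate(rule, indicators):
    """Evaluate one DSL rule node against precomputed indicator values."""
    kind = rule["op"]
    if kind == "and":
        return all(evaluate(r, indicators) for r in rule["rules"])
    if kind == "or":
        return any(evaluate(r, indicators) for r in rule["rules"])
    left = indicators[rule["left"]]       # e.g. today's RSI value
    return OPS[kind](left, rule["right"])
```

A rule like "RSI between 30 and 70" is then just data: `{"op": "and", "rules": [{"op": "gt", "left": "rsi", "right": 30}, {"op": "lt", "left": "rsi", "right": 70}]}` -- editable, forkable, and versionable without touching Python.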

Why backtests use a day-by-day event loop instead of vectorized math. The event loop is naturally side-effect-aware (capital allocation, sector caps, position counts, stop GTTs) in a way that vectorized backtesters struggle to express. The cost is speed (~5 seconds for 6 years of NIFTY 100), but that's fast enough.
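A stripped-down event loop of that shape, showing how side effects (cash, open positions) thread naturally through it. The sizing rule and data shapes are illustrative, not the engine's:

```python
def backtest(candles_by_day, entry_rule, exit_rule, capital):
    """Minimal day-by-day event loop. Shapes are illustrative."""
    cash, positions, trades = capital, {}, []
    for _day, prices in candles_by_day:            # chronological order
        # Exits first: free capital before considering new entries.
        for sym in list(positions):
            if exit_rule(sym, prices, positions[sym]):
                qty, entry_price = positions.pop(sym)
                cash += qty * prices[sym]
                trades.append((sym, entry_price, prices[sym]))
        # Entries: capital-constrained, so order of evaluation matters --
        # exactly the side effect that is awkward to vectorize.
        for sym, price in prices.items():
            if sym not in positions and entry_rule(sym, prices) and cash >= price:
                qty = int(cash // (price * 5)) or 1   # naive 20%-of-cash sizing
                positions[sym] = (qty, price)
                cash -= qty * price
    return cash, positions, trades
```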

Why per-user data is in a single shared database instead of one database per user. Cross-user transactions (family group queries) are easy, there's exactly one source of truth to back up, and the workload is small enough that the operational overhead of per-tenant databases would dwarf the benefit.

Limitations and trade-offs

  • No streaming for Upstox/Groww from the cloud. Both brokers have streaming APIs but the cloud doesn't currently consume them. Cloud-side multi-broker streaming is on the roadmap.
  • EQ-only backtesting. The backtest engine doesn't model F&O, options, or commodities. It's a deliberate scope decision.
  • Single-region deployment. The cloud runs in one region in India. Latency from outside the region is higher, but the workload is mostly daily-batch so this rarely matters in practice.

Next