Building High-Performance Solana Trading Infrastructure in Rust (Jupiter, Jito, Helius gRPC)

Most "Solana trading bot" articles stop at "call Jupiter". In production, that is not where the interesting problems are. The real work is latency, inclusion, risk, and observability - turning a working script into trading infrastructure that can survive mainnet, MEV, and your own mistakes.

This guide is how I build Solana trading infrastructure in Rust: the anatomy of a serious setup, why Jupiter + Jito + Helius is the default stack, and what a same-block sniper / arbitrage / copy-trading system actually looks like once it is running.

What "trading infrastructure" means on Solana

A production Solana trading system is almost never a single process. It is a pipeline of cooperating components: a streaming ingest, decoders, one or more strategy loops, a risk engine, a route builder, a bundler/submitter, plus hot state in Redis, durable state in Postgres, and dashboards for the humans.

Your performance ceiling is set by the slowest of those components, which is why "just rewrite it in Rust" is not enough. You rewrite in Rust so the hot path is predictable - then you engineer the other components so they never force the hot path to wait.

The stack I default to

| Layer | Default | Why |
| --- | --- | --- |
| Language | Rust (Tokio) | Predictable latency, zero-cost async, solid crates |
| Data feed | Helius gRPC / Yellowstone | Lowest-latency account + tx streams I've measured |
| Routing | Jupiter aggregator v6 | Best-effort multi-DEX routing with a clean on-chain program |
| Inclusion | Jito bundles (+ Nozomi as fallback) | Atomic inclusion + priority in a congested slot |
| Hot state | Redis | Cheap, fast, easy to share across processes |
| Durable state | PostgreSQL | Strategy PnL, fills, audit trail |
| Orchestration | Node.js / TypeScript | Dashboards and human-in-the-loop controls |

TypeScript is fine for orchestration but a terrible idea for the hot path. Put the decision loop and the signer in Rust.

Architecture sketch

Helius gRPC / shredstream  ──►  Ingest (Rust)  ──►  Decoder  ──►  Strategy
                                                                   │
                                                                   ▼
                                                           Risk engine
                                                                   │
                                                                   ▼
                                                   Jupiter route builder
                                                                   │
                                                                   ▼
                                            Jito bundler ──► Jito relayer
                                                                   │
                                                                   ▼
                                                             Mainnet slot N

Every arrow is potentially a latency hotspot. The way you earn alpha is by shaving microseconds off each arrow and keeping them independent, so ingest backpressure never blocks execution.

Ingest: Helius gRPC done right

use futures::StreamExt;
use yellowstone_grpc_client::GeyserGrpcClient;
use yellowstone_grpc_proto::prelude::{
    SubscribeRequest, SubscribeRequestFilterTransactions,
};
 
pub async fn run_ingest(endpoint: &str, token: &str) -> anyhow::Result<()> {
    let mut client = GeyserGrpcClient::build_from_shared(endpoint.to_string())?
        .x_token(Some(token.to_string()))?
        .connect()
        .await?;
 
    let mut req = SubscribeRequest::default();
    req.transactions.insert(
        "dex_swaps".into(),
        SubscribeRequestFilterTransactions {
            account_include: vec![/* DEX program ids */],
            vote: Some(false),
            failed: Some(false),
            ..Default::default()
        },
    );
 
    let (_write, mut read) = client.subscribe_with_request(Some(req)).await?;
 
    while let Some(msg) = read.next().await {
        let update = msg?;
        // Decode, dispatch to strategy channel. Keep this loop free of I/O.
    }
    Ok(())
}

Two rules I follow:

  1. Never block in the ingest task. Push raw messages onto a bounded tokio::sync::mpsc channel; let a pool of decoder workers do the CPU work.
  2. Drop on backpressure, don't buffer. If you can't keep up with the feed, you are not in this trade anyway. Drop and alert.
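
A minimal sketch of that hand-off, assuming a hypothetical RawUpdate wrapper and a single decoder task: the ingest loop only ever calls try_send, so a full channel costs a dropped message and a counter bump, never a stall.

use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::mpsc;

// Hypothetical wrapper around a raw Geyser update.
pub struct RawUpdate(pub Vec<u8>);

// CPU-heavy parsing lives here, off the ingest task.
async fn decode_and_dispatch(_raw: RawUpdate) {
    // parse the transaction, push a typed event onto the strategy channel
}

pub static DROPPED_UPDATES: AtomicU64 = AtomicU64::new(0);

pub fn spawn_decoder(capacity: usize) -> mpsc::Sender<RawUpdate> {
    let (tx, mut rx) = mpsc::channel::<RawUpdate>(capacity);
    tokio::spawn(async move {
        while let Some(raw) = rx.recv().await {
            decode_and_dispatch(raw).await;
        }
    });
    tx
}

// Called from the ingest loop: never await on a full channel.
pub fn forward(tx: &mpsc::Sender<RawUpdate>, raw: RawUpdate) {
    if tx.try_send(raw).is_err() {
        // Full or closed: drop the message, count it, and let alerting notice the counter.
        DROPPED_UPDATES.fetch_add(1, Ordering::Relaxed);
    }
}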

Routing: Jupiter v6, but honest about it

Jupiter gives you the best public route. In a congested slot, the best public route is the route everyone is taking. For competitive strategies, I still use Jupiter as the baseline and override with a custom route if my own DEX graph finds better pricing (mostly for arb).

The key detail people miss: pre-build the transaction, then only sign at the last possible moment. Building the route, fetching blockhash, and serializing the tx add tens of milliseconds you can't afford to pay inside the decision loop.

struct PreparedSwap {
    serialized_message: Vec<u8>,
    expected_out: u64,
    min_out: u64,
}
 
async fn prepare_swap(/* ... */) -> anyhow::Result<PreparedSwap> {
    // Call Jupiter, pick route, build VersionedTransaction, serialize message.
    // Do NOT sign here.
    todo!()
}
 
async fn on_signal(prepared: &PreparedSwap, signer: &Keypair) -> anyhow::Result<()> {
    let tx = sign_serialized_message(&prepared.serialized_message, signer)?;
    submit_via_jito(tx).await
}
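
For completeness, one way the signing helper could look, assuming serialized_message holds a bincode-serialized VersionedMessage (submit_via_jito stays elided):

use solana_sdk::{
    message::VersionedMessage, signature::Keypair, transaction::VersionedTransaction,
};

fn sign_serialized_message(bytes: &[u8], signer: &Keypair) -> anyhow::Result<VersionedTransaction> {
    // Deserialize the pre-built message; no network calls on the hot path.
    let message: VersionedMessage = bincode::deserialize(bytes)?;
    // try_new signs with the provided keypairs and verifies signer coverage.
    Ok(VersionedTransaction::try_new(message, &[signer])?)
}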

Inclusion: Jito bundles and why you need them

On mainnet, "first to submit" and "first to land" are not the same thing. Jito gives you atomic, ordered bundles (up to five transactions that land together or not at all) and tip-based priority inside a congested slot.

The tip is a cost of doing business. The right number is strategy-dependent: for migration snipers, you pay a lot; for boring arb, you pay just enough to beat the average public tip.
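
Mechanically, a tip is just a SOL transfer to one of the tip accounts Jito publishes, appended to the bundle. A sketch, with the tip account passed in rather than hardcoded:

use solana_sdk::{instruction::Instruction, pubkey::Pubkey, system_instruction};

// tip_account must be one of the tip accounts published by Jito;
// picking one at random per bundle spreads write-lock contention.
fn tip_instruction(payer: &Pubkey, tip_account: &Pubkey, tip_lamports: u64) -> Instruction {
    system_instruction::transfer(payer, tip_account, tip_lamports)
}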

I always keep Nozomi as a fallback path. If a Jito region is degraded (it happens), Nozomi's relayers let the strategy keep trading at a slightly lower inclusion rate instead of going dark.

Strategy anatomy: three classic shapes

1. MEV / cross-DEX arbitrage

Watch pool state across DEXes, detect a price gap, and land both legs atomically in one Jito bundle so you are never left holding a single side.

2. Migration sniper

Detect a token migrating to a fresh pool and get a pre-built buy into that pool's first tradable slot; this is where same-block execution and aggressive tips pay for themselves.

3. Copy trading

Stream a tracked wallet's transactions over gRPC, decode its swaps, and mirror them through your own risk limits and slippage caps.
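
All three run as separate binaries (see the deployment rules below), but they plug into the same risk and execution path. One way to express that boundary in Rust (types and names here are illustrative, not a fixed API):

use solana_sdk::pubkey::Pubkey;

// Illustrative decoded events coming out of the ingest/decoder stage.
pub enum MarketEvent {
    Swap { market: Pubkey, price: f64, size: u64 },
    PoolCreated { market: Pubkey },
    TrackedWalletFill { wallet: Pubkey, market: Pubkey, size: u64 },
}

// What a strategy hands to the risk engine when it wants to trade.
pub struct ProposedTrade {
    pub market: Pubkey,
    pub size: u64,
    pub slippage_bps: u16,
}

// Arb, sniper, and copy trading all implement the same narrow interface,
// so risk checks and execution stay shared code.
pub trait Strategy: Send {
    fn on_event(&mut self, event: &MarketEvent) -> Option<ProposedTrade>;
}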

Risk: the thing that saves the strategy from you

Every strategy I ship has at least these risk knobs, enforced in the execution layer, not just the strategy:

use std::sync::atomic::{AtomicBool, Ordering};

pub struct RiskConfig {
    pub max_position_per_market_lamports: u64,
    pub max_daily_loss_lamports: u64,
    pub max_slippage_bps: u16,
    pub kill_switch: AtomicBool,
}
 
pub fn check_or_reject(
    cfg: &RiskConfig,
    trade: &ProposedTrade,
    state: &Portfolio,
) -> Result<(), RiskError> {
    if cfg.kill_switch.load(Ordering::Relaxed) {
        return Err(RiskError::KillSwitchEngaged);
    }
    if state.daily_pnl_lamports < -(cfg.max_daily_loss_lamports as i64) {
        return Err(RiskError::DailyLossExceeded);
    }
    if state.exposure(&trade.market) + trade.size > cfg.max_position_per_market_lamports {
        return Err(RiskError::MarketExposureExceeded);
    }
    if trade.slippage_bps > cfg.max_slippage_bps {
        return Err(RiskError::SlippageTooHigh);
    }
    Ok(())
}

The kill-switch has saved me more than once. Wire it to a CLI, a dashboard button, and ideally an automated alert-triggered flip for extreme market conditions.
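
A sketch of the automated flip, assuming the RiskConfig above is shared behind an Arc and some monitor task decides what counts as extreme:

use std::sync::{atomic::Ordering, Arc};

// Called from a monitor task, a CLI handler, or the dashboard backend.
fn engage_kill_switch(cfg: &Arc<RiskConfig>, reason: &str) {
    // The store is what actually stops new trades; alerting is best-effort.
    cfg.kill_switch.store(true, Ordering::Relaxed);
    eprintln!("kill switch engaged: {reason}");
}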

Observability: what I actually alert on

The short list: ingest lag and dropped messages, bundle inclusion rate per Jito region, risk-engine rejections and kill-switch state, and daily PnL against the loss limit. Alerts go to a single place (I use a Telegram channel with structured messages). Noisy alerts are worse than no alerts.
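
A minimal sender for that channel, assuming a plain Telegram bot token and chat id from config, with dedup and rate limiting handled before this call:

async fn send_alert(
    http: &reqwest::Client,
    bot_token: &str,
    chat_id: &str,
    text: &str,
) -> anyhow::Result<()> {
    // Telegram Bot API sendMessage; keep the payload structured and short.
    let url = format!("https://api.telegram.org/bot{bot_token}/sendMessage");
    http.post(url)
        .json(&serde_json::json!({ "chat_id": chat_id, "text": text }))
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}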

Deployment

Nothing exotic: Docker + systemd or Kubernetes, whichever you already know. Two operational rules:

  1. Treat signers like prod secrets. KMS/Turnkey where possible; never paste a private key into an env var in a shared shell.
  2. One binary per strategy. Shared config, independent process lifecycles. A crash in one strategy must not take the others down.

Conclusion

Building Solana trading infrastructure is not "how do I call Jupiter?". It is keeping the ingest path predictable, landing in the slot you targeted, refusing trades that would hurt you, and knowing within seconds when any of that stops being true.

Get those right and the rest is strategy - the part you actually want to be thinking about.
