Building High-Performance Solana Trading Infrastructure in Rust
Most "Solana trading bot" articles stop at "call Jupiter". In production, that is not where the interesting problems are. The real work is latency, inclusion, risk, and observability - turning a working script into trading infrastructure that can survive mainnet, MEV, and your own mistakes.
This guide is how I build Solana trading infrastructure in Rust: the anatomy of a serious setup, why Jupiter + Jito + Helius is the default stack, and what a same-block sniper / arbitrage / copy-trading system actually looks like once it is running.
What "trading infrastructure" means on Solana
A production Solana trading system is almost never a single process. It is:
- Market data - real-time account and transaction feeds via Helius gRPC or Yellowstone geyser, plus a pricing layer on top.
- Strategy - signal generation and sizing. For arbitrage, the DEX graph; for snipers, new-pool detection; for copy-trading, target-wallet fan-out.
- Execution - route building via Jupiter aggregator v6, bundling via Jito (and sometimes Nozomi), with a signer that is hot and rate-aware.
- Risk - exposure limits, slippage caps, kill-switches, daily loss caps.
- Observability - structured logs, metrics, alerts on inclusion rate and p99 latency.
- Operator layer - admin UI, wallet management, deployment automation.
Your performance ceiling is set by the slowest of those components, which is why "just rewrite it in Rust" is not enough. You rewrite in Rust so the hot path is predictable - then you engineer the other components so they never force the hot path to wait.
The stack I default to
| Layer | Default | Why |
|---|---|---|
| Language | Rust (Tokio) | Predictable latency, zero-cost async, solid crates |
| Data feed | Helius gRPC / Yellowstone | Lowest-latency account + tx streams I've measured |
| Routing | Jupiter aggregator v6 | Best-effort multi-DEX routing with a clean on-chain program |
| Inclusion | Jito bundles (+ Nozomi as fallback) | Atomic inclusion + priority in a congested slot |
| Hot state | Redis | Cheap, fast, easy to share across processes |
| Durable state | PostgreSQL | Strategy PnL, fills, audit trail |
| Orchestration | Node.js / TypeScript | Dashboards and human-in-the-loop controls |
TypeScript is fine for orchestration but a terrible idea for the hot path. Put the decision loop and the signer in Rust.
Architecture sketch
```
Helius gRPC / shredstream ──► Ingest (Rust) ──► Decoder ──► Strategy
                                                               │
                                                               ▼
                                                          Risk engine
                                                               │
                                                               ▼
                                                    Jupiter route builder
                                                               │
                                                               ▼
                                           Jito bundler ──► Jito relayer
                                                               │
                                                               ▼
                                                        Mainnet slot N
```
Every arrow is potentially a latency hotspot. The way you earn alpha is by shaving microseconds off each arrow and keeping them independent, so ingest backpressure never blocks execution.
Ingest: Helius gRPC done right
```rust
use futures::StreamExt;
use yellowstone_grpc_client::GeyserGrpcClient;
use yellowstone_grpc_proto::prelude::{SubscribeRequest, SubscribeRequestFilterTransactions};

pub async fn run_ingest(endpoint: &str, token: &str) -> anyhow::Result<()> {
    let mut client = GeyserGrpcClient::build_from_shared(endpoint.to_string())?
        .x_token(Some(token.to_string()))?
        .connect()
        .await?;

    let mut req = SubscribeRequest::default();
    req.transactions.insert(
        "dex_swaps".into(),
        SubscribeRequestFilterTransactions {
            account_include: vec![/* DEX program ids */],
            vote: Some(false),
            failed: Some(false),
            ..Default::default()
        },
    );

    let (mut _write, mut read) = client.subscribe_with_request(Some(req)).await?;
    while let Some(msg) = read.next().await {
        let _update = msg?;
        // Decode, dispatch to strategy channel. Keep this loop free of I/O.
    }
    Ok(())
}
```

Two rules I follow:
- Never block in the ingest task. Push raw messages onto a bounded `tokio::sync::mpsc` channel; let a pool of decoder workers do the CPU work.
- Drop on backpressure, don't buffer. If you can't keep up with the feed, you are not in this trade anyway. Drop and alert.
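The drop-on-backpressure rule boils down to "`try_send`, never `send`". Here is a minimal sketch of the pattern using std's bounded `sync_channel` for brevity; the real ingest task would do the same thing with `tokio::sync::mpsc`, whose `try_send` has the same shape:

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

/// Offer a message to a bounded channel: enqueue if there's room,
/// otherwise drop it and bump a counter you alert on. Never block.
pub fn offer<T>(tx: &SyncSender<T>, msg: T, dropped: &mut u64) -> bool {
    match tx.try_send(msg) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) => {
            *dropped += 1; // you were not in this trade anyway
            false
        }
        Err(TrySendError::Disconnected(_)) => false,
    }
}
```

The `dropped` counter is the important part: a rising drop rate is your signal that the decoder pool is undersized, long before latency shows up in fills.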
Routing: Jupiter v6, but honest about it
Jupiter gives you the best public route. In a congested slot, the best public route is the route everyone is taking. For competitive strategies, I still use Jupiter as the baseline and override with a custom route if my own DEX graph finds better pricing (mostly for arb).
The key detail people miss: pre-build the transaction, then only sign at the last possible moment. Building the route, fetching blockhash, and serializing the tx add tens of milliseconds you can't afford to pay inside the decision loop.
```rust
struct PreparedSwap {
    serialized_message: Vec<u8>,
    expected_out: u64,
    min_out: u64,
}

async fn prepare_swap(/* ... */) -> anyhow::Result<PreparedSwap> {
    // Call Jupiter, pick route, build VersionedTransaction, serialize message.
    // Do NOT sign here.
    todo!()
}

async fn on_signal(prepared: &PreparedSwap, signer: &Keypair) -> anyhow::Result<()> {
    let tx = sign_serialized_message(&prepared.serialized_message, signer)?;
    submit_via_jito(tx).await
}
```

Inclusion: Jito bundles and why you need them
On mainnet, "first to submit" and "first to land" are not the same thing. Jito gives you:
- Atomic inclusion across multiple transactions (critical for sandwich-free arb legs and for migration snipers that must co-land with a liquidity add).
- Priority via tips.
The tip is a cost of doing business. The right number is strategy-dependent: for migration snipers, you pay a lot; for boring arb, you pay just enough to beat the average public tip.
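One simple way to operationalize "just enough": size the tip off the distribution of tips you've recently seen land. This is a heuristic sketch, not a Jito API — feed it whatever tip stream you already track:

```rust
/// Pick a tip at a given percentile of recently observed landed tips (lamports).
/// `pct` is clamped to 0..=100; returns None when there is no sample yet.
pub fn tip_at_percentile(mut recent_tips: Vec<u64>, pct: usize) -> Option<u64> {
    if recent_tips.is_empty() {
        return None;
    }
    recent_tips.sort_unstable();
    let idx = (recent_tips.len() - 1) * pct.min(100) / 100;
    Some(recent_tips[idx])
}
```

A migration sniper might run this near the 95th percentile plus a premium; boring arb sits just above the median.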
I always keep Nozomi as a fallback path. If a Jito region is degraded (it happens), Nozomi's relayers let the strategy keep trading at a slightly lower inclusion rate instead of going dark.
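The fallback logic itself is worth keeping generic. A sketch, with the relayer clients abstracted behind closures — `primary` and `fallback` stand in for real Jito / Nozomi submission paths, which are not shown here:

```rust
/// Try the primary inclusion path; on error, fall back to the secondary.
/// The closures stand in for real Jito / Nozomi submission clients.
pub fn submit_with_fallback<T, E>(
    primary: impl FnOnce() -> Result<T, E>,
    fallback: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    primary().or_else(|_primary_err| {
        // Log/alert on the primary failure here before falling back,
        // so a degraded region shows up in your metrics, not just your PnL.
        fallback()
    })
}
```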
Strategy anatomy: three classic shapes
1. MEV / cross-DEX arbitrage
- Subscribe to swap updates on the DEXs in your graph.
- Re-evaluate marginal prices on relevant pools every update.
- When a cycle has positive edge above thresholds, fire a pre-built bundle.
- Attribute PnL per-pool so you can prune dead pairs.
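For intuition, here is a toy edge calculation for a two-leg cycle across two constant-product pools. The reserves and fee below are made-up numbers; real pools have their own curves, fee schedules, and tick math:

```rust
/// Constant-product swap output with an LP fee in basis points.
pub fn cp_out(amount_in: u64, reserve_in: u64, reserve_out: u64, fee_bps: u64) -> u64 {
    let in_after_fee = amount_in as u128 * (10_000 - fee_bps) as u128 / 10_000;
    let num = in_after_fee * reserve_out as u128;
    let den = reserve_in as u128 + in_after_fee;
    (num / den) as u64
}

/// Edge (in lamports) of SOL -> TOKEN on pool `a`, then TOKEN -> SOL on pool `b`.
/// Pools are (sol_reserve, token_reserve) pairs.
pub fn cycle_edge(amount_in: u64, a: (u64, u64), b: (u64, u64), fee_bps: u64) -> i64 {
    let token_out = cp_out(amount_in, a.0, a.1, fee_bps);
    let sol_back = cp_out(token_out, b.1, b.0, fee_bps);
    sol_back as i64 - amount_in as i64
}
```

On identical pools the round trip is negative (you pay two fees plus impact); the cycle only fires when pool B prices the token enough above pool A to clear that cost and your tip.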
2. Migration sniper
- Watch for liquidity-add transactions on the launchpad (Pump.fun, Four.meme, etc.).
- Co-bundle your buy with the liquidity-add.
- Include rug/honeypot checks on the program + authority before even queuing the bundle.
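The safety gate can be a plain predicate over facts you decode from the mint and LP accounts. The struct fields below are illustrative assumptions — the real checks decode on-chain state (mint/freeze authority, LP token supply) rather than booleans handed to you:

```rust
/// Minimal pre-queue safety gate for a sniper. All fields are assumed to be
/// decoded from on-chain state before this check runs.
pub struct TokenSafety {
    pub mint_authority_revoked: bool,
    pub freeze_authority_revoked: bool,
    pub lp_locked_or_burned: bool,
}

/// Queue the bundle only if every check passes; one failure vetoes the trade.
pub fn passes_safety(t: &TokenSafety) -> bool {
    t.mint_authority_revoked && t.freeze_authority_revoked && t.lp_locked_or_burned
}
```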
3. Copy trading
- Subscribe to account updates for your target wallets.
- Decode their swap, resize to your risk budget, re-route via Jupiter.
- Respect per-target exposure limits so one bad guru doesn't blow the account.
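Sizing can be as simple as mirroring the fraction of their balance the target committed, scaled into your own budget and capped per target. The policy below is an illustrative assumption, not the only reasonable one:

```rust
/// Mirror the target's position as a fraction of their balance, scaled to our
/// budget and capped per target. All amounts in lamports.
pub fn resize_copy(
    target_size: u64,
    target_balance: u64,
    my_budget: u64,
    per_target_cap: u64,
) -> u64 {
    if target_balance == 0 {
        return 0;
    }
    let scaled = (my_budget as u128 * target_size as u128 / target_balance as u128) as u64;
    scaled.min(per_target_cap)
}
```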
Risk: the thing that saves the strategy from you
Every strategy I ship has at least these risk knobs, enforced in the execution layer, not just the strategy:
```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub struct RiskConfig {
    pub max_position_per_market_lamports: u64,
    pub max_daily_loss_lamports: u64,
    pub max_slippage_bps: u16,
    pub kill_switch: AtomicBool,
}

pub fn check_or_reject(
    cfg: &RiskConfig,
    trade: &ProposedTrade,
    state: &Portfolio,
) -> Result<(), RiskError> {
    if cfg.kill_switch.load(Ordering::Relaxed) {
        return Err(RiskError::KillSwitchEngaged);
    }
    if state.daily_pnl_lamports < -(cfg.max_daily_loss_lamports as i64) {
        return Err(RiskError::DailyLossExceeded);
    }
    if state.exposure(&trade.market) + trade.size > cfg.max_position_per_market_lamports {
        return Err(RiskError::MarketExposureExceeded);
    }
    if trade.slippage_bps > cfg.max_slippage_bps {
        return Err(RiskError::SlippageTooHigh);
    }
    Ok(())
}
```

The kill-switch has saved me more than once. Wire it to a CLI, a dashboard button, and ideally an automated alert-triggered flip for extreme market conditions.
Observability: what I actually alert on
- Signal-to-fill latency (p50 / p95 / p99). Drift here is your early warning that ingest or Jito is unhealthy.
- Inclusion rate - bundles attempted vs landed. Below your baseline? Fall back to Nozomi, raise tip, or pause.
- PnL per strategy plus a daily loss breaker.
- RPC / gRPC error rate per provider - you do have a provider pool, right?
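Inclusion-rate tracking needs nothing fancy: a counter pair with a baseline check is enough to drive the raise-tip / fall-back / pause decision. A sketch — the baseline value and window policy are yours to pick:

```rust
/// Rolling count of bundles attempted vs landed. Reset it per window
/// (e.g. per minute) so the rate reflects current conditions.
pub struct InclusionTracker {
    pub attempted: u64,
    pub landed: u64,
}

impl InclusionTracker {
    pub fn new() -> Self {
        Self { attempted: 0, landed: 0 }
    }

    pub fn record(&mut self, landed: bool) {
        self.attempted += 1;
        if landed {
            self.landed += 1;
        }
    }

    /// Inclusion rate in basis points; None until there is a sample.
    pub fn rate_bps(&self) -> Option<u64> {
        (self.attempted > 0).then(|| self.landed * 10_000 / self.attempted)
    }

    /// True when the observed rate has dropped below the baseline.
    pub fn below(&self, baseline_bps: u64) -> bool {
        self.rate_bps().map_or(false, |r| r < baseline_bps)
    }
}
```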
Alerts go to a single place (I use a Telegram channel with structured messages). Noisy alerts are worse than no alerts.
Deployment
Nothing exotic: Docker + systemd or Kubernetes, whichever you already know. Two operational rules:
- Treat signers like prod secrets. KMS/Turnkey where possible; never paste a private key into an env var in a shared shell.
- One binary per strategy. Shared config, independent process lifecycles. A crash in one strategy must not take the others down.
Conclusion
Building Solana trading infrastructure is not "how do I call Jupiter?". It is:
- ingest without blocking,
- route ahead of time,
- include via Jito with Nozomi as a fallback,
- enforce risk in the execution layer,
- observe the system so you notice it's sick before PnL does.
Get those right and the rest is strategy - the part you actually want to be thinking about.