IronBullet uses a multi-threaded worker pool to process credentials concurrently. Understanding how to configure thread count and startup behavior is critical for maximizing throughput.
Starting all threads at once can overwhelm rate-limited APIs. Enable gradual startup to ramp workers over time:
src/runner/mod.rs

```rust
let gradual = pipeline.runner_settings.start_threads_gradually;
let delay_ms = pipeline.runner_settings.gradual_delay_ms;

// Cap total ramp-up time to 3s
let effective_delay_ms = if gradual && self.thread_count > 1 {
    let cap = (3000u64 / self.thread_count as u64).max(1);
    delay_ms.min(cap)
} else {
    delay_ms
};

for i in 0..self.thread_count {
    if gradual && i > 0 {
        tokio::time::sleep(Duration::from_millis(effective_delay_ms)).await;
    }
    // spawn worker...
}
```
With 1000 threads and gradual_delay_ms = 100, the per-thread delay is capped at 3 ms (3000 ms / 1000 threads); without the cap, startup would take 100 seconds.
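The capping logic can be factored into a standalone helper to make the arithmetic easy to check. This is an illustrative sketch, not IronBullet's actual API; the function name and signature are assumptions:

```rust
// Hypothetical helper mirroring the ramp-up capping logic above.
// Total ramp time is bounded at ~3 s regardless of thread count.
fn effective_delay_ms(thread_count: u64, gradual: bool, delay_ms: u64) -> u64 {
    if gradual && thread_count > 1 {
        // Divide the 3 s budget across threads; never go below 1 ms
        let cap = (3000 / thread_count).max(1);
        delay_ms.min(cap)
    } else {
        delay_ms
    }
}

fn main() {
    // 1000 threads: 3000 / 1000 = 3 ms cap, so 100 ms is clamped to 3 ms
    assert_eq!(effective_delay_ms(1000, true, 100), 3);
    // 10 threads: cap is 300 ms, so the configured 100 ms passes through
    assert_eq!(effective_delay_ms(10, true, 100), 100);
    // Gradual startup disabled: delay is used unchanged
    assert_eq!(effective_delay_ms(1000, false, 100), 100);
}
```

Note the `.max(1)` floor: past ~3000 threads the budget divides to zero, and a 1 ms delay still staggers socket creation enough to avoid a connection burst.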
Each credential gets a fresh session ID to prevent cookie jar contamination:
src/runner/worker.rs

```rust
while running.load(Ordering::Relaxed) {
    let (data_line, retry_count) = data_pool.next_line()?;

    // Fresh session per credential: isolated cookie jars
    let session_id = Uuid::new_v4().to_string();
    sidecar_tx
        .send(SidecarRequest::NewSession { session_id: session_id.clone(), .. })
        .await;

    let mut ctx = ExecutionContext::new(session_id.clone());
    ctx.execute_blocks(&pipeline.blocks, &sidecar_tx).await;

    // Release session immediately after execution
    sidecar_tx.send(SidecarRequest::CloseSession { session_id }).await;
}
```
Why this matters: A shared session would accumulate cookies/state across credentials, causing false errors when one blocked account taints all subsequent checks.
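The isolation guarantee can be demonstrated with a minimal in-memory model of a session-keyed cookie store. This is a sketch of the concept only; the `CookieStore` type and its methods are illustrative assumptions, not the actual sidecar implementation:

```rust
use std::collections::HashMap;

// Illustrative model: each session id owns a private cookie jar,
// so state set during one credential check cannot leak into another.
struct CookieStore {
    jars: HashMap<String, HashMap<String, String>>,
}

impl CookieStore {
    fn new() -> Self {
        Self { jars: HashMap::new() }
    }

    fn set_cookie(&mut self, session_id: &str, name: &str, value: &str) {
        self.jars
            .entry(session_id.to_string())
            .or_default()
            .insert(name.to_string(), value.to_string());
    }

    fn get_cookie(&self, session_id: &str, name: &str) -> Option<&String> {
        self.jars.get(session_id)?.get(name)
    }

    // Mirrors CloseSession: drop the jar as soon as the check finishes
    fn close_session(&mut self, session_id: &str) {
        self.jars.remove(session_id);
    }
}

fn main() {
    let mut store = CookieStore::new();

    // Credential A gets blocked and the server sets a "blocked" cookie
    store.set_cookie("session-a", "blocked", "1");

    // Credential B runs in a fresh session and never sees it
    assert!(store.get_cookie("session-b", "blocked").is_none());

    // Closing session A frees its jar immediately
    store.close_session("session-a");
    assert!(store.get_cookie("session-a", "blocked").is_none());
}
```

Closing the session right after execution also keeps memory proportional to the number of live workers, not the number of credentials processed.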
If you're checking millions of credentials, storing full HTML response bodies in `BlockResult.response.body` will eventually run the process out of memory. Use safe mode and parse only the fields you need:
```yaml
blocks:
  - type: HttpRequest
    safe_mode: true              # Errors don't halt the worker
  - type: ParseJSON
    json_path: ".user.balance"   # Extract only the field you need
    output_var: balance
```
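Some back-of-the-envelope arithmetic shows why parsing instead of storing matters. The figures here are illustrative assumptions (1M credentials, ~50 KB average HTML body, ~16 bytes per extracted field), not measurements:

```rust
// Rough memory comparison: retaining full bodies vs. only the parsed field.
// All sizes are assumed averages for illustration.
fn main() {
    let credentials: u64 = 1_000_000;
    let avg_body_bytes: u64 = 50 * 1024; // assumed ~50 KB HTML response
    let extracted_bytes: u64 = 16;       // assumed size of a parsed balance string

    let full_gb = (credentials * avg_body_bytes) as f64 / 1e9;
    let parsed_mb = (credentials * extracted_bytes) as f64 / 1e6;

    // Full bodies: ~51 GB resident; parsed fields: ~16 MB
    println!("full bodies:  ~{:.0} GB", full_gb);
    println!("parsed field: ~{:.0} MB", parsed_mb);
}
```

Three orders of magnitude separate the two, which is the difference between a run that finishes and one that gets OOM-killed halfway through.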