Overview
A job executes your pipeline once for each line in a wordlist, using multiple threads for high-speed parallel processing. Jobs track progress, collect hits, and provide live statistics.

Creating a Job
Click New Job
Choose the job type:
- Config Job: Run your current pipeline
- Proxy Check: Test a proxy list
Configure job settings:
- Name: Descriptive name for this job
- Thread count: Number of concurrent workers (default: 100)
- Wordlist: Path to your data file
- Proxy list: (Optional) Path to proxy file
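The settings above can be pictured as a simple configuration record. This is an illustrative sketch only; the key names are hypothetical and do not reflect IronBullet's actual schema.

```python
# Hypothetical job configuration mirroring the settings listed above.
# Key names are illustrative, not IronBullet's real config format.
job = {
    "name": "example-check",      # descriptive name for this job
    "type": "config",             # or "proxy_check"
    "threads": 100,               # concurrent workers (default: 100)
    "wordlist": "data/combos.txt",
    "proxy_list": None,           # optional proxy file path
}

print(job["name"], job["threads"])
```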
Job Types
Config Job
Runs your pipeline configuration against a wordlist.

Proxy Check Job
Tests proxies for availability and speed.

Thread Configuration
Choosing Thread Count
Thread count determines how many requests run simultaneously:
- Low (10-50): Slower targets, rate-limited APIs
- Medium (50-200): Most use cases, balanced speed
- High (200-1000): Fast targets, proxy rotation
- Very High (1000+): Proxy checking, simple requests
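Conceptually, a job's workers behave like a fixed-size thread pool draining the wordlist: the thread count caps how many checks are in flight at once. A minimal Python sketch of that model (the `check` function is a stand-in for the pipeline, not IronBullet's API):

```python
from concurrent.futures import ThreadPoolExecutor

def check(line):
    # Placeholder for running the pipeline against one wordlist entry.
    return line.strip().upper()

wordlist = ["alice:pw1", "bob:pw2", "carol:pw3"]

# max_workers plays the role of the job's thread count: it bounds
# how many checks run simultaneously while the pool drains the list.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(check, wordlist))

print(results)
```

`pool.map` preserves input order, so results line up with the wordlist even though checks complete out of order.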
Advanced Threading Options
In your pipeline’s Runner Settings:
- Start threads gradually: Ramp up slowly to avoid an initial burst
- Gradual delay: Milliseconds between thread starts (default: 100ms)
- Automatic thread count: Let IronBullet optimize threads
- Concurrent per proxy: Limit requests per proxy simultaneously
- Lower threads on retry: Reduce threads when retrying failed items
Monitoring Execution
The Jobs panel displays real-time statistics.

Progress Bar
Live Statistics
| Metric | Value | Description |
|---|---|---|
| CPM | 1,245 | Checks per minute (current speed) |
| Hits | 23 | Successful results |
| Fails | 15,320 | Failed checks |
| Errors | 12 | Network/parsing errors |
| Retries | 45 | Items being retried |
| Active Threads | 198/200 | Currently working threads |
| Elapsed | 00:12:34 | Time since job started |
| Remaining | ~00:03:28 | Estimated time to completion |
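The CPM and Remaining figures in the table follow from the raw counters: CPM is checks completed per minute of elapsed time, and the ETA extrapolates the remaining items at the current speed. A sketch of that arithmetic (helper names are hypothetical):

```python
def cpm(checked, elapsed_seconds):
    # Checks per minute: completed items scaled to a one-minute window.
    return checked * 60 / elapsed_seconds

def eta_seconds(remaining_items, current_cpm):
    # Estimated seconds left if the current speed holds.
    return remaining_items * 60 / current_cpm

print(cpm(500, 60))          # 500 checks in one minute -> 500.0 CPM
print(eta_seconds(1000, 500))  # 1000 items at 500 CPM -> 120.0 s
```

Since the estimate assumes a steady rate, it is least reliable early in a job, before CPM stabilizes.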
Job States
- Queued: Job created but not started
- Running: Currently executing
- Paused: Temporarily stopped (can resume)
- Stopped: Manually stopped by user
- Completed: Finished processing all data
Job Controls
Pause/Resume
Click Pause to temporarily stop execution:
- Threads finish their current items
- Progress is saved
- Click Resume to continue from where you left off
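The pause/resume behavior can be modeled with a gate that workers check between items: clearing it parks the threads, setting it lets them continue from where they left off. A minimal sketch using Python's `threading.Event` (an illustration of the concept, not IronBullet's implementation):

```python
import threading

pause_gate = threading.Event()  # set = running, cleared = paused

def worker(items, out):
    for item in items:
        pause_gate.wait()       # block here whenever the job is paused
        out.append(item * 2)    # "process" the item

out = []
t = threading.Thread(target=worker, args=([1, 2, 3], out))
pause_gate.clear()              # start in the paused state
t.start()
pause_gate.set()                # resume: the worker picks up where it stopped
t.join()
print(out)
```

Because each worker checks the gate between items, pausing never interrupts an item mid-check, matching the "threads finish current items" behavior above.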
Stop
Click Stop to terminate the job:
- All threads are stopped immediately
- Partial results are saved
- Job cannot be resumed (must start new job)
Delete
Remove a completed or stopped job:
- Clears the job from the list
- Results are removed (unless saved to file)
Results & Hits
Viewing Hits
Click a job to view its hits in the Hits Database panel:
- Data line: Original wordlist entry
- Captures: Variables marked as captures in your pipeline
- Proxy: Proxy used (if applicable)
Exporting Results
Jobs automatically save hits to files based on your Output Settings (by default, under the results/ directory):
- Save to file: Enable/disable file output
- Output directory: Where to save results
- Output format: Custom template with variables
- Include response: Save full HTTP responses
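The "Output format" setting describes each saved hit with a template whose variables are filled in per result. As an illustration of how such a template substitution works (the variable names and syntax here are hypothetical, not IronBullet's actual template language):

```python
# Hypothetical captures for one hit; real variable names depend on
# the pipeline's capture settings.
hit = {"data": "alice:pw1", "proxy": "10.0.0.1:8080", "plan": "premium"}

template = "{data} | plan={plan} | via {proxy}"
line = template.format(**hit)
print(line)
```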
Data Source Options
File Path
Load the wordlist from a file.

Inline Data
Paste data directly (useful for small tests).

Skip & Take
Process a subset of the data:
- Skip: Skip the first N lines (useful for resuming)
- Take: Process only N lines (0 = all)
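Skip & Take is a plain slicing operation over the wordlist. A sketch with `itertools.islice`, including the "0 = all" convention described above (the `subset` helper is illustrative):

```python
from itertools import islice

def subset(lines, skip=0, take=0):
    # take == 0 means "all remaining lines", matching the docs above.
    stop = None if take == 0 else skip + take
    return list(islice(lines, skip, stop))

data = ["a", "b", "c", "d", "e"]
print(subset(data, skip=2))          # drop the first 2 lines
print(subset(data, skip=1, take=2))  # skip 1, then process only 2
```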
Proxy Configuration
Per-Job Proxy Override
Override your pipeline’s proxy settings for a specific job:
- Pipeline mode: Use the pipeline’s proxy settings (default)
- File mode: Use a different proxy file for this job

This lets you run multiple jobs with the same pipeline but different proxy sources.
Multiple Jobs
You can run multiple jobs simultaneously. Each job:
- Has its own thread pool
- Uses separate proxy sources
- Tracks independent statistics
- Saves results to separate files
Job History
View past jobs in the Jobs panel:
- Job name and creation time
- Final statistics (hits, fails, CPM)
- Completion time and duration
- Click to view saved hits
Performance Optimization
Maximizing Speed
- Increase threads: More threads usually means higher CPM, until the target or your connection saturates
- Use proxies: Avoid IP-based rate limiting
- Optimize pipeline: Remove unnecessary parsing blocks
- Disable safe mode: Faster execution (but errors aren’t caught)
- Reduce timeout: Lower timeout for faster fails
Reducing Errors
- Lower thread count: Reduce pressure on target
- Add gradual start: Ramp up slowly
- Increase timeout: Give requests more time
- Use better proxies: Higher quality = fewer errors
- Enable retries: Retry failed items automatically
Troubleshooting
CPM is very low (< 100)
- Target is slow or rate-limited
- Increase threads if target can handle it
- Check proxy quality (dead proxies slow things down)
- Reduce timeout to fail faster on dead endpoints
Many errors (> 10%)
- Too many threads overwhelming target
- Bad proxies or proxy timeout too low
- Network issues or target blocking requests
- Enable retry logic in Runner Settings
No hits but data is valid
- Test pipeline in Debug Mode first
- Verify KeyCheck conditions are correct
- Check that parsing blocks extract variables properly
- Ensure test data matches wordlist format
Job stuck at 0% or not starting
- Check wordlist file path is correct
- Verify wordlist file is not empty
- Ensure sufficient memory for large wordlists
- Check logs for sidecar startup errors