Data Settings Overview

Data settings control how IronBullet reads and parses input data. Each line in your wordlist is split according to these rules and mapped to input.* variables.
Data settings transform raw text lines like user@example.com:password123 into structured variables like input.USER and input.PASS.

Data Settings Structure

From src/pipeline/mod.rs:66-81:
pub struct DataSettings {
    pub wordlist_type: String,  // Preset format (Credentials, Url, Custom)
    pub separator: char,         // Character to split on (':' for credentials)
    pub slices: Vec<String>,     // Names for each split part
}

impl Default for DataSettings {
    fn default() -> Self {
        Self {
            wordlist_type: "Credentials".into(),
            separator: ':',
            slices: vec!["USER".into(), "PASS".into()],
        }
    }
}
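To make the mapping concrete, here is a hedged sketch (not IronBullet's actual parser) of how `DataSettings`-style splitting could turn a line into named `input.*` variables; `split_line` is a hypothetical helper name:

```rust
// Hypothetical helper: split a line on the separator and pair each part
// with its slice name. splitn(slices.len(), ..) keeps any extra separator
// characters inside the last slice rather than dropping them.
fn split_line(line: &str, separator: char, slices: &[&str]) -> Vec<(String, String)> {
    line.splitn(slices.len(), separator)
        .zip(slices)
        .map(|(part, name)| (format!("input.{}", name), part.to_string()))
        .collect()
}

fn main() {
    let vars = split_line("john@example.com:password123", ':', &["USER", "PASS"]);
    for (name, value) in &vars {
        println!("{} = {}", name, value);
    }
}
```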

Configuration Fields

| Field | Type | Description |
|-------|------|-------------|
| `wordlist_type` | String | Preset format name (`Credentials`, `Url`, `Custom`, etc.) |
| `separator` | Char | Character to split each line on |
| `slices` | Array of strings | Ordered names for the split parts |

Wordlist Types

Common preset formats:

Credentials

Format: username:password
{
  "wordlist_type": "Credentials",
  "separator": ":",
  "slices": ["USER", "PASS"]
}
Example data:
john@example.com:password123
alice:secretpass
bob@test.com:myp@ssw0rd
Variables:
  • input.USER → john@example.com
  • input.PASS → password123

Email:Password:Extra

Format: email:password:extra
{
  "wordlist_type": "Custom",
  "separator": ":",
  "slices": ["EMAIL", "PASS", "EXTRA"]
}
Example data:
user@gmail.com:pass123:backup@email.com
alice@yahoo.com:secret:+1234567890
Variables:
  • input.EMAIL → user@gmail.com
  • input.PASS → pass123
  • input.EXTRA → backup@email.com

URL

Format: Single column (with a single slice, the whole line is kept as one value)
{
  "wordlist_type": "Url",
  "separator": ":",
  "slices": ["URL"]
}
Example data:
https://example.com/page1
https://example.com/page2
https://test.com/api/endpoint
Variables:
  • input.URL → full line

Custom

Format: Pipe-separated values
{
  "wordlist_type": "Custom",
  "separator": "|",
  "slices": ["ID", "NAME", "EMAIL", "PHONE"]
}
Example data:
1001|John Smith|john@example.com|555-0123
1002|Alice Jones|alice@test.com|555-0124

Example Configuration

From configs/example.rfx:87-91:
{
  "data_settings": {
    "wordlist_type": "Credentials",
    "separator": ":",
    "slices": ["USER", "PASS"]
  }
}
This parses each line of your wordlist:
user1@example.com:password123
Into variables:
  • input.USER = user1@example.com
  • input.PASS = password123

Data Source Types

From src/runner/job.rs:46-60:
pub struct DataSource {
    pub source_type: DataSourceType,
    pub value: String,
}

pub enum DataSourceType {
    File,          // Load from file path
    Folder,        // Load all files in folder
    Url,           // Download from URL
    Inline,        // Direct text input
    Range,         // Generate numeric range
    Combinations,  // Generate combinations
}
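A minimal sketch of dispatching on the source type might look like the following; only `File` and `Inline` are shown, and `load_lines` is a hypothetical name, not IronBullet's API:

```rust
use std::fs;

// Simplified stand-in for DataSourceType covering two variants.
enum SourceKind {
    File,   // "value" is a filesystem path
    Inline, // "value" is the data itself
}

// Resolve a source into a list of raw lines for the data pool.
fn load_lines(kind: SourceKind, value: &str) -> std::io::Result<Vec<String>> {
    let text = match kind {
        SourceKind::File => fs::read_to_string(value)?,
        SourceKind::Inline => value.to_string(),
    };
    Ok(text.lines().map(str::to_string).collect())
}

fn main() -> std::io::Result<()> {
    let lines = load_lines(SourceKind::Inline, "user1:pass1\nuser2:pass2")?;
    println!("loaded {} lines", lines.len());
    Ok(())
}
```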
Load data from a text file:
{
  "source_type": "File",
  "value": "/path/to/wordlist.txt"
}
File contents:
user1@example.com:pass123
user2@example.com:pass456
user3@example.com:pass789
Load all .txt files from a directory:
{
  "source_type": "Folder",
  "value": "/path/to/wordlists/"
}
Combines all files:
/path/to/wordlists/list1.txt
/path/to/wordlists/list2.txt
/path/to/wordlists/list3.txt
Download wordlist from HTTP/HTTPS:
{
  "source_type": "Url",
  "value": "https://example.com/wordlist.txt"
}
Downloads and parses the remote file.
Paste data directly into the config:
{
  "source_type": "Inline",
  "value": "user1:pass1\nuser2:pass2\nuser3:pass3"
}
Useful for testing or small datasets.
Generate numeric sequences:
{
  "source_type": "Range",
  "value": "1-1000"
}
Generates:
1
2
3
...
1000
Access via input.NUMBER or your configured slice name.
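Expanding the `"1-1000"` range syntax shown above can be sketched like this; `expand_range` is a hypothetical helper, not IronBullet's implementation:

```rust
// Parse "start-end" and expand it into one line per number.
fn expand_range(value: &str) -> Option<Vec<String>> {
    let (start, end) = value.split_once('-')?;
    let start: u64 = start.trim().parse().ok()?;
    let end: u64 = end.trim().parse().ok()?;
    Some((start..=end).map(|n| n.to_string()).collect())
}

fn main() {
    let lines = expand_range("1-1000").unwrap();
    println!(
        "{} lines, first = {}, last = {}",
        lines.len(),
        lines[0],
        lines[lines.len() - 1]
    );
}
```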
Generate combinations from multiple lists:
{
  "source_type": "Combinations",
  "value": "users.txt:passwords.txt"
}
If users.txt has 100 users and passwords.txt has 50 passwords, generates 5,000 combinations.
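The multiplication above is just a cross product of the two lists, joined with the data separator (here `:`); this is a hypothetical illustration, not IronBullet's generator:

```rust
// Every user paired with every password: users.len() * passwords.len() lines.
fn combinations(users: &[&str], passwords: &[&str]) -> Vec<String> {
    users
        .iter()
        .flat_map(|u| passwords.iter().map(move |p| format!("{}:{}", u, p)))
        .collect()
}

fn main() {
    // 2 users x 2 passwords = 4 combinations
    for line in combinations(&["alice", "bob"], &["a", "b"]) {
        println!("{}", line);
    }
}
```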

Separator Characters

Common separators:
| Character | Use Case | Example |
|-----------|----------|---------|
| `:` (colon) | Credentials, standard format | `user:pass` |
| `\|` (pipe) | CSV-like data | `id\|name\|email` |
| `,` (comma) | CSV files | `john,smith,john@example.com` |
| `\t` (tab) | TSV files | `user\tpass\temail` |
| ` ` (space) | Space-delimited | `user pass extra` |
| `;` (semicolon) | European CSV | `name;email;phone` |
If your data contains the separator character (e.g., password contains :), consider using a different separator or escaping.
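One mitigation when a value may itself contain the separator is to split at most as many times as there are slices, so extra separators stay inside the final part. This mirrors Rust's `splitn`; whether IronBullet limits splits this way is an assumption:

```rust
// Split into at most `slices` parts; separators beyond that limit are
// left untouched in the last part.
fn split_limited(line: &str, separator: char, slices: usize) -> Vec<&str> {
    line.splitn(slices, separator).collect()
}

fn main() {
    // The ':' inside the password survives in the PASS part:
    println!("{:?}", split_limited("alice:p@:ss", ':', 2));
}
```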

Slice Mapping

Slices are mapped, in order, to the split parts of each line: the first slice name receives the first part, the second receives the second, and so on.

Example Configurations

{
  "separator": ":",
  "slices": ["USER", "PASS"]
}
Input: alice@test.com:secretpass
Variables:
  • input.USER = alice@test.com
  • input.PASS = secretpass

Data Pool Processing

From src/runner/data_pool.rs:4-57, the data pool manages how lines are consumed:
pub struct DataPool {
    lines: Vec<String>,
    index: AtomicUsize,
    retry_queue: Mutex<Vec<(String, u32)>>,
}

impl DataPool {
    pub fn next_line(&self) -> Option<(String, u32)> {
        // First check retry queue
        if let Ok(mut queue) = self.retry_queue.lock() {
            if let Some(entry) = queue.pop() {
                return Some(entry);
            }
        }
        // Then get next line from main pool
        let idx = self.index.fetch_add(1, Ordering::Relaxed);
        self.lines.get(idx).map(|l| (l.clone(), 0))
    }

    pub fn return_line(&self, line: String, retry_count: u32) {
        if let Ok(mut queue) = self.retry_queue.lock() {
            queue.push((line, retry_count));
        }
    }
}
When a block returns BotStatus::Retry, the data line is added back to the retry queue and will be processed again.
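To see the retry behavior concretely, the pool from the excerpt can be exercised standalone; the `new` constructor below is an assumption added to make the example self-contained:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Re-declaration of the pool from the excerpt above, plus a constructor.
struct DataPool {
    lines: Vec<String>,
    index: AtomicUsize,
    retry_queue: Mutex<Vec<(String, u32)>>,
}

impl DataPool {
    fn new(lines: Vec<String>) -> Self {
        Self {
            lines,
            index: AtomicUsize::new(0),
            retry_queue: Mutex::new(Vec::new()),
        }
    }

    fn next_line(&self) -> Option<(String, u32)> {
        // Retried lines take priority over fresh ones.
        if let Ok(mut queue) = self.retry_queue.lock() {
            if let Some(entry) = queue.pop() {
                return Some(entry);
            }
        }
        let idx = self.index.fetch_add(1, Ordering::Relaxed);
        self.lines.get(idx).map(|l| (l.clone(), 0))
    }

    fn return_line(&self, line: String, retry_count: u32) {
        if let Ok(mut queue) = self.retry_queue.lock() {
            queue.push((line, retry_count));
        }
    }
}

fn main() {
    let pool = DataPool::new(vec!["a:1".into(), "b:2".into()]);
    let (line, retries) = pool.next_line().unwrap(); // fresh line, 0 retries
    pool.return_line(line, retries + 1);             // simulate BotStatus::Retry
    // The returned line comes back before the next fresh line:
    println!("{:?}", pool.next_line());
}
```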

Runner Settings for Data

From src/pipeline/mod.rs:173-233, you can control how data is processed:
pub struct RunnerSettings {
    pub threads: u32,                  // Concurrent workers
    pub skip: u32,                     // Skip first N lines
    pub take: u32,                     // Process only N lines (0 = all)
    pub max_retries: u32,              // Max retries per line
    pub continue_statuses: Vec<BotStatus>, // Statuses that re-queue
}

Example: Process subset of data

{
  "runner_settings": {
    "threads": 100,
    "skip": 1000,        // Skip first 1000 lines
    "take": 5000,        // Process next 5000 lines
    "max_retries": 3
  }
}
This would process lines 1001-6000 from your wordlist.
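The skip/take arithmetic can be sketched as a simple iterator chain, using the convention above that `take == 0` means "process everything"; `select_lines` is a hypothetical name:

```rust
// Drop the first `skip` lines, then keep `take` lines (0 = keep all).
fn select_lines(lines: Vec<String>, skip: u32, take: u32) -> Vec<String> {
    let rest = lines.into_iter().skip(skip as usize);
    if take == 0 {
        rest.collect()
    } else {
        rest.take(take as usize).collect()
    }
}

fn main() {
    let lines: Vec<String> = (1..=10).map(|n| format!("user{}", n)).collect();
    // skip 2, take 3 -> user3, user4, user5
    println!("{:?}", select_lines(lines, 2, 3));
}
```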

Using Input Variables

Once parsed, access input data in any block using <input.SLICE_NAME>:
{
  "block_type": "HttpRequest",
  "settings": {
    "method": "POST",
    "body": "username=<input.USER>&password=<input.PASS>"
  }
}
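A hedged sketch of `<input.NAME>` substitution is shown below; IronBullet's real template engine may differ, and `interpolate` is a hypothetical helper:

```rust
// Replace each <input.NAME> placeholder with its parsed value.
fn interpolate(template: &str, vars: &[(&str, &str)]) -> String {
    let mut out = template.to_string();
    for (name, value) in vars {
        out = out.replace(&format!("<input.{}>", name), value);
    }
    out
}

fn main() {
    let body = interpolate(
        "username=<input.USER>&password=<input.PASS>",
        &[("USER", "alice"), ("PASS", "secret")],
    );
    println!("{}", body);
}
```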

Best Practices

Clean Data

  • Remove duplicate lines
  • Trim whitespace
  • Validate format consistency
  • Remove empty lines

Naming

  • Use descriptive slice names
  • Prefer UPPERCASE
  • Match your domain (USER vs EMAIL vs USERNAME)

Performance

  • Use skip/take for testing subsets
  • Start with small wordlists
  • Gradually increase thread count

Format

  • Keep separators consistent
  • Document your format
  • Use standard types when possible

Troubleshooting

Problem: input.USER is empty even though the wordlist has data.
Solutions:
  • Check separator matches your data format
  • Verify slice names match variable references
  • Ensure no extra spaces around separator
  • Check for UTF-8/encoding issues
Problem: Lines have different numbers of separator characters.
Solutions:
  • Clean your wordlist to a consistent format
  • Choose a separator that does not appear in the data
  • Define only as many slices as the minimum number of parts per line
Problem: Data contains the separator character (e.g., : in a password).
Solutions:
  • Use different separator (e.g., | or \t)
  • URL-encode the data
  • Escape the character
  • Use a different wordlist format