Cult Mechanicus¶
The essential set of product concepts investors need in order to understand how Ibby builds supply, evaluates fit, and turns matches into conversations.
Agent (Role / Profile)¶
A public, shareable, AI interface backed by a role's or candidate's modeled claims and context, accessible via a unique URL and continuously improved via clarifying loops.
An agent is Ibby's public, shareable interface for a specific role or a specific candidate. It is backed by a structured claim model plus authored context, and is designed to be interrogated: people can ask targeted questions and get answers grounded in the underlying claims, not generic summaries.
What an agent is¶
- A single, persistent representation of a role or a candidate
- Backed by modeled claims (structured data) and authored context (supporting detail)
- Accessible via a unique URL that can be shared like a job post or a profile
- Interrogable: reviewers can ask questions against the model to resolve ambiguity asynchronously
What it enables¶
- Consistent evaluation: roles and candidates are expressed in the same claim framework.
- Decision-ready clarity: routine screening questions can be answered without scheduling live time.
- Better first conversations: the "obvious questions" get handled up front, so live calls go deeper.
How it improves over time¶
Agents are continuously improved through clarifying loops:
- When an interrogator asks a question the system cannot answer confidently, it generates targeted follow-up questions for the author.
- The author answers once, and the agent's claim model is updated so the same ambiguity does not recur.
- Over time, the agent becomes more complete, more decisionable, and easier to compare.
Inputs and outputs¶
Inputs
- Natural-language information from the author (role owner or candidate)
- Interrogation questions from reviewers
- Clarifications provided in follow-up loops
Outputs
- Answers grounded in the modeled claims/context, or
- Targeted follow-up questions to close gaps and improve completeness
What it is not¶
- Not a static resume or job description: it evolves as new questions are answered.
- Not a generic chatbot: answers must be supported by the role/profile's claims and context.
- Not a one-off form submission: it is a durable interface meant to be shared and reused.
Where it shows up¶
- Role Agent: shared as the job description for a specific role.
- Candidate Agent: shared as the application packet for a specific candidate.
- Qualified Match Briefs (QMBs): decision-ready packets rendered from agents and kept interrogable.
Agent Interrogation¶
A Q&A interaction where someone questions an agent (role or profile) using modeled claims and authored context. It resolves ambiguity asynchronously and stays grounded in structured evidence before anyone spends live interview time.
Agent interrogation turns a role or profile into a decision-ready artifact by letting reviewers ask the questions they would normally ask in a screening call. The system answers from modeled claims and authored context; when confidence is low, it generates targeted follow-ups for the author to fill gaps.
What it accomplishes¶
- Resolve ambiguity early: Clarify unclear or missing details without scheduling live time.
- Reduce interview waste: Shift routine screening and clarification into asynchronous interrogation so live interviews focus on deeper evaluation and mutual fit.
- Standardize comparison: Because questions operate over a consistent claim framework, different candidates or roles can be interrogated in comparable ways.
How it stays trustworthy¶
- Stays grounded in evidence: Answers tie back to specific claims and supporting context, not generic summaries.
- Exposes tradeoffs, not just highlights: Makes constraints, preferences, and dealbreakers explicit so mismatches fail fast.
How it improves over time¶
- Drives completeness: If the agent cannot answer confidently, it produces targeted follow-up questions for the author to fill gaps and improve the model.
Inputs and outputs¶
Inputs
- Modeled claims
- Authored context
- The interrogator's questions
Outputs
- An answer grounded in claims/context, or
- A targeted follow-up question set for the author to fill gaps
What it is not¶
- Not a generic chatbot: answers must tie back to claims and context, or trigger follow-up questions for the author.
- Not a resume rewrite: it clarifies and interrogates evidence, rather than polishing narrative.
- Not a live interview replacement: it clears routine ambiguity so live time can go deeper.
Example (mini)¶
- Q: "Have you owned on-call for a customer-facing system? What broke and how did you respond?"
- A: "Yes," followed by a short summary of an incident, the actions taken, and the outcomes, tied to the candidate's reliability and ownership claims.
- If missing: "Follow-up: Describe the largest incident you led end-to-end (impact, actions, learnings)."
Phase 1 utility¶
During Phase 1, agent interrogation is the primary way Ibby collects structured, decision-ready information without asking either side to fill out long forms:
- Candidates: Submit a candidate agent as part of applying. Employers interrogate it to clarify scope, depth, ownership, and tradeoffs before scheduling live time.
- Companies: Publish a role agent as the job description. Candidates interrogate it to clarify must-haves vs nice-to-haves, true responsibilities, and level expectations before investing time.
Relationship to QMBs¶
Agent interrogation is the primary pathway to producing a Qualified Match Brief (QMB): it turns a role or profile into decision-ready, interrogable evidence.
Archetype¶
A standardized talent bucket (department-shaped), defined by a shared work function and shared evaluation model. It groups many job titles under one comparable set of claims, used for liquidity and activation thresholds.
Individual candidates may span more than one bucket, but each archetype is defined by the dominant work function and evaluation model.
If evaluating fit for a role would require a different rubric, it is a different archetype.
Key properties¶
- Function-based, not title-based: "Software engineering" not "Senior Backend Engineer"
- Stack-agnostic and platform-agnostic: languages and platforms vary; evaluation dimensions stay comparable
- Seniority is a separate layer: the archetype defines "what kind of work"; seniority defines "at what level"
- Activation unit: the unit Ibby uses to measure critical mass in Phase 1 and turn on paid QMB delivery in Phase 2
Archetype 1: Software Engineering¶
Definition: Roles whose primary output is building and shipping software systems. Titles, languages, and platforms vary, but the work is fundamentally "produce software" rather than "administer software."
Included roles (examples, not exhaustive)
- Software engineer, developer, programmer, application engineer
- Frontend, backend, full-stack
- Platform engineering, SRE, and DevOps roles where programming is core (automation, CI/CD, infrastructure-as-code, reliability engineering via code)
Adjacent Archetypes (separate buckets by design)
| Archetype | Description | Example Roles |
|---|---|---|
| QA, Testing | Validating software quality through test planning, execution, and defect reporting | QA Analyst, Tester, SDET, QA Lead |
| Data Engineering | Building and operating data pipelines, warehouses, and analytics infrastructure | Analytics Engineer, ETL/ELT Developer, Data Warehouse Engineer, BI/Reporting Engineer |
| Security Engineering | Hardening systems, preventing vulnerabilities, and responding to threats | AppSec Engineer, Security Architect, DevSecOps Engineer, DFIR Engineer |
| ML Engineering | Building, deploying, and operating machine learning models and ML platforms | AI Engineer, NLP Engineer, Computer Vision Engineer |
| Traditional IT | Administering end-user or business systems | Help desk, desktop support, sysadmin |
These are intentionally separate archetypes, because:
- their success is not primarily expressed through building software systems, and
- the core rubric differs enough that pooling them would blur evaluation quality
Bounded Flow¶
An intentionally constrained, continuous delivery cadence of Qualified Match Briefs for a role. It provides steady qualified optionality without turning hiring into volume triage, and keeps matching aligned to the latest claim-based model updates.
Bounded flow is Ibby's choice to sell qualified throughput, not applicant volume: you receive a steady stream of Qualified Match Briefs (QMBs) for a single role, capped on purpose to protect attention and preserve decision quality.
What it accomplishes¶
- Prevents volume triage: Keeps the review workload predictable so teams do not revert to "screening as a full-time job."
- Sustains real optionality: You see enough qualified candidates to make progress without being flooded.
- Improves decision speed: A smaller, higher-quality stream reduces coordination overhead and makes "next steps" clearer.
Why it is bounded (not batched)¶
Ibby aims for continuous flow rather than a single lump sum so each new brief benefits from the most up-to-date claim-based modeling and follow-up learning; the system gets more accurate over time, and the output stays aligned to the evolving role definition.
How it stays "qualified"¶
- Hard-constraint adherence: briefs must meet the declared non-negotiables for the role.
- Decision-ready minimum: briefs are held to a completeness threshold so reviewers can actually decide "why/why not" without guesswork.
- No pay-to-flood: increasing activity should change cadence/concurrency, not reduce the bar for what counts as a QMB.
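The "cadence, not bar" rule above can be shown in a toy delivery function: increasing activity raises the concurrency cap, while the qualification threshold stays fixed. This is an illustrative sketch only; the function names, field names, and thresholds are assumptions, not Ibby's implementation.

```python
# Toy sketch of a bounded-flow cap: paying more changes cadence/concurrency,
# never the bar for what counts as a QMB. All names and numbers are illustrative.

def deliver(queue, max_concurrent_briefs=3, min_strength=0.75):
    """Release the next briefs up to the concurrency cap; never lower the bar."""
    qualified = [b for b in queue if b["strength"] >= min_strength]
    return qualified[:max_concurrent_briefs]

queue = [{"id": i, "strength": s} for i, s in enumerate([0.9, 0.8, 0.6, 0.85, 0.78])]
batch = deliver(queue)
print([b["id"] for b in batch])  # [0, 1, 3] -- brief 2 never ships, at any cap
```

Raising `max_concurrent_briefs` would release brief 4 next; no cap value ever releases brief 2, because the strength gate is independent of cadence.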
What it is not¶
- Not inbound leads: it is not a queue of applicants that you must exhaust to find signal.
- Not "more is better": volume is not the product; quality and decisionability are.
- Not a one-time dump: it is a controlled cadence that keeps the model and results current.
Competitive framing¶
Job boards normalize overload; Ibby positions bounded flow as the "adjustable goldilocks" alternative: enough throughput to hire, not enough noise to drown the process.
Claim-Based Fit Modeling¶
A natural-language-driven process that converts role and candidate input into structured claims about requirements, constraints, preferences, and evidence. The system classifies what is provided, asks targeted follow-ups for what is unclear or missing, and produces a defined model that can be rendered into human-readable content.
Claim-based fit modeling is how Ibby turns messy, inconsistent hiring information into a defined, comparable data model for both roles and candidates. Users (both companies and candidates) can submit whatever information they have (in their own words); the system classifies it into a shared claim framework, identifies gaps or ambiguity, and asks targeted follow-up questions until the model meets a decisionable completeness standard.
What it produces¶
- A structured claim model for a role or a candidate, using the same evaluation dimensions on both sides.
- Claims expressed as structured data that can be rendered into clear, human-readable statements.
- A separation of hard constraints vs preferences, with supporting evidence attached where available.
How it works¶
- Collect: Users provide natural-language input (as much or as little as they want).
- Classify: The system maps input into a shared claim framework (not titles, not keywords).
- Clarify: The system asks targeted follow-ups for areas that are unexplored, unclear, or internally inconsistent.
- Consolidate: The result is a defined model of data for that role or candidate that can be compared, interrogated, and updated over time.
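The collect/classify/clarify/consolidate loop can be sketched in a few lines. This is a toy illustration, not Ibby's implementation: the `Claim` and `ClaimModel` shapes, the confidence field, and the 0.7 "decisionable" threshold are all assumptions.

```python
# Hypothetical sketch of the clarify/consolidate loop. Names and thresholds
# are illustrative assumptions, not Ibby's actual schema.

from dataclasses import dataclass, field

@dataclass
class Claim:
    dimension: str     # e.g. "on-call ownership", "backend depth"
    statement: str     # human-readable rendering of the claim
    confidence: float  # 0.0-1.0: how well-supported the claim is

@dataclass
class ClaimModel:
    claims: list = field(default_factory=list)

    def gaps(self, threshold=0.7):
        """Dimensions whose claims are too weak to be decisionable."""
        return [c.dimension for c in self.claims if c.confidence < threshold]

def consolidate(model: ClaimModel, answers: dict) -> ClaimModel:
    """Fold author clarifications back into the same model (no forked versions)."""
    for claim in model.claims:
        if claim.dimension in answers:
            claim.statement = answers[claim.dimension]
            claim.confidence = 0.9  # clarified directly by the author
    return model

model = ClaimModel([Claim("on-call ownership", "unclear", 0.3),
                    Claim("backend depth", "5 yrs building APIs", 0.85)])
print(model.gaps())  # ['on-call ownership'] -> generate a targeted follow-up
model = consolidate(model, {"on-call ownership": "Led on-call for payments API"})
print(model.gaps())  # [] -> the model is now decisionable on this dimension
```

The key property illustrated: clarifications update the existing model in place, so the same ambiguity does not recur on the next interrogation.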
Why "claims" matter¶
- Explicit: Assumptions become visible and reviewable instead of implied.
- Comparable: Different titles, stacks, and backgrounds can be evaluated using the same dimensions.
- Interrogable: Any claim can be questioned and refined to resolve ambiguity asynchronously.
- Evidence-aware: Claims can carry supporting context, examples, and provenance.
Ongoing, not one-time¶
Claim-based fit modeling is the same process used up front and over time: as new information arrives (or new questions are asked), the system updates the claim model rather than creating disconnected versions of a profile or role.
What it is not¶
- Not keyword matching: It does not treat resumes or job posts as bags of terms.
- Not title matching: Titles are inputs, not the basis of evaluation.
- Not a static form: It adapts to what is provided and actively drives toward completeness.
Where it shows up¶
- Candidate and role agents: The claim model is what the agent is built from.
- Qualified Match Briefs (QMBs): QMBs are rendered from claims plus supporting evidence, and stay interrogable.
Claims¶
Standardized, structured statements about a role or candidate -- e.g. "must have X," "has done Y," "prefers Z" -- designed to be comparable and computable across many dimensions.
Claims are Ibby's core unit of hiring information: standardized, structured statements about a role or a candidate. They translate natural-language input into a shared format that can be compared, interrogated, and computed across many dimensions.
What claims are¶
- Structured statements that represent requirements, constraints, preferences, capabilities, and evidence
- Standardized so different roles and candidates can be evaluated using the same dimensions
- Stored as structured data and renderable into clear human-readable language
Common types of claims¶
- Requirements: "Must have X" / "Cannot require Y"
- Capabilities: "Has done Y" / "Can do Z"
- Preferences: "Prefers A over B" / "Open to C"
- Constraints: location, work authorization, scheduling, travel, compensation boundaries (when applicable)
- Evidence and scope: concrete examples, depth, ownership, outcomes, and context that support a claim
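One plausible shape for a claim record, covering the types listed above. Every field and enum name here is an illustrative assumption, not Ibby's actual schema; the point is only that each claim carries a kind, a shared evaluation dimension, a renderable statement, and a hard-versus-preference flag.

```python
# Illustrative shape of a claim record (all names are assumptions).

from dataclasses import dataclass
from enum import Enum

class ClaimKind(Enum):
    REQUIREMENT = "requirement"   # "must have X" / "cannot require Y"
    CAPABILITY = "capability"     # "has done Y" / "can do Z"
    PREFERENCE = "preference"     # "prefers A over B" / "open to C"
    CONSTRAINT = "constraint"     # location, authorization, schedule, comp
    EVIDENCE = "evidence"         # examples, depth, ownership, outcomes

@dataclass(frozen=True)
class Claim:
    kind: ClaimKind
    dimension: str      # shared evaluation axis, e.g. "reliability ownership"
    statement: str      # renderable, human-readable form
    hard: bool = False  # hard constraint vs. preference

role_claims = [
    Claim(ClaimKind.REQUIREMENT, "backend", "Must have shipped a production API", hard=True),
    Claim(ClaimKind.PREFERENCE, "stack", "Prefers Go, open to Rust"),
]
hard_constraints = [c for c in role_claims if c.hard]
print(len(hard_constraints))  # 1 -- only the requirement is non-negotiable
```

Keeping `hard` explicit is what lets downstream filtering separate non-negotiables from preferences, as described under "What it produces."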
How Ibby builds and uses claims (the "secret sauce")¶
Ibby uses claims to turn unstructured hiring information into a fast, scalable matching system:
- Natural-language input becomes structured data: what someone writes or says is classified into standardized claims.
- Structured claims enable fast narrowing: claims are indexed so Ibby can quickly search and filter for potential qualified matches at scale.
- Deep reasoning happens on a short list: after narrowing, Ibby applies higher-cost, deeper AI analysis to the reduced candidate set to assess nuanced fit, tradeoffs, and risk.
This creates a practical balance of speed, time, and cost: cheap, broad filtering first; expensive, high-quality reasoning only where it matters.
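That speed/cost balance can be illustrated as a two-stage pipeline: a cheap set-membership filter over indexed claims, then a more expensive scoring pass on the survivors only. The function names are assumptions, and the toy `deep_score` is a stand-in for whatever higher-cost reasoning (e.g. an LLM pass) runs on the short list in practice.

```python
# Sketch of cheap-filter-then-deep-reason matching. Names are illustrative;
# deep_score is a toy stand-in for expensive per-candidate analysis.

def cheap_filter(candidates, role_hard_constraints):
    """Stage 1: fast, indexed filtering on structured claims."""
    return [c for c in candidates
            if all(req in c["claims"] for req in role_hard_constraints)]

def deep_score(candidate):
    """Stage 2 stand-in: in practice, a high-cost reasoning pass over the
    reduced set. Here a toy score just counts matched claims."""
    return len(candidate["claims"])

candidates = [
    {"name": "A", "claims": {"python", "on-call", "apis"}},
    {"name": "B", "claims": {"python"}},                    # fails a hard constraint
    {"name": "C", "claims": {"python", "apis"}},
]
survivors = cheap_filter(candidates, {"python", "apis"})
ranked = sorted(survivors, key=deep_score, reverse=True)
print([c["name"] for c in ranked])  # ['A', 'C'] -- B never reaches stage 2
```

The expensive pass never sees candidate B, which is the whole economic point: broad filtering is cheap, deep reasoning is spent only where it can change a decision.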
Why claims matter¶
- Comparable: Different titles, stacks, and backgrounds can be evaluated on the same axes.
- Computable: Claims can be scored, ranked, and mapped to identify match strength, gaps, and risks.
- Interrogable: Any claim can be questioned to resolve ambiguity and expose tradeoffs.
- Durable: A claim persists and improves over time as clarifications and evidence are added.
How claims are created and improved¶
Claims are produced through claim-based fit modeling:
- Users provide information in natural language.
- The system classifies it into claims within a shared framework.
- If something is unclear, missing, or inconsistent, the system asks targeted follow-up questions.
- The claim model is updated, improving completeness and reliability over time.
What claims are not¶
- Not keywords: claims carry meaning and structure, not just terms.
- Not titles: titles can inform claims, but claims define the work and fit.
- Not ungrounded assertions: when confidence is low, claims trigger follow-ups or require evidence to become decisionable.
Where claims show up¶
- Agents: role and candidate agents are built from claims plus authored context.
- Agent interrogation: questions are answered by referencing claims and their supporting context.
- Qualified Match Briefs (QMBs): briefs are rendered from claims, highlighting alignments, gaps, and decision-critical details.
Qualified Match Brief¶
A standardized, anonymized information packet describing a candidate or role, surfacing the most relevant claims and context needed to decide whether to proceed.
The QMB is generated from a candidate's agent and is designed to answer the first set of hiring questions with structured, comparable evidence. Companies review the content and can directly interrogate the candidate agent before spending live time.
High match threshold¶
Ibby only delivers QMBs that:
- Meet your declared hard constraints
- Meet a minimum "decisionable" completeness standard (internally enforced)
- Are not obviously off-target on role archetype or seniority
- Clear a minimum match-strength threshold (we do not send poor fits)
Each QMB is:¶
- Connected to the candidate's agent: you can interrogate the brief to drill into claims, evidence, and tradeoffs, rather than guessing from a resume snapshot.
- Anonymized by default: identity signals are withheld early so you can evaluate fit on work-relevant evidence first, reducing noise and bias and keeping the process focused on role requirements. Identity is revealed only when both sides are ready to proceed.
- Pre-filtered for strength: Ibby applies quality and fit gates before you ever see the packet, so your review time is spent comparing plausible hires, not rejecting obvious mismatches.
Shortlist¶
A small set of Qualified Match Briefs (e.g., 3–7) delivered for a specific role, intentionally constrained to avoid volume-driven sifting.
A shortlist is the primary delivery unit companies receive from Ibby: a small, decision-focused set of Qualified Match Briefs (QMBs) for a specific role. It is intentionally constrained (typically 3-7) to prevent volume-driven sifting and to keep the hiring process centered on careful evaluation of a few strong fits.
What a shortlist is¶
- A curated set of QMBs delivered for one specific role
- Sized to be reviewable in one sitting (typically 3-7)
- Built to support "compare and decide," not "scan and discard"
Why the shortlist is small¶
- Protects hiring time: small sets are realistic to read, interrogate, and discuss.
- Reduces false negatives: volume forces shallow screening; small sets encourage deeper evaluation.
- Aligns incentives: Ibby is optimized for match quality and hiring velocity, not applicant throughput.
How Ibby builds a shortlist¶
- Start with role constraints: only candidates meeting declared hard constraints are eligible.
- Apply quality gates: only decisionable, complete briefs are included.
- Filter for match strength: poor fits are excluded before delivery.
- Deliver with clarity: each QMB is interrogable via the candidate's agent, so reviewers can quickly validate assumptions and drill into open questions.
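The gating steps above can be sketched as a small pipeline: hard constraints first, then completeness, then match strength, then a size cap. The 3–7 size comes from the doc; all field names and numeric thresholds below are illustrative assumptions.

```python
# Hedged sketch of shortlist assembly. Gate order mirrors the doc;
# thresholds and field names are assumptions.

def build_shortlist(briefs, min_strength=0.75, min_completeness=0.8, max_size=7):
    eligible = [b for b in briefs if b["meets_hard_constraints"]]          # gate 1
    decisionable = [b for b in eligible
                    if b["completeness"] >= min_completeness]              # gate 2
    strong = [b for b in decisionable if b["strength"] >= min_strength]    # gate 3
    strong.sort(key=lambda b: b["strength"], reverse=True)
    return strong[:max_size]  # intentionally capped, never padded with weak fits

briefs = [
    {"id": 1, "meets_hard_constraints": True,  "completeness": 0.9, "strength": 0.8},
    {"id": 2, "meets_hard_constraints": False, "completeness": 0.9, "strength": 0.9},
    {"id": 3, "meets_hard_constraints": True,  "completeness": 0.5, "strength": 0.9},
]
print([b["id"] for b in build_shortlist(briefs)])  # [1]
```

Note that a high-strength brief (id 2 or 3) still fails an earlier gate: the gates are conjunctive, so volume can never substitute for qualification.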
How it is used¶
- Review and interrogate: teams read the briefs and ask targeted questions against candidate/role agents.
- Decide next steps: schedule first conversations with the strongest fits, or request more clarification before moving forward.
- Close the loop: if none are strong, adjust role assumptions/claims and regenerate a better-aligned shortlist.
What it is not¶
- Not an inbox of applicants: it is not meant for browsing or bulk triage.
- Not a ranking-only list: it is a set of decision-ready packets with explicit evidence and tradeoffs.
- Not "more is better": Ibby treats volume as a failure mode, not a feature.
Relationship to QMBs and Handshake¶
A shortlist is a bundle of QMBs. Once a company affirms interest in a QMB and the candidate also affirms, the Ibby Handshake completes and an introduction is made.
Signal Density¶
A measure of how complete and decision-ready a role or candidate's structured claims are across key dimensions, enabling reliable matching and filtering.
The Ibby Handshake¶
Our two-sided commitment step that turns a promising match into a real first conversation. It gates identity exchange until both sides explicitly opt in, and sets an expectation of timely follow-through.
The Ibby Handshake is a two-sided commitment step that converts a promising match into a reliable first conversation. It keeps candidates anonymized until both sides explicitly opt in, then exchanges the minimum information needed to move forward and sets an expectation of timely follow-through.
How it works¶
- Company reviews an anonymized candidate Qualified Match Brief (QMB) and can interrogate the underlying agent/context model.
- If the company wants to proceed, it affirms interest. In Ibby, that affirmation means: if the candidate also affirms, the company commits to timely follow-through (schedule a real first conversation or explicitly close the loop).
- Candidate reviews the company/role QMB (standardized; not anonymized) and can interrogate the same modeled context.
- If the candidate affirms, the handshake completes: Ibby de-anonymizes the candidate and exchanges the minimum contact/profile details needed for an immediate introduction (e.g., identity and a contact method), so the first conversation can be scheduled right away.
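The two-sided gate described above can be captured in a tiny state object. The class and method names are hypothetical; the only rule taken from the doc is that identity is revealed if and only if both affirmations are present.

```python
# Minimal state sketch of the two-sided handshake gate (names hypothetical).

from dataclasses import dataclass

@dataclass
class Handshake:
    company_affirmed: bool = False
    candidate_affirmed: bool = False

    def affirm_company(self):
        self.company_affirmed = True

    def affirm_candidate(self):
        self.candidate_affirmed = True

    @property
    def identity_revealed(self) -> bool:
        # De-anonymize only after BOTH sides explicitly opt in.
        return self.company_affirmed and self.candidate_affirmed

h = Handshake()
h.affirm_company()
print(h.identity_revealed)  # False -- the candidate has not opted in yet
h.affirm_candidate()
print(h.identity_revealed)  # True  -- exchange minimum contact details
```

One-sided interest, in either order, never triggers de-anonymization; only the completed handshake does.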
Expectations (what "commitment" means)¶
A handshake is not "maybe." It is a lightweight, explicit commitment to one of the following outcomes within a reasonable window:
- Schedule a first conversation, or
- Decline and close the loop clearly (no ghosting), or
- Request clarification via interrogation if a specific open question is blocking the decision.
Why we do it¶
- Reduces spam and wasted cycles by requiring intent on both sides before exchanging identities.
- Protects candidates from being "reviewed" endlessly without follow-through.
- Protects companies by ensuring candidates are opting in to this specific role, not spraying applications everywhere.
- Makes the first conversation reliable by turning "interest" into an explicit, system-mediated commitment with clear next steps.
Accountability and enforcement¶
Handshake outcomes are tracked as a reliability signal. Repeatedly affirming interest without timely follow-through may trigger visibility penalties, cool-downs, or other restrictions to protect the quality of the marketplace.
The Ibby Handshake is a sharp network-effect nucleus: if it reliably produces actual conversations, it becomes the place both sides prefer to be (classic marketplace network externalities).
See Ibby Handshake Policy for complete details.
Timing Neutrality¶
The principle that match visibility and outcomes should not depend on who arrived first; candidates are evaluated consistently over time rather than rewarded for speed.