Who I am, what AICAP HR is, and why this blueprint exists.
Harsimran Kaur Kapoor
Fifteen years building people infrastructure that actually gets used. Hypergrowth startups, mid-market, and Fortune 500 spanning 17 countries.
- Mastercard: 7 yrs · HRBP, AI Centre of Excellence · BU scaled 200 → 1,000 HC
- Ballard Power Systems: 2017-2019 · activated Oracle Fusion HCM · built L&D from zero
- Snapdeal: 2015-2017 · SAP SuccessFactors deployment · scaled 1,300 → 10,000 HC
CPHR, MBA, Google AI Professional Certificate. Two proof points I'm most known for: an 89% internal redeployment rate on a 200-person divestiture (industry norm is 50%), and contribution to a talent mobility framework that scaled to 39,000 employees globally. Vendor-agnostic by design: I've activated enterprise HCM platforms (Oracle Fusion, SAP SuccessFactors) where they sat unused, and learned the difference between buying a tool and getting a team to use it.
What is AICAP HR Consulting?
AICAP HR Consulting helps scaling companies identify where AI eliminates HR administrative work, and builds the capability to use it without creating new risks. The focus is the awkward middle: too big for spreadsheets, too small for enterprise HCM.
- Typical client: 200 to 1,500 employees
- Sweet spot: Pre-IPO and late-stage growth
- Three offers: AI HR Audit · Capability Workshops · Outcome Partnership
- Approach: Operator, not vendor. Designs for adoption, not just deployment.
- North star: HR as strategy, not admin
How AICAP HR delivers
AICAP HR is an operator-architect, not a software shop. AICAP HR designs the architecture, governance, policies, and adoption plan. The engineering layer is delivered through one of three paths chosen with the client based on the problem we're solving:
- Client engineering: Client's own team builds to AICAP HR spec; AICAP HR runs design reviews and sign-off
- Partner integrator: Engineering delivered by a systems integrator AICAP HR brings in or the client chooses
- Vendor stack: Engagement anchors on a vendor selection that supports the requirements out of the box
AICAP HR is vendor-agnostic. The tooling on any engagement is sourced based on the client's data, regulatory posture, scale, existing stack, and team capacity. No kickbacks, no defaults, no opinionated lock-ins.
Why this blueprint exists
Most HR teams at growing companies run on good instincts and tired spreadsheets. Scaling from 300 → 1,000 employees typically means tripling HR headcount - or finding a different way. AI is the different way: it absorbs the repeatable parts of performance management, calibration, succession, retention signals, comp benchmarking, and policy Q&A.
What you're about to see is one worked example - performance management and succession planning, end to end - running on a model HC report for an anonymized tech company in the transportation business. It's the shape of engagement AICAP HR delivers: a concrete artifact, not a slide with bullet points.
The AICAP HR thesis in one paragraph
What this blueprint is running on. And what's missing.
AICAP HR Exec Deck
What it is: Executive framing for AICAP HR's AI-in-HR blueprint. Why now, the 13-process landscape, the PM + Succession worked example, ROI case, governance posture.
What's of use: The strategic context for why this work matters. This walkthrough assumes you haven't seen the deck - I've compressed the relevant parts into Chapter 1.
AICAP HR Automation Matrix
What it is: The full landscape - 13 HR processes scored on automation potential, business value, and risk. Plus the skills inference model and the VP Ops succession scenario.
What's of use: Context for where PM + Succession sit in the broader portfolio. Everything in the demo maps to processes #5 and #6 on the matrix.
What's in the model HC report - fields the AI uses
Identity & structure
- EID (employee ID): E1 - E300
- Job family: 12 families
- Level (IC / M1-M3 / VP+): ✓
- Manager EID: ✓ (hierarchy)
- Geography: 10 countries
- Tenure / time in role: ✓
Performance & signals
- FY24 + FY25 rating: ✓ 263 / 263
- Rating delta (Δ): ✓
- Goal attainment %: ✓
- Engagement score (1-5): ✓
- Flight-risk flag: ✓ Low / Med / High
- Promotion-ready flag: ✓
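For readers who want the shape of the data, here is a minimal sketch of one row of the model HC report as a typed record. Field names, types, and the rating scale are illustrative assumptions for this walkthrough, not the actual report schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmployeeRecord:
    """Illustrative shape of one HC-report row; the real schema may differ."""
    eid: str                      # "E1" .. "E300"
    job_family: str               # one of 12 families
    level: str                    # "IC", "M1", "M2", "M3", "VP+"
    manager_eid: Optional[str]    # builds the reporting hierarchy
    geography: str                # one of 10 countries
    tenure_months: int
    time_in_role_months: int
    rating_fy24: Optional[int]    # assumed 1-5 scale
    rating_fy25: Optional[int]
    goal_attainment_pct: float
    engagement_score: float       # 1-5
    flight_risk: str              # "Low" / "Med" / "High"
    promotion_ready: bool

    @property
    def rating_delta(self) -> Optional[int]:
        """The Δ column: FY25 minus FY24, when both ratings exist."""
        if self.rating_fy24 is None or self.rating_fy25 is None:
            return None
        return self.rating_fy25 - self.rating_fy24
```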
What's missing - and why I'm naming it
❌ Missing entirely
- Skills ontology: Critical · Month 6
- Skills per employee: Critical · Months 6 to 9
- AI governance + audit log: Critical · Month 3
The current demo's succession slate uses performance + role-family signals, not true skills matching. That's Phase 2 - and requires the data inputs above before it can be trusted.
⚠ Partial / not connected
- Role architecture: Levels yes; role cards no
- Collaboration signals: Exists; not in HRIS
- Learning records: Hours yes; skills-tagged no
- Business outcome data: Siloed from HR
These unlock better signals but aren't blockers to the Phase 1 demo you're about to see.
What AI in HR actually looks like - modeled end-to-end on a scaling tech company.
You don't know me yet. The next twelve minutes will fix that. AI in HR, modeled end-to-end on a 300-person tech company. Performance, calibration, succession. The operator's view, not a vendor pitch.
Seven short chapters. Pick your lane.
Context
Who I am. What AICAP HR is. Why AI in HR, why now.
Sources & data
The model HC report, the deck, the automation matrix - all openable.
Live demo
Performance review → calibration → succession. Click to run.
Q&A + wrap
The questions I get most. And how to reach me.
Generating a manager's FY25 review - in 90 seconds.
Evidence AI has access to
- FY24 review + notes: ✓
- FY25 goal attainment: 5 goals, 72% avg
- 1:1 notes (last 12 mos): 38 entries
- Project artifacts: Tier-1 agency rollout
- Stakeholder signals: 2 customer cmdrs
- Peer feedback: 6 peers
Missing / low confidence
- Self-assessment: Pending
- Skip-level input: Requested
- External customer CSAT: Partial
- Cross-fn peer voices: 3 of 6 returned
Before the calibration meeting - AI has already done the prep.
Calibration table · Operations / M1-M3
| EID | Role | Level | Tenure | FY24 | FY25 | Δ | Goals | Eng. | Flight | AI flag |
|---|---|---|---|---|---|---|---|---|---|---|
Distribution check. FY25 ratings for this Ops management group skew slightly high versus the broader Operations family (3.67 vs 3.31). Recommend committee probe whether this reflects genuine performance or manager-level rating drift.
Outliers to discuss. E21 (Exceeds, ↑ from FY24 Needs Imp.) - dramatic recovery; validate with skip-level and peer evidence. E48 (Outstanding, ↑) - second year rising; calibrate against M1 bar. E236 - stable Exceeds but engagement at 2.6 is lowest in cohort; potential manager gap masked by results.
Retention overlay. E21 and E49 are both flagged High flight-risk. If either of these roles vacates pre-IPO, the Ops bench depth is tested immediately (see next chapter - succession).
Recommended committee focus: 15 min on E21 (validate rating jump + trigger retention playbook pre-meeting), 10 min on E49 (flight risk + engagement at 2.9), 5 min on E236 (engagement decline trend).
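To make the distribution check concrete, here is a minimal sketch of how a calibration pre-read might flag manager-level rating drift, assuming ratings are mapped to a 1-5 numeric scale. The 0.25 threshold is an illustrative placeholder, not AICAP HR's calibrated value.

```python
from statistics import mean

def rating_drift(cohort_ratings: list[float], family_ratings: list[float],
                 threshold: float = 0.25) -> dict:
    """Compare a manager cohort's mean FY rating to its broader job family.

    A gap beyond the threshold is flagged for the committee to probe:
    genuine performance difference, or rating drift at the manager level?
    """
    cohort_mean = mean(cohort_ratings)
    family_mean = mean(family_ratings)
    gap = cohort_mean - family_mean
    return {
        "cohort_mean": round(cohort_mean, 2),
        "family_mean": round(family_mean, 2),
        "gap": round(gap, 2),
        "flag_for_committee": abs(gap) > threshold,
    }
```

With the figures in the pre-read above (3.67 versus 3.31), the 0.36 gap would trip the flag and land on the committee agenda.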
VP, Operations - the seat just opened. Who's ready?
VP, Operations
Ranked successor slate · AI output
✓ What AI did automatically
- Scanned 263 active employees; narrowed to 13 viable matches for the role profile
- Scored each on readiness (rating trajectory + tenure + level proximity + promo flag); a scoring sketch follows this list
- Drafted rationale for top 5 from performance history + signals
- Flagged Candidate #2's flight-risk + triggered retention playbook
- Suggested development moves for gap-to-ready candidates (#3, #4)
- Assembled this slate in 90 seconds - normally a 2-week manual process
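The readiness score in the second bullet can be sketched as a simple weighted composite of the four signals named there. The weights, caps, and level ordering below are illustrative placeholders, not AICAP HR's production model; a real engagement would calibrate them with the client.

```python
def readiness_score(candidate, target_level: str = "VP+") -> float:
    """Illustrative composite of rating trajectory, tenure, level proximity,
    and the promotion-ready flag. `candidate` follows the employee-record
    sketch earlier in this walkthrough; all weights are placeholders.
    """
    level_order = ["IC", "M1", "M2", "M3", "VP+"]

    # Rating trajectory: reward year-over-year improvement, centred at stable.
    delta = (candidate.rating_fy25 or 0) - (candidate.rating_fy24 or 0)
    trajectory = max(0.0, min(1.0, 0.5 + 0.25 * delta))

    # Tenure in role: contribution capped at four years.
    tenure = min(candidate.time_in_role_months, 48) / 48

    # Level proximity: how far the candidate sits below the target seat.
    gap = level_order.index(target_level) - level_order.index(candidate.level)
    proximity = max(0.0, 1.0 - 0.5 * max(gap, 0))

    promo = 1.0 if candidate.promotion_ready else 0.0
    return round(0.35 * trajectory + 0.20 * tenure + 0.25 * proximity + 0.20 * promo, 3)
```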
✕ What humans still decide
- Which candidate actually gets the role - CEO + CHRO judgment
- External benchmark: hire an outside bar-raiser instead?
- Cultural fit judgment - AI can't read the room
- Development moves for Candidate #1 to close engagement gap
- Communication plan: how, when, and to whom the slate is shared
- Accountability for the retention intervention on Candidate #2
The questions I get most. Answered here.
They shouldn't trust it as a final product. They should treat it as a first draft built from evidence they would otherwise have to reassemble themselves - 1:1 notes, goal attainment, peer feedback, prior ratings. The manager is the final author. What AI removes is the blank-page problem and the reconciliation tax.
In the demo, the bias scan surfaced two weaknesses in its own draft - recency bias and a vague growth area. That's the model's job: make itself challengeable.
Three mechanisms designed into the architecture, in order:
1. Evidence-anchored drafts. The AI is only allowed to cite from real evidence the manager already has: goals, 1:1 notes, ratings, peer entries. No vibes, no inferred personality. If there's no evidence, it says so. AICAP HR specifies what the AI is allowed to read and how it must cite; the engineering layer enforces those rules.
2. Automated bias check. Every output runs through a second pass that flags recency bias, gendered language, vague growth areas, over-weighted single events, and any pattern where ratings systematically favour one group over another. AICAP HR designs what to check for; the build team or vendor implements the checks.
3. Calibration as cross-check. The calibration pre-read (Chapter 4) surfaces rating drift across managers - so individual bias can't hide in aggregate. AICAP HR designs the calibration methodology and the committee playbook; managers run the conversation.
None of this makes bias impossible. It makes it visible, which is the condition for fixing it.
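A minimal, rule-based sketch of what the second-pass scan might check is below: evidence recency, loaded or gendered descriptors, and vague growth language. The word lists and thresholds are illustrative assumptions; the production checks AICAP HR specifies (including group-level rating patterns) are broader and validated against the client's own data.

```python
from datetime import date

# Illustrative word lists only; a production scan would use validated lexicons.
LOADED_TERMS = {"abrasive", "bossy", "emotional", "aggressive"}
VAGUE_GROWTH = {"be more strategic", "improve communication", "show more leadership"}

def bias_scan(draft: str, evidence_dates: list[date]) -> list[str]:
    """Return human-readable flags on a drafted review; never rewrites the draft."""
    flags = []
    text = draft.lower()

    # Recency bias: most cited evidence clusters in the last 90 days.
    if evidence_dates:
        latest = max(evidence_dates)
        recent = sum(1 for d in evidence_dates if (latest - d).days <= 90)
        if recent / len(evidence_dates) > 0.7:
            flags.append("Recency bias: over 70% of cited evidence falls in the final 90 days.")

    loaded = [t for t in LOADED_TERMS if t in text]
    if loaded:
        flags.append(f"Potentially loaded/gendered descriptors: {', '.join(loaded)}.")

    vague = [p for p in VAGUE_GROWTH if p in text]
    if vague:
        flags.append(f"Vague growth areas without cited evidence: {', '.join(vague)}.")

    return flags
```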
The architecture AICAP HR designs keeps employee data inside the company. The AI runs in an environment the company controls (a private setup, a vendor's enterprise-tier offering, or a company-managed instance), HR data stays in the HRIS, and no employee records get sent to public AI services. AICAP HR writes the data-handling policy and the architecture spec; the client's engineering team or a partner systems integrator builds to it.
Phase 2 (once governance is defined - month 3 onward) allows for opt-in metadata-only signals from calendar and collaboration tools, with employee-level opt-out always available. Full audit log of every AI decision from day one - AICAP HR specifies the log schema and review cadence; the engineering layer implements it.
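As one possible shape for that log, here is a minimal sketch of an append-only, JSON-lines audit entry. Field names are illustrative; the real schema and review cadence are specified per engagement.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIAuditEntry:
    """One logged AI action; illustrative fields, not the production schema."""
    workflow: str          # e.g. "review_drafter", "calibration_preread", "successor_slate"
    subject_eid: str       # employee the output concerns
    requested_by: str      # human user who triggered the action
    evidence_refs: list    # pointers to source records cited (never raw copies)
    output_hash: str       # hash of the generated output, for tamper-evident review
    human_decision: str    # "accepted", "edited", "rejected", "pending"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_entry(entry: AIAuditEntry, log_path: str = "ai_audit.jsonl") -> None:
    """Append-only JSON lines: reviewers and auditors read it, nothing deletes from it."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```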
A skills ontology is a structured list of the skills the company needs to run - something like "P&L ownership at $50M scale," "incident-command decision-making," "B2B contract negotiation." A good ontology lets you ask real questions: who in the company has the skills the open VP role requires, including people outside Operations?
Most companies in this size range don't have one yet. The current demo uses performance + role-family proximity as a proxy - which is why the slate only surfaces Ops candidates, not cross-functional talent. Building the ontology is Phase 2 (month 6). Options: Lightcast (off-the-shelf), SFIA (global standard), homegrown, or AI-inferred from work artifacts. AICAP HR's default recommendation is AI-inferred seeded by self-declaration, audited quarterly.
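For readers who haven't worked with one, here is a minimal sketch of what the ontology and the query above look like in code. Skill names, proficiency levels, and the coverage threshold are illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    skill_id: str    # e.g. "SK-014"
    name: str        # e.g. "P&L ownership at $50M scale"
    category: str    # e.g. "Commercial", "Operations"

def candidates_for_role(role_skills: dict[str, int],
                        employee_skills: dict[str, dict[str, int]],
                        min_coverage: float = 0.7) -> list[tuple[str, float]]:
    """Who, anywhere in the company, covers the skills a role requires?

    role_skills:     skill_id -> required proficiency (1-5)
    employee_skills: eid -> {skill_id: current proficiency}, sourced from
                     self-declaration, manager validation, or AI inference.
    """
    matches = []
    for eid, skills in employee_skills.items():
        covered = sum(1 for sid, required in role_skills.items()
                      if skills.get(sid, 0) >= required)
        coverage = covered / len(role_skills)
        if coverage >= min_coverage:
            matches.append((eid, round(coverage, 2)))
    return sorted(matches, key=lambda m: m[1], reverse=True)
```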
Month 0-3: Governance design, audit-log spec, HRIS rationalisation (off spreadsheets into a real platform if not already there). Pilot the review drafter with one business unit (Ops, 40 people).
Month 3-6: Expand review drafter to all 263 active employees. Launch calibration pre-read for FY26 cycle. Begin skills ontology selection.
Month 6-9: Skills-per-employee seeded. True skills-based successor slating goes live. Begin flight-risk early-warning system.
Month 9-12: Full portfolio (13 processes from the automation matrix) in production. People-data warehouse joining HR + business outcomes.
AICAP HR scopes the phasing; the engineering layer builds to it through one of the three delivery paths (client engineering, partner SI, or vendor stack).
Vendor-agnostic. AICAP HR helps clients evaluate and select the right tools based on scale, data, regulatory posture, existing stack, and team capacity. No defaults, no kickbacks.
The categories typically in play: an HRIS that supports clean integrations and role-based permissions; a comp platform with live market data; a performance and engagement tool; an AI layer (Claude, ChatGPT, Gemini, or a company-managed alternative) deployed in a way that keeps employee data inside the company; and a connective layer that sits across all of them and keeps an audit log of every AI action.
The vendor mix varies by client. AICAP HR's job is to help you select what fits, design how the pieces talk to each other, and oversee the build - not to push you toward any particular vendor.
~60% reduction in manager + HR hours per review cycle. Every high flight-risk employee (48 today) gets a named owner and a playbook - historically zero of them did. A single retention save at the director level pays for the full tool stack for a year. And the compounding: HR's time redirects from drafting to strategy, which is what the company needs as it scales from 263 to 800 people without 3×-ing HR headcount.
Three failure modes I watch for:
1. Rubber-stamping. Managers sign the AI draft without editing. Counter: require manager edits as a non-skippable field; audit-log diff of what changed (a sketch of this check follows the list).
2. Data staleness. The model reasons over a snapshot that's two weeks old and misses a triggering event. Counter: near-real-time sync for the highest-signal sources (engagement, 1:1s, goal updates).
3. Trust loss on a high-profile error. One visibly biased or wrong AI output becomes the story. Counter: start with internal-only pilots, heavy manager training, and make the bias scan default-visible - not a hidden feature.
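A minimal sketch of the check behind failure mode 1: measure how much the manager actually changed the AI draft before submitting, and write the result to the audit log. The 2% minimum is an illustrative placeholder.

```python
from difflib import SequenceMatcher

def manager_edit_check(ai_draft: str, submitted: str, min_change: float = 0.02) -> dict:
    """Quantify how far the submitted review diverges from the AI draft.

    In production this result (plus the full diff) would be written to the
    audit log, and a change_ratio below the minimum would block submission.
    """
    similarity = SequenceMatcher(None, ai_draft, submitted).ratio()
    change_ratio = 1.0 - similarity
    return {
        "change_ratio": round(change_ratio, 3),
        "meets_minimum": change_ratio >= min_change,
    }
```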
Yes. Existing tools add AI features inside their own product: AI-drafted review inside one tool, AI summary inside another. That is useful but narrow.
What AICAP HR designs is the connective tissue across all your HR tools. So a performance rating in one system can trigger a calibration review in another, which flags a retention risk in succession, which opens a comp review. No single vendor does that end-to-end. AICAP HR designs how the tools talk to each other, sets the access rules, and oversees the build by the client's team or a partner. That is the gap a scaling company can own by engaging AICAP HR before they're locked into a single vendor's assumptions.
HR owns the policy, the principles, and the audit. Engineering (the client's team, a partner integrator, or a vendor that provides this out of the box) builds the connective tissue across the tools. CEO and board own the governance question: what is AI allowed to recommend versus decide? That line gets set once, in writing, and every tool lives inside it. AICAP HR is the architect across all three: writes the spec, runs design reviews, oversees the build, audits the output.
I've sketched an org + ownership model in the blueprint deck. Happy to walk through it.
This is the single biggest implementation risk and it is almost always under-designed. Right answer: build permission rules in from day one, so the AI can only ever see the data the user in front of it is allowed to see. Not bolted on later.
The access rules AICAP HR designs, by role:
HR sees everything. Managers see their direct reports plus the people on their succession slate (with rationale, not raw peer comparisons). Employees see their own data, their team's org structure, and their own skills - never peer ratings, peer flight-risk, or peer comp. Executives see aggregated views plus their own personal successor slate. Boards see only aggregated trends with names removed.
AICAP HR designs the access rules and writes the technical pattern that enforces them. The build is delivered by the client's engineering team, a partner systems integrator, or a vendor that already supports this. The choice is made with the client based on what fits. AICAP HR runs design reviews, signs off on the implementation, and audits it against the spec. This is where most consultants gloss; AICAP HR designs it on day one because once you lose employee trust, you can't get it back.
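To illustrate the enforcement pattern, here is a minimal sketch of role-based field filtering: whatever the AI (or a dashboard) returns to a user is first stripped down to the fields that user's role may see. The field lists are simplified placeholders; the production spec also covers manager chains, succession-slate scoping, and aggregate-only board views.

```python
# Simplified placeholder policy; the production spec is richer.
VISIBLE_FIELDS = {
    "hr":        {"*"},                                   # full record
    "manager":   {"eid", "level", "rating_fy25", "goal_attainment_pct",
                  "engagement_score", "flight_risk", "promotion_ready"},
    "employee":  {"eid", "level", "tenure_months"},       # own record handled below
    "executive": {"eid", "job_family", "level"},          # named detail only via own slate
}

def filter_record(record: dict, requester_role: str, requester_eid: str) -> dict:
    """Return only the fields the requester may see for this record.

    People always see their own data in full; nobody outside HR sees a peer's
    ratings, flight-risk, or comp by name. Illustrative only.
    """
    if requester_role == "hr" or record.get("eid") == requester_eid:
        return dict(record)
    allowed = VISIBLE_FIELDS.get(requester_role, set())
    return {k: v for k, v in record.items() if k in allowed}
```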
All four are either live or imminent and all four apply to HR.
NYC Local Law 144 (live since July 2023): mandatory annual bias audits on any Automated Employment Decision Tool used for NYC-based hiring, with public disclosure. Penalties from $500 per violation up to $1,500 per day for ongoing violations.
Colorado SB24-205 (live Feb 1, 2026): consumer protection law treating any AI that's a substantial factor in employment decisions as "high risk." Deployers carry duties around impact assessment, risk management, and disclosure.
EU AI Act (high-risk enforcement begins August 2, 2026): recruitment, performance evaluation, promotion, and termination AI all classified as high-risk. Compliance requires risk management systems, data governance, transparency, human oversight, and accuracy / robustness controls.
Canada AIDA (Artificial Intelligence and Data Act, in legislative review): broadly similar high-risk framework expected once it lands. Pre-IPO Canadian companies should design to it now.
AICAP HR designs the architecture to include the audit log, bias scan, human-decision retention, and disclosure language needed for all four. The compliance work isn't a Phase 2 retrofit; it's specified into the architecture from Month 1 and built by the client's engineering team or the chosen vendor stack under AICAP HR oversight.
The data backs this up: 83% of GenAI pilots in 2026 fail to reach production, and the failure is change management, not the model. Only 1 in 4 HR professionals played a leading role in their own company's AI rollout. That's the gap AICAP HR closes. Running global People functions at scale taught me where adoption actually breaks, and how to design around it from day one.
Three operating principles AICAP HR brings to every engagement:
1. HR leads, doesn't follow. The CHRO is the executive sponsor from week 1. Standard AICAP HR practice is not to take engagements where HR is brought in after the tech is bought - the design and adoption work has to happen up front.
2. Manager pilots before scale rollout. Eight to twelve volunteer managers run the new workflow for one cycle. Their feedback rewrites the prompts and the policies. Only then does the system go org-wide. AICAP HR designs the pilot, recruits the cohort, runs the feedback loops; the engineering layer ships the iterations.
3. Visible wins inside 60 days. One or two outputs (the review drafter, the calibration pre-read) ship early and visibly. Managers see hours back on their calendar. That earns the right to do the harder, slower work (skills inference, succession).
Adoption is engineered into the engagement plan from week 1, not bolted on after the tech is live.
Yes, they are. 70% of knowledge workers are using GenAI outside their company's official policy in 2026. Managers are pasting employee names, ratings, and feedback into ChatGPT and Claude right now to draft reviews, prep for tough conversations, or summarise 1:1 notes. The data has already left.
This is the shadow AI problem. The CHRO's job isn't to stop it - that boat has sailed. It's to bring the usage under governance before an incident forces the conversation.
AICAP HR's standard 30-day intake includes a shadow AI inventory: an anonymous survey of what employees are actually using, for what tasks, with what data. AICAP HR designs the survey, analyses the results, and turns the inventory into the policy baseline. From there, the company replaces unofficial AI use with an approved set of tools that the company controls and can audit. AICAP HR designs that set; the engineering team or the chosen vendors build it. People still get the productivity gain; the data stays where it belongs.
Your question not here?
The best questions I get asked aren't in this list yet. Book a consultation and bring yours, or jump to the wrap.
What you just saw, translated to business impact.
Three HR workflows, modeled end-to-end on the AICAP HR HC report in under 7 minutes - workflows that today consume weeks of manager and HR time in most companies, and still produce thinner output.
What this demo showed
- Review drafting that would take hours, completed in 90 seconds - with a bias scan layered in
- Calibration pre-reads that force outlier discussion instead of drifting ratings toward 3
- Succession slates that surface candidates leadership might otherwise overlook - including a live retention risk
- Every AI action paired with what humans still decide - augmentation, not automation-without-judgment
What this demo did NOT show
- A real skills ontology - successor slate uses performance + role signals; skills-based slating is Phase 2
- Fully connected data - today this runs off a snapshot; live deployment connects the client's HRIS, comp, performance, and collaboration feeds
- Governance + audit logs - every AI decision will be logged and reviewable; IPO-grade diligence is built in
- Privacy architecture - employee opt-outs, metadata-only signals, and transparency controls are Phase 1 table stakes
What this saves, in numbers.
Anchored on the model HC data: 263 active employees, 9-manager Ops calibration cohort, 48 high flight-risk employees. Loaded manager cost: $80/hour. Industry-standard replacement cost for mid-level roles: $150K+.
Net first-year case. At a 300-HC company, indicative stack and services run $350K to $700K in year one. Direct cycle-time savings alone come in at $150K to $250K. Add a single retention save or a 50% improvement in time-to-fill on three open roles and the engagement is at break-even or positive in year one.
Year two onward. Ongoing cost is primarily the vendor stack (typically 50 to 70% of year-one cost, since implementation work is done) plus an optional AICAP HR retainer for governance reviews and roadmap updates. Cycle-time savings, retention saves, and hiring velocity gains compound across every subsequent cycle.
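A worked sketch of the cycle-time arithmetic behind that range, using the anchors above (263 employees, $80/hour loaded manager cost). The hours-per-review figures are illustrative assumptions chosen to reflect the ~60% reduction cited earlier, not measured values from the demo.

```python
def review_cycle_savings(headcount: int = 263, loaded_rate: float = 80.0,
                         hours_before: float = 8.0, hours_after: float = 3.0,
                         cycles_per_year: int = 2) -> float:
    """Direct cycle-time savings only; retention saves and hiring velocity excluded."""
    saved_hours = (hours_before - hours_after) * headcount * cycles_per_year
    return saved_hours * loaded_rate

# 263 employees x 5 hours saved x 2 cycles x $80/hr = $210,400 -
# inside the $150K-$250K range cited above, before any retention or hiring gains.
```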
What an AICAP HR engagement actually delivers, week by week.
Current-state read
- HR data + tool audit
- Shadow AI inventory across the company
- Regulatory exposure map (NYC, EU, Colorado, Canada)
- CHRO + CEO alignment on scope
Scope + governance
- 13-process matrix scored for this client
- AI governance policy + audit log spec
- Role-based access design
- Pilot scope locked (one BU, one workflow)
Pilot launch
- Review drafter pilot live for 8-12 volunteer managers (engineering layer builds; AICAP HR designs the prompts and policies)
- Bias scan and audit log running to spec
- AICAP HR runs the weekly manager feedback loop
- Prompts and policies revised in-flight
First wins + scale plan
- Manager hours-saved measured and reported by AICAP HR
- Calibration pre-read methodology deployed for FY cycle
- Phase 2 roadmap (skills, succession, retention) authored
- Handoff plan or extension scoped with the client
The AICAP HR engagement
30 minutes to walk through your people-ops landscape, the ROI model, governance principles, and phasing. If the thesis holds up, AICAP HR scopes a 90-day diagnostic or pilot that proves this on one business unit before going company-wide.
Let's compare notes
The AICAP HR blueprint deck + automation matrix workbook are openable from Chapter 2. Happy to walk through the full 13-process portfolio, governance model, and the skills-ontology question. I'd also love to hear what you're seeing on your side.