Y Combinator’s move to four cohorts a year has shortened the time between demo days, and that pace is showing up in the security startups arriving in the Fall 2025 cohort. Expect companies that focus on AI-native security tooling, continuous offensive testing, and agent-aware access controls to move fastest from prototype to production.

Two themes jumped out from the batch when I looked through the launches and company pages. First, offensive automation - startups building AI agents that do continuous pentesting and produce exploit proof-of-concepts - is maturing from research demos into integrated developer workflows. Veria Labs is explicit about this approach: integrating into git and CI/CD, running attacks on PRs and staging, and delivering patchable remediation rather than just alerts. That model is valuable for teams that ship many times a day and need security coverage that keeps pace.
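To make that model concrete, here is a minimal sketch of what a CI gate over continuous pentest findings could look like. Everything here is an assumption for illustration - the endpoint, auth scheme, and response fields are hypothetical stand-ins, not Veria Labs' actual interface - and it presumes an earlier pipeline stage already ran the scan against staging.

```python
"""CI gate on continuous pentest findings.

Illustrative only: the API endpoint, auth scheme, and response shape
are hypothetical, not any vendor's real interface. Assumes an earlier
pipeline stage already ran the scan against the PR's staging deploy.
"""
import os
import sys

import requests

SCAN_API = os.environ.get("PENTEST_API", "https://pentest.example.com/v1")
TOKEN = os.environ["PENTEST_TOKEN"]      # injected as a CI secret
COMMIT = os.environ["CI_COMMIT_SHA"]     # set by the CI runner


def main() -> int:
    # Fetch findings for this commit's staging environment.
    resp = requests.get(
        f"{SCAN_API}/findings",
        params={"commit": COMMIT, "environment": "staging"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    findings = resp.json().get("findings", [])

    # Block the merge only on high-severity findings that come with a
    # working proof-of-concept; everything else is a non-blocking note.
    blocking = [f for f in findings
                if f["severity"] == "high" and f.get("proof_of_concept")]
    for f in blocking:
        print(f"BLOCKING: {f['title']} -> suggested fix: {f.get('patch_url')}")
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(main())
```

The design choice worth copying is the gate condition: failing the build only on exploitable, high-severity findings keeps the pipeline aligned with the "patchable remediation rather than just alerts" goal instead of blocking every merge on noise.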

Second, the cohort reflects a shift toward protecting the new attack surface around agentic AI. Tools that provide fine-grained, ephemeral, and auditable access for machine agents - effectively zero-trust for programmatic workflows - are appearing alongside tooling to monitor and harden agent behavior. Multifactor, which positions itself as an account-sharing and agent-access control platform, is one example: it aims to make AI agents subject to the same least-privilege and auditability expectations we place on human users.
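To illustrate the pattern rather than any one product, here is a minimal credential-broker sketch: tokens are short-lived, bound to a single agent and scope, and every issue/allow/deny decision is written to an audit log. This is not Multifactor's API; all names here are invented for the example.

```python
"""Sketch of least-privilege, ephemeral credentials for an AI agent.
The broker, scope names, and storage are illustrative assumptions,
not any vendor's API - just the pattern described above.
"""
import json
import secrets
import time

AUDIT_LOG = "agent_access.jsonl"   # stand-in for append-only storage
TOKEN_TTL_SECONDS = 300            # credentials expire in five minutes

_tokens: dict[str, dict] = {}      # token -> {agent, scope, expires_at}


def issue_token(agent_id: str, scope: str) -> str:
    """Mint a short-lived credential bound to one agent and one scope."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = {
        "agent": agent_id,
        "scope": scope,
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }
    _audit("issue", agent_id, scope)
    return token


def check_token(token: str, required_scope: str) -> bool:
    """Enforce expiry and scope on every call; log the decision either way."""
    entry = _tokens.get(token)
    ok = (entry is not None
          and entry["expires_at"] > time.time()
          and entry["scope"] == required_scope)
    agent = entry["agent"] if entry else "unknown"
    _audit("allow" if ok else "deny", agent, required_scope)
    return ok


def _audit(action: str, agent: str, scope: str) -> None:
    # Every decision, including denials, leaves an audit record.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "action": action,
                            "agent": agent, "scope": scope}) + "\n")
```

An in-memory dict stands in for real storage here; a production broker would persist tokens and write the trail to tamper-evident, append-only storage, which is exactly the artifact you should demand from any vendor in this space.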

Adjacent to those trends are startups that weaponize automation against the human layer - scalable social engineering simulations that emulate modern voice, email, and multi-vector attacks. GhostEye and similar teams show how rapidly AI is being repurposed to probe human risk at scale, and how the defensive playbook needs to catch up with realistic, chained simulations that reveal how teams actually behave under pressure.

If you run security for an enterprise and want to pilot from this batch, here’s a practical plan I recommend:

1. Start small and measurable - pick a single high-change service or a critical workflow and run a continuous pentest pipeline in staging for 30 days.
2. Pair any third-party offensive automation with your internal red team or an external assurance partner so you get human review of high-impact findings.
3. Require agent-access tools to produce immutable audit logs and short-lived credentials before you let them touch production systems.
4. Measure downstream remediation time and false positive rates - a tool that floods teams with noise will regress security velocity instead of improving it. A sketch of these two metrics follows this list.
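For step (4), here is a minimal scoring sketch. The finding-export format is hypothetical - field names like `reported`, `fixed`, and `verdict` are stand-ins for whatever the vendor's tool actually emits - but the two numbers it computes, median time-to-fix and false positive rate, are the ones worth tracking over the 30-day pilot.

```python
"""Scorecard for a 30-day pilot: remediation time and false positive rate.
The records below use a hypothetical export format; adapt the field
names to whatever the tool under evaluation actually produces.
"""
from datetime import datetime
from statistics import median

findings = [
    {"reported": "2025-10-01T09:00", "fixed": "2025-10-02T15:00", "verdict": "true_positive"},
    {"reported": "2025-10-03T11:00", "fixed": None,               "verdict": "false_positive"},
    {"reported": "2025-10-05T08:30", "fixed": "2025-10-05T17:00", "verdict": "true_positive"},
]


def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600


# Time-to-fix only counts confirmed, remediated findings.
fix_times = [hours_between(f["reported"], f["fixed"])
             for f in findings if f["fixed"] and f["verdict"] == "true_positive"]
fp_rate = sum(f["verdict"] == "false_positive" for f in findings) / len(findings)

print(f"median time-to-fix: {median(fix_times):.1f} h")
print(f"false positive rate: {fp_rate:.0%}")
```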

Operational cautions matter. AI-driven offensive tools can generate realistic exploits and attack chains. Make sure your scope, blast radius, and safe-landing processes are explicit - run experiments in isolated staging environments, require signed approvals for any live testing, and set escalation paths for critical findings. Likewise, agent gatekeepers are only as good as the policies they enforce. If a policy system lacks clear change control or inspection, it creates a false sense of security.
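One way to make scope and approvals explicit in code rather than in policy documents is to gate the attack harness behind a check that enforces a staging allowlist and a signed approval. The allowlist hosts, approval format, and key handling below are all illustrative assumptions, not a reference implementation.

```python
"""Guardrail sketch: refuse any offensive test that is out of scope or
lacks a signed approval. Hostnames, approval format, and key handling
are illustrative; a real deployment would use a proper signing service.
"""
import hashlib
import hmac
import os

APPROVAL_KEY = os.environ["APPROVAL_SIGNING_KEY"].encode()
STAGING_ALLOWLIST = {"staging.internal.example.com", "pr-previews.example.com"}


def approval_valid(target: str, signature: str) -> bool:
    """Verify an HMAC-signed approval covering exactly this target."""
    expected = hmac.new(APPROVAL_KEY, target.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def authorize_test(target: str, signature: str) -> None:
    """Raise unless the target is in scope and the approval checks out."""
    if target not in STAGING_ALLOWLIST:
        raise PermissionError(f"{target} is outside the approved staging scope")
    if not approval_valid(target, signature):
        raise PermissionError(f"no valid signed approval for {target}")
    # Only now hand the target to the attack harness.
    print(f"scope check passed for {target}")
```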

For security teams evaluating companies from this Fall batch, focus on integration friction, proof-of-effectiveness, and how the startup measures risk reduction. The best early products are the ones that reduce time-to-fix for the developer team and produce audit artifacts that compliance and incident response teams can use. By the time these startups graduate, many will aim to replace manual pentests and brittle permission models. Treat early pilots as engineering experiments - instrument everything, keep human oversight, and iterate fast.

YC’s Fall 2025 cohort is not a finished answer to today’s security problems. It does, however, give defenders new, practical tools that match the speed and automation of the systems they protect. If you are experimenting with AI agents or shipping code multiple times per day, this cohort is worth watching and piloting with conservative, measurable scopes. Start small, demand auditable results, and let your internal ops teams drive the integration roadmap.