Adversary Simulation  ·  AI / LLM Security  ·  Red Team Operations

Adversary Simulation & AI Security —
For Organizations That
Need to Know the Truth.

Crow's Nest Group is a specialized offensive security firm operating at the intersection of advanced adversary simulation and AI/LLM security. We don't just run tools and generate reports — we emulate real threat actors and assess the attack surface most firms haven't looked at yet.

Adversary Emulation · Penetration Testing · AI / LLM Security · Attack Surface Management · Litigation Support

Two converging threats. One firm that can test both.

We specialize in two converging threats: advanced human adversaries and the emerging attack surface created by AI deployments. Most firms can do one. We do both — with the same disciplined, operator-led tradecraft.

We work quietly behind the scenes — as your white-label technical arm or a trusted referral partner — so your client relationships stay intact and your service offering expands without adding headcount.

Threat-Actor-Specific Emulation

We emulate the specific adversaries targeting your industry — not generic attack patterns. MITRE ATT&CK-aligned across every engagement.

White-Label Ready

Reports delivered under your brand. Your client relationship stays yours.

Flexible Models

Subcontract, referral, or co-delivery — we fit how you work.

Elite Credentials

OSCE3, OSCP, GXPN, CISSP, and an MS in Computer Science — a rare combination of operational depth and academic rigor.

Purpose-Built Offensive Security.
Delivered with Precision.

Every engagement is scoped, authorized, and executed with one goal: give your clients an honest picture of their risk — and the evidence to act on it.

Core

Adversary Simulation / APT Emulation

We emulate the specific tactics, techniques, and procedures of the threat actors targeting your industry — from initial access to data exfiltration. This is not commodity pen testing. It is a structured test of whether your defenses hold against the adversaries actually coming for you.

Core

Network Penetration Testing

Internal and external assessments that go beyond automated scanning. We identify what's actually exploitable and what the impact would be — then we show you the evidence.

Core

Web Application & API Security Testing

Thorough manual testing to surface logic flaws, authentication weaknesses, and injection vulnerabilities that scanners routinely miss.

Intelligence

OSINT & Attack Surface Management

We map what's exposed before an attacker does. Available in two formats: ASM Snapshot (one-time assessment) or ASM Watch (monthly recurring monitoring). ASM pairs naturally with a red team engagement — we find the surface, then we attack it.

Awareness

Phishing Campaigns

Simulated phishing operations with full metrics and reporting — click rates, credential submission rates, and post-click behavior.

Collaborative

Purple Teaming & Tabletop Exercises

Collaborative sessions that strengthen defensive teams by working alongside them. Tabletop exercises help leadership understand risk without a live engagement.

Legal

Litigation Support & Expert Witness

Forensic analysis, breach assessment, and expert testimony for legal proceedings. We provide technically credible, clearly communicated support for plaintiff and defense litigation — translating complex security findings into language that holds up in court.

The Attack Surface Most Firms
Haven't Looked at Yet.

Organizations are deploying large language models faster than they can assess what they've introduced. Internal chatbots, AI coding assistants, customer-facing LLM applications, and agentic workflows all create attack surfaces that most pen test firms don't know how to evaluate. We do.

AI Red Team

LLM Red Teaming

Adversarial testing of large language models for prompt injection, jailbreaking, goal hijacking, and data exfiltration via model outputs — using structured methodology, not ad hoc experimentation.

AI Red Team

RAG Pipeline Assessment

Retrieval-Augmented Generation systems introduce unique attack paths: retrieval poisoning, context manipulation, and indirect prompt injection through external knowledge sources. We test the full pipeline, not just the model.

AI Red Team

AI Agent Security

Agentic AI systems that take actions on behalf of users create new paths for privilege escalation and tool misuse. We assess agent architectures for sandbox escapes, privilege abuse, and unintended action execution.

AI Red Team

Model Supply Chain Risk

Poisoned training data, malicious fine-tuning, and compromised model weights are emerging threats in enterprise AI pipelines. We assess the integrity of your model sourcing and deployment process.

AI Red Team

AI Ethics & Compliance Review

Fairness, bias, and regulatory alignment assessment for AI deployments — helping organizations meet emerging compliance requirements before regulators come asking.

Flexible

Engagement Options

AI security assessments are available as a standalone engagement or integrated into a broader red team operation. Contact us to scope the right approach for your environment.

Build the Skills That Matter.
Delivered by Practitioners.

Our training programs are designed for individuals, teams, and organizations that want to develop real offensive and defensive security skills — taught by operators who've done the work in the field.

Individual

Offensive Security Fundamentals

Hands-on training covering core penetration testing concepts, tooling, and methodology — built for individuals pursuing certifications or breaking into offensive security.

Team

Red Team Operations Training

Structured curriculum covering adversary simulation, C2 frameworks, lateral movement, and evasion techniques for security teams looking to build internal red team capability.

Team

Purple Team Workshops

Collaborative sessions that run attack scenarios in your environment while your defensive team tunes detections in real time. Leaves your SOC stronger after every session.

Client

Security Awareness Training

Role-based awareness programs covering phishing, social engineering, and insider threat — tailored to your organization's risk profile and delivered to staff at all levels.

Client

Executive Cyber Tabletop

Scenario-driven exercises for leadership teams to stress-test incident response plans, understand business impact, and build confidence in decision-making under pressure.

Emerging

AI / LLM Security Training

Practical training on identifying and exploiting AI system vulnerabilities — prompt injection, RAG pipeline attacks, model abuse, and data leakage — for teams deploying or securing LLM-based applications.

Thinking From the Field.

Practical perspectives on adversary simulation, AI security, and what organizations are missing in their security programs — written by operators, not marketers.

AI Security

How to Red Team Your RAG Pipeline

Most enterprises deploying retrieval-augmented generation have no methodology for testing it. Here's how adversaries approach the knowledge base, the retrieval layer, and the model — and how to test all three.

Coming soon

Threat Intel

AI-Augmented Adversaries: What Your SOC Isn't Prepared For

Threat actors are using LLMs for reconnaissance, phishing generation, and exploit development. A red teamer's view of how the attacker workflow is changing and what that means for your defenses.

Coming soon

Buyer Education

Why Your Annual Pen Test Isn't a Red Team

The difference between a penetration test and an adversary simulation isn't just scope — it's intent, methodology, and what you learn about your real risk. Here's how to tell them apart.

Coming soon

ASM

Attack Surface Management for Mid-Market: What We Find in the First 30 Days

A look at the most common exposures discovered in the first month of ASM monitoring — and why most organizations are surprised by what's already out there.

Coming soon

Legal

The Litigation Support Gap: What Security Firms Get Wrong in Breach Cases

Most security firms produce reports that don't hold up under legal scrutiny. What expert witness work actually requires — and why the technical-legal translation layer matters.

Coming soon

Leadership

Building an Internal Red Team: What Works, What Doesn't

Hard lessons from advising organizations building in-house red team capability — what the job postings get wrong, what tools actually matter, and when it makes more sense to partner externally.

Coming soon

The People Behind the Missions.

Ymir Eboras

Founder & Lead Operator

Ymir Eboras is a U.S. Air Force Cyber Warfare officer with over 20 years of military experience. He has led Offensive Cyber and Electromagnetic Activities, adversary simulation, penetration testing, and security assessments across government and private sector environments. He holds an MS in Computer Science alongside elite offensive security certifications — a combination that represents both the technical depth and academic rigor to assess complex, modern environments, including AI-integrated systems. He founded Crow's Nest Group to deliver that same operational capability to organizations that have outgrown commodity pen testing and need a firm that thinks the way their adversaries do.

OSCE3 · OSCP · CISSP · GXPN · MS CS
🔗 LinkedIn Profile

Built by Operators.
Focused on Outcomes.

Crow's Nest Group was built on a simple premise: most organizations are getting pen tested, but not tested the way their real adversaries would attack them.

We're a veteran-led offensive security firm with deep roots in military intelligence, red team operations, and adversary emulation. Our operator holds OSCE3, OSCP, GXPN, and CISSP certifications alongside an MS in Computer Science — a combination that represents both the technical depth and the academic rigor to assess complex, modern environments.

We built Crow's Nest to serve organizations that have outgrown commodity pen testing and need a firm that thinks the way their adversaries do — including the adversaries who are now using AI. Being transparent about our structure is a feature, not a limitation: your name is on every engagement, and so is ours.

We operate with a partner-first mindset. Whether you're a GRC firm that needs technical testing to back your compliance work, an MSSP expanding your service catalog, a legal team that needs a credible technical expert, or an insurance broker who needs to verify client controls before policy issuance — we are built to work alongside you.

01

Integrity First

We only take engagements we're authorized to perform, and we operate within clearly defined rules of engagement.

02

Quality Over Volume

We run a small number of engagements at a high standard — not a factory. Your engagement gets our full attention.

03

Results That Matter

Our reports are written for action, not for filing. Every finding comes with context, evidence, and a clear path forward.

Let's Talk About What You Need.

No pitch decks, no long procurement cycles. Tell us what you're working on and we'll tell you whether we're the right fit.

GRC Partners: Technical pen test and red team work that gives your compliance assessments real teeth.

MSSP Partners: White-label or referral arrangements for red team, AI security, and ASM work your team doesn't currently offer.

Legal Partners: Expert witness services, forensic support, and breach assessment for litigation.

Insurance Partners: Pre-policy technical control verification and post-incident assessment.

All inquiries are treated with strict confidentiality.