Adversary Simulation · AI / LLM Security · Red Team Operations
Crow's Nest Group is a specialized offensive security firm operating at the intersection of advanced adversary simulation and AI/LLM security. We don't run tools and write reports — we emulate real threat actors and assess the attack surface most firms haven't looked at yet.
Why Crow's Nest Group
We specialize in two converging threats: advanced human adversaries and the emerging attack surface created by AI deployments. Most firms can do one. We do both — with the same disciplined, operator-led tradecraft.
We work quietly behind the scenes — as your white-label technical arm or a trusted referral partner — so your client relationships stay intact and your service offering expands without adding headcount.
We emulate the specific adversaries targeting your industry — not generic attack patterns. MITRE ATT&CK-aligned across every engagement.
Reports delivered under your brand. Your client relationship stays yours.
Subcontract, referral, or co-delivery — we fit how you work.
OSCE3, OSCP, GXPN, CISSP, and MS Computer Science — a rare combination of operational depth and academic rigor.
What We Do
Every engagement is scoped, authorized, and executed with one goal: give your clients an honest picture of their risk — and the evidence to act on it.
We emulate the specific tools, tactics, and procedures of the threat actors targeting your industry — from initial access to data exfiltration. This is not commodity pen testing. It is a structured test of whether your defenses hold against the adversaries actually coming for you.
Internal and external assessments that go beyond automated scanning. We identify what's actually exploitable and what the impact would be — then we show you the evidence.
Thorough manual testing to surface logic flaws, authentication weaknesses, and injection vulnerabilities that scanners routinely miss.
We map what's exposed before an attacker does. Available in two formats: ASM Snapshot (one-time assessment) or ASM Watch (monthly recurring monitoring). ASM pairs naturally with a red team engagement — we find the surface, then we attack it.
Simulated phishing operations with full metrics and access reporting — click rates, credential harvesting, and post-click behavior.
Collaborative sessions that strengthen defensive teams by working alongside them. Tabletop exercises help leadership understand risk without a live engagement.
Forensic analysis, breach assessment, and expert testimony for legal proceedings. We provide technically credible, clearly communicated support for plaintiff and defense litigation — translating complex security findings into language that holds up in court.
AI & LLM Security
Organizations are deploying large language models faster than they can assess what they've introduced. Internal chatbots, AI coding assistants, customer-facing LLM applications, and agentic workflows all create attack surfaces that most pen test firms don't know how to evaluate. We do.
Adversarial testing of large language models for prompt injection, jailbreaking, goal hijacking, and data exfiltration via model — using structured methodology, not ad hoc experimentation.
Retrieval-Augmented Generation systems introduce unique attack paths: retrieval poisoning, context manipulation, and indirect prompt injection through external knowledge sources. We test the full pipeline, not just the model.
Agentic AI systems that take actions on behalf of users create new privilege escalation paths and tool misuse vectors. We assess agent architectures for sandbox escapes, privilege escalation, and unintended action execution.
Poisoned training data, malicious fine-tuning, and compromised model weights are emerging threats in enterprise AI pipelines. We assess the integrity of your model sourcing and deployment process.
Fairness, bias, and regulatory alignment assessment for AI deployments — helping organizations meet emerging compliance requirements before regulators come asking.
AI security assessments are available as a standalone engagement or integrated into a broader red team operation. Contact us to scope the right approach for your environment.
Training
Our training programs are designed for individuals, teams, and organizations that want to develop real offensive and defensive security skills — taught by operators who've done the work in the field.
Hands-on training covering core penetration testing concepts, tooling, and methodology — built for individuals pursuing certifications or breaking into offensive security.
Structured curriculum covering adversary simulation, C2 frameworks, lateral movement, and evasion techniques for security teams looking to build internal red team capability.
Collaborative sessions that run attack scenarios in your environment while your defensive team tunes detections in real time. Leaves your SOC stronger after every session.
Role-based awareness programs covering phishing, social engineering, and insider threat — tailored to your organization's risk profile and delivered to staff at all levels.
Scenario-driven exercises for leadership teams to stress-test incident response plans, understand business impact, and build confidence in decision-making under pressure.
Practical training on identifying and exploiting AI system vulnerabilities — prompt injection, RAG pipeline attacks, model abuse, and data leakage — for teams deploying or securing LLM-based applications.
Insights
Practical perspectives on adversary simulation, AI security, and what organizations are missing in their security programs — written by operators, not marketers.
Most enterprises deploying retrieval-augmented generation have no methodology for testing it. Here's how adversaries approach the knowledge base, the retrieval layer, and the model — and how to test all three.
Coming soon
Threat actors are using LLMs for reconnaissance, phishing generation, and exploit development. A red teamer's view of how the attacker workflow is changing and what that means for your defenses.
Coming soon
The difference between a penetration test and an adversary simulation isn't just scope — it's intent, methodology, and what you learn about your real risk. Here's how to tell them apart.
Coming soon
A look at the most common exposures discovered in the first month of ASM monitoring — and why most organizations are surprised by what's already out there.
Coming soon
Most security firms produce reports that don't hold up under legal scrutiny. What expert witness work actually requires — and why the technical-legal translation layer matters.
Coming soon
Hard lessons from advising organizations building in-house red team capability — what the job postings get wrong, what tools actually matter, and when it makes more sense to partner externally.
Coming soon
Key Personnel
Founder & Lead Operator
Ymir Eboras is a U.S. Air Force Cyber Warfare officer with over 20 years of military experience. He has led Offensive Cyber and Electromagnetic Activities, adversary simulation, penetration testing, and security assessments across government and private sector environments. He holds an MS in Computer Science alongside elite offensive security certifications — a combination that represents both the technical depth and academic rigor to assess complex, modern environments, including AI-integrated systems. He founded Crow's Nest Group to deliver that same operational capability to organizations that have outgrown commodity pen testing and need a firm that thinks the way their adversaries do.
About Us
Crow's Nest Group was built on a simple premise: most organizations are getting pen tested, but not tested the way their real adversaries would attack them.
We're a veteran-led offensive security firm with deep roots in military intelligence, red team operations, and adversary emulation. Our operator holds OSCE3, OSCP, GXPN, and CISSP certifications alongside an MS in Computer Science — a combination that represents both the technical depth and the academic rigor to assess complex, modern environments.
We built Crow's Nest to serve organizations that have outgrown commodity pen testing and need a firm that thinks the way their adversaries do — including the adversaries who are now using AI. Being transparent about our structure is a feature, not a limitation: your name is on every engagement, and so is ours.
We operate with a partner-first mindset. Whether you're a GRC firm that needs technical testing to back your compliance work, an MSSP expanding your service catalog, a legal team that needs a credible technical expert, or an insurance broker who needs to verify client controls before policy issuance — we are built to work alongside you.
We only take engagements we're authorized to perform, and we operate within clearly defined rules of engagement.
We run a small number of engagements at a high standard — not a factory. Your engagement gets our full attention.
Our reports are written for action, not for filing. Every finding comes with context, evidence, and a clear path forward.
Get In Touch
No pitch decks, no long procurement cycles. Tell us what you're working on and we'll tell you whether we're the right fit.
GRC Partners: Technical pen test and red team work that gives your compliance assessments real teeth.
MSSP Partners: White-label or referral arrangements for red team, AI security, and ASM work your team doesn't currently offer.
Legal Partners: Expert witness services, forensic support, and breach assessment for litigation.
Insurance Partners: Pre-policy technical control verification and post-incident assessment.
All inquiries are treated with strict confidentiality.