
@azurekid
Created June 30, 2025 09:41

Foreword

How I Went from Fixing Copiers to Breaking Into Clouds

You know that moment when you realize your career has taken a completely unexpected turn? Mine came when I was sitting in a boardroom at a major insurance company, explaining to executives why their "secure" Azure environment could be compromised in about fifteen minutes. The silence was deafening.

My journey here wasn't linear. I started fixing copiers at Xerox and Ricoh—yeah, those massive machines that somehow always jammed during important presentations. From there, I bounced through Software Development, became a SharePoint Consultant (which prepared me for dealing with impossible problems), worked as an architect, and eventually found myself as a Cloud Security Architect and Security Researcher.

The thing is, every role taught me something different about how organizations really work versus how they think they work. When you're the guy fixing the printer, you see how people actually handle security badges and passwords. When you're building SharePoint solutions, you discover the creative ways users will circumvent any control you put in place. When you're consulting and developing for banks, insurance companies, and government agencies, you realize that the bigger the organization, the bigger the gap between policy and reality.

The Wake-Up Call

A few years back, I started noticing something troubling. Every client engagement, whether it was a 50-person startup or a multinational bank, had the same underlying issues. The technology was getting more sophisticated, but the fundamental problems remained the same—or got worse.

Here's what I kept seeing:

Security is always "next sprint": Project managers would nod along when I mentioned security requirements, then quietly push them to phase two, three, or "the next major release." Sound familiar?

C-level lives in a different reality: I can't count the number of times executives confidently stated their environment was "fully secure" while I was looking at a screen showing admin passwords in plain-text config files.

We're really good at patching symptoms: As a consultant, you get brought in to solve the immediate fire, not redesign the entire building's fire safety system. You put out the flames, document what really needs to be fixed, then watch the same issues pop up six months later.

Azure is Pandora's box: Microsoft's cloud gives you an incredible set of security tools, but it also gives you incredible ways to shoot yourself in the foot. The number of "secure by default" features that organizations immediately disable because they're "too restrictive" is honestly depressing.

But here's the kicker—we've been sold this myth that Azure is "secure by default." It's not. Yes, some services have better defaults now than they used to, but "secure by default" doesn't mean "secure without configuration." I've seen countless Storage Accounts with public read access, Key Vaults accessible from anywhere, and Virtual Machines with RDP open to the internet—all created through the Azure portal with just a few clicks. The defaults might be better than they were five years ago, but they're still not secure enough for production use without proper configuration and ongoing management.
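The kind of check I'm describing here isn't exotic. A rough sketch of how you might spot the public-blob-access problem (this is illustrative only: it assumes you're already signed in via Connect-AzAccount, and the subscription ID is a placeholder):

```powershell
# Illustrative sketch: list storage accounts in a subscription via the ARM
# REST API and flag any that still allow anonymous blob access.
# Invoke-AzRestMethod ships with Az.Accounts; the subscription ID below is a placeholder.
$subscriptionId = '00000000-0000-0000-0000-000000000000'
$path = "/subscriptions/$subscriptionId/providers/Microsoft.Storage/storageAccounts?api-version=2023-01-01"

$response = Invoke-AzRestMethod -Method GET -Path $path
$accounts  = ($response.Content | ConvertFrom-Json).value

foreach ($account in $accounts) {
    # allowBlobPublicAccess = $true (or absent on older accounts) means
    # individual containers can be opened up to anonymous readers.
    if ($account.properties.allowBlobPublicAccess -ne $false) {
        Write-Output ("{0}: public blob access is not disabled" -f $account.name)
    }
}
```

Ten lines of PowerShell, and that's roughly how fast an attacker finds the same thing.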

"We trust our people" - The insider threat blind spot: This one drives me crazy. Organizations dismiss insider risk with phrases like "our employees would never do that" or "we hire good people." Meanwhile, their privileged access management looks like Swiss cheese, former employees still have active accounts months after leaving, and contractors have more permissions than they need.

I've seen this play out in devastating ways: A ransomware attack where we suspected a colleague deployed it just to prove how "good" he was at incident response. ADFS servers getting compromised during initial setup because someone forgot to shield the public IP address. God mode credentials handed out to external contractors like party favors—especially back in the SharePoint 2010/2013 days when you needed the keys to the kingdom just to install patches.

Management teams seem to think malicious insiders are the only risk, completely ignoring accidental misuse, compromised accounts, or the fact that people change. Trust is great for company culture, but it's terrible for security architecture.

"Vulnerability management" that's really just broken patch management: Let's be honest about what most "vulnerability management programs" actually are: glorified patch management with fancy dashboards. Organizations scan everything, generate massive reports full of CVEs, then spend months arguing about maintenance windows while critical systems remain unpatched. I've watched companies with sophisticated vulnerability scanners get completely owned because they couldn't figure out how to patch a three-year-old Windows server without breaking their legacy applications.

We had a completely different approach to security back in the day—and unfortunately, a lot of people still live in that same mindset. The combination of legacy overload, lack of knowledge, and the "if it ain't broke, don't fix it" mentality creates perfect conditions for attackers.

Why I'm Writing These Attack Stories

This blog series isn't your typical "here's how to configure conditional access" tutorial. I'm done with theoretical security advice that doesn't match what I see in the real world.

Instead, I'm going to walk you through actual attack scenarios based on misconfigurations, oversights, and "temporary" solutions I've encountered during client engagements. Every technique I'll demonstrate, every vulnerability I'll exploit, every privilege escalation path I'll follow—these are all based on real environments I've assessed.

I'll create fictional scenarios featuring attackers with names like "Phantom" who systematically exploit the exact same weaknesses I find during legitimate assessments. The attacks are real, the techniques are documented, the impact is genuine—only the names have been changed to protect the embarrassed.

Why this approach? Because I've learned that telling someone "you should enable MFA" doesn't stick. But showing them how an attacker named Phantom can escalate from anonymous access to Global Admin through a chain of role misconfigurations? That gets their attention.

The BlackCat Connection

Throughout these stories, you'll see references to Project BlackCat, the PowerShell framework I've been developing over the last 9 months. Every time I encounter a new attack vector, discover a useful reconnaissance technique, or find a creative way to chain Azure permissions, I add it to Project BlackCat.

But why create another tool when there are already so many available? Simple: I love PowerShell because it's available on almost every system, and I wanted a module with a small footprint that only requires the Az.Accounts module. Most existing tools feel outdated, and they overlook more advanced enumeration and stealth techniques that attackers are already leveraging.
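To give you a feel for why that small footprint matters: with nothing but Az.Accounts loaded, you can already talk to the Azure management plane directly. A minimal sketch (assuming an existing Connect-AzAccount session; note that recent Az.Accounts versions may return the token as a SecureString rather than plain text):

```powershell
# Illustrative only: pull a raw ARM access token from the current session
# and call the management REST API without any other Az modules installed.
$token = (Get-AzAccessToken -ResourceUrl 'https://management.azure.com/').Token

$headers = @{ Authorization = "Bearer $token" }
$uri = 'https://management.azure.com/subscriptions?api-version=2022-12-01'

# Enumerate the subscriptions visible to the current identity.
$subs = (Invoke-RestMethod -Uri $uri -Headers $headers).value
$subs | ForEach-Object { $_.displayName }
```

Everything else in the toolkit builds on that same pattern: one dependency, raw REST calls, full control over every request that goes over the wire.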

More importantly, I want to understand every step required to execute these attacks. The best way to learn something is by doing it yourself, not just running someone else's script and hoping it works.

What started as a collection of useful scripts for legitimate assessments has evolved into a comprehensive toolkit that mirrors exactly how modern attackers think about cloud environments. The irony isn't lost on me. The same tools I use to help organizations secure their environments could be used to attack them.

Each story in this series will include the actual BlackCat functions used, the PowerShell commands executed, and the specific misconfigurations exploited. You'll see real attack chains, not sanitized examples that barely work in lab environments.

A Reality Check About Responsibility

Before we dive into these attack scenarios, let's be crystal clear: the techniques I'm sharing are powerful and potentially dangerous. They work because I've tested them in production environments during authorized assessments.

Here's the deal:

  • Only test on systems you own or have explicit written permission to assess
  • These techniques can cause service disruptions if used carelessly
  • Just because something is misconfigured doesn't mean it's fair game
  • Document everything, respect boundaries, and remember there's a human behind every system

I'm sharing this knowledge because security through obscurity doesn't work. Attackers already know these techniques—they're using them right now against organizations that think they're secure. The only way to defend against these attacks is to understand how they work.

What You'll See in These Stories

Over the coming weeks, I'll be publishing attack scenarios that cover:

Real-world Azure misconfigurations I've found in the field: That Storage Account with public read access containing employee data? The service principal with too many permissions? The conditional access policy that doesn't apply to service accounts? I've seen them all.

Attack chains targeting Microsoft's ecosystem: How do you go from anonymous internet access to compromising Entra ID, accessing Azure resources, or pivoting through GitHub repositories and DevOps pipelines? I'll show you the exact steps, using scenarios based on actual client environments.

The human element behind technical failures: Every misconfiguration has a story. The overwhelmed admin who checked "yes" to everything. The project deadline that forced shortcuts. The "temporary" solution that became permanent.

Detection evasion techniques that actually work: Academic papers love to talk about "stealthy" attacks that would trigger every SIEM alert in existence. I'll show you methods that blend in with normal cloud operations.

BlackCat in action: You'll see the exact PowerShell functions, the reconnaissance techniques, and the exploitation methods I've developed through years of authorized testing.

Why This Matters to You

If you're a security professional, these stories will help you understand what you're really defending against. Not theoretical attackers with unlimited budgets and zero-day exploits, but methodical adversaries who understand your environment better than you do.

If you're an IT administrator, these scenarios will show you exactly where to focus your security efforts. Spoiler alert: it's probably not where you think.

If you're in management, these stories will help you understand why your security team keeps asking for things that seem to slow down business operations.

The Journey Continues

Looking back at my path from fixing copiers to finding critical cloud vulnerabilities, I realize every step prepared me for this. Understanding how organizations really work, how people actually behave under pressure, and how technology gets implemented in the real world—that's what makes these attack scenarios so effective.

Security isn't about perfect configurations in lab environments. It's about understanding that every organization is run by humans who make mistakes, take shortcuts, and sometimes just need to get things working by Friday afternoon.

The attackers I'll introduce you to in these stories? They understand this too. And that's what makes them dangerous.

Let's dive in and see what they're really capable of.


Rogier
The guy who went from fixing printers to breaking into clouds
June 2025

P.S. - If you work at any of the organizations I've consulted for and think you recognize your environment in these stories... you probably do. But don't worry—the names have been changed to protect the embarrassed. Your secrets are safe with me.
