Langflow Critical RCE (CVE-2026-33017): One Request, Full Takeover
Priya runs a small AI startup in Bengaluru. Her team built a customer support chatbot using Langflow — a popular drag-and-drop tool that lets developers connect AI models, databases, and APIs without writing much code. The chatbot handles thousands of conversations a day. It has access to the company's customer database, its OpenAI API key, and its AWS account.
On March 17, 2026, a security researcher published details of a critical vulnerability in Langflow. Twenty hours later — before any patch was widely deployed, before most teams even knew the vulnerability existed — attackers were already scanning the internet for Priya's chatbot.
They didn't need a password. They didn't need an account. They sent one HTTP request and had complete control of the server.
- CVE: CVE-2026-33017 — Langflow Unauthenticated Remote Code Execution
- CVSS Score: 9.3 Critical — no authentication, no user interaction, remotely exploitable
- Disclosed: March 17, 2026 — first exploitation observed within 20 hours
- CISA KEV added: March 25, 2026 — active exploitation confirmed
- Federal patch deadline: April 8, 2026
- Affected versions: Langflow 1.8.1 and earlier
- Safe version: Langflow 1.9.0 — update immediately
- No authentication required: One HTTP request is enough for full server takeover
- What attackers steal: OpenAI/Anthropic API keys, AWS credentials, database passwords, .env files
- CISA technical impact rating: Total
- This is the second exploited RCE in Langflow — CVE-2025-3248 was exploited in the same codebase in 2025
What Is Langflow — In Plain English
Building an AI application used to require writing thousands of lines of code. Langflow changed that. It gives developers a visual canvas — think of it like a flowchart builder — where you drag and drop AI components and connect them together. Connect an OpenAI model here. Plug in a database there. Add a custom logic step in the middle. Click run. Your AI agent is working.
It has 145,000 stars on GitHub — a measure of how many developers follow and use a project. It is used by startups building AI chatbots, enterprises deploying automated workflows, and data science teams running AI experiments. Because it connects to so many things — AI APIs, cloud accounts, databases — a single compromised Langflow instance is like finding a master key that opens every door in a building.
That is exactly what attackers found on March 17.
Root Cause — The Open Door That Was Designed to Be Open
Langflow has a feature called public flows. The idea is useful: you build an AI workflow and want to share it with someone — a client, a colleague, a user — without forcing them to create an account. You make the flow public, share a link, and they can use it directly.
To make this work, Langflow has an endpoint — think of it as a specific web address the application listens to — that accepts requests from anyone, with no login required. That is by design. Public flows are supposed to be accessible to anyone.
The problem: this open endpoint also accepted instructions about how to run the flow. And those instructions could contain any Python code the sender wanted to include. Langflow would take that code and run it directly on the server — using a Python function called exec() that executes whatever code it is given, with no safety checks, no sandbox, no restrictions.
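Langflow's actual handler is not reproduced here, but the core hazard can be sketched in a few lines of standalone Python. The function name `run_flow_code` and the payloads are illustrative, not Langflow's API:

```python
# Illustrative sketch only -- not Langflow's actual code. It shows why
# passing request-supplied text to exec() amounts to remote code execution.

def run_flow_code(code_from_request: str) -> dict:
    """Naive handler: executes whatever code the request body contained."""
    namespace: dict = {}
    exec(code_from_request, namespace)  # no sandbox, no allowlist, no checks
    return namespace

# A benign sender computes something harmless:
result = run_flow_code("answer = 2 + 2")
print(result["answer"])  # 4

# An attacker sends code instead of data. This payload only lists a few
# environment-variable names, but it could just as easily have been
# os.system(...) or a reverse shell:
leaked = run_flow_code("import os; keys = sorted(os.environ)[:3]")
print(leaked["keys"])
```

The server has no way to distinguish the two callers: anything the endpoint accepts, `exec()` runs with the full privileges of the Langflow process.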
Imagine a restaurant with a public window where anyone can place an order. That is the feature. The vulnerability is that the kitchen will cook anything written on the order slip — including instructions that say "unlock the back door and hand the keys to the person outside."
The researcher who found this vulnerability — Aviral Srivastava — discovered it by reading how Langflow had fixed a previous vulnerability (CVE-2025-3248) in 2025. That fix added authentication to one endpoint. He noticed the same dangerous exec() call existed on a different endpoint that the previous fix had not touched. Same flaw, different door.
His advice after reporting it: "If you've fixed a vulnerability in your codebase before, go back and check whether the same pattern exists somewhere else. The first fix is rarely the last one needed."
How the Attack Actually Happened — Hour by Hour
Security researchers at Sysdig run a network of decoy servers — fake Langflow instances set up specifically to attract and observe attackers. Here is exactly what they saw after the advisory was published on March 17.
Hour 20 — Automated scanners arrive
The first exploitation attempts came from automated tools — software that continuously scans the entire internet looking for vulnerable applications. Four different attackers sent identical payloads within minutes of each other. No public exploit code existed yet. They had read the advisory and written their own tools from the description alone.
Hour 21 — Real attackers with custom scripts
A second wave arrived using custom Python scripts rather than automated scanners. These were human-operated attacks. One attacker methodically worked through a checklist after gaining access: listed files on the server, read the password file, checked what user the application was running as, then attempted to download a second-stage payload from their own server — additional malware pre-staged and waiting.
Hour 24 — Data harvesting begins
Attackers began specifically targeting .env files and .db files — the files where applications store their secrets. An .env file is like a keychain: it holds API keys for OpenAI, Anthropic, AWS, database passwords, and other credentials the application needs to function. Any Langflow instance with a compromised .env file has effectively handed attackers access to every service it was connected to.
This is what makes Langflow a uniquely valuable target. It is not just the Langflow server that gets compromised. It is every AI service, cloud account, and database that Langflow was configured to access. One server. Every key on the keyring.
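That keyring can be inventoried programmatically. The sketch below parses .env-style text and lists every variable name that would need rotation after a compromise; the sample contents and key names are made up for illustration:

```python
# Sketch: inventory the secrets in a Langflow .env so you know exactly
# what to rotate after a suspected compromise. Contents are illustrative.

SAMPLE_ENV = """\
OPENAI_API_KEY=sk-redacted
AWS_SECRET_ACCESS_KEY=redacted
DATABASE_URL=postgres://user:pass@db:5432/app
# comment lines and blanks are ignored
"""

def keys_to_rotate(env_text: str) -> list[str]:
    """Return the variable names defined in .env-style text."""
    keys = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        keys.append(line.split("=", 1)[0])
    return keys

print(keys_to_rotate(SAMPLE_ENV))
# ['OPENAI_API_KEY', 'AWS_SECRET_ACCESS_KEY', 'DATABASE_URL']
```

In practice you would read the instance's real .env file instead of the sample string; every name the function prints maps to a credential an attacker may now hold.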
Why This Attack Was So Fast
Most vulnerabilities take days or weeks to be weaponised. Attackers need to reverse-engineer the flaw, write working exploit code, test it, and deploy it at scale. CVE-2026-33017 was different for three reasons.
First, the advisory was extremely detailed. The security researcher published a thorough technical explanation — exactly which endpoint, exactly what parameter, exactly how the code executes. This is good practice for transparency, but it also meant attackers had a near-complete recipe.
Second, exploitation is trivially simple. The researcher described it as "extremely easy." One HTTP POST request with a JSON payload containing malicious Python code. No multi-step process, no special tools, no technical expertise beyond basic programming.
Third, Langflow ships with a setting called AUTO_LOGIN enabled by default. With it on, an attacker does not even need to know a public flow's ID in advance: they can create their own public flow, then exploit it. The entire attack from scratch requires no prior knowledge of the target's configuration.
The Pattern — AI Tools Keep Getting Hit
CVE-2026-33017 is the second exploited RCE in Langflow within a year. The same week CISA added this to their catalog, they also added a CVE for the Trivy supply chain attack we covered last week — confirming TeamPCP's attack officially. And earlier this month, n8n — another AI workflow automation tool — disclosed two similar critical vulnerabilities in its own code execution engine.
This is not a coincidence. AI workflow tools share a structural problem: they are built around the idea of executing code as a feature. Connect components, run logic, process data. But the boundary between "run my code" and "run the attacker's code" is thin — and when that boundary breaks, the attacker gets access to everything the tool was connected to.
This connects directly to the supply chain attacks we have been tracking all month:
→ 75 Versions of Trivy Were Poisoned — Check Your Pipeline Now
→ LiteLLM Attack: When a Trusted Update Turns Malicious
Indicators of Compromise (IOCs)
# CVE-2026-33017 Langflow RCE — Detection and Remediation
# Step 1 — Check your version immediately
pip show langflow
# If version is 1.8.1 or below — treat as potentially compromised
# Step 2 — Update to safe version
pip install langflow==1.9.0
# Or via docker:
docker pull langflowai/langflow:1.9.0
# Step 3 — Rotate all credentials on any exposed instance
# Priority order:
# 1. OpenAI / Anthropic / Google AI API keys
# 2. AWS / Azure / GCP credentials
# 3. Database connection strings and passwords
# 4. Any token found in .env files
# Attacker IP addresses observed in active exploitation
83.98.164.238 (active reconnaissance + stage-2 dropper delivery)
# Block at firewall if seen in logs
# Staging/C2 infrastructure to block
173.212.205.251 (stage-2 dropper delivery server)
# Signs of active exploitation in logs
Alert: POST requests to /api/v1/build_public_tmp/ from unknown IPs
Alert: GET /api/v1/auto_login followed immediately by POST to build endpoint
Alert: Outbound connections to unknown IPs from Langflow server process
Alert: File access to /root, /app, /etc/passwd by Langflow process
Alert: .env or .db files accessed by web server process
# Network indicator
Alert: Outbound HTTP to unknown IP on non-standard port
immediately following inbound API request to Langflow
(stage-2 payload delivery pattern)
# If you cannot update immediately
Mitigation: Restrict Langflow to internal network only
— do not expose directly to public internet
Mitigation: Disable public flows feature if not in use
Mitigation: Place behind reverse proxy with authentication
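The `pip show` check above can be automated across a fleet. This sketch compares a plain X.Y.Z version string against the 1.9.0 fix line using numeric tuple comparison (a string compare would wrongly flag 1.10.x); how you obtain the installed version — for example via `importlib.metadata.version("langflow")` — is left to your environment:

```python
# Sketch: flag a vulnerable Langflow install by comparing its version
# against the 1.9.0 fix line. Tuple comparison handles multi-digit parts
# correctly, unlike naive string comparison.

FIXED = (1, 9, 0)

def as_tuple(version: str) -> tuple[int, ...]:
    """Parse a plain X.Y.Z version string into comparable integers."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str) -> bool:
    """True if the installed version predates the 1.9.0 fix."""
    return as_tuple(installed) < FIXED

for v in ("1.8.1", "1.9.0", "1.10.2"):
    print(v, "VULNERABLE" if is_vulnerable(v) else "ok")
```

For pre-release or otherwise non-numeric version strings, prefer `packaging.version.Version` over this minimal parser.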
SOC Alert Priorities
Priority 1 — Update to Langflow 1.9.0 immediately. CISA has confirmed active exploitation. Federal deadline is April 8 but attackers are scanning right now — not on April 8. Any internet-facing Langflow instance below 1.9.0 should be treated as already compromised until proven otherwise.
Priority 2 — Rotate every credential stored in your Langflow environment. Attackers specifically target .env files containing API keys and database passwords. Even if you update today, if credentials were exposed during the window since March 17, those keys are potentially in attacker hands. Rotate your OpenAI keys, AWS credentials, and database passwords immediately.
Priority 3 — Check logs for POST requests to /api/v1/build_public_tmp/ from unknown sources. This is the exact endpoint exploited. Any request from an IP that is not your own infrastructure or known users is a potential exploitation attempt. If you find such requests dating back to March 17 or later — assume compromise.
Priority 4 — Never expose Langflow directly to the public internet. This applies after patching too. Langflow should sit behind a reverse proxy with authentication, or be restricted to internal network access only. It has access to too many valuable credentials to be publicly reachable without a layer of protection in front of it.
Priority 5 — Audit what Langflow has access to. List every API key, cloud credential, and database connection configured in your Langflow instance. For each one — does it have more permissions than it actually needs? Reduce permissions now, before the next vulnerability in an AI tool gives an attacker everything it can reach.
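The log check in Priority 3 can be scripted. The sketch below scans access-log lines in the common/combined format for POSTs to the exploited endpoint from IPs outside your own ranges; the sample log lines and trusted prefixes are illustrative, so adjust both to your environment:

```python
# Sketch: scan web-server access logs (common/combined format assumed)
# for POSTs to the exploited endpoint from untrusted source IPs.
import re

TRUSTED_PREFIXES = ("10.", "192.168.")  # illustrative: your own infrastructure
PATTERN = re.compile(r'^(\S+) .*"POST /api/v1/build_public_tmp/')

def suspicious_hits(log_lines: list[str]) -> list[str]:
    """Return source IPs that POSTed to the build endpoint and are not trusted."""
    hits = []
    for line in log_lines:
        m = PATTERN.match(line)
        if m and not m.group(1).startswith(TRUSTED_PREFIXES):
            hits.append(m.group(1))
    return hits

sample_log = [
    '10.0.0.5 - - [17/Mar/2026:09:00:01] "GET /health HTTP/1.1" 200 12',
    '83.98.164.238 - - [17/Mar/2026:20:14:33] "POST /api/v1/build_public_tmp/ HTTP/1.1" 200 88',
]
print(suspicious_hits(sample_log))  # ['83.98.164.238']
```

Any hit dated March 17 or later should be treated as a potential compromise, per Priority 3.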
The ZyberWalls Perspective
Twenty hours. That is how long it took attackers to build a working exploit from a published advisory and begin scanning the internet.
This is the new timeline. Not weeks. Not days. Hours. And CVE-2026-33017 was not even particularly complex to exploit — one HTTP request, one JSON payload, done. The researcher who found it called it "extremely easy." The CISA assessment says it is automatable and has total technical impact.
But the deeper issue is not this specific vulnerability. It is the pattern it represents. AI workflow tools are being deployed faster than they are being secured. Teams building AI agents are data scientists and developers focused on making things work — not security engineers focused on making things safe. Langflow instances get spun up for demos, for experiments, for proofs of concept — and then they stay running, internet-facing, connected to production credentials, for months or years without anyone thinking about the security implications.
Priya's chatbot — the one handling thousands of customer conversations a day, connected to the company database and AWS account — is not an edge case. It is the typical deployment. And right now, somewhere, a version of that chatbot is still running Langflow 1.8.1.
Update to 1.9.0. Rotate your credentials. And the next time you deploy an AI tool — ask what it is connected to before you ask whether it works.
Stay Alert. Stay Human. Stay Safe. — ZyberWalls Research Team
