
Security researchers at Qualys have discovered two new vulnerabilities in popular Linux core dump handlers that could let local attackers extract sensitive data—including password hashes—from crashed programs. The flaws, tracked as CVE-2025-5054 and CVE-2025-4598, affect default installations of the widely used tools Apport and systemd-coredump, putting millions of systems at risk. By exploiting a race condition in how these handlers process memory dumps, a low-privileged user could quietly harvest sensitive information without triggering any alerts.
The Crash Course on Core Dumps and Security Risk
To understand the danger, it helps to know what these tools actually do. When a Linux program crashes, it often generates a core dump—a full snapshot of the program’s memory at the time of failure. This dump is then handed off to tools like Apport (used by Ubuntu) or systemd-coredump (used by Red Hat, Fedora, and others), which store and manage the file for debugging.
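That handoff happens through the kernel's core_pattern setting: when the value begins with a pipe character, the kernel launches the named helper and streams the crashing process's memory to it. Checking which handler a given system uses takes one command (the exact helper paths and arguments vary by distribution and version):

```shell
# The kernel consults /proc/sys/kernel/core_pattern whenever a process
# crashes. A leading "|" means the dump is piped to a helper program
# rather than written to a file in the working directory.
cat /proc/sys/kernel/core_pattern

# Ubuntu systems typically pipe to Apport, i.e. a value beginning with:
#   |/usr/share/apport/apport ...
# RHEL and Fedora typically pipe to systemd-coredump, e.g.:
#   |/usr/lib/systemd/systemd-coredump ...
```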
These tools are meant to help developers diagnose problems. But they also create a potential goldmine for attackers. Because core dumps capture a live view of memory, they often include sensitive information: password hashes, authentication tokens, encryption keys, even pieces of confidential data.
Securing that information has long been a problem. Many crash handlers still inherit the privileges of the crashed process and write dumps to shared locations. Default settings often leave these files more exposed than they should be. “[The discoveries] expose how engineers have placed legacy debug tools inside modern production images without redesign,” said Jason Soroko of Sectigo. In other words, developers are still treating core dumps like harmless logs when in reality, they can be as sensitive as any database or config file.
CVE-2025-5054 and CVE-2025-4598 Explained
The two newly discovered flaws—CVE-2025-5054 in Apport and CVE-2025-4598 in systemd-coredump—take advantage of a classic software bug: a race condition. In both cases, a local attacker can manipulate the timing of events during a program crash to intercept or alter the way core dumps are handled. This allows them to extract sensitive memory contents from the dump before the system can lock it down or restrict access.
The vulnerabilities affect a wide range of systems out of the box. CVE-2025-5054 impacts all versions of Ubuntu from 16.04 through 24.04. CVE-2025-4598 hits default installations of Red Hat Enterprise Linux 9 and 10, as well as Fedora 40 and 41. That’s a wide swath of production environments still relying on default core dump behavior.
To prove the point, researchers at Qualys used the vulnerabilities to extract password hashes from /etc/shadow—a protected file normally accessible only by root—by crashing a setuid program called unix_chkpwd. The crash triggered a core dump, which the attacker then hijacked using the race condition. The result: password hashes exposed without the need for elevated privileges or network access.
It’s a low-noise, high-reward attack. “A local low privilege user can wait for any SUID process to crash, then race the handler and loot hashes without tripping network detection,” Soroko said.
The Bigger Threat: Beyond Password Hashes
Password hashes are just the start. If sensitive data is in memory at the time of a crash, there’s a good chance it’ll end up in the dump.
In the wrong hands, that kind of exposure can grind operations to a halt. It can also trigger serious legal trouble. Regulations like GDPR, HIPAA, and others treat any leak of personal or sensitive information as a reportable event. If a core dump exposes customer data or private keys, even without external compromise, organizations may still be on the hook for a breach.
Why These Issues Persist Despite Modern Defenses
These kinds of bugs are still happening in part because core dump behavior hasn’t kept up with modern security expectations. Many systems still rely on default or legacy configurations that prioritize debugging convenience over protection. Core dumps are often enabled by default, stored in predictable locations, and written with broad permissions.
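Two knobs govern much of that default behavior, and inspecting them is a quick way to gauge how exposed a host is (a sketch assuming a standard Linux /proc layout):

```shell
# Per-process core size limit: "unlimited" means full memory images can
# be produced; "0" disables file-based dumps (piped handlers such as
# Apport or systemd-coredump may still be invoked).
ulimit -c

# fs.suid_dumpable controls whether privileged (setuid) processes dump:
#   0 = never, 1 = always (dangerous), 2 = only via a configured pipe handler
cat /proc/sys/fs/suid_dumpable
```

Notably, the stopgap Qualys published for these flaws is setting fs.suid_dumpable to 0, which stops setuid programs from producing dumps at all.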
Even when fixes are released, they don’t always reach the systems that need them. Patch adoption is slow, especially in environments where admins rely on user-installed or third-party packages. Some distributions, like Debian, don’t even install core dump handlers by default, which leads users to manually add tools like Apport or systemd-coredump, often without tightening permissions or adjusting settings.
The deeper issue is a common one in security: the gap between policy and practice. Organizations may have rules around protecting sensitive data, but those often don’t extend to crash logs or debugging tools. The result is a blind spot that quietly expands as systems grow more complex and attackers grow more resourceful.
What Enterprises Must Do Now
There’s no reason to leave these vulnerabilities hanging. The first step is simple: patch every affected system immediately. Updates are already available for Apport and systemd-coredump, and applying them should be a top priority for any organization running Ubuntu, RHEL, or Fedora.
But patching alone isn’t enough. Enterprises also need to audit their SUID and SGID programs—those with elevated privileges—and limit their use to only what’s absolutely necessary. These programs are the most attractive targets for this kind of attack because their crashes generate the most sensitive core dumps.
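A starting point for that audit is simply enumerating everything on local filesystems that carries the setuid or setgid bit:

```shell
# List setuid (-4000) and setgid (-2000) files on local filesystems.
# -xdev keeps the scan off network mounts and pseudo-filesystems.
# Each hit should be justified, stripped of the bit, or removed.
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null
```

Expect legitimate entries such as passwd and sudo; the goal is pruning anything that is not strictly required.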
Next, take control of how your systems handle those dumps. Lock down core dump directories, enforce strict PID validation to make sure handlers aren’t spoofed, and set tight access controls so only authorized users can interact with crash files.
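On systems running systemd-coredump, much of this can be expressed as configuration. A hardening sketch, run as root; the paths are the common defaults and may differ by distribution:

```shell
# Persistently disable core dumps from setuid programs (kernel stopgap):
echo 'fs.suid_dumpable = 0' > /etc/sysctl.d/99-suid-dumpable.conf
sysctl --system

# Tighten the default systemd-coredump storage directory:
chmod 0700 /var/lib/systemd/coredump

# Or stop persisting dumps entirely via a coredump.conf drop-in:
mkdir -p /etc/systemd/coredump.conf.d
cat > /etc/systemd/coredump.conf.d/90-hardening.conf <<'EOF'
[Coredump]
Storage=none
ProcessSizeMax=0
EOF
```

Storage=none keeps crash metadata in the journal without writing the memory image to disk; ProcessSizeMax=0 goes further and refuses to process the dump at all.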
Finally, treat core dump abuse like any other threat. Set up real-time monitoring to detect unusual use of crash handlers or access to dump files. These tools have flown under the radar for too long. It’s time to bring them into the security spotlight.
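Where auditd is available, watching the dump directory and the handler binary provides exactly that visibility. A minimal rule set, assuming systemd-coredump's default locations:

```shell
# Alert on reads/writes of stored dumps and executions of the handler.
auditctl -w /var/lib/systemd/coredump -p rwa -k coredump-access
auditctl -w /usr/lib/systemd/systemd-coredump -p x -k coredump-exec

# Review any matches recorded so far today:
ausearch -k coredump-access --start today
```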
Rethinking Core Dump Hygiene in a Zero Trust World
In a Zero Trust environment, every access point is treated as suspect, yet crash diagnostics often get a pass. That needs to change. Production systems should not expose raw memory just because something crashes. It’s time to treat core dumps as sensitive assets, not just developer tools.
One step forward is isolation. Debugging tools should be kept out of production wherever possible. If live debugging is necessary, it should happen in dedicated namespaces, containers, or staging environments, away from real user data.
Organizations should also treat dump files like regulated data: encrypt them in transit and at rest and shred them quickly after analysis. Don’t let them sit on disk waiting to be discovered.
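A cleanup job along these lines keeps old dumps from lingering (a sketch; the one-day retention window and the directory path are assumptions to adjust per environment):

```shell
# Overwrite and delete dumps older than one day in the default
# systemd-coredump location. Caveat: shred's overwrite guarantee is weak
# on journaling and copy-on-write filesystems (btrfs, ZFS, ext4 in
# data-journaling mode), where old blocks may survive elsewhere on disk.
DUMP_DIR=/var/lib/systemd/coredump
if [ -d "$DUMP_DIR" ]; then
    find "$DUMP_DIR" -type f -mtime +1 -exec shred -u {} \;
fi
```

systemd already ages dumps out via tmpfiles.d (three days by default on many distributions); the point here is shortening that window and overwriting rather than merely unlinking.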
And when incidents do happen, response teams need to treat crash data with the same care as compromised credentials. Secure handling, limited access, and rapid cleanup should be part of every playbook.
Core dumps aren’t going away, but the way we deal with them has to evolve. In a world where trust is never assumed, crash diagnostics can’t be the weak link.