
Articles from www.theregister.com
Mon, 27/04/2026 - 22:29
Jer (Jeremy) Crane, the founder of automotive SaaS platform PocketOS, spent the weekend recovering from a data extinction event caused by the company's AI coding agent in less than 10 seconds.

Not one to let a crisis go to waste, Crane wrote up a post-mortem of the deletion incident in a social media post that tests the saying "there's no such thing as bad publicity."

"[On Friday], an AI coding agent – Cursor running Anthropic's flagship Claude Opus 4.6 – deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider," he explained. "It took 9 seconds."

According to Crane, the Cursor agent encountered a credential mismatch in the PocketOS staging environment and decided to fix the problem by deleting a Railway volume – the storage space where the application data resided.

To do so, it went looking for an API token and found one in an unrelated file. The token had been created for adding and removing custom domains through the Railway CLI, but was scoped for any operation, including destructive ones. Railway evidently considers this a feature; it behaves like a bug. According to Crane, that token would never have been stored had the breadth of its permissions been known.

The AI agent used this token to authorize a curl command to delete PocketOS's production volume, without any confirmation check, while also erasing the backup because, as Crane noted, "Railway stores volume-level backups in the same volume."

We pause here to allow you to shake your head in disbelief, roll your eyes, or engage in whatever I-told-you-so ritual you prefer. The lessons exemplified by AWS's Kiro snafu and by developers using Google Antigravity and Replit will be repeated until they've sunk in.

Railway CEO Jake Cooper responded to Crane's post by saying that the deletion should not have happened – and then that it was expected behavior.
"[W]hile Railway has always built 'undo' into the platform (CLI, Dashboard, etc) as a core primitive, we've kept the API semantics in line with 'classical engineering' developer standards," he wrote. "... As such, today, if you (or your agent) authenticate, and call delete, we will honor that request. That's what the agent did ... just called delete on their production database."

Crane told The Register in an email that he was extremely grateful Cooper stepped in on Sunday evening, helped restore his company's data within an hour, and placed further safeguards on the API.

In an email to The Register, Cooper said, "We maintain both user backups as well as disaster backups. We take data very, VERY seriously. This particular situation was a 'rogue customer AI' granted a fully permissioned API token that decided to call a legacy endpoint which didn't have our 'Delayed delete' logic (which exists in the Dashboard, CLI, etc). We've since patched that endpoint to perform delayed deletes, restored the user's data, and are working with Jer directly on potential improvements to the platform itself (all of which were already in active development prior to the events)."

That just leaves the blame.

"No blaming 'AI' or putting incumbents or gov't creeps in charge of it – this shows multiple human errors, which make a cautionary tale against blind 'agentic' hype," observed Brave Software CEO Brendan Eich.

Nonetheless, Crane calls out "Cursor's failure" – marketing safety despite evidence to the contrary – and "Railway's failures (plural)" – an API that deletes without confirmation, storing backups on the production volume, and root-scoped tokens, among other things – without much self-flagellation.

Challenged on this, Crane insisted there's mea culpa in the mix, but added he also wants accountability from infrastructure providers. "Our core thesis stands," Crane said in his email.
"Yes our responsibility was the unknown exposure to a production API key (Railway doesn't currently allow restrictions on keys).

"But, still a cautionary tale and discovery of tooling and infrastructure providers. The appearance of safety (through marketing hyperbole) is not safety. And when we pay for those services and they are not really there, it is worth an op-ed. We are building so fast these things are going to keep happening."

Nonetheless, Crane said, he's still extremely bullish on AI and AI coding agents – a stance that's difficult to reconcile with his interrogation of Opus, wherein the model describes how it ignored Cursor's system-prompt language and PocketOS's project rules. Opus in its Cursor harness flatly admits its errors – not that it means anything, given the model's inability to learn from its mistakes or to feel remorse that might constrain future destructive action.

Crane said he believes companies involved in AI understand these risks and are actively working to prevent them. "Even when they put in safeguards, it can still happen," he said. "Cursor had a similar issue about nine months ago, and there was a lot of publicity. They built a lot of tooling to force agents to run certain commands through humans, but they did not apply it here, and it still went off the rails, which happens from time to time with these AIs."

Crane said he believes the benefits outweigh the risks. "As a software developer, I've been doing this for 15 years, so I'm not some vibe coder who picked it up in the last few months," he said. "The velocity at which you can create good code with the right instructions and tooling is unparalleled. If you understand systems, the ability to work with codebases you don't personally know but can still understand has also been unparalleled."

This introduces novel risks, he said. "Railway's defense has always been that an API key should only be accessed by a human, which is true and has always been the case," he explained.
"Now, when a computer is in control and you do not know what it is doing, what happens?"

Crane emphasized how helpful Railway's CEO has been through this process, and said he has about 50 services running there.

"These are the challenges we face as we move faster and faster in software development, with AI, and the tooling is trying to keep up as fast as it can," he said. "I like using the word 'tooling' because, in my view, it reflects the challenges we face today, much like the early days of the dot-com era. Back then, websites would crash, database data would be lost, and there were hardware and networking issues. Those were the technical hurdles of that time. These are the challenges of our era."

What to take from this data deletion and resurrection? According to Cooper, it's a market opportunity.

"There's a massive, massive opportunity for 'vibecode safely in prod at scale.' 1B+ developers who look like [Jer Crane], don't read 100 percent of their prompts, and want to build are coming online. For us toolmakers, the burden of making bulletproof tooling goes up. We live in exciting times." ®
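The "Delayed delete" logic Cooper describes is a standard soft-delete pattern: a destructive call marks the resource for removal and starts a grace period, rather than freeing the data immediately, so a human can still hit undo after an agent goes rogue. A minimal sketch of the idea – class and field names here are invented for illustration, not Railway's actual implementation:

```python
import time

GRACE_PERIOD_SECONDS = 48 * 3600  # e.g. 48 hours before data is really gone


class VolumeStore:
    """Toy store demonstrating delayed (soft) deletes."""

    def __init__(self):
        self._volumes = {}          # volume_id -> data
        self._pending_delete = {}   # volume_id -> permanent-removal deadline

    def create(self, volume_id, data):
        self._volumes[volume_id] = data

    def delete(self, volume_id):
        # Instead of destroying data, schedule it for later removal.
        if volume_id in self._volumes:
            self._pending_delete[volume_id] = time.time() + GRACE_PERIOD_SECONDS

    def undelete(self, volume_id):
        # Anything still inside its grace period can be restored.
        return self._pending_delete.pop(volume_id, None) is not None

    def reap(self, now=None):
        # Permanently remove volumes whose grace period has expired.
        now = time.time() if now is None else now
        for vid, deadline in list(self._pending_delete.items()):
            if now >= deadline:
                del self._volumes[vid]
                del self._pending_delete[vid]

    def get(self, volume_id):
        return self._volumes.get(volume_id)


store = VolumeStore()
store.create("prod-db", b"precious rows")
store.delete("prod-db")           # a rogue agent calls delete...
assert store.undelete("prod-db")  # ...but a human can still undo it
assert store.get("prod-db") == b"precious rows"
```

The point is that the dangerous operation becomes reversible for a window of time – the "undo" primitive Cooper says Railway's Dashboard and CLI already had, and which the legacy API endpoint lacked.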
Mon, 27/04/2026 - 18:53
Digital intruders recently broke into two major tech suppliers - utility-technology firm Itron and medical-device maker Medtronic - according to filings with federal regulators.

Itron, in a late Friday US Securities and Exchange Commission (SEC) filing, said it was notified about the unauthorized third-party break-in on April 13. The $4 billion company, which provides smart meters, sensors, and software for energy, water, and city management, said it alerted law enforcement and worked with external cybersecurity advisors to investigate the intrusion.

"The Company took action to remediate and remove the unauthorized activity and has not observed any subsequent unauthorized activity within its corporate systems," according to Itron's 8-K report. "Further, no unauthorized activity was observed in the customer hosted portion of its systems."

The breach didn't affect Itron's operations, the disclosure said, adding that "Itron currently expects that a significant portion of its direct costs incurred relating to the incident will be reimbursed by its insurers." Itron declined to answer our questions about the breach, including how criminals gained initial access to its systems and whether they deployed ransomware or made an extortion demand.

Meanwhile, in a Friday disclosure and SEC filing, med-tech firm Medtronic said an "unauthorized party accessed data in certain Medtronic corporate IT systems."

Medtronic's breach disclosure follows ShinyHunters' claims that the data-theft-and-extortion crew broke into the medical device business and compromised "over 9M records containing PII and other terabytes of internal corporate data." ShinyHunters set an April 21 deadline for the company to pay an undisclosed extortion demand or see its stolen data leaked. Medtronic did not immediately respond to The Register's inquiries about the breach.
The $107 billion company didn't say when the breach occurred, but noted the intrusion did not impact its "products, patient safety, connections to our customers, our manufacturing and distribution operations, our financial reporting systems or our ability to meet patient needs." Medtronic says its corporate IT network remains separate from the product, manufacturing, distribution, and hospital-customer networks.

"We are working to identify any personal information that may have been accessed and will provide notifications and support services as needed," the company posted on its website.

In March, another med-tech company, Stryker, said a cyberattack - linked by researchers to an Iran-aligned crew with ties to the country's intelligence agency - disrupted its global network, snarling ordering and shipping systems for nearly three weeks. On April 1, the company said it was "fully operational across our global manufacturing network." ®
Itron, Medtronic disclose breaches in Friday filings
Mon, 27/04/2026 - 14:03
Space Force awards 11 firms prototype deals to build orbital interceptors
The United States Space Force (USSF) has awarded eleven companies contracts to develop space-based interceptors for President Trump's Golden Dome program, in agreements worth up to $3.2 billion.…
Mon, 27/04/2026 - 13:22
Cybersecurity professionals were the most overlooked workers in IT when it came to pay rises in 2025, according to new figures from recruiter Harvey Nash.

The trend was especially stark in the UK, where 77 percent of all security staff saw no salary increase, although the pattern was observed globally too, with 71 percent of infoseccers experiencing wage stagnation.

For context, 45 percent of all tech workers received pay rises across the 53 countries surveyed, and even DevOps - the most generously rewarded discipline - only reached 56 percent. More than half of those working in adjacent disciplines, including infrastructure, AI/ML, and product management, received wage increases.

The pay squeeze is taking a toll: security professionals now rank in the bottom three for overall workplace satisfaction, alongside QA testers and infrastructure bods - despite cybersecurity being among the top three most in-demand positions across the tech industry.

Ankur Anand, CIO at Harvey Nash, told The Register that security salaries are stagnating because successful teams are breeding complacency at the board level.

"Cybersecurity has become a victim of its own effectiveness," he said. "When teams do their job well, the absence of incidents leads to complacency at senior levels.

"At the same time, AI is expanding the threat surface and increasing the volume, speed, and complexity of what security teams have to deal with. When you layer that onto constant pressure, legacy technology, and highly distributed working models, you end up with a workforce carrying huge responsibility with limited recognition. That combination is a powerful driver of burnout and attrition."

That boardroom complacency sits awkwardly alongside warnings from security authorities.
The UK's National Cyber Security Centre reported a 50 percent rise in its most severe attack category less than a year ago, and data from Check Point, Fortinet, and a January World Economic Forum report all point in the same direction: threats are mounting.

The salary data also arrives during a period of instability in the cybersecurity job market, with full-time openings starting to plummet as global economic pressures and technological shifts, such as AI, erase entry-level positions. Cybersecurity, like many other industries, is now in an employer-controlled job market – a far cry from the skills-gap panic of recent years.

The mood is visible in why people are staying put: 56 percent cite genuine job satisfaction, but 24 percent admit they're simply not confident they'd find anything better right now.

Anand concluded: "The data should be a wake-up call. We're asking cybersecurity teams to stand on the front line of business risk, yet too often we're not matching that responsibility with the reward, progression, and operating environment that keeps people in the profession.

"When pay lags the market, workload keeps rising, and the role is seen as a blocker rather than an enabler, it's no surprise that attrition starts to look like the path of least resistance.

"If organizations want to reduce exposure and respond faster when incidents happen, they need to treat cyber talent as a strategic capability: valued, visible, and supported by leadership. The organizations that get this right won't just retain their best people – they'll build trust with customers, regulators, and their own boards." ®
Global recruitment giant says 71% of human firewalls saw wages stagnate last year as threats and responsibilities grew
Mon, 27/04/2026 - 12:34
A home security biz getting digitally burgled is not a great look - but that's exactly where ADT finds itself. The company has confirmed a cyber intrusion following an extortion attempt by the ShinyHunters crew, which claims to have made off with more than 10 million records.

US-based ADT is one of the world's largest providers of monitored home alarm systems, selling everything from burglar alarms and cameras to smart home kits, all pitched on keeping unwanted visitors out.

On Friday, the company said it detected "unauthorized access" on April 20, shut it down, and brought in outside incident responders, with law enforcement looped in. According to ADT, the intruder made off with a "limited set" of data covering names, phone numbers, and addresses, with a smaller slice including dates of birth and the last four digits of Social Security or tax ID numbers. No payment data was accessed, it said, and the firm was keen to stress that customer security systems were not touched.

That's the official version. ShinyHunters, meanwhile, is telling a rather different story. In a post on its dark web leak site, seen by The Register, the crew claims it lifted "over 10M Salesforce records containing PII and other internal corporate data" and is now airing the lot after talks with ADT went nowhere. "The company failed to reach an agreement with us despite our incredible patience, all the chances and offers we made," the group said. "They don't care."

The mention of Salesforce hints at a possible SaaS foothold rather than someone fiddling with alarm panels. While ADT has yet to confirm how the intruders gained access, it said in a separate 8-K filing [PDF] that attackers accessed "certain cloud-based environments."

There is, to put it mildly, a gap between "limited set" and "10 million records." Companies tend to define incidents as tightly as possible, while crooks tend to do the opposite. The truth usually lands awkwardly in between.
Have I Been Pwned has now put a number on it, listing 5.5 million unique email addresses – a figure that sits far nearer the crooks' millions than ADT's "limited set."

ShinyHunters recently made similar claims about cruise company Carnival Corporation, complete with talk of failed negotiations and a looming data dump.

ADT has not yet responded to questions from The Register about how it was compromised, how many people were affected, whether customers outside the US are involved, or whether it has filed breach notifications with state attorneys general.

For a company built on keeping intruders out, this one has already got inside the front door. Whether it also cleaned out the filing cabinets is the part still being argued over. ®
Security giant says attackers grabbed 'limited set' of data. Crooks claim 10 million records
Mon, 27/04/2026 - 12:19
Keep the patches away for as long as you like
Microsoft has devised a solution to the problem of Windows Updates that break customer devices – users are now able to pause them for as long as they like.…
Mon, 27/04/2026 - 10:35
UK’s data watchdog confirms its boss has been off the job since February while an HR investigation runs
The UK's data watchdog is without its chief after John Edwards stepped aside from the Information Commissioner's Office while an independent workplace investigation examines unspecified HR matters.…
Mon, 27/04/2026 - 09:30
OPINION In retrospect, calling it Mythos made it a hostage to fortune. Anthropic may have hoped that the name implied its AI code security model had mythical god-like powers, but there's an alternate reading. Another definition of Mythos is a set of beliefs of obscure origin which are incompatible with reality.

That reality is trickling in, and it's looking less mythical, more typical. Mythos is a great tool that can automate a lot of the things expert humans do, and it's the expert humans who get the most from it. It is very good at finding classes of vulnerability that humans know about, while not finding ones that they don't. Training, amirite?

Project Glasswing, limiting early use to trusted partners with a real need, is probably a responsible approach to using its powers for good, but other unrestricted models are quite good at this too. Some hype, some truth, LLMs gonna LLM.

It is cynical to say the only real innovation is an AI company operating ethically. Equally cynical is seeing the closed roll-out and the attendant publicity as merely an exercise in hype. It is more constructive, arguably more accurate, and certainly more exciting, to take all this as an early glimpse of a better future. One where the threat landscape stops being a function of geological and climatic forces we can't control, turning instead into one cultivated, controlled and gratifyingly anti-climactic.

Two propositions point the way. One is that the effectiveness of tools like Mythos will continue to evolve, exposing more and more structural and individual code flaws. The other, that these tools will inevitably become generally available. How quickly and cheaply may be controllable, but the outcome is inevitable. There are no long-term secrets in IT.

Right now, and for some time to come, most running code was written in the pre-industrial age of vulnerability detection. Eyeballs, not AI balls, did the work.
This is a bad public environment into which to dump roaming packs of implacable vuln-hunting robots. If they come too soon, it'll be messy. And they are coming. But if we survive that transition intact, then let the robots roam at will.

There is one class of code that is guaranteed to present no security risks whatsoever, and that's undeployed code. New code has a lot of problems, some caught before deployment and some not, but never an infinite number. Where truly excellent tools exist, code can be made truly excellent before release. It doesn't matter if the same tools are available to the bad guys thereafter.

A good model, and one cited often, is aviation safety. At the beginning of the jet age, new airliners had structural and mechanical faults that made them fall out of the sky. Over time, not only did design and material knowledge improve, but the engineering and regulatory disciplines evolved alongside. Now, we still have crashes, but they are invariably traceable to things that could and should have been done right, but weren't. There's no new undiscovered class of failure waiting in the wings. It is highly unlikely that code is any different — after all, we've been doing it precisely as long as we've been flying jets.

Just fixing code vulnerabilities doesn't fix security, in the same way that knowing how to make and fly exquisitely safe aircraft doesn't stop fuel contamination, flocks of geese, or foolish humans from creasing the things. It does help immensely, though. Looking at exploits based on long chains of known and unknown vulns shows how flaky code can be, but it also shows how removing just one of those bugs shuts down the entire attack. The Swiss cheese model of failure works less and less well the more the cheese tends to cheddar.
As for the holes outside the code – the supply chain exploits, the special engineering, the straightforward inside sabotage job – to the extent that we can encode, model and train on them, they too will be amenable to the inexhaustible patience of the inference engines.

And while huge swathes of enterprise infrastructure continue to run old, unpatched or misconfigured systems, it'll be like flying on aircraft from the Age of Death. There's no IT equivalent of the FAA with the power to ground that which should never be flying, much as that would be a fun counter-factual. This too shall pass. There is no way that a tool which catches vulnerabilities by the hundred does not make old code safer, new code so much more so.

It will be most interesting to see how the tools for finding flaws evolve alongside the techniques for designing, factoring and writing code for inherent strength. Nobody should expect the way things are now to be the most efficient, least expensive way there is. Nor should anyone expect human expertise to fall out of use. The fact that so many aviation safety issues revolve around human failure shows how intrinsic humans still are in design, construction, maintenance and operation aloft.

Let computers do what computers are good at, let humans do what humans are good at. Old but true. We know from decades of digital life that humans aren't so good at security, and that computers aren't so hot at it either. In another old saying — give us the tools and we can finish the job. Mythos isn't a tool that can let us do that, not yet, and AI in general seems determined to make things worse. But now, at last, we can see a path forward, a different way of doing things that is likely to actually happen. What was a threat landscape can become a garden where good things grow. That's no myth, that's the future. ®
AI vuln-hunter finds what humans taught it to find. Funny that
Mon, 27/04/2026 - 01:01
Join us for this week's Kettle as we dive into GCN and the latest not-so-alarming revelations about Mythos
KETTLE If you needed further evidence that AI comes first in pretty much everything nowadays, look no further than this year's Google Cloud Next show, which happened last week.…
Sun, 26/04/2026 - 10:28
OPINION Cal.com has closed its commercial codebase, abandoning years of AGPL-3.0 licensing in a move that has alarmed the developer community that helped build it and sent ripples through the broader open source world.

"Open source is dead," says Cal.com co-founder and CEO Bailey Pumfleet. But my conversations with top open source developers such as Linux kernel maintainer Greg Kroah-Hartman suggest it is not. And I really don't think it is.

Pumfleet made this declaration because the company is moving its main program from the GNU Affero General Public License (AGPL) to a proprietary license, as he sees AI as too much of a threat to the program's security. Or, as he told me, "AI attackers are flaunting that transparency," so "Open source code is basically like handing out the blueprint to a bank vault. And now there are 100× more hackers studying the blueprint."

If that sounds familiar, it should. It's the ancient argument that letting people read your code automatically makes it more vulnerable. It wasn't true in the '90s; it's not true now. Consider, if you will, that almost all commercial code today contains open source. If anything, open source has proven to be far more secure than proprietary code over the years.

Now, it is true that AI makes finding security holes easier and faster than ever. In particular, everyone's nervous these days that the Anthropic Mythos Preview will drown the maintainers of smaller open source projects in a flood of bug reports. It's also true that some security reports, such as Black Duck's 2026 Open Source Security and Risk Analysis (OSSRA) paper, claim there's been a 107 percent surge in open source vulnerabilities per codebase. Indeed, lending support to Pumfleet's argument, Jason Schmitt, Black Duck's CEO, claims, "The pace at which software is created now exceeds the pace at which most organizations can secure it."

On the other hand, with AI, we can also hope to patch newly discovered security holes as they're found.
Cal.com, clearly, doesn't want to take that chance. Or, perhaps, as he indicated, Pumfleet feels the company can't afford it. For, as Drew Breunig, a well-regarded tech strategist, argued in a recent blog post, code security now comes down to "a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them."

In a way, this is a restatement of Linus's Law. Instead of "given enough eyeballs, all bugs are shallow," perhaps today it should be "given enough tokens, all bugs are shallow." That presumes, of course, that you can afford enough tokens to stay ahead of your attackers.

Simon Willison, co-creator of Django, however, argues, "Since security exploits can now be found by spending tokens, open source is MORE valuable because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private."

Needless to say, some would-be competitors are making hay of Cal.com's sudden policy shift. Ryan Sipes, Mozilla Thunderbird Product & Business Development Manager, said on Y Combinator's Hacker News: "Our scheduling tool, Thunderbird Appointment, will always be open source. Come talk to us and build with us. We'll help you replace Cal.com."

By and large, though, the developer community isn't buying Cal.com's story. On Reddit, one person wondered how serious Cal.com has ever been about security. Citing several recent patches for security holes, he commented, "These problems were not the result of sophisticated hacking; they stemmed from fundamental oversights in authentication and access control."

One cynical comment on Slashdot stated, "If the tools are so good that you are afraid they will be used to expose your security flaws... maybe you should use the tools to find the security flaws yourself, and then fix them rather than declaring security through obscurity.
This is a fig leaf over the desire to back out of the open-source community now that the product has reached profitability."

Speaking of security by obscurity, Peter Steinberger, creator of OpenClaw, tweeted, "If you look at GPT 5.4-Cyber and its ability for closed source reverse engineering, I have bad news for you." In case you haven't looked at it yet, GPT 5.4-Cyber is OpenAI's answer to Mythos, and OpenAI claims it can reverse engineer binaries back into source code. If it can deliver on that promise, you can kiss the always bogus "security by obscurity" argument goodbye for good. We'll finally get to see what's really inside Windows – and won't that be fun!

And, oh yes, dropping open source to improve your security will stop being a thing. Mind you, to date, no other companies or projects have followed in Cal.com's relicensing footsteps. I doubt any will.

Yes, AI is radically changing open source programming. I don't pretend to understand what open source coding will look like by this time next year. AI's transformation of programming is too broad for me to even make an educated guess. What I can say, though, is that we'll be better off learning how to use AI and open source together rather than retreating into old, discredited proprietary licensing models. ®
Cal.com considers AGPL a license to drill, but not everyone feels that way
Sat, 25/04/2026 - 10:28
A previously unknown threat group using tried-and-tested social engineering tactics - Microsoft Teams chat invitations and helpdesk staff impersonation - is also using custom malware in its data-stealing attacks, according to Google's Threat Intelligence Group.

The threat hunters say they spotted a "large email campaign" in late December 2025. The attack started by spamming target organizations with an overwhelming amount of email traffic. Then someone posing as helpdesk personnel would reach out via Microsoft Teams to offer help with the email volume. The fake helpdesk worker prompts the user to click a link that supposedly installs a local patch to stop the email spamming.

This directs victims to a landing page masquerading as a "Mailbox Repair Utility," complete with a "Health Check" button that, when clicked, prompts users to authenticate with their email and password, allowing the attackers to nab the credentials.

The credential-harvest script also uses a sneaky "double-entry" psychological trick, auto-rejecting the first password attempt as incorrect so the victim types it again. "This serves two functions: it reinforces the user's belief that the system is legitimate and performs real-time validation, and it ensures that the attacker captures the password twice, significantly reducing the risk of a typo in the stolen data," according to GTIG.

The phishing page then performs a fake mailbox integrity check, which keeps the victim engaged while credentials and metadata are sent to an attacker-controlled Amazon S3 bucket and staged files continue downloading onto the user's machine.

"By the time the user receives a 'Configuration completed successfully' message, the attacker has secured the credentials and potentially established a persistent foothold on the endpoint using these staged files," the Googlers wrote.
The first stage downloads an AutoHotkey binary and an AutoHotkey script, which immediately starts performing reconnaissance and installs a malicious Chromium browser extension called SnowBelt. (It's not available through the Chrome Web Store - only via social engineering tactics.)

UNC6692 uses the SnowBelt extension to download its other custom "Snow"-named malware, along with additional AutoHotkey scripts and a ZIP archive containing a portable Python executable and required libraries. The Snow malware, we're told, operates as a modular ecosystem with three primary components: SnowBelt, SnowGlaze, and SnowBasin.

SnowBelt, a JavaScript-based backdoor delivered as a Chromium browser extension, gives the attacker an initial foothold and maintains persistence via the browser's extension registration system. It often hides behind names like "MS Heartbeat" or "System Heartbeat."

SnowGlaze is a Python-based tunneler that runs in both Windows and Linux environments and manages the external communication. It creates an authenticated WebSocket tunnel between the victim's internal network and the attacker's command-and-control (C2) infrastructure, such as a Heroku subdomain. It also disguises malicious traffic by wrapping data in JSON objects and Base64-encoding it for transfer via WebSockets, which makes it look like legitimate, standard encrypted web traffic.

Finally, SnowBasin is a Python bindshell providing interactive control over the infected system. It serves as a persistent backdoor, operating as a local HTTP server typically listening on port 8000, and allows remote command execution, screenshot capture, and data staging for exfiltration.

"This component is where active reconnaissance and mission completion occur," the threat hunters noted. "Attacker commands (such as whoami or net user) are sent through the SnowGlaze tunnel, intercepted by the SnowBelt extension, and then proxied to the SnowBasin local server via HTTP POST requests.
SnowBasin executes these commands and relays the results back through the same pipeline to the attacker."

These types of interactive social engineering tactics have proven very profitable for cybercrime groups like ShinyHunters and Scattered Lapsus$ Hunters. Google analysts, however, told The Register that there's no overlap between those crews and this new group, which it tracks as UNC6692.

Google's analysis of UNC6692 and its Teams-led social engineering campaign follows a warning from Microsoft about criminals abusing Microsoft Teams communications and impersonating helpdesk personnel to snare users and then remotely control and infect victims' machines. Despite the similarities, Google's security researchers told us that the two campaigns don't seem to be related.

They are a good reminder, though, of the increasing number of digital scammers using very convincing social engineering tactics alongside legitimate cloud services and tools to gain a foothold in organizations' IT environments. ®
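The traffic-disguise approach GTIG attributes to SnowGlaze - wrapping tunnel data in JSON objects and Base64-encoding it before sending it over WebSockets - amounts to a simple envelope scheme. A minimal sketch, with illustrative field names (the malware's actual schema isn't published in the report):

```python
import base64
import json

def wrap(payload: bytes) -> str:
    """Wrap raw tunnel data in a JSON envelope with a Base64 body so
    each WebSocket frame resembles an ordinary API message. The field
    names here are hypothetical, not SnowGlaze's real schema."""
    return json.dumps({
        "type": "event",  # innocuous-looking metadata
        "data": base64.b64encode(payload).decode("ascii"),
    })

def unwrap(frame: str) -> bytes:
    """Reverse the wrapping at the other end of the tunnel."""
    return base64.b64decode(json.loads(frame)["data"])
```

To any inspection point that doesn't decode the Base64 body, each frame is just a small JSON message on a TLS-protected WebSocket - which is why GTIG says the traffic blends in with legitimate encrypted web traffic.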
Sat, 25/04/2026 - 10:28
Coming in cold with custom Snow malware
Fri, 24/04/2026 - 17:03
Silicon often from US, but the kit from APAC and elsewhere
America's telco regulator has clarified its ban on foreign-made routers also includes mobile hotspots and domestic routers that use a 5G cellular connection to the internet.…
Fri, 24/04/2026 - 16:35
Carnival Corporation, the world's largest cruise company, is dealing with choppy waters after Have I Been Pwned flagged what it claimed were 7.5 million unique email addresses all allegedly tied to one of its subsidiaries.

According to HIBP, the haul totals 8.7 million records and appears to relate to the Mariner Society loyalty program run by Holland America Line, a subsidiary of Carnival Corporation. It said the "data contained fields indicating it related to the Mariner Society loyalty program run by Holland America." The exposed data includes names, dates of birth, genders, and membership status details – the kind of personal data attackers can easily repurpose for fraud or phishing.

The company acknowledged a security incident, according to HIBP, but its version of events is, for now, a lot more contained. Carnival says the breach involved a phishing attack against a single user account and said it is still working to understand the scope of any unauthorized access.

That's not quite the story being told elsewhere. The data was published by the ever-busy ShinyHunters extortion crew, which claimed to have lifted not just customer data but "terabytes of internal corporate data" after talks with the company apparently went nowhere. "The company failed to reach an agreement with us despite our incredible patience," said a post on the group's leak site, seen by The Register, adding, "They don't care."

Take the claims with the usual pinch of sea salt – ShinyHunters has form for dressing up its hits – but the volume and apparent legitimacy of the data flagged by HIBP suggest there is potentially something more substantial here than the usual leak site bravado.

The Register has asked Carnival to confirm whether the figures match its own findings, what data was accessed, whether any ransom demand was made, and how attackers got in. It hadn't responded at the time of writing.
ShinyHunters is no stranger to this kind of break-in, usually getting a foot in the door via phishing or stolen logins, or by cracking into SaaS platforms before digging around for anything it can cash in. If its claims are accurate, this went well beyond a single compromised inbox.

Whether this turns out to be a contained phishing mishap or a full-blown data spill is still unclear – but either way, passengers may want to keep a closer eye on their inboxes than their next itinerary. ®
Leak-site bragging meets breach hunters as Have I Been Pwned flags millions of records