Multiple vulnerabilities have been discovered in Google Chrome, including a critical flaw that could allow arbitrary code execution.
If successfully exploited, the most severe vulnerability could allow an attacker to execute malicious code in the context of the logged-in user.
The level of impact depends on the privileges of the compromised account: systems where users operate with administrative privileges face significantly greater risk than those following least-privilege access practices.
Google has confirmed that an active exploit for CVE-2026-2441 is already circulating in the wild, increasing the urgency for remediation.
Google Chrome Stable Channel Update
https://chromereleases.googleblog.com/2025/09/stable-channel-update-for-desktop_17.html
Multiple vulnerabilities have been discovered in Dell RecoverPoint for Virtual Machines, the most severe of which could allow for arbitrary code execution. Dell RecoverPoint for Virtual Machines is an enterprise-grade replication solution for VMware VMs that enables local, remote, and concurrent replication with continuous data protection and point-in-time (PiT) recovery capabilities.
Successful exploitation of the most severe vulnerability could allow for arbitrary code execution in the context of the logged-on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured with fewer user rights on the system could be less impacted than those who operate with administrative user rights.
https://www.dell.com/support/kbdoc/en-us/000426773/dsa-2026-079
Amazon has reported on a large-scale attack campaign that used commercial generative AI tools to help compromise more than 600 FortiGate devices.
Importantly, the attackers did not exploit any new vulnerabilities. Instead, they targeted exposed management ports secured with weak credentials and single-factor authentication.
What sets this campaign apart is the threat actor’s use of multiple AI services to plan attacks, generate operational guidance, and develop custom tools — including credential extractors, VPN automation scripts, and mass-scanning utilities. According to Amazon, the volume of tooling would typically suggest a well-resourced team, yet it appears to have been created by a single individual or small group using AI assistance.
This incident highlights how AI is lowering the barrier to entry for threat actors, and it reinforces the importance of strong authentication, limiting the exposure of management interfaces, and adhering to security best practices.
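One practical mitigation is to routinely verify, from an external vantage point, that management ports on perimeter devices are not reachable from untrusted networks. A minimal sketch of such a check in Python (the port list and timeout below are illustrative assumptions, not details from the campaign):

```python
import socket

# Ports commonly used for device management (illustrative list only;
# adjust to match the services actually deployed in your environment).
MANAGEMENT_PORTS = [22, 443, 8443, 10443]

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_exposure(host: str, ports=MANAGEMENT_PORTS) -> list[int]:
    """Return the subset of ports that are reachable on the given host."""
    return [p for p in ports if is_port_open(host, p)]
```

Run against your own public address ranges, any reachable management port is a candidate for access restriction (VPN-only access, allow-lists, or disabling administrative access on the WAN interface), in line with the hardening guidance above.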
Researchers at Abnormal have identified a powerful new phishing framework called “Starkiller” (unrelated to the legitimate red-teaming tool of the same name) that marks a significant evolution in phishing tactics.
Unlike traditional phishing kits that rely on fake login page clones, Starkiller operates as a live proxy, serving legitimate websites through attacker-controlled infrastructure. Victims believe they are interacting with a real login page — because they are.
By proxying authentic sites in real time, the toolkit bypasses one of the most common detection methods used by security tools: page fingerprinting. When combined with URL masking, session hijacking, and MFA bypass capabilities, Starkiller dramatically lowers the barrier to entry — giving low-skill threat actors access to advanced attack techniques that were previously far more difficult to execute.
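The fingerprinting bypass follows directly from the proxy design: because the victim receives the origin's bytes unmodified, a content hash of the proxied page is identical to that of the real one. A self-contained sketch of the idea (the page body and proxy function are simulated stand-ins, not Starkiller code):

```python
import hashlib

def legit_login_page() -> bytes:
    # Stand-in for the real site's login page response body.
    return b"<html><form action='/login'>...</form></html>"

def attacker_proxy() -> bytes:
    # A live proxy relays the origin's response byte-for-byte, so the
    # victim sees authentic content served from an attacker domain.
    return legit_login_page()

def fingerprint(body: bytes) -> str:
    # Hash-based page fingerprinting, one common detection approach.
    return hashlib.sha256(body).hexdigest()

# The fingerprints match, so content-based detection cannot distinguish
# the proxied page; defenses must instead key on the serving domain,
# e.g. phishing-resistant MFA bound to the legitimate origin.
assert fingerprint(attacker_proxy()) == fingerprint(legit_login_page())
```

This is why origin-bound credentials (such as FIDO2/passkeys) hold up against live-proxy kits even when page content and one-time codes can be relayed.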
This development signals a growing shift toward more sophisticated, harder-to-detect phishing infrastructure.
Microsoft has issued a service alert after discovering a bug that allowed Microsoft 365 Copilot to summarize emails labeled as “confidential,” potentially bypassing organizations’ data loss prevention (DLP) policies, according to BleepingComputer.
Specifically, Microsoft noted that the Copilot “work tab” Chat feature was summarizing email messages even when those messages had sensitivity labels applied and an active DLP policy configured.
While there is no indication of external compromise, the issue raised serious concerns about AI tools interacting with protected content in ways administrators did not intend.
Microsoft deployed a fix earlier this month and is actively working with affected customers to confirm the issue has been fully resolved.
The incident underscores the importance of closely monitoring how AI-enabled features interact with security controls — especially when handling sensitive or regulated data.