Halo Heads To PlayStation 5 With Another Halo: Combat Evolved Remake

2 months 3 weeks ago
Halo Studios (formerly 343 Industries) has announced Halo: Campaign Evolved, a full Unreal Engine 5 remake of the original Halo: Combat Evolved campaign, coming in 2026 for Xbox Series X, Windows PC, and -- shockingly -- PlayStation 5. "It's really a new era -- Halo is on PlayStation going forward," Halo Studios community director Brian Jarrard said on a livestream today. Polygon reports: Halo: Campaign Evolved is a from-the-ground-up remake of the first Halo game's campaign. It's being built in Unreal Engine 5 -- unlike previous Halo games, which were developed with proprietary software. It aims to modernize the game without changing it on a fundamental level. [...] As signaled by the name, Campaign Evolved will not feature PvP multiplayer, as its focus is on the campaign (Combat Evolved had splitscreen competitive multiplayer modes). However, you'll still be able to play Halo: Campaign Evolved with your buddies. It'll support splitscreen two-player local co-op as well as four-player online co-op. Most notably, it'll support full crossplay and cross-progression. Gameplay is being changed in ways that align it with later entries in the series. Master Chief will be able to pick up and use enemy weapons that weren't usable until later Halo games, like the iconic Energy Sword. For the first time, he'll be able to pilot the Covenant Wraith tank from the original game, and he can hijack vehicles (or get hijacked). Campaign Evolved is also implementing a sprint button, altering the way players can move about the battlefield. You can watch a reveal video for the game on YouTube.

Read more of this story at Slashdot.

BeauHD

A Single Point of Failure Triggered the Amazon Outage Affecting Millions

2 months 3 weeks ago
An anonymous reader quotes a report from Ars Technica: The outage that hit Amazon Web Services and took out vital services worldwide was the result of a single failure that cascaded from system to system within Amazon's sprawling network, according to a post-mortem from company engineers. [...] Amazon said the root cause of the outage was a bug in the software running the DynamoDB DNS management system. The system monitors the stability of load balancers by, among other things, periodically creating new DNS configurations for endpoints within the AWS network. A race condition is an error that makes a process dependent on the timing or sequence of events that are variable and outside the developers' control. The result can be unexpected behavior and potentially harmful failures. In this case, the race condition resided in the DNS Enactor, a DynamoDB component that constantly updates domain lookup tables in individual AWS endpoints to optimize load balancing as conditions change. As the enactor operated, it "experienced unusually high delays needing to retry its update on several of the DNS endpoints." While the enactor was playing catch-up, a second DynamoDB component, the DNS Planner, continued to generate new plans. Then, a separate DNS Enactor began to implement them. The timing of these two enactors triggered the race condition, which ended up taking out the entire DynamoDB service. [...] The failure caused systems that relied on DynamoDB in Amazon's US-East-1 regional endpoint to experience errors that prevented them from connecting. Both customer traffic and internal AWS services were affected. The damage resulting from the DynamoDB failure then put a strain on Amazon's EC2 services located in the US-East-1 region. The strain persisted even after DynamoDB was restored, as EC2 in this region worked through a "significant backlog of network state propagations needed to be processed."
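The race between a delayed enactor and a faster one can be illustrated with a minimal sketch. This is not Amazon's actual code -- all names and the version-number mechanism are hypothetical -- but it shows the shape of the bug: two uncoordinated writers applying DNS plans to the same endpoint, where a stalled writer can overwrite a newer plan with a stale one.

```python
# Illustrative sketch of the described race condition (all names hypothetical):
# two "enactors" apply DNS plans to an endpoint with no ordering check,
# so a delayed enactor can clobber a newer plan with a stale one.
import threading
import time

endpoint = {"plan_version": 0}  # simulated DNS state for one endpoint

def enactor(plan_version, delay):
    time.sleep(delay)  # enactor A experiences "unusually high delays"
    # The bug: no check that this plan is newer than what's already applied.
    endpoint["plan_version"] = plan_version

# Enactor A picked up plan 1 but stalls; enactor B applies newer plan 2 first.
a = threading.Thread(target=enactor, args=(1, 0.2))
b = threading.Thread(target=enactor, args=(2, 0.0))
a.start(); b.start()
a.join(); b.join()

print(endpoint["plan_version"])  # 1 -- the stale plan won, erasing plan 2
```

The sleeps make the losing interleaving deterministic here; in a real distributed system the same outcome depends on unpredictable timing, which is what makes such bugs hard to reproduce.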
The engineers went on to say: "While new EC2 instances could be launched successfully, they would not have the necessary network connectivity due to the delays in network state propagation." In turn, the delay in network state propagations spilled over to a network load balancer that AWS services rely on for stability. As a result, AWS customers experienced connection errors from the US-East-1 region. AWS network functions affected included creating and modifying Redshift clusters, Lambda invocations, and Fargate task launches, as well as Managed Workflows for Apache Airflow, Outposts lifecycle operations, and the AWS Support Center. Amazon has temporarily disabled its DynamoDB DNS Planner and DNS Enactor automation globally while it fixes the race condition and adds safeguards against applying incorrect DNS plans. Engineers are also updating EC2 and its network load balancer. Further reading: Amazon's AWS Shows Signs of Weakness as Competitors Charge Ahead
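The kind of safeguard described -- rejecting incorrect or stale plans -- can be sketched as a monotonic version check before applying an update. Again, this is a hypothetical illustration, not AWS's actual mechanism:

```python
# Hypothetical safeguard sketch: an enactor applies a DNS plan only if it is
# strictly newer than the plan already in place, so a delayed enactor can no
# longer overwrite fresher state with a stale plan.
def apply_plan(record, plan_version):
    """Apply a plan only if strictly newer; return True if applied."""
    if plan_version <= record["plan_version"]:
        return False  # stale or duplicate plan rejected
    record["plan_version"] = plan_version
    return True

endpoint = {"plan_version": 2}       # plan 2 is already live
print(apply_plan(endpoint, 1))       # False -- the stale plan 1 is refused
print(apply_plan(endpoint, 3))       # True  -- the newer plan 3 is applied
```

A compare-and-check like this must itself be atomic with respect to concurrent writers (e.g. via a conditional write in the underlying store) to fully close the race, rather than merely narrow its window.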


BeauHD

Sneaky Mermaid attack in Microsoft 365 Copilot steals data

2 months 3 weeks ago
Redmond says it's fixed this particular indirect prompt injection vuln

Updated: Microsoft fixed a security hole in Microsoft 365 Copilot that allowed attackers to trick the AI assistant into stealing sensitive tenant data – like emails – via indirect prompt injection attacks.…

Jessica Lyons