HackerNews Digest

April 02, 2026

Live: Artemis II Launch Day Updates

NASA’s Artemis II crewed test flight launched from Kennedy Space Center’s Pad 39B at 6:35 p.m. EDT on a Space Launch System (SLS) rocket. The vehicle’s twin solid rocket boosters, providing ~75 % of lift‑off thrust, separated at +02:09 MET, followed by core‑stage main‑engine cutoff at +08:02 MET and core‑stage separation at +08:14 MET. The interim cryogenic propulsion stage (ICPS) ignited, and the Orion spacecraft “Integrity” deployed its four solar‑array wings (SAWs) about 18 minutes after launch; the four wings together carry roughly 15,000 cells and span about 63 ft tip to tip. A perigee‑raise maneuver (PRM) and an apogee‑raise burn (ARB) will subsequently lift the orbit for deep‑space operations. The mission crew (NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, and CSA astronaut Jeremy Hansen) will conduct a proximity‑operations demonstration with the ICPS, test life‑support systems, and complete a ~10‑day lunar flyby, marking the first crewed deep‑space flight in over 50 years and a stepping stone toward future Moon and Mars missions.
Read full article →
Comments show widespread enthusiasm and emotional resonance for the Artemis II launch, with many expressing awe, personal significance, and hope for humanity. Several note technical interest, especially concerns about heat‑shield safety, cost, and production quality, and compare NASA’s coverage to SpaceX’s. Some critique the program’s budget and political context, while others emphasize its symbolic value and potential to unite people. Overall sentiment leans positive, tempered by safety and fiscal reservations.
Read all comments →

Quantum computing bombshells that are not April Fools

Two recent quantum‑computing announcements significantly lower resource estimates for cryptographic attacks. A Caltech team, including John Preskill, presented fault‑tolerant protocols using high‑rate quantum error‑correcting codes that reduce overhead, especially for architectures supporting non‑local gates such as neutral‑atom arrays or trapped ions. Separately, Google disclosed a more compact implementation of Shor’s algorithm capable of breaking 256‑bit elliptic‑curve cryptography; the result was released via a cryptographic zero‑knowledge proof that confirms the circuit’s existence without revealing its structure. Combined, these advances shrink the projected hardware needed to compromise Bitcoin signatures from millions of physical qubits to roughly 25,000 qubits, compressing the timeline for viable quantum attacks. The author emphasizes that, while fundamental quantum principles remain unchanged, the new quantitative thresholds strengthen the case for immediate migration to quantum‑resistant cryptographic schemes.
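To make the scale of the claimed reduction concrete, here is a back‑of‑the‑envelope comparison in Python. Every number in it (logical‑qubit count, code distance, encoding rate) is an assumption chosen for illustration, not a figure from either announcement:

```python
# Illustrative only: rough physical-qubit budgets under two error-correction
# regimes. The distances, rates, and logical-qubit counts below are assumed
# for the sake of the arithmetic; they are not figures from the announcements.

def surface_code_physical(logical_qubits: int, distance: int) -> int:
    # Surface code needs roughly 2 * d^2 physical qubits per logical qubit
    # (data qubits plus syndrome-measurement qubits).
    return logical_qubits * 2 * distance**2

def high_rate_code_physical(logical_qubits: int, rate: float) -> int:
    # High-rate qLDPC-style codes encode k logical qubits into n = k / rate
    # physical qubits, trading gate locality for encoding density.
    return int(logical_qubits / rate)

logical = 2500  # assumed logical qubits for a 256-bit ECC attack
print(surface_code_physical(logical, distance=25))  # ~3.1 million physical qubits
print(high_rate_code_physical(logical, rate=0.1))   # ~25,000 physical qubits
```

Under these assumed parameters the two regimes differ by about two orders of magnitude, which is the shape of the "millions to roughly 25,000" claim in the article.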
Read full article →
The comment expresses skepticism toward the practical usefulness of quantum computing while acknowledging its appeal, questioning whether it can perform even basic calculations. It critiques the Bitcoin community’s portrayal of quantum threats as limited, arguing that the underlying cryptographic scheme is fundamentally vulnerable to quantum attacks and asking if credible post‑quantum solutions exist for P2SH wallets. The author also references personal financial interest in quantum hardware, hoping for a breakthrough but remaining doubtful about current capabilities.
Read all comments →

A new C++ back end for ocamlc

The pull request “C++ support by stedolan” (PR #14701) was opened in the ocaml/ocaml repository on GitHub. The scraped page contains no discussion, code changes, or review comments; the only conversation text captured is GitHub’s “You can’t perform that action at this time.” message, alongside avatar alt‑text for the usernames @stedolan, @redianthus, @dra27, @avsm, @sergezloto, @ducminhgd, and @AdelKS. No technical details about the C++ back‑end implementation are available from the extracted content.
Read full article →
The comments express strong approval and appreciation, highlighting a technical tip that the C++ interpreter lacks tail‑call optimization and recommending idiomatic functional patterns (reverse, map, range, filter) to avoid stack overflows and improve portability. Maintainers are urged to favor these approaches for long‑running code. The tone remains upbeat, with light humor referencing well‑known figures, but overall the feedback is positive and focused on practical, maintainable coding practices.
Read all comments →

DRAM pricing is killing the hobbyist SBC market

Raspberry Pi announced further price hikes for all models with LPDDR4 RAM, including a 3 GB Pi 4 at US $83.75 and a 16 GB Pi 5 at US $299.99. The increases reflect a surge in DRAM costs, which now represent the majority of component expenses for most hobbyist single‑board computers (SBCs). Consequently, boards with more than 4 GB RAM have become unaffordable for many hobbyists, and new SBC releases have sharply declined, with Radxa being the only vendor maintaining a recent product cadence. Comparable mini‑PCs now exceed US $250 for 8 GB configurations, and used PCs with similar memory are also pricier. Hobbyist developers are reverting to older SBCs or microcontrollers to keep project costs below US $100. While Raspberry Pi’s leadership expects DRAM prices to eventually fall, the timing is uncertain, and smaller SBC manufacturers may struggle to survive if current pricing trends persist.
Read full article →
Comments converge on the view that soaring DRAM costs, driven by AI data‑center demand, supply‑chain constraints, and geopolitical factors, are sharply reducing low‑ and mid‑range smartphone volumes and pushing hobbyist single‑board computers, especially higher‑memory Raspberry Pi models, into an unaffordable range. Users report price spikes for memory, SSDs and SBCs, express frustration, and suggest postponing purchases, repurposing older hardware, or shifting to used x86 mini‑PCs. While most consider the surge temporary and anticipate eventual price easing, the prevailing sentiment remains concern over reduced accessibility for hobbyists and budget consumers.
Read all comments →

EmDash – A spiritual successor to WordPress that solves plugin security

EmDash is an open‑source, MIT‑licensed CMS written in TypeScript and built on the Astro framework. It runs serverlessly on platforms such as Cloudflare Workers, scaling to zero when idle and billing only for CPU time. Plugins execute in isolated Dynamic Workers and must declare required capabilities (e.g., read:content, email:send) in a manifest, a strict, OAuth‑like permission model aimed at the roughly 96 % of WordPress security issues attributed to unrestricted PHP plugins. Because plugins are sandboxed and license‑agnostic, developers can choose any license and distribute code without a central marketplace, reducing lock‑in. EmDash includes native x402 support for pay‑per‑use content charging, passkey‑based authentication with role‑based access control, and AI‑native tooling (an MCP server, a CLI, and Agent Skills) for automated migration and site management. A WordPress exporter enables rapid migration of posts, media, and custom types. The v0.1.0 preview is available on GitHub, deployable to Cloudflare or any Node.js server, with an online Playground for testing.
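The capability model is the core security idea, and a few lines of Python sketch how a manifest‑gated check of this kind could work. EmDash itself is TypeScript, and only the read:content and email:send capability names come from the announcement; the manifest schema and the require() helper here are hypothetical:

```python
# Hypothetical sketch of an OAuth-like plugin capability check; the manifest
# schema and helper names are illustrative, not EmDash's actual interface.

MANIFEST = {
    "name": "newsletter-plugin",
    "capabilities": ["read:content", "email:send"],  # declared up front
}

class CapabilityError(PermissionError):
    pass

def require(manifest: dict, capability: str) -> None:
    # A sandboxed host would refuse any call whose capability was not
    # declared in the plugin's manifest at install time.
    if capability not in manifest.get("capabilities", []):
        raise CapabilityError(f"plugin lacks capability: {capability}")

require(MANIFEST, "read:content")       # allowed: declared in the manifest
try:
    require(MANIFEST, "write:content")  # refused: never declared
except CapabilityError as e:
    print(e)
```

The point of the design is that the refusal happens in the host, outside the plugin’s sandbox, so an untrusted plugin cannot widen its own permissions at runtime.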
Read full article →
The comments show a mixed reaction to the new CMS. Many acknowledge its TypeScript foundation and sandboxed worker plugins as technically promising, especially for addressing WordPress’s longstanding plugin‑security issues. However, a substantial portion questions the practical benefits, noting reliance on Cloudflare’s runtime, potential vendor lock‑in, larger footprint, and limited ecosystem compared with WordPress’s mature plugin market. Concerns also surface about cost controls, ease of deployment, and the credibility of AI‑generated code, resulting in cautious optimism tempered by skepticism about adoption and real‑world impact.
Read all comments →

Fast and Gorgeous Erosion Filter

The post describes a GPU‑friendly erosion filter implemented as a directional noise function that can be applied as a height‑map overlay. The technique builds on Clay John’s 2018 “Eroded Terrain Noise” shader and Felix Westin’s 2023 refinement, adding intuitive parameters and two alternative ways to handle flat areas. Basic operation: compute the gradient of an underlying height field, generate stripe patterns aligned with the gradient in a cell‑based Worley layout, and blend neighboring cells so the stripes form gullies and ridges. Two methods preserve peaks and valleys: the **frequency approach** scales stripe frequency with slope (thickening stripes on flat terrain) and avoids peak bulging but creates valley bulges; the **fade approach** keeps stripe width constant and fades stripe amplitude toward a value derived from altitude, allowing crisp V‑shaped valleys when combined with a custom easing curve. Further enhancements for sharp, branching features include **stacked fading** (masking each octave with previous‑octave creases), **normalized gullies** (unit‑circle normalization of interpolated sine/cosine pairs with thresholded scaling to prevent loops), and **straight gullies** (restricting small‑scale gullies from running parallel to larger ones). The final filter yields fast, chunkable erosion with controllable detail, and the underlying directional noise (Phacelle Noise) is released separately.
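A rough NumPy sketch of the basic operation described above, under assumed parameters: compute a height‑field gradient, place one feature point per Worley cell, and blend gradient‑aligned sine stripes across neighboring cells. This is a toy illustration of the technique, not the article’s Phacelle Noise implementation:

```python
# Toy sketch: stripes aligned with the height-field gradient, laid out per
# Worley cell and blended across neighbors. Parameters and blend weights are
# assumptions, not the article's actual implementation.
import numpy as np

rng = np.random.default_rng(0)
N, CELLS, FREQ = 256, 8, 40.0

# Underlying height field (smooth hills) and its gradient.
ys, xs = np.mgrid[0:N, 0:N] / N
height = np.sin(4 * xs) * np.cos(3 * ys)
gy, gx = np.gradient(height)

# One random feature point per Worley cell, in (y, x) coordinates.
feats = (rng.random((CELLS, CELLS, 2)) + np.stack(np.mgrid[0:CELLS, 0:CELLS], -1)) / CELLS

out = np.zeros((N, N))
wsum = np.zeros((N, N))
p = np.stack([ys, xs], axis=-1)
for i in range(CELLS):
    for j in range(CELLS):
        d = p - feats[i, j]                  # offset from the cell's feature point
        dist2 = (d ** 2).sum(-1)
        w = np.exp(-dist2 * CELLS ** 2)      # smooth falloff blends neighboring cells
        # Stripe phase advances perpendicular to the gradient, so the stripe
        # lines themselves run along the gradient (downhill), forming gullies.
        phase = d[..., 0] * (-gx) + d[..., 1] * gy
        out += w * np.sin(FREQ * CELLS * phase)
        wsum += w
erosion = out / np.maximum(wsum, 1e-9)       # blended stripe overlay on the height map
```

The article’s frequency and fade approaches would then modulate the stripe frequency or amplitude from the local slope and altitude; this sketch stops at the raw directional stripes.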
Read full article →
The comments are broadly positive, highlighting the erosion technique’s visual quality, speed, GPU friendliness, and ability to generate terrain in parallel chunks. Reviewers appreciate the clear presentation and see potential for realistic 3D game maps and advanced tooling. Several remarks reference past terrain software and suggest extending the method toward scientific validation, such as matching lidar statistics or geological histories, while noting that without true erosion simulation the results may lack precise geological fidelity. Overall, enthusiasm for the approach and its future applications dominates.
Read all comments →

Set the Line Before It's Crossed

No summary is available for this article.
Read full article →
The comments express overall skepticism toward rigidly defining personal “lines,” arguing that such boundaries often cement current beliefs, hinder adaptability, and can cause harm when applied indiscriminately—especially in situations like unpaid loans or minor disputes. While a few acknowledge that clear limits may be protective in extreme cases such as domestic abuse, most highlight the need for context, practice, and flexibility, describing the proposed approach as impractical, privileged, and potentially damaging when enforced without nuance.
Read all comments →

Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs

git bayesect is a Python tool that applies Bayesian inference to git bisect, identifying the commit where a change in event likelihood (e.g., increased test flakiness) occurred. It selects commits by greedily minimizing expected entropy and uses Beta‑Bernoulli conjugacy to handle unknown failure rates. Users start a bisection with `git bayesect start --old <commit>` (optionally specifying `--new`), record observations via `git bayesect pass`/`fail` (optionally against a specific commit), and check progress with `git bayesect status`. Priors can be set per commit (`git bayesect prior --commit <commit> --weight <weight>`) or derived from filenames (`priors_from_filenames`) or commit text (`priors_from_text`) using user‑provided callbacks. The tool logs commands for reproducibility, supports undoing observations, and can run automatically with `git bayesect run <command>`. A demo repository generator (`scripts/generate_fake_repo.py`) creates a fake history with a flaky script, letting users practice locating the commit that altered the failure probability. Installation is via `uv tool install git_bayesect`.
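The selection rule is the interesting part, and a simplified sketch shows the idea: keep a posterior over candidate changepoints and test whichever commit minimizes expected posterior entropy. To stay short, the pre‑ and post‑change failure rates are fixed constants below, whereas git bayesect also infers them via its Beta‑Bernoulli model:

```python
# Simplified sketch of entropy-minimizing Bayesian bisection. The failure
# rates are fixed here for brevity; the real tool also infers them.
import math

N_COMMITS = 20
P_OLD, P_NEW = 0.05, 0.60            # assumed failure rates before/after the change
posterior = [1.0 / N_COMMITS] * N_COMMITS   # uniform prior over changepoints

def fail_prob(commit: int, changepoint: int) -> float:
    return P_NEW if commit >= changepoint else P_OLD

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def update(dist, commit: int, failed: bool):
    # Bayes rule: reweight each changepoint hypothesis by the observation.
    like = [fail_prob(commit, k) if failed else 1 - fail_prob(commit, k)
            for k in range(N_COMMITS)]
    post = [l * p for l, p in zip(like, dist)]
    z = sum(post)
    return [p / z for p in post]

def next_commit(dist) -> int:
    # Greedily pick the commit whose pass/fail outcome is expected to
    # leave the least remaining uncertainty about the changepoint.
    best, best_h = 0, float("inf")
    for c in range(N_COMMITS):
        pf = sum(fail_prob(c, k) * p for k, p in enumerate(dist))
        h = pf * entropy(update(dist, c, True)) \
            + (1 - pf) * entropy(update(dist, c, False))
        if h < best_h:
            best, best_h = c, h
    return best

print("test commit", next_commit(posterior))  # roughly the midpoint at first
```

With a uniform prior this reduces to something close to ordinary binary search, but as flaky observations accumulate, the posterior (and any per‑commit priors) pulls the next test toward the commits that are actually informative.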
Read full article →
The discussion is overwhelmingly positive about the Bayesian bisect approach, noting its mathematical elegance, superior accuracy to traditional bisect under flaky conditions, and the practical benefit of incorporating structural priors to further improve results. Users highlight successful benchmarks, express interest in extending the method to handle repeated test runs, noisy performance metrics, and exposing posterior confidence. Questions arise about handling varying test costs and broader applications, but overall the tool is regarded as useful, innovative, and promising for deterministic and nondeterministic bug investigation.
Read all comments →

AI for American-produced cement and concrete

Meta is releasing BOxCrete, a Bayesian‑optimization AI model for concrete mix design, alongside the foundational dataset used for a high‑performance mix at its Rosemount, MN data center. The open‑source model (MIT license) predicts strength, slump, and other key properties, enabling rapid exploration of formulations based on domestically produced cement; imports currently account for roughly 20‑25 % of U.S. cement consumption. Partnerships include cement producer Amrize, the University of Illinois at Urbana‑Champaign, Mortenson, and the SaaS platform Quadrel, which integrates Meta’s framework into daily mix‑design workflows. Field trials in Illinois, Minnesota, and Pennsylvania show AI‑optimized mixes achieving 43 % faster strength gain and ~10 % lower cracking risk while using U.S. materials. Meta’s Adaptive Experimentation platform iteratively learns from historical mixes and lab results, proposes high‑potential candidates under user‑specified constraints, and refines predictions with each test, accelerating discovery without altering standard validation procedures. The initiative aims to increase reshoring of cement production, lower emissions, and improve supply‑chain resilience across the American concrete industry.
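The propose‑test‑refine loop the summary describes can be sketched generically. Everything below (the two mix parameters, the kernel‑regression surrogate, the UCB acquisition rule, and the stand‑in strength_test()) is an illustrative assumption, not BOxCrete’s code or Meta’s Adaptive Experimentation API:

```python
# Generic propose-test-refine loop: a surrogate model scores candidate mixes,
# an acquisition rule picks the next one to test, and each result refines the
# model. All names and numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

def strength_test(x):
    # Stand-in for a lab test: x = (water/cement ratio, fly-ash fraction).
    return -((x[0] - 0.42) ** 2 + (x[1] - 0.25) ** 2) + rng.normal(0, 0.001)

# Candidate grid under user-specified constraints (bounds on each parameter).
grid = np.stack(np.meshgrid(np.linspace(0.30, 0.60, 31),
                            np.linspace(0.00, 0.40, 41)), -1).reshape(-1, 2)

X = [grid[rng.integers(len(grid))]]          # one random starting mix
y = [strength_test(X[0])]
for _ in range(15):
    Xa, ya = np.array(X), np.array(y)
    d2 = ((grid[:, None, :] - Xa[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / 0.005)                   # Gaussian kernel weights
    mean = (w * ya).sum(1) / (w.sum(1) + 1e-9)   # surrogate prediction
    sigma = 1.0 / np.sqrt(w.sum(1) + 1e-9)       # crude uncertainty proxy
    ucb = mean + 0.05 * sigma                    # explore/exploit trade-off
    x_next = grid[int(ucb.argmax())]             # propose the next candidate
    X.append(x_next)
    y.append(strength_test(x_next))              # lab result refines the model

print("best mix so far:", X[int(np.argmax(y))])
```

The practical appeal is the same as in the article: each expensive physical test is spent where the model is both promising and uncertain, rather than on a fixed factorial sweep of mix designs.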
Read full article →
Comments show interest in AI‑assisted concrete mix design for faster experimentation and on‑site quality checks, noting existing challenges with traditional testing, mix complexity, and import reliance. Many acknowledge potential benefits such as reduced waste, cost savings, and integration with modern volumetric mixers, while expressing caution about the safety‑critical nature of concrete, the reliability of AI recommendations, and possible corporate motives. Environmental and supply‑chain concerns appear, and overall sentiment is a blend of curiosity and skepticism, viewing the approach as an early step that requires thorough validation.
Read all comments →

Signing data structures the wrong way

The post explains that cryptographic operations (signing, encryption, MAC, hashing) require data encodings that are both canonical and domain‑separated to prevent type‑confusion attacks, where distinct message types serialize to identical byte streams (e.g., TreeRoot vs. KeyRevoke). Existing ad‑hoc mitigations (context strings, method‑name hashes) are error‑prone. Snowpack, the serialization format used by FOKS, embeds a random, immutable 64‑bit domain separator directly in the IDL definition of each struct. Compilers generate a `GetUniqueTypeID()` method; signing and verification concatenate this identifier with the serialized object, while the identifier itself is omitted from the transmitted bytes. The type system enforces that only objects with a domain separator can be processed, eliminating mismatched‑type signatures. Snowpack also provides canonical encodings via positional JSON‑like arrays and Msgpack, enforcing minimal integer sizes and avoiding dictionary key ordering issues. It supports forward/backward compatibility through nullable fields and variant handling, and tooling ensures uniqueness of separators within a project. The approach is open‑sourced for Go and TypeScript, with plans for additional languages.
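A minimal sketch of the signing scheme as described: a per‑type 64‑bit separator is prepended to the canonical bytes before signing and omitted from the wire. HMAC stands in for a real signature, sorted‑key JSON stands in for Snowpack’s canonical encoding, and the type IDs are made up:

```python
# Minimal sketch of domain-separated signing: a per-type 64-bit separator is
# prepended to the canonical encoding before signing, but never transmitted.
# HMAC and JSON are stand-ins; Snowpack uses real signatures over positional
# Msgpack arrays, and these type IDs are invented for the example.
import hmac, hashlib, json, struct

TREE_ROOT_ID  = 0x9F3A_1C55_2B7E_0D41   # random, immutable, fixed in the IDL
KEY_REVOKE_ID = 0x17C2_88AE_F031_5D96   # a distinct type gets a distinct ID

def canonical(obj: dict) -> bytes:
    # Stand-in canonical encoding (sorted-key, whitespace-free JSON).
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def sign(key: bytes, type_id: int, obj: dict) -> bytes:
    msg = struct.pack(">Q", type_id) + canonical(obj)  # separator || payload
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, type_id: int, obj: dict, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, type_id, obj), sig)

key = b"\x00" * 32
sig = sign(key, TREE_ROOT_ID, {"root": "abc123"})
assert verify(key, TREE_ROOT_ID, {"root": "abc123"}, sig)
# The same bytes interpreted as a KeyRevoke fail verification:
assert not verify(key, KEY_REVOKE_ID, {"root": "abc123"}, sig)
```

The last assertion is the whole point: even when two types serialize to identical bytes, the hidden separator makes their signatures incompatible, so a TreeRoot signature can never be replayed as a KeyRevoke.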
Read full article →
The discussion emphasizes the importance of explicit domain separation and context inclusion in signatures, favoring straightforward, readable implementations over complex binary formats and heavyweight libraries. Participants argue that JSON‑based approaches and in‑band type fields can simplify canonicalization, while binary‑oriented designs are seen as unnecessary and potentially obscure. Concerns are raised about supply‑chain risks from precompiled cryptographic binaries, advocating minimal, auditable code. Established cryptographic principles such as the Horton principle are reinforced, and suggestions for protobuf extensions and DSSE are noted as practical, non‑novel solutions.
Read all comments →