HackerNews Digest

March 12, 2026

ICE/DHS gets hacked, all Contractors exposed

The page, titled “DHS Contracts Explorer,” presents a resource for examining leaked data originating from the Department of Homeland Security’s Office of Industry Partnership. The dataset was publicly released by the transparency organization DDoSecrets on March 1, 2026. The excerpt provides only a brief description, indicating that the site functions as an explorer or interface for the disclosed contract-related information, but it contains no further details about the content, format, or analytical tools available. The primary purpose appears to be making the hacked DHS contract data accessible for review and analysis; the publication date and source (DDoSecrets) are the only concrete metadata supplied.
Read full article →
The comments express mixed reactions to the disclosed contractor list, questioning its completeness and noting that many firms, including some with sizable contracts, are missing. Users are uncertain why certain contract details are treated as confidential, recalling that award information is typically public, and seek clarification from experts. Observations highlight the prevalence of bland, non‑descriptive company names and the inclusion of academic institutions. Overall, the tone combines curiosity, mild criticism, and a light‑hearted acknowledgment of the leak’s unexpected timing.
Read all comments →

Show HN: s@: decentralized social networking over static sites

s@ (sAT Protocol) is a decentralized social‑networking protocol that runs entirely on static websites. Each user’s domain name serves as their identity, authenticated via HTTPS/TLS. User data is stored in encrypted JSON files within a `/satellite/` directory; a discovery document (`satproto.json` or a custom `satproto_root.json`) publishes the protocol version and the user’s X25519 public key. Encryption uses a 256‑bit symmetric content key (XChaCha20‑Poly1305) sealed per follower with libsodium `crypto_box_seal`. The user’s own sealed box (`keys/_self.json`) contains the content key and publishing secrets, enabling device recovery. Removing a follower triggers key rotation, re‑encryption, and new envelopes. Posts are individually encrypted (`posts/{id}.json.enc`) and listed in a plaintext `posts/index.json` (newest‑first). IDs follow `ISO8601‑compact‑UTC-4hex`. Follow lists are plain JSON (`follows/index.json`). Clients aggregate feeds by reading follow lists, fetching each followed site’s key envelope, decrypting posts, and merging them by `created_at`. Replies are top‑level only, grouped under the original post. Publishing creates a new encrypted post, pushes it via the host’s API (e.g., GitHub), and updates the index. The protocol is hosting‑agnostic, requiring no servers or relays.
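The client-side aggregation step can be sketched in TypeScript. The object shapes and field names below are illustrative assumptions consistent with the description above, not the protocol's exact schema:

```typescript
// Sketch of s@ feed aggregation: after each followed site's posts have
// been fetched and decrypted, merge them newest-first by `created_at`
// and group top-level replies under the post they answer. The Post
// shape here is an assumption for illustration, not the real schema.
interface Post {
  id: string;          // e.g. "20260312T081500Z-9f3a" (ISO8601-compact-UTC + 4 hex)
  author: string;      // the followed domain the post came from
  created_at: string;  // ISO 8601 UTC timestamp; sorts lexicographically
  in_reply_to?: string;
  body: string;
}

function mergeFeeds(feeds: Post[][]): Post[] {
  // Top-level posts from every followed site, newest first.
  return feeds
    .flat()
    .filter((p) => !p.in_reply_to)
    .sort((a, b) => (a.created_at < b.created_at ? 1 : a.created_at > b.created_at ? -1 : 0));
}

function groupReplies(feeds: Post[][]): Map<string, Post[]> {
  // Replies are top-level only, so one pass keyed on the original post
  // id is enough; no thread recursion is needed.
  const byParent = new Map<string, Post[]>();
  for (const p of feeds.flat()) {
    if (p.in_reply_to) {
      const list = byParent.get(p.in_reply_to) ?? [];
      list.push(p);
      byParent.set(p.in_reply_to, list);
    }
  }
  return byParent;
}
```

Because the IDs and timestamps are UTC ISO 8601, plain string comparison suffices for ordering; no date parsing is required.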
Read full article →
The discussion centers on decentralized social networking, emphasizing the appeal of static‑site approaches for self‑sovereignty while questioning their scalability and the need for complex cryptographic signing. Comparisons are drawn to earlier models such as FOAF, Pingback, and Webmention, as well as to Nostr and its relay‑based discovery, highlighting trade‑offs between infrastructure independence and polling overhead. Suggestions include exploring hybrid solutions, leveraging existing tools like Git, and simplifying interaction mechanisms, reflecting a cautious optimism about finding a practical middle ground.
Read all comments →

Temporal: A nine-year journey to fix time in JavaScript

Temporal is a new ECMAScript datetime API that replaces the legacy Date object after a nine‑year TC39 effort (Stage 0 → Stage 4, finalized for ES2026). It offers immutable, value‑semantic types with explicit time‑zone, calendar, and nanosecond precision: Temporal.ZonedDateTime, Instant, PlainDate/Time/DateTime, PlainYearMonth, PlainMonthDay, and Duration. These types eliminate mutation bugs, ambiguous parsing, and month‑rollover errors, and they support built‑in calendars (e.g., Hebrew) and correct DST transitions. The proposal originated from Bloomberg’s production needs and was championed by engineers from Bloomberg, Microsoft, Google, Igalia, and others. Implementation required a large spec (validated by ≈4,500 Test262 tests) and a cross‑engine Rust library, temporal_rs, which enabled Firefox, V8, and Boa to adopt the feature with shared code and consistent quality. Temporal shipped in Firefox 139, Chrome 144, Edge 144, TypeScript 6.0 Beta, and Safari Tech Preview, and will appear in Node 26. Ongoing work includes integration with DOM date‑pickers and high‑resolution timestamps.
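For context on the month-rollover errors mentioned above, a short sketch of the legacy Date behavior Temporal is designed to eliminate (runnable today without Temporal itself):

```typescript
// The classic legacy-Date month-rollover pitfall: Date is mutable, and
// an out-of-range day value silently normalizes instead of erroring.
const d = new Date(2026, 0, 31); // Jan 31, 2026 (months are 0-based)
d.setMonth(1);                   // "move to February" ...

// Feb 31 doesn't exist, so Date silently rolls over to March 3
// (February 2026 has 28 days). Temporal.PlainDate instead makes the
// overflow behavior an explicit option ("constrain" or "reject").
console.log(d.getMonth(), d.getDate()); // 2 3  (i.e. March 3)
```

With Temporal the same operation is immutable and the overflow policy must be chosen, so the surprise simply cannot happen silently.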
Read full article →
The overall reaction is largely positive, emphasizing that Temporal’s explicit handling of instants, calendars and time zones resolves many long‑standing bugs and aligns JavaScript with more robust date‑time APIs seen in other languages. Contributors are praised for the extensive, volunteer‑driven effort and the polyfill’s usefulness. At the same time, users note drawbacks such as increased verbosity, the difficulty of serializing Temporal objects, missing features like interval types or broader calendar conversion, and uneven browser support, especially in Safari and Opera. These concerns temper the enthusiasm but do not outweigh the general approval.
Read all comments →

Many SWE-bench-Passing PRs would not be merged

The study evaluated 296 AI‑generated pull requests (PRs) that passed the SWE‑bench Verified automated grader by having four active maintainers review PRs from three repositories (scikit‑learn, Sphinx, pytest). Compared to a “golden” baseline of 68 % of human‑written PRs actually merged, maintainer decisions accepted only about half of the grader‑passing PRs, a deficit of ≈ 24 percentage points. The yearly improvement rate for maintainer merges was ~9.6 pp/yr slower than for the automated grader, a weakly significant trend. Rejections were most often due to code‑quality issues, followed by breaking other code and failures of core functionality. Anthropic models (Claude 3.5 → 4.5) showed gains mainly in code quality, while GPT‑5 lagged on that metric. The analysis assumes no false negatives from the grader (≈ 3.7 % observed) and normalizes scores to the golden baseline. Limitations include a single benchmark subset, lack of CI during review, static patch evaluation, and possible shifts in maintainer standards as AI adoption grows.
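The headline gap can be reproduced with back-of-the-envelope arithmetic. The exact acceptance rate is an assumption here: the study says "about half," and 0.44 is an illustrative figure consistent with the reported ≈24-point deficit, not a number quoted from the paper:

```typescript
// Back-of-the-envelope version of the study's headline numbers.
// goldenBaseline: share of human-written PRs maintainers actually merged.
// maintainerAccept: share of grader-passing AI PRs maintainers would
// merge ("about half"; 0.44 is an illustrative assumption).
const goldenBaseline = 0.68;
const maintainerAccept = 0.44;

// Deficit in percentage points relative to the human baseline.
const deficitPp = (goldenBaseline - maintainerAccept) * 100;

// Score normalized to the golden baseline, as the study does, so a
// perfect "merges like a human contributor" agent would score 1.0.
const normalized = maintainerAccept / goldenBaseline;

console.log(deficitPp.toFixed(0));  // 24
console.log(normalized.toFixed(2)); // 0.65
```

The normalization step is why "passes the grader" and "would be merged" can diverge so sharply: the grader's denominator is tests, the maintainers' denominator is everything else a reviewer weighs.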
Read full article →
Comments highlight that while SWE‑bench and similar test‑driven evaluations reliably verify functional correctness, they often miss dimensions crucial for real‑world merging such as adherence to project conventions, code readability, architectural fit, and long‑term maintainability. Participants note that agents lack accumulated repository context, leading to unnecessary abstractions or pattern violations, and suggest incorporating metrics like diff size, abstraction depth, or code‑base entropy. Human reviewer bias and skepticism toward AI‑generated patches are also mentioned, emphasizing that passing tests alone does not guarantee production‑ready code.
Read all comments →

Don't post generated/AI-edited comments. HN is for conversation between humans

The Hacker News guidelines define permissible content and community behavior. On‑topic submissions are any material that would interest “good hackers,” emphasizing intellectual curiosity, while politics, crime, sports, and celebrity items are generally off‑topic unless they reveal a novel phenomenon. Submission rules require original sources, neutral titles (no uppercase emphasis, exclamation points, or promotional language), removal of site names from titles, and avoidance of gratuitous numbers unless meaningful. Video or PDF links must be flagged with “[video]” or “[pdf]”. Self‑promotion is limited; the platform’s primary purpose is curiosity‑driven sharing. Comments must be thoughtful, substantive, and respectful: avoid snark, name‑calling, flamebait, shallow dismissals, and political or ideological battles. Reply to arguments, assume good faith, and engage with what an article actually says rather than accusing others of not having read it. Do not solicit votes, repost deleted items, or routinely use throwaway accounts. AI‑generated or AI‑edited comments are prohibited, as is discussing voting mechanics or comparing HN to Reddit. Violations should be flagged or reported to [email protected].
Read full article →
The comments show broad agreement that unchecked AI‑generated posts threaten Hacker News’ focus on thoughtful, human‑driven discussion, prompting support for the new rule and calls for labeling or flagging AI content. Many argue that AI assistance for spelling, translation, or personal drafting is acceptable when disclosed, while others see any AI‑enhanced comment as low‑value and prefer strict enforcement. Practical concerns surface about detection, moderation workload, and fairness to non‑native speakers. Overall, the community leans toward limiting AI‑only contributions but acknowledges nuanced use and implementation challenges.
Read all comments →

Making WebAssembly a first-class language on the Web

WebAssembly has matured since its 2017 debut, adding features such as shared memory, SIMD, exception handling, tail calls, 64‑bit memories, and GC support. Despite these advances, it remains a second‑class language on the web because it cannot interact with the platform directly; all access to Web APIs and module loading must pass through JavaScript. Loading requires manual use of the WebAssembly JS API (fetch, instantiateStreaming, imports), while even a simple `console.log` call needs custom glue code to decode memory, bind globals, and wrap the API. This glue incurs runtime overhead, complicates build pipelines, and forces developers to understand JavaScript in addition to their source language. Compilers must generate language‑specific JS shims, and standard toolchains do not produce web‑ready Wasm binaries, leading to reliance on unofficial distributions. Documentation is JavaScript‑centric, further raising the barrier. The proposed WebAssembly Component Model aims to provide a self‑contained executable format that handles loading, linking, and direct Web API usage without JavaScript, offering cross‑language interoperability and a more first‑class developer experience.
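The glue layer the article describes shows up even in the smallest possible case. The sketch below hand-assembles a minimal Wasm module exporting an `add` function and wires it up through the WebAssembly JS API; every line of JavaScript here is ceremony the Component Model proposal aims to make unnecessary:

```typescript
// Minimal example of the JS glue layer: even a two-instruction Wasm
// module must be loaded and re-exported via the WebAssembly JS API.
// The bytes encode a module exporting add(a: i32, b: i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" (func 0)
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// The glue: compile, instantiate, then cast the export to a JS-callable
// signature. (On the web this would be fetch() feeding
// WebAssembly.instantiateStreaming() instead of the sync API.)
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add as (a: number, b: number) => number;

console.log(add(2, 3)); // 5
```

Scalar arguments are the easy case: anything richer (strings, objects, a `console.log` call from inside Wasm) additionally needs memory-decoding glue, which is exactly where the per-language JS shims the article criticizes come from.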
Read full article →
Comments portray WebAssembly as a useful complement to JavaScript for compute‑heavy tasks, but many stress that the current toolchain and glue‑code overhead create significant friction. The emerging component model is praised for its potential to simplify interfaces and enable multi‑language modules, yet users cite its early‑stage complexity, unclear standards, and integration challenges with the DOM, GC, and feature detection. Opinions diverge between optimism about future performance and sandboxing benefits and skepticism that WebAssembly will ever become a first‑class, broadly adopted web technology.
Read all comments →

Tested: How Many Times Can a DVD±RW Be Rewritten? Methodology and Results

The test measured how many rewrite cycles DVD±RW discs can endure using Opti Drive Control (ODC) on a Lite‑On iHAS120‑6 burner (plus a second unit in a USB 2.0 enclosure). A Python pyautogui script automated write‑verify, transfer‑rate (TRT), and quality‑scan cycles, capturing screenshots for analysis with OpenCV, Pillow, and NumPy. Failure was defined as the first verification error; quality scans alone were not decisive because readable discs can still show high error values. Key findings (first‑failure cycles):

- Memorex 8× DVD+RW: 106 cycles (stuck at 6× after initial 8×).
- Sony 6× DVD‑RW (two samples): 204 / 223 cycles, with frequent fallback to 4×.
- Victor JVC 6× DVD‑RW: 639 cycles, occasional error‑rate swings after ~130 cycles.
- TDK 4× DVD+RW (v1–v3): 413, 218, and 850 cycles respectively; outer‑edge degradation prominent.
- Verbatim 4× DVD+RW: 96 cycles, high jitter.
- TDK 2× DVD‑RW: >1000 cycles (test stopped at 1008), implying >2000 cycles if full erases count.
- Maxell 2× DVD‑RW: 327 cycles.

Overall, only the TDK 2× DVD‑RW exceeded the nominal 1000‑cycle claim; most “plus”‑format discs failed far earlier, typically after a few hundred rewrites. Drive wear was modest (≈4,000 h and 5,200 burns across the two units). The experiment highlights variability among media, the impact of write speed on longevity, and the limited predictive value of quality‑scan metrics alone.
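The test loop itself is simple in outline. A hedged sketch of the methodology follows; the `Disc` interface and simulated failure point are stand-ins for illustration, not the article's actual Python/pyautogui automation:

```typescript
// Sketch of the endurance-test loop: rewrite until the first
// verification error, which is the article's failure criterion.
// Disc is a toy stand-in for the real burner + ODC automation.
interface Disc {
  write(cycle: number): void;
  verify(): boolean; // false on the first read-back error
}

function firstFailureCycle(disc: Disc, maxCycles = 1008): number | null {
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    disc.write(cycle);
    if (!disc.verify()) return cycle; // first verification error = failure
  }
  return null; // survived the whole run, like the TDK 2x DVD-RW
}
```

A simulated disc whose verification starts failing at cycle 106 reproduces the Memorex result; a disc that never fails returns `null`, matching the run stopped at 1008 cycles.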
Read full article →
The comments express strong appreciation for the extensive testing of DVD‑RWs, noting the impressive durability demonstrated by thousands of hours of drive time and thousands of burns. Users recall personal experience with DVD‑RWs and DVD‑RAM, describing them as reliable and enjoyable despite their reduced relevance today due to network transfers and flash storage. Several remarks correct the common misconception that rewritable DVDs share the same lifespan as read‑only discs, and a brief tip about disabling Windows updates via metered connections is also shared.
Read all comments →

I was interviewed by an AI bot for a job

The article examines AI‑driven interview platforms that conduct video‑based, one‑on‑one assessments by asking questions and evaluating candidates’ responses. Companies such as CodeSignal, Humanly, and Eightfold market these tools as ways to interview a larger applicant pool and to reduce bias by focusing on answer content rather than visual cues. The author tested three such AI interviewers on both fabricated and real Vox Media job listings, noting varying levels of conversational naturalness but consistently preferring human interviewers. The piece highlights that truly bias‑free AI is unattainable because underlying models inherit sexism, racism, and other prejudices from internet training data. A linked video demonstrates the author’s hands‑on experience with the platforms.
Read full article →
Comments convey strong dissatisfaction with AI‑driven interview systems, describing them as impersonal, inefficient, and disrespectful of candidates’ time, often obscuring bias and failing to allow clarification. Many view such tools as evidence of dehumanizing hiring practices that signal poor workplace treatment. While a minority acknowledge that automated screening can help manage large applicant volumes and reduce repetitive human effort, the prevailing view holds that current processes are broken, lack transparency, and favor human interaction or internal referrals over bot‑based assessments.
Read all comments →

Show HN: A context-aware permission guard for Claude Code

nah is a context‑aware permission guard for Claude Code that replaces simple allow/deny tool controls with granular, action‑type policies. It intercepts every tool invocation via a PreToolUse hook, classifies the call into one of ~20 built‑in action types (e.g., filesystem_read, filesystem_delete, git_history_rewrite, network_outbound, lang_exec) using a deterministic structural classifier, and applies a default policy (allow, ask, block). Ambiguous cases can be escalated to an LLM (supported providers: Ollama, OpenRouter, OpenAI, Anthropic, Snowflake Cortex), which may resolve “ask” decisions but cannot override a block. Policies are defined in a global ~/.config/nah/config.yaml and can be tightened per project with .nah.yaml; they may specify actions, paths, trusted hosts, or custom command classifications. The system logs all decisions for inspection and offers CLI commands for install, uninstall, testing, policy adjustment, and log querying. A built‑in demo runs 25 live cases across threat categories (RCE, data exfiltration, obfuscation) in ~5 minutes. The tool can operate under one of three profiles (full, minimal, none) and supports dry‑run testing of commands. MIT licensed.
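The classify-then-apply-policy flow can be sketched as follows. The regex rules and the fallback behavior below are toy assumptions for illustration, not nah's actual ~20-type classifier:

```typescript
// Illustrative sketch of a deterministic classify-then-policy flow like
// the one nah describes: map a command to an action type structurally,
// then look up the verdict. Patterns here are toy assumptions.
type Verdict = "allow" | "ask" | "block";

const rules: Array<[RegExp, string]> = [
  [/^rm\s|-rf\b/, "filesystem_delete"],
  [/^git\s+(rebase|push\s+--force)/, "git_history_rewrite"],
  [/^(curl|wget)\s/, "network_outbound"],
  [/^(python|node)\s+-e\s/, "lang_exec"],
  [/^(cat|less|head)\s/, "filesystem_read"],
];

const policy: Record<string, Verdict> = {
  filesystem_read: "allow",
  filesystem_delete: "ask",
  git_history_rewrite: "block",
  network_outbound: "ask",
  lang_exec: "ask",
};

function guard(command: string): Verdict {
  const match = rules.find(([re]) => re.test(command));
  const actionType = match ? match[1] : "unknown";
  // Unclassified calls fall through to "ask" — the escalation path an
  // LLM could resolve; per the README, a block can never be overridden.
  return policy[actionType] ?? "ask";
}
```

The key property mirrored here is that classification is deterministic and cheap, so the LLM (when configured) only ever sees the ambiguous "ask" slice, never the clear-cut allows or blocks.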
Read full article →
Comments show a generally positive view of the action‑type classification and deterministic context system as a clearer abstraction than simple allow/deny lists, with many noting its alignment to risk reasoning and reduced permission fatigue. At the same time, reviewers raise concerns about the static taxonomy, maintenance overhead of a policy DSL, and limited protection against adversarial or chained commands, suggesting session‑aware models or sandboxing as needed. Practical issues such as installation, virtual‑environment management, and confusion over “dangerously‑skip‑permissions” flags also surface, reflecting a mix of appreciation and caution.
Read all comments →

Google closes deal to acquire Wiz

Wiz has officially become a Google company, maintaining its mission to protect all assets organizations build and run while adapting to the rapid pace of AI‑driven development. The blog notes that AI now accelerates application delivery, requiring security that scales with speed. Over the past year Wiz Research disclosed several high‑impact vulnerabilities—including an exposed Moltbook database, the CodeBreach supply‑chain flaw, the CVSS 10.0 Redis RCE “RediShell,” the NVIDIAScape container escape, and supply‑chain attacks such as Shai‑Hulud and NX—while hosting the ZeroDay.cloud hacking competition. Product advances introduced the Wiz AI Security Platform (visibility and protection for AI workloads), Wiz Exposure Management (unified risk view from code to cloud), AI Security Agents (automated investigation and remediation), and WizOS (hardened container base images). Wiz remains a multi‑cloud solution serving most Fortune 100 firms, AI labs, and cloud‑native companies across AWS, Azure, GCP, and OCI. Integration with Google’s infrastructure, Mandiant intelligence, and the Unified Security Platform is positioned to enhance its security capabilities.
Read full article →
The comments combine factual observations with critical reactions. Reported accusations that a Wiz board member paid bribes to CISOs and the unprecedented arrangement for Israeli taxes to be paid in dollars are highlighted as concerning. Many express unease about Google’s acquisition reducing competition and potentially compromising Wiz’s cloud‑agnostic advantage, while others note strategic benefits such as deeper integration with Google’s security offerings. The tone is largely skeptical, citing monopoly risks, tax‑policy implications, and corporate‑culture jokes, though a few remarks acknowledge the company’s quality and congratulate the team.
Read all comments →