Tony Hoare has died
Summary
Tony Hoare (1934‑2026), Turing Award laureate and former Oxford professor, died on 5 March 2026 at age 92. He is renowned for developing the quicksort algorithm, contributing to ALGOL design, and creating Hoare logic, among numerous foundational works in computer science. Early in his career Hoare studied classics and philosophy, then trained in Russian through the Joint Services School for Linguists; he later demonstrated early computers internationally, including in the Soviet Union, and worked on their software. A noted anecdote recounts a sixpence wager with his boss at Elliott Brothers that quicksort would beat an existing sorting method; Hoare won and was paid. Later he worked at Microsoft’s Cambridge lab, where he occasionally visited local cinemas. Hoare expressed skepticism about popular media’s portrayal of genius, emphasizing prolonged problem‑solving over instant insight. He also hinted, with humor, at undisclosed government‑level computing capabilities beyond publicly known technology. His legacy spans algorithmic innovation, programming language theory, and influential mentorship.
Read full article →
Community Discussion
The comments convey a unified tone of respect and mourning, highlighting Hoare’s lasting influence on computing through Quicksort, Hoare logic, CSP, and his advocacy for safe language design. Readers recall personal encounters, lectures, and the humility he displayed, while noting his “billion‑dollar mistake” about null references as a cautionary lesson. Many lament that broader formal‑verification goals remain unrealized, and a few reflect on how his legacy contrasts with modern industry practices, but overall the community celebrates his profound contributions and expresses sincere sorrow at his passing.
U+237C ⍼ Is Azimuth
Summary
The Unicode character U+237C (⍼) is identified as “azimuth” (German: Azimut, Richtungswinkel). This attribution originates from a Wikipedia edit on 28 February 2025, which cited H. Berthold AG’s 1950 symbol catalogue listing the glyph with those descriptors. Scans from the 1950 Zeichenprobe (page 7) and from adjacent catalogue editions (1949, 1951, 1952; page 104) show the same glyph and sizing, though the 1949, 1951, and 1952 editions lack the textual label. Earlier sources, including the 1946 Registerprobe and the 1900 and 1909 catalogues, do not contain the symbol. The glyph’s shape resembles the light‑ray path in a sextant used to measure azimuth, with the right angle echoing the generic angle symbol. The article includes full‑page scans from the cited catalogues to document the glyph’s occurrence and its absence in earlier publications.
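For reference, the character’s official Unicode name (which describes the glyph’s shape rather than mentioning azimuth) can be checked with Python’s standard library; this is a quick lookup sketch, not part of the article’s sourcing:

```python
import unicodedata

ch = "\u237C"  # ⍼
# The formal Unicode name reflects the glyph's shape, not "azimuth".
print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# → U+237C RIGHT ANGLE WITH DOWNWARDS ZIGZAG ARROW
```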
Read full article →
Community Discussion
The discussion conveys strong curiosity and enthusiasm for obscure Unicode symbols, particularly the “right angle with downwards zigzag arrow,” noting its potential utility for representing azimuth and the lack of a standard altitude symbol. Participants express admiration for the meticulous early‑20th‑century glyph production and concern about information loss when modern fonts reinterpret the glyph. There is a shared sense that many specialized symbols remain hidden in the Unicode repertoire, prompting interest in uncovering their origins and possibly extending them for niche applications.
Zig – Type Resolution Redesign and Language Changes
Summary
The recent Zig devlog details a major 30,000‑line PR that overhauled the compiler’s internal type resolution, making it lazy: fields of never‑instantiated types aren’t examined, allowing constructs like `std.Io.Writer` to compile without pulling in unrelated `std.Io` code. Dependency‑loop diagnostics now show exact source locations and the loop length, making such errors quicker to resolve. Incremental compilation received extensive bug fixes and the elimination of “over‑analysis” cases, yielding noticeably faster rebuilds. Dozens of additional bug fixes, niche language tweaks, and performance optimizations accompany the change.
The log also introduces experimental I/O backends (`std.Io.Evented` and `std.Io.Threaded`) based on userspace stack switching (fibers/green threads). Example programs demonstrate swapping the I/O implementation while keeping identical application logic, with `strace` output showing successful execution via `io_uring`. Using `std.Io.Evented` for the compiler works but currently incurs an unresolved performance regression. Overall, the changes streamline type handling, enhance error reporting, and expand I/O flexibility in Zig.
Read full article →
Community Discussion
No community comments were available for this story.
Cloudflare crawl endpoint
Summary
The Browser Rendering API now includes a **/crawl** endpoint (open beta) that lets users crawl an entire website with a single request. Users POST a JSON payload containing a starting URL to `https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/crawl`; the service returns a job ID. The crawl runs asynchronously; results are retrieved via a GET request to `…/crawl/{job_id}`.
Key capabilities:
- **Output formats**: HTML, Markdown, and structured JSON (generated by Workers AI).
- **Scope controls**: configurable depth, page limits, and include/exclude wildcard patterns.
- **Discovery**: URLs are gathered from sitemaps, page links, or both.
- **Incremental crawling**: parameters `modifiedSince` and `maxAge` skip unchanged or recently fetched pages.
- **Static mode**: `render: false` fetches static HTML without a headless browser for faster processing of static sites.
- **Compliance**: respects `robots.txt` directives, including crawl‑delay.
The endpoint is available on both Workers Free and Paid plans. Documentation provides further setup guidance, including best practices for robots.txt and sitemaps.
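A minimal sketch of submitting a crawl job follows. The request‑body field names (`url`, `depth`, `limit`, `render`, `maxAge`) are assumptions based on the features described above; consult Cloudflare’s documentation for the exact schema:

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4/accounts/{account_id}/browser-rendering/crawl"

def build_crawl_payload(start_url, depth=2, limit=100, render=True, max_age=None):
    """Assemble a crawl request body. Field names are illustrative."""
    payload = {"url": start_url, "depth": depth, "limit": limit, "render": render}
    if max_age is not None:
        payload["maxAge"] = max_age  # skip pages fetched within this window
    return payload

def submit_crawl(account_id, token, payload):
    """POST the job; the response is expected to carry a job ID to poll via GET."""
    req = urllib.request.Request(
        API.format(account_id=account_id),
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Setting `render=False` corresponds to the static mode described above, skipping the headless browser for faster crawls of static sites.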
Read full article →
Community Discussion
The comments show a mixed but largely intrigued reaction to Cloudflare’s new crawling endpoint. Many note its technical convenience, potential to reduce redundant scraping, and usefulness for archiving or monitoring, while appreciating that it abstracts headless‑browser management. At the same time, concerns arise about possible conflicts of interest, the risk of bypassing existing bot‑protection measures, unclear pricing and rate limits, and the broader ethical implications of a provider both defending and enabling large‑scale scraping. Some users also voice frustration with unrelated support experiences.
Agents that run while I sleep
Summary
The author describes a workflow for verifying AI‑generated code using explicit acceptance criteria rather than relying on AI‑written tests or manual reviews. After writing plain‑English specifications (e.g., login flow, error messages, rate limiting), a Claude‑based agent generates code that is automatically exercised by Playwright (frontend) or curl (backend) against those criteria. Each acceptance criterion yields a pass/fail verdict with screenshots and reasoning, allowing reviewers to focus only on failures. The process consists of four stages: a Bash pre‑flight check, a planner call (Opus) that parses the spec and determines needed checks, parallel browser agents (Sonnet) that execute each criterion, and a final judge call (Opus) that aggregates evidence into JSON verdicts. The system is implemented as a Claude Skill (github.com/opslane/verify) that runs in Claude Code’s headless mode without extra backend services. The approach mirrors Test‑Driven Development by defining “done” before code generation, exposing integration issues that code review alone often misses, though it cannot detect incorrect specifications.
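The judge stage’s aggregation of per‑criterion results might look roughly like the sketch below. The verdict field names are hypothetical, not the verify skill’s actual JSON schema:

```python
def summarize_verdicts(verdicts):
    """Collapse per-criterion results so reviewers inspect only failures.

    Each verdict is a dict like:
      {"criterion": "...", "passed": bool, "reasoning": "...", "screenshot": "..."}
    (an illustrative shape, not the skill's actual output format).
    """
    failures = [v for v in verdicts if not v["passed"]]
    return {
        "total": len(verdicts),
        "passed": len(verdicts) - len(failures),
        "failures": failures,  # the only entries a human needs to review
    }
```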
Read full article →
Community Discussion
The comments convey a mixed view of autonomous coding agents: many acknowledge noticeable productivity gains but stress that the tools remain costly, complex, and prone to errors that demand thorough human oversight and robust testing. Participants highlight the difficulty of maintaining reliable specifications, preventing “placeholder” code, and managing review fatigue, while warning against over‑engineered frameworks and unchecked long‑running agents. Overall, the consensus is that AI can accelerate development if paired with disciplined TDD, clear specs, and vigilant validation rather than being relied upon as a standalone solution.
Yann LeCun raises $1B to build AI that understands the physical world
Summary
Advanced Machine Intelligence (AMI), a Paris‑based startup co‑founded by former Meta chief AI scientist Yann LeCun, raised over $1 billion in financing, valuing the company at $3.5 billion. Investors include Cathay Innovation, Greycroft, Hiro Capital, HV Capital, Bezos Expeditions, Mark Cuban, Eric Schmidt and Xavier Niel. AMI’s mission is to develop AI “world models” that understand physical environments, retain persistent memory, and support reasoning, planning, safety and controllability. LeCun argues that human‑level intelligence depends on grounding in the physical world, not solely on scaling large language models (LLMs), which he considers insufficient for true intelligence. The startup plans to serve manufacturing, biomedical, robotics and similar sectors, offering realistic simulations such as aircraft‑engine models for optimization. AMI’s leadership includes CEO Alexandre LeBrun (formerly Nabla) and chief science officer Saining Xie (ex‑DeepMind), alongside former Meta researchers Michael Rabbat, Laurent Solly and Pascale Fung. Offices will operate globally from Paris, Montreal, Singapore and New York, with LeCun remaining a NYU professor while leading AMI.
Read full article →
Community Discussion
Comments show a mixed yet cautiously optimistic view of Yann LeCun’s world‑model startup. Many see the large seed round as a welcome diversification from the LLM focus, emphasizing potential for European research independence, new architectures, and employment growth. At the same time, skeptics question the feasibility of world‑model approaches, LeCun’s past results, and how the funding will be allocated, noting possible over‑hype and management challenges. Overall the discussion balances hope for novel physical‑world modeling with doubts about practical impact and execution.
SSH Secret Menu
Summary
The linked page failed to load any substantive content. It shows only a generic title placeholder (“[no-title]”), the error notice “Something went wrong, but don’t fret — let’s give it another shot,” and, under an “Images and Visual Content” heading, a single image identified solely by its alt text, the warning emoji “⚠️”. No further information, analysis, or context is available.
Read full article →
Community Discussion
The comments express enthusiasm for lesser‑known SSH escape sequences, noting their practicality for terminating sessions, managing tunnels, and replicating telnet‑style controls. Users appreciate the convenience of the built‑in escape menu and share tips such as typing “~.” to exit a hung session, cancelling forwardings from the “~C” command line with “-KD”, and disabling the feature entirely with `EscapeChar none`. A nostalgic tone appears, recalling older software quirks, while acknowledging that such hidden options remain useful despite newer tunneling solutions. Overall sentiment is positive toward these undocumented features.
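The `EscapeChar none` tip from the comments is an ssh_config setting; a minimal example (the host name is illustrative):

```
# ~/.ssh/config: disable the "~" escape character entirely for one host,
# so binary data containing "~" can't accidentally trigger the escape menu
Host backup.example.com
    EscapeChar none
```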
Launch HN: RunAnywhere (YC W26) – Faster AI Inference on Apple Silicon
Summary
RCLI is an on‑device voice‑AI suite for macOS (Apple Silicon, macOS 13+). It integrates speech‑to‑text (VAD + Zipformer/Whisper/Parakeet), a local large language model (Qwen3, LLM 3 B, LFM2 series) with KV‑cache and Flash Attention, and text‑to‑speech (Piper, Kokoro, etc.) into a single Metal‑GPU pipeline (MetalRT) that runs natively on M3/M3 Pro/M3 Max/M4 chips; M1/M2 machines fall back to llama.cpp. End‑to‑end latency is sub‑200 ms, with STT up to 714× real time and LLM throughput of roughly 550 tokens/s. The tool exposes 38 macOS actions (AppleScript or shell), such as app control, media playback, reminders, and screen capture, reachable via voice or typed commands in an interactive TUI. RCLI also provides on‑device RAG: documents (PDF/DOCX/TXT) are indexed with hybrid vector + BM25 search (≈4 ms over 5,000+ chunks) and queried via voice. Installation is via Homebrew or a curl script; models (~1 GB total) are downloaded once. MetalRT is proprietary (license via [email protected]); the RCLI code is MIT‑licensed.
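Hybrid vector + BM25 retrieval of the kind described can be sketched as a simple score fusion: normalize each retriever’s scores, then blend with a weight. This is an illustrative fusion scheme, not RCLI’s actual ranking code:

```python
def hybrid_rank(vector_scores, bm25_scores, alpha=0.5):
    """Rank document IDs by a weighted blend of two score dicts.

    alpha=1.0 ranks purely by vector similarity, alpha=0.0 purely by BM25.
    Min-max normalization puts both score sets on a comparable [0, 1] scale.
    """
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
        return {k: (v - lo) / span for k, v in scores.items()}

    v, b = norm(vector_scores), norm(bm25_scores)
    keys = set(v) | set(b)
    return sorted(keys,
                  key=lambda k: -(alpha * v.get(k, 0.0) + (1 - alpha) * b.get(k, 0.0)))
```

Production systems often use reciprocal rank fusion instead of raw-score blending, since it avoids normalization entirely; the weighted sum here is just the simplest way to show the idea.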
Read full article →
Community Discussion
The comments present a mixed view: many express enthusiasm for the on‑device AI demo and its potential, while others highlight practical problems such as instability, high latency, missing features, and limited macOS support. Concerns recur about a leaked API key, unclear licensing of the proprietary engine, and the company’s alleged spam practices. Users are confused about whether the tool is a voice assistant, a general‑purpose LLM, or a RAG system, and request broader model selection, diarization, and better integration. Overall interest is tempered by skepticism over security, legality, and usability.
Mesh over Bluetooth LE, TCP, or Reticulum
Summary
Columba is a native Android messaging and voice application built for the Reticulum mesh‑network stack. It exchanges LXMF (Lightweight Extensible Message Format) messages and LXST voice calls without relying on internet, cellular, or central servers. Connectivity options include Bluetooth LE, Wi‑Fi, TCP to any Reticulum server, and LoRa via RNode, enabling communication from local proximity to global ranges. Security is provided by end‑to‑end encryption; users generate and manage cryptographic identities locally, with support for multiple identities, QR‑code sharing, and export/import of keys (including compatibility with other Reticulum clients). The app incorporates Material Design 3, custom colour themes, and offline map support (vector/raster MBTiles) with secure location sharing. Distribution is via the GitHub releases page and optional NomadNet download; APK verification details are in SECURITY.md. Reticulum itself is a low‑bandwidth, high‑latency‑tolerant networking stack capable of operating over diverse media.
Read full article →
Community Discussion
The comments express optimism about emerging offline‑communication projects while noting that current solutions remain insufficient, prompting multiple new efforts. There is a hopeful expectation that a satisfactory option will appear soon. Technical concerns are highlighted, specifically LoRa’s limited bandwidth restricting message size to very short texts. Additionally, there is curiosity about how alternative platforms such as Reticulum compare to existing mesh‑network implementations like Meshtastic or Meshcore, indicating a desire for performance comparisons.
Debian decides not to decide on AI-generated contributions
Summary
Debian opened a February discussion after Lucas Nussbaum posted a draft general resolution (GR) on permitting AI‑assisted contributions. The proposal would allow partially or fully LLM‑generated patches provided contributors disclose the use, tag the change (e.g., “[AI‑Generated]”), and vouch for its technical merit, security, licensing and utility. It also bans using generative‑AI on non‑public or embargoed material.
Debate centered on terminology (AI vs. LLM), the granularity of allowed uses (code review, prototype generation, production code), and the impact on onboarding new contributors. Some argued AI tools could replace junior developers without skill transfer, while others saw potential to lower entry barriers. Copyright concerns were raised regarding training data and output licensing, with suggestions to defer policy until legal clarity emerges. A minority advocated a hard ban, even threatening removal of major upstream packages, but this view lacked support.
Overall, developers agree Debian is not ready for a formal vote; the issue will remain handled case‑by‑case under existing policies while discussions continue.
Read full article →
Community Discussion
Comments reflect a nuanced view of AI‑generated code in open‑source projects. Many acknowledge its productivity gains and accessibility benefits, treating it as a useful assistant when the contributor validates and owns the output. Concerns focus on quality control, reviewer burden, potential copyright or attribution gaps, and the difficulty of distinguishing AI from human contributions as models improve. The prevailing opinion favors responsible integration—transparent disclosure, higher testing standards, and limiting AI use to trusted contributors—rather than outright bans, emphasizing human accountability for any submitted changes.