John Ternus to become Apple CEO
Summary
Tim Cook will transition from CEO to Executive Chairman of Apple’s board on September 1, 2026, remaining CEO through the summer to assist with the handover. John Ternus, senior vice president of Hardware Engineering, will assume the CEO role on that date and join the board. Arthur Levinson, longtime non‑executive chairman, will become lead independent director. Cook joined Apple in 1998 and became CEO in 2011; under his leadership, market cap grew from roughly $350 billion to $4 trillion, revenue rose from $108 billion (FY 2011) to over $416 billion (FY 2025), the company expanded to 200+ countries and 500+ retail stores, and the installed base reached 2.5 billion devices. He led the launch of Apple Watch, AirPods, and Vision Pro, and grew the services business to more than $100 billion. Ternus, at Apple since 2001, has led hardware engineering for iPad, AirPods, iPhone, Mac, and Apple Watch, introducing products such as MacBook Neo, the iPhone 17 series, and components made from advanced recycled materials. Both executives emphasize continuity, innovation, and sustainability.
Read full article →
Community Discussion
Comments show broad respect for Tim Cook’s supply‑chain success and Apple’s growth, alongside consistent criticism of recent software quality and service‑driven features. Many view John Ternus’s hardware background as a positive shift, hoping his leadership will revive software innovation, improve developer relations, and maintain privacy commitments. Others are skeptical about the timing of the transition, worry about stagnating product velocity, and question whether a hardware‑focused CEO can address broader software and AI challenges. Overall sentiment mixes optimism for a renewed engineering focus with concerns about past software shortcomings.
How to make a fast dynamic language interpreter
Summary
The post describes the systematic optimization of a simple AST‑walking interpreter for the dynamic language Zef, improving performance from 35× slower than CPython to competitive with Lua, QuickJS, and CPython. The initial design used a 64‑bit tagged value representation, recursive evaluation, std::string keys, and hash‑table‑based scopes, yielding severe overhead. The optimizations applied include:
1. Parser‑generated dedicated AST nodes for each operator and read‑modify‑write (RMW) operator, eliminating string‑based dispatch (≈17.5 % and 3.7 % gains).
2. Removing unnecessary IntObject checks (1 %).
3. Replacing strings with hash‑consed Symbol pointers (18 %).
4. Inlining value functions via a separate header (2.8 %).
5. Overhauling the object model with pre‑computed storage offsets, inline caches, and watchpoints, reducing allocation and lookup costs (4.55× faster).
6. Streamlining argument handling, getter/setter specialization, callMethod inlining, and global method lookup tables (cumulative ~20 % gains).
7. Eliminating std::optional allocations, specializing argument types, improving slow‑path value handling, and specializing sqrt, toString, and array literals (additional 1–8 % each).
8. Build‑time tweaks (disabling RTTI, hardening, and asserts) added ~1–2 % each.
Overall, the interpreter achieved a 16.6× speed‑up, becoming only ~2× slower than CPython and within 20 % of Lua/QuickJS, with further gains possible using a more efficient GC.
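To make two of these techniques concrete, here is a minimal C++ sketch (ours, not Zef’s actual code, which the post does not show) of a 64‑bit tagged value and hash‑consed symbols whose identity check is a single pointer comparison:

```cpp
// Minimal sketch, not Zef's implementation: a 64-bit tagged value (small ints
// distinguished from heap pointers by the low bit) and hash-consed symbols.
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

struct Value {
    uint64_t bits;
    static Value fromInt(int64_t i) { return Value{(uint64_t(i) << 1) | 1}; }
    bool isInt() const { return bits & 1; }
    int64_t asInt() const { assert(isInt()); return int64_t(bits) >> 1; }
};

struct Symbol { std::string name; };

// Each distinct name is stored exactly once, so "same identifier?" becomes
// a pointer comparison instead of a string comparison.
const Symbol* intern(const std::string& name) {
    static std::unordered_map<std::string, Symbol> table;
    return &table.try_emplace(name, Symbol{name}).first->second;
}

int main() {
    Value v = Value::fromInt(42);
    assert(v.isInt() && v.asInt() == 42);
    assert(intern("counter") == intern("counter"));  // one pointer compare
}
```

Interning pays off because scope lookups and method dispatch compare names constantly; once every occurrence of a name shares one pointer, each comparison is a single instruction.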
Read full article →
Community Discussion
The comments express overall curiosity and mild enthusiasm about the project’s language support and composition. Users note the inclusion of Lua and wish for LuaJIT, appreciate the high proportion of HTML versus C++ in the repository as indicating a small interpreter footprint, and indicate interest in exploring the topic further. There is also a request for practical feedback on Fil‑C’s usefulness, without any notable criticism or disagreement.
Jujutsu megamerges for fun and profit
Summary
Jujutsu’s “megamerge” workflow uses an octopus merge (a commit with three or more parents) to combine every branch a developer cares about (feature branches, bug‑fixes, pending PRs, local setup branches, etc.) into a single, local “megamerge” commit. The megamerge itself is never pushed; only its constituent branches are published. Because all work is built on this combined state, the codebase compiles and runs only if all parts interoperate, dramatically reducing surprise merge conflicts and making task switching frictionless.
Creating a megamerge is simple: `jj commit --message "megamerge"` with each target branch as a parent, yielding an empty commit atop the merged history. New work is done above this commit. Changes are integrated using Jujutsu’s `absorb` (auto‑squash changes into the ancestor commits that last touched them), `squash` (interactive or full), or `rebase` to move WIP commits onto the appropriate branches.
Convenient revset aliases streamline the process:
* `closest_merge(to)` finds the nearest merge ancestor.
* `stack` rebases a revset after `trunk()` and before the megamerge.
* `stage` stacks the entire descendant chain after the megamerge.
* `restack` rebases only mutable commits onto `trunk()` (`roots(trunk()..) & mutable()`).
The workflow keeps the megamerge up‑to‑date, isolates personal changes from others’ branches, and enables rapid, low‑conflict development across many parallel streams.
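A minimal command sketch of that loop, using hypothetical branch names (the post’s exact invocations and aliases may differ; all commands below are standard jj):

```sh
# Build the megamerge: an empty commit whose parents are every branch of interest.
jj new feature-a bugfix-b pr-review -m "megamerge"
jj new                          # start WIP work on top of the megamerge

# After editing, push the changes down into the branches they belong to:
jj absorb                       # auto-squash hunks into the commits that last touched those lines
jj squash --into feature-a      # or squash the WIP commit into a specific branch
jj rebase -s @ -d bugfix-b      # or move the WIP commit (and descendants) onto a branch
```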
Read full article →
Community Discussion
The comments convey strong enthusiasm for Jujutsu, noting its low‑risk entry point, streamlined megamerge workflow, and supportive Discord community, with many users recommending exclusive adoption after a short trial. Praise frequently highlights features such as `parallelize`, the stack aliases, and the absence of a staging area, while there are recurring calls for improved documentation, beginner guides, and tighter IDE integration. Concerns focus on managing complex megamerge conflicts, coordinating with non‑jj collaborators, and missing Git‑like capabilities such as tags and preserved commit dates. Overall sentiment is largely positive, tempered by requests for better tooling and resources.
Qwen3.6-Max-Preview: Smarter, Sharper, Still Evolving
Community Discussion
Comments reflect a mixed view of the coding‑assistant landscape, emphasizing practical performance over headline benchmarks. Users report that models such as GLM‑5.1 and Qwen often outperform higher‑rated alternatives in real coding tasks, while noting that Claude and Gemini can produce misleading output for specialized work. Cost emerges as a central concern, with many favoring affordable open‑weight or locally run models despite hardware limits. Skepticism about benchmark relevance, pricing hikes from Chinese providers, and limited access to cloud‑only versions also appear repeatedly. Overall, the discussion prioritizes utility, affordability, and openness over raw leaderboard scores.
Kimi vendor verifier – verify accuracy of inference providers
Summary
The Kimi Vendor Verifier (KVV) is an open‑source tool released alongside the Kimi K2.6 model to help users confirm that their inference implementations match official behavior. It addresses frequent benchmark anomalies caused by incorrect decoding parameters and broader infrastructure inconsistencies across diverse deployment channels. KVV comprises six targeted benchmarks: Pre‑Verification (enforces temperature = 1.0, top_p = 0.95), OCRBench (5‑minute multimodal smoke test), MMMU Pro (vision input preprocessing), AIME2025 (long‑output stress test for KV‑cache and quantization issues), K2VV ToolCall (F1 and JSON‑schema accuracy), and SWE‑Bench (full agentic coding test, not open‑sourced). The project collaborates with vLLM, SGLang, and KTransformers to fix root‑cause bugs rather than merely detect symptoms. Validation is performed pre‑release, and results are published on a public leaderboard to encourage vendor accountability. Full evaluation on two NVIDIA H20 8‑GPU servers took ~15 hours sequentially, with scripts optimized for streaming, retries, and checkpoint resumption. Vendors are invited to join; contact details are provided.
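For flavor, here is a sketch (ours, not KVV code; endpoint URL and model id are placeholders) of querying an OpenAI‑compatible provider with the decoding parameters the Pre‑Verification step enforces pinned explicitly:

```python
# Sketch only: pin the sampling parameters KVV's Pre-Verification enforces
# when exercising an OpenAI-compatible chat-completions endpoint.
import requests

resp = requests.post(
    "https://provider.example.com/v1/chat/completions",  # placeholder URL
    json={
        "model": "kimi-k2.6",                            # placeholder model id
        "messages": [{"role": "user", "content": "ping"}],
        "temperature": 1.0,   # enforced by Pre-Verification
        "top_p": 0.95,        # enforced by Pre-Verification
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```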
Read full article →
Community Discussion
Commenters express overall approval of a vendor‑provided verification system, viewing it as a needed safeguard against hidden quantization changes and performance shortfalls across inference providers. They highlight specific problems such as AWS Bedrock’s tool‑call failures and other services misleading users about model versions, noting that the current tests may not detect intentional cheating. While acknowledging the difficulty of reproducing long, resource‑intensive benchmarks, the discussion emphasizes the widespread concern that advertised model behavior often diverges from actual output, and supports broader adoption of standardized verifiers.
Ternary Bonsai: Top Intelligence at 1.58 Bits
Summary
PrismML announced Ternary Bonsai, a family of 1.58‑bit language models (8 B, 4 B, and 1.7 B parameters) that use ternary weights {-s, 0, +s}, encoded as {-1, 0, +1} with a shared FP16 scale per 128‑weight group. This representation yields a memory footprint ~9× smaller than standard 16‑bit models while keeping the full network quantized (embeddings, attention, MLPs, LM head). Benchmarks show the 8 B model averages 75.5, 5 points higher than the 1‑bit Bonsai 8 B (70.5), approaching Qwen3 8 B performance despite using only 1.75 GB (vs 16.38 GB). Throughput reaches 82 tokens/s on Apple M4 Pro and 27 tokens/s on iPhone 17 Pro Max, with energy use of 0.105 mWh/tok (M4 Pro) and 0.132 mWh/tok (iPhone). The models run natively on Apple devices via MLX, are released under Apache 2.0, and extend the Pareto frontier of capability versus size, offering a trade‑off between the ultra‑compact 1‑bit Bonsai and higher‑quality full‑precision models.
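As a back‑of‑envelope check (our arithmetic, not from the announcement): encoding a ternary weight takes log₂ 3 ≈ 1.585 bits, and the shared FP16 scale adds 16/128 = 0.125 bits per weight, for ≈1.71 effective bits per weight; relative to 16‑bit weights that is ≈9.4× smaller, consistent with the reported 1.75 GB vs 16.38 GB figures.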
Read full article →
Community Discussion
The comments highlight enthusiasm for the ternary‑Bonsai model’s speed, low memory use and strong benchmark results, especially its efficiency on edge hardware and the absence of multiplications during inference. Many note that accuracy‑per‑byte compares favorably to larger 16‑bit models, while also questioning why comparisons are limited to unquantized baselines rather than other low‑bit quantizations. Users express interest in scaling the approach to larger parameter counts, curiosity about practical deployment constraints such as KV‑cache benefits, and some critique the model’s overly literal responses.
Soul Player C64 – A real transformer running on a 1 MHz Commodore 64
Summary
A 2‑layer decoder‑only transformer (≈25 k int8 parameters) has been implemented in hand‑written 6502/6510 assembly and runs on an unmodified 1 MHz Commodore 64. The model uses 4 attention heads (8‑dimensional each), 32‑dimensional token embeddings, and a 64‑unit feed‑forward network. All activations are Q8.8 fixed‑point (int16); weights are int8 with per‑tensor power‑of‑2 shift scaling, and biases are pre‑scaled int16. Multi‑head causal self‑attention, softmax (via a 128‑entry exponent lookup table with a 14‑bit score shift), and RMSNorm are fully integer‑based; matrix‑vector multiplication is realized with shift‑and‑add because the 6502 lacks a multiply instruction. The model occupies ~6 KB of code and token tables, ~25.3 KB of weights, and ~5.8 KB of activation buffers, fitting on a floppy disk with room to spare. Inference takes ~60 s per token, producing a SID beep per token. Training is quantization‑aware (fake‑quantized int8 weights and bias scaling), uses label smoothing, and periodically evaluates an integer forward pass to select checkpoints by int8 argmax accuracy. The repository includes scripts for training, building the C64 binary, and testing the full pipeline.
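For readers unfamiliar with multiplier‑less CPUs, here is a short C++ illustration (ours, not the project’s 6502 assembly) of the shift‑and‑add multiply that stands in for the missing MUL instruction:

```cpp
// Sketch only, not the project's 6502 code: multiply a Q8.8 fixed-point
// activation by an int8 weight using only shifts and adds.
#include <cstdint>
#include <cstdio>

int32_t mul_shift_add(int16_t act_q88, int8_t w) {
    int32_t acc = 0;
    int32_t addend = act_q88;                        // shifted left each step
    uint8_t mag = static_cast<uint8_t>(w < 0 ? -w : w);
    while (mag) {
        if (mag & 1) acc += addend;                  // add for each set bit of |w|
        addend <<= 1;
        mag >>= 1;
    }
    return w < 0 ? -acc : acc;                       // result is still Q8.8, scaled by w
}

int main() {
    int16_t act = 0x0180;                            // 1.5 in Q8.8
    printf("%d\n", mul_shift_add(act, -3));          // prints -1152, i.e. -4.5 in Q8.8
}
```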
Read full article →
Community Discussion
The comments show mixed reactions to the tiny 25 k‑parameter model. Many note that its output is largely nonsensical or limited to simple greetings, questioning whether the architecture can function meaningfully at such a scale. At the same time, there is curiosity about the possibility of running language models on vintage hardware, with nostalgic references to the C64 and speculation about efficiency gains. Some express disappointment over missing low‑level code, while others view the experiment as an amusing technical curiosity rather than a practical breakthrough.
ggsql: A Grammar of Graphics for SQL
Summary
ggsql is an alpha‑release library that implements the grammar of graphics using pure SQL syntax. Plots are defined with a VISUALIZE clause that maps table columns to aesthetics (e.g., x, y, color), followed by DRAW layers (point, bar, histogram, smooth, boxplot, etc.) and optional PLACE annotations. SCALE clauses translate data values to visual properties (e.g., color palettes), and LABEL adds titles and axis labels. The SQL part of the query runs on the backend (DuckDB in the examples) and streams only the aggregated results needed for each layer, avoiding materialization of full datasets. ggsql targets SQL‑centric analysts, offering a declarative, composable interface that aligns with the grammar of graphics and works in environments like Quarto, Jupyter, Positron, and VS Code without requiring R or Python runtimes. Benefits include lightweight embedding, sandboxing, and efficient large‑scale data handling. Future plans mention a Rust‑based writer, theming, interactivity, a language server, and spatial data support, while ggplot2 development will continue unchanged.
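Pieced together from the clauses the post names (the exact grammar may differ; the table and column names here are hypothetical), a ggsql query might look like:

```sql
-- Hypothetical sketch assembled from the VISUALIZE / DRAW / LABEL clauses
-- described above; consult the ggsql documentation for the exact syntax.
SELECT species, flipper_length_mm, body_mass_g
FROM penguins
VISUALIZE x = flipper_length_mm, y = body_mass_g, color = species
DRAW point
DRAW smooth
LABEL title = 'Body mass vs. flipper length', x = 'Flipper (mm)', y = 'Mass (g)'
```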
Read full article →
Community Discussion
Comments express enthusiasm for ggsql’s ability to create visualizations directly from SQL‑like queries, noting its usefulness for users without R or Python expertise and its potential to streamline data pipelines. Reviewers appreciate the familiar grammar, layering approach, and Vega‑Lite rendering, while many request clearer documentation on output formats, export options, and integration with existing ggplot2 extensions or other tools such as dbt, DuckDB, and Grafana. Concerns include the added DSL complexity, limited backend support, and desire for richer rendering and deployment options.
Japan's Cherry Blossom Database, 1,200 Years Old, Has a New Keeper
Community Discussion
The discussion expresses surprise that the honor received so little attention, viewing the muted reaction as unexpected for such a notable achievement. Contributors attribute the limited engagement to insufficient promotion or marketing, suggesting that better outreach might have generated more interest. There is also a sense that the scientist should have maintained an apprenticeship or mentorship arrangement, the implication being that the lack of such continuity contributed to the shortfall in response.
Quantum Computers Are Not a Threat to 128-Bit Symmetric Keys
Summary
Quantum computers threaten asymmetric primitives (ECDH, RSA, ECDSA, EdDSA) via Shor’s algorithm, but they do not compromise symmetric primitives such as AES, SHA‑2, or SHA‑3. The common claim that Grover’s algorithm halves symmetric security (so that 128‑bit security would require 256‑bit keys) is incorrect because:
- Grover’s quadratic speed‑up requires a long, serial quantum circuit; parallelization only partitions the key space, degrading the advantage (see the worked note after this list).
- Realistic estimates (1 µs gate time, 10‑year run) show breaking AES‑128 would need ~1.4 × 10¹⁴ quantum processors each with ~724 logical qubits, yielding a depth × width cost far exceeding that of breaking 256‑bit elliptic curves with Shor’s algorithm.
- NIST and the German BSI both classify AES‑128 (and larger) as Category 1 post‑quantum secure and keep all AES key sizes allowed through 2035.
- NIST’s FAQ and IR 8547 stress that Grover’s practical impact is limited; no guidance advises doubling symmetric key lengths now.
- Consequently, the post‑quantum transition should focus on asymmetric algorithms, while AES‑128, AES‑192, and AES‑256 remain safe for the foreseeable future.
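To spell out the parallelization point with the standard Grover analysis (our arithmetic, not the post’s): a single machine needs on the order of √N serial iterations to search N = 2¹²⁸ keys, and splitting the key space across M machines leaves each with N/M keys, i.e. ≈√(N/M) iterations, so the aggregate machine‑time cost is M·√(N/M) = √(M·N). Each doubling of hardware therefore cuts wall‑clock time by only √2, whereas classical brute force parallelizes perfectly; this is why the “just double the key length” heuristic overstates Grover’s practical power.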
Read full article →
Community Discussion
The comments explore practical limits of larger symmetric key sizes, noting that extending AES beyond standard lengths offers marginal security gains while incurring performance costs, and that quantum threats such as Grover’s algorithm remain theoretical due to extreme resource requirements. Opinions converge on aggressive key rotation as a pragmatic mitigation, especially for elliptic‑curve signatures vulnerable to Shor’s algorithm. Discussions also address the feasibility of vastly increasing RSA/ECC key lengths, acknowledging diminishing returns and implementation challenges, while praising clear technical explanations of these concepts.