HackerNews Digest

December 31, 2025

I canceled my book deal

The author, an associate teaching professor at Carnegie Mellon, negotiated a technical programming‑project book with a major publisher in 2023. The contract specified a roughly 115,500‑word manuscript of 350–400 pages with 10–30 illustrations, a $5,000 advance paid in milestones, and royalties of 12% (print/e‑book, up to 7,000 copies) rising to 15% thereafter, plus 50% on foreign translations. Benefits cited for working with a publisher included structured progress, logistics, distribution, and credibility; drawbacks were frequent nudging, editorial control, low royalties, and limited marketing. Drafts were required every 3–4 weeks in AsciiDoc or Word, with an editor enforcing a style guide and repeatedly urging simplification plus an introductory Python chapter. After ChatGPT’s release, the publisher insisted on AI content, which the author rejected. Persistent deadline misses, editorial turnover, and personal events (a job change, a wedding) led to a “freeze” request and eventual contract termination, returning rights to the author. He now considers self‑publishing or posting the material as blog posts.
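The tiered royalty schedule described above can be made concrete with a small Python sketch. This is illustrative only: it assumes royalties are computed per copy on a single fixed price, whereas real contracts typically distinguish list price from net receipts, a detail the summary does not specify.

```python
def domestic_royalties(copies_sold: int, price_per_copy: float) -> float:
    """Royalties under the summarized schedule: 12% on the first
    7,000 print/e-book copies, 15% on every copy after that."""
    base_copies = min(copies_sold, 7_000)
    bonus_copies = max(copies_sold - 7_000, 0)
    return price_per_copy * (0.12 * base_copies + 0.15 * bonus_copies)

# 10,000 copies at a hypothetical $40 price: 0.12*7,000*40 + 0.15*3,000*40,
# i.e. about $51,600 -- which shows why the $5,000 advance only matters
# if the book sells very few copies.
earned = domestic_royalties(10_000, 40.0)
```

Seen this way, the advance is a floor rather than the payout: milestone payments are recouped against royalties, so sales beyond a modest threshold are what actually determine the author's income.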
Read full article →
The comments reveal mixed views on traditional technical publishing versus self‑publishing. Many authors appreciate professional editing, credibility, and potential reach from established houses, yet criticize limited marketing, low royalties, and pressure to add AI‑related content that can compromise a book’s focus. Several contributors highlight successful experiences with publishers who respected author control, while others report strained negotiations, missed deadlines, or forced pivots toward AI trends. Overall, there is a growing inclination toward self‑publishing for greater autonomy, balanced against the value of publisher resources when aligned with the author’s vision.
Read all comments →

Privacy and control. My tech setup

The author argues that “privacy” is often misunderstood and should be reframed as “control” over digital identity, emphasizing that most users’ threat models center on who can influence their information consumption, advertising exposure, and political persuasion. Based on a personal threat model, the author recommends the following tools and practices:

- **Password management:** Use a self‑hosted solution such as GNU pass to avoid third‑party storage; Bitwarden is suggested for a richer UI.
- **Messaging:** Prefer Signal over platforms like WhatsApp; Venmo is disabled.
- **Mobile OS:** Run GrapheneOS on Android to sandbox apps, restrict permissions, and optionally disable the Play Store and location services, which also improves battery life.
- **Email:** Operate a personal domain (e.g., [user]@example.com) and use Tuta for secure, affordable email without self‑hosting.
- **Web browsing:** Firefox with Privacy Badger and uBlock Origin to block tracking; social media accessed only via containerized browsers.
- **Calendars/contacts:** Self‑host CalDAV on a Raspberry Pi (sabre.io/baikal) and sync via DAVx⁵.
- **Domain registration & DNS:** Use Cloudflare Registrar for lower renewal costs and Cloudflare’s 1.1.1.1 DNS, citing perceived alignment of incentives.

The post cites recent reporting on Facebook’s covert Snapchat monitoring and on Meta and Yandex de‑anonymizing Android browsing as context for these concerns.
Read full article →
The comments express cautious support for privacy‑focused mobile OSes such as GrapheneOS, noting practical obstacles like incompatibility with apps that rely on Google Integrity APIs and the risk of losing access to essential services, especially in jurisdictions with digital‑only government requirements. Participants acknowledge the need to evaluate workflow breakage before switching and appreciate users who adopt such systems. Skepticism is voiced toward large infrastructure providers, particularly Cloudflare, with concerns about control and potential abuse. Alternative tools such as NetGuard and uBlock Origin are mentioned as viable compromises.
Read all comments →

The compiler is your best friend

The article explains how compilers transform source code—parsing, type‑checking, optimizing, and generating output—and why treating them as allies prevents runtime failures. It contrasts ahead‑of‑time compilation (e.g., Rust’s ownership and borrow checking, which eliminate whole classes of memory‑safety bugs) with just‑in‑time compilation in Java, where bytecode is later optimized by the JVM, and transpilation in TypeScript, which layers a structural type system onto JavaScript while preserving gradual typing. The discussion highlights self‑hosting and bootstrapping as milestones of language maturity. The second part critiques common “lies” developers tell compilers: treating nullable values as non‑null, ignoring unchecked exceptions, and using unsafe casts. These practices hide runtime hazards from static analysis, leading to NullPointerExceptions, unexpected failures, and brittle code. Supplying accurate compile‑time information—precise types, nullability annotations, and no unnecessary casts—lets the compiler enforce safety and improves reliability in codebases of any size.
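The “lie to the compiler” pattern can be sketched in any gradually typed language. The article’s examples presumably target Java and TypeScript; the hypothetical Python version below uses `typing.cast` as an analogue of an unsafe cast, since `cast` tells the type checker to trust an assertion it cannot verify.

```python
from typing import Optional, cast

def find_user(user_id: int) -> Optional[str]:
    # The honest signature: the lookup can fail, and the type says so.
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

def greet_dishonest(user_id: int) -> str:
    # The "lie": cast away Optional so the checker stops complaining.
    # The hazard is now invisible to static analysis and resurfaces
    # at runtime as a TypeError when the lookup returns None.
    name = cast(str, find_user(user_id))
    return "Hello, " + name

def greet_honest(user_id: int) -> str:
    # The truth: handle the None branch the signature advertises.
    name = find_user(user_id)
    if name is None:
        return "Hello, stranger"
    return "Hello, " + name
```

The dishonest version type-checks cleanly but crashes on an unknown id, while the honest one forces the failure case to be handled where the type checker can see it, which is exactly the trade the article argues for.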
Read full article →
Comments emphasize a desire to separate pure business logic from side‑effects, noting functional‑core‑imperative‑shell designs are conceptually appealing yet often hard to apply in real code. Opinions on error handling split between using asserts and logging versus letting programs crash, with many viewing explicit checks as basic practice. Rust’s safety and memory management receive praise, though its “zero‑cost” claim and broader hype are questioned, while C++’s standard library is seen as mitigating memory pain. Strong typing and compiler checks are valued, but growing toolchain complexity, documentation gaps and perceived bloat generate frustration.
Read all comments →