Television is 100 years old today
Summary
John Logie Baird’s first television demonstrations began in the early 1920s. After a health‑related discharge from WWI, he built a rudimentary transmitter in a Hastings flat (21 Linton Crescent) using a hat‑box, tea‑chest, darning needles and bicycle‑light lenses; the initial image was the shadow of a St John’s Ambulance medal, now displayed at Hastings Museum. In November 1924 he moved to the attic of 22 Frith Street, Soho, where he refined his “Televisor”—an electro‑mechanical system employing a spinning disc with lenses, a shutter, and a light‑sensitive cell. On 26 January 1926 he gave the first public press demonstration, transmitting a ventriloquist’s dummy (later a human subject, William Taynton) and faces between rooms. The Times reported the apparatus as a rotating disc‑based scanner producing varying currents that drove a synchronized receiver to render images on a ground‑glass screen.
Baird’s later work included Phonovision (image recording on 78 rpm discs), Noctovision (infra‑red TV), and 1928 colour and stereoscopic demonstrations. Baird’s mechanical approach competed with EMI’s electronic Emitron camera: the 1936 BBC launch alternated between his 240‑line mechanical system and Marconi‑EMI’s 405‑line electronic system; the electronic system proved superior and replaced Baird’s after three months. Baird’s studios were destroyed in the Crystal Palace fire, his company entered receivership during WWII, and he died in 1946, a week after the BBC resumed broadcasts. Plaques at 22 Frith Street commemorate the world’s first public live‑TV demonstration.
Read full article →
Community Discussion
Comments blend nostalgia for early CRT technology and curiosity about alternate designs with criticism of television’s cultural impact. Contributors recount personal experiences with long‑lasting CRTs, note historical milestones, and debate inventors such as Baird, Farnsworth, and Zworykin. Many view modern screens and 24‑hour news as degrading civic discourse, while others appreciate classic programs and the medium’s role in their youth. Overall sentiment is mixed: appreciation for television’s pioneering era coexists with concern over its present‑day influence and a desire for thoughtful alternatives.
ChatGPT Containers can now run bash, pip/npm install packages and download files
Summary
ChatGPT’s container environment has been expanded to run Bash commands and support multiple runtimes beyond Python. It now includes Node.js for JavaScript and can execute “Hello World” programs in Ruby, Perl, PHP, Go, Java, Swift, Kotlin, C, and C++ (Rust is still absent). A new built‑in tool, **container.download**, fetches files from publicly reachable URLs into the sandboxed filesystem, but only after the URL has been shown to the user or retrieved via **web.run**, mitigating injection risks. Package installation is possible via **pip** and **npm** through an internal proxy (`applied-caas-gateway1.internal.api.openai.org`), configured by environment variables such as `PIP_INDEX_URL`, `PIP_TRUSTED_HOST`, and `NPM_CONFIG_REGISTRY`. The container remains unable to make unrestricted outbound network requests. The post also provides a full inventory of available tools (e.g., `python.exec`, `web.run`, `container.exec`, `container.download`, automation and Google‑service tools, canvas utilities, image generation, and user settings). These enhancements enable multi‑language coding, external file acquisition, and package management directly within a chat session, though official documentation is still pending.
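As a rough illustration of how the proxy‑based package installation might be exercised from inside the container, here is a minimal Python sketch. The variable names come from the post; whether they are set in any given session, and the exact proxy behaviour, are assumptions.

```python
import os
import subprocess
import sys

# Variable names taken from the post; their presence in a session is an assumption.
for var in ("PIP_INDEX_URL", "PIP_TRUSTED_HOST", "NPM_CONFIG_REGISTRY"):
    print(f"{var}={os.environ.get(var, '<unset>')}")

# pip reads PIP_INDEX_URL / PIP_TRUSTED_HOST from the environment, so a plain
# install should be routed through the internal proxy if those variables are set.
subprocess.run([sys.executable, "-m", "pip", "install", "requests"], check=True)
```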
Read full article →
Community Discussion
The comments discuss a shift toward compiled languages as LLMs become capable of generating code across many ecosystems, noting Go’s fast compilation and binary portability as advantages. There is enthusiasm for expanded tool‑calling and sandboxed execution features in chat models, alongside curiosity about compute limits and integration with package managers. Concerns appear about security, dependency management, and potential misuse such as crypto mining. Opinions are mixed, with some praising the new capabilities and others expressing frustration with regressions, interface quirks, and unclear documentation.
The Hidden Engineering of Runways
Summary
September 2025 saw three U.S. runway‑overrun incidents in which the crushable material at the runway ends collapsed under the aircraft’s weight exactly as designed, and the engineered safety systems prevented fatalities. Runway design balances safety, cost, and site constraints: length is chosen based on the “critical aircraft” and adjusted for temperature, elevation, and runway slope (each 1 % of downhill gradient adds 10 % to landing distance). Orientation must provide ~95 % wind coverage for the design aircraft, often requiring perpendicular runways when prevailing winds lack a dominant direction. Surface engineering includes a central crown, drainage, and grooving to mitigate hydroplaning; friction is regularly measured and the surface re‑textured when polishing or rubber buildup reduces grip. Pavement structures consist of subgrade, optional drainage layer, sub‑base, base course, and surface course; materials are either rigid concrete (longer life, higher cost) or flexible asphalt (cheaper, relies on underlying layers for load distribution). Takeoff loads dominate design life, so pavement must sustain high stress cycles. Additional features—displaced thresholds, blast pads, runway safety areas (RSAs), and Engineered Materials Arresting Systems (EMAS) made of crushable concrete or foamed glass—provide clearance and energy absorption for overruns.
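To make the slope rule concrete, here is a hypothetical worked example (illustrative arithmetic only, using the 10 %-per-1 %-downhill rule quoted above; the numbers are invented, not from the article):

```python
def adjusted_landing_distance(base_distance_m: float, downhill_slope_pct: float) -> float:
    # Rule of thumb quoted in the summary: each 1 % of downhill slope
    # adds roughly 10 % to the required landing distance.
    return base_distance_m * (1 + 0.10 * downhill_slope_pct)

# Invented numbers for illustration: a 2,000 m baseline with a 1.5 % downhill slope.
print(adjusted_landing_distance(2000.0, 1.5))  # -> 2300.0 m
```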
Read full article →
Community Discussion
The comments convey strong, consistently positive sentiment toward the channel’s engineering‑focused videos, highlighting the clear, no‑nonsense explanations, inclusion of full transcripts, and high production quality. Viewers repeatedly mention learning about runway systems such as EMAS, runway stress patterns, lighting design, and practical project experiences from university collaborations with airports. There is a shared desire for more content of similar depth and relevance, alongside brief notes that broader YouTube recommendations have declined, while a few users express curiosity about specific technical details like runway illumination.
Any application that can be written in a system language, eventually will be
Summary
The post revisits Atwood’s Law—“any application that can be written in JavaScript will eventually be written in JavaScript”—and proposes a new corollary: “Anything that can be written in a systems language will eventually be written in a systems language by an LLM.” The author argues that, as of 2026, economic pressures (serverless billing, energy costs) and AI assistance have shifted development from interpreted languages (Python, Ruby, JavaScript) toward compiled languages such as Rust and Go. Benchmarks show Python can be up to 70 × less energy‑efficient than Rust, and Go/Rust can deliver tenfold higher throughput on identical hardware. Historically, steep learning curves limited adoption of Rust/Go, but large language models now mitigate these barriers by handling syntax and compiler constraints, making “vibe coding” safer in strict languages. The author cautions against wholesale rewrites of existing systems but suggests that new greenfield projects should prioritize Rust/Go for performance‑critical components, while retaining Python for AI‑focused orchestration.
Read full article →
Community Discussion
The comments express mixed attitudes toward language choice and tooling. Many highlight a preference for fast compilation and straightforward development cycles, criticizing Rust’s borrow checker and Go’s performance relative to C while praising Zig’s speed and metaprogramming. Skepticism is voiced about predictions that most code will be generated by LLMs, viewing such claims as unrealistic for serious companies. At the same time, users acknowledge LLMs as valuable assistants for navigating complex Rust errors and suggest that AI‑driven code generation could eventually fill gaps like a comprehensive web framework for Rust.
AI code and software craft
Summary
The essay argues that AI has amplified the production of low‑quality, metric‑driven content by reducing the effort required to generate “good enough” output. Drawing on Jacques Ellul’s notion of “technique,” it describes how platforms prioritize engagement and revenue over craft, citing Spotify’s algorithmic playlists versus Bandcamp’s album‑focused model as an example. In software, large‑tech firms are portrayed as “plumbing” operations that produce bloated, poorly designed systems, with engineers confined to narrow, rote tasks. AI agents can automate such repetitive work, but they lack understanding, often generate verbose or buggy code, and cannot replace the broader skill set and critical thinking required for high‑quality software. The author calls for a revival of craftsmanship in computing, referencing the Arts and Crafts movement and suggesting exploration of historic programming paradigms (e.g., Forth, Unix). While AI may increase mass‑produced software, it could also free space for human‑centric, experimental projects that emphasize genuine engineering craft.
Read full article →
Community Discussion
Comments portray AI coding tools as practical for routine tasks like boilerplate and glue code, while noting persistent weaknesses in system‑level reasoning, security, and edge‑case handling. Opinions diverge between those who view the technology as a productivity enhancer that reshapes engineering roles and those who fear it erodes craftsmanship and deep judgment. Skepticism also extends to broader societal impacts, including job displacement and political ramifications. Overall, the consensus acknowledges AI’s utility but stresses the need for careful integration, quality oversight, and awareness of its limits.
There is an AI code review bubble
Summary
Greptile notes a rapid expansion of AI‑driven code review tools, listing major players (OpenAI, Anthropic, Cursor, Augment, Cognition, Linear) and pure‑review agents (Greptile, CodeRabbit, Macroscope, YC startups). Its differentiation rests on three pillars:
- **Independence:** Greptile separates the reviewer from any code‑generation function, refusing to ship a coding agent and arguing that an auditor should not author the code it validates.
- **Autonomy:** The service is built as a background automation (“pipes”) without a dedicated UI, aiming for near‑full automation of code validation—review, test, and QA—so human engineers focus on design and intent.
- **Feedback loops:** Integration with Claude Code lets an LLM fetch and resolve Greptile comments iteratively, looping until no new issues appear; ambiguous cases trigger Slack alerts to the human.
Greptile envisions a workflow where a human issues a ticket, a coding agent creates a PR, and an independent review agent approves and merges it. It cites adoption by enterprise customers, including two of the “Mag7,” and positions its approach as a long‑term, low‑switch‑cost solution in a market where code‑validation automation is expected to dominate.
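A minimal sketch of the review‑fix loop described above, under stated assumptions: the helper functions below (fetch_review_comments, apply_fixes, alert_human_on_slack) are hypothetical placeholders for the Greptile, coding‑agent, and Slack integrations, which the post does not spell out.

```python
def fetch_review_comments(pr_id: str) -> list[str]:
    """Placeholder for pulling open review comments on a PR."""
    return []

def apply_fixes(pr_id: str, comments: list[str]) -> None:
    """Placeholder where a coding agent would address each comment."""

def alert_human_on_slack(pr_id: str, comments: list[str]) -> None:
    """Placeholder escalation path for ambiguous findings."""

def review_loop(pr_id: str, max_rounds: int = 5) -> None:
    # Loop until the reviewer raises no new issues, or hand off to a human
    # when a finding is ambiguous, mirroring the workflow described above.
    for _ in range(max_rounds):
        comments = fetch_review_comments(pr_id)
        if not comments:
            return  # no new issues: the PR can be approved and merged
        ambiguous = [c for c in comments if c.startswith("unclear:")]
        if ambiguous:
            alert_human_on_slack(pr_id, ambiguous)
            return
        apply_fixes(pr_id, comments)
```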
Read full article →
Community Discussion
The comments express mixed views on AI‑driven code review. Many users report low signal‑to‑noise ratios, limited contextual understanding, and redundancy with existing linters or human reviewers, seeing the tools as premature or marketing‑driven. Some note occasional bug detection that humans miss, but overall trust remains low, with concerns about over‑reliance, integration hurdles, and loss of nuanced judgment. A few highlight successful integrations or specific use cases, while the dominant sentiment questions the current value, feasibility, and strategic focus of such products.
JuiceSSH – Give me my pro features back
Community Discussion
Comments convey widespread disappointment with JuiceSSH’s recent decline, highlighting unresponsive support, broken pro‑feature purchases, and the disappearance of cloud‑sync services, which many interpret as neglect or possible abandonment. Users recall the app’s former status as a top Android SSH client but note its degraded functionality and missing updates. Consequently, several recommend switching to alternatives such as Termux, Termius, ConnectBot, or the newer Android Terminal app, while expressing concerns about the security of stored keys and the lack of source code. A minority remains cautiously optimistic that the developers may resume activity.
Apple introduces new AirTag with longer range and improved findability
Summary
Apple announced the second‑generation AirTag, retaining the $29 single‑unit and $99 four‑pack pricing. Key upgrades include a new Ultra‑Wideband (UWB) chip (shared with iPhone 17, iPhone Air, Watch Ultra 3, and Watch Series 11) that extends Precision Finding range by up to 50 % and brings Precision Finding to Apple Watch Series 9/Ultra 2 or later. An upgraded Bluetooth chip lengthens overall detection distance, while a redesigned speaker is 50 % louder and audible from roughly twice the previous range.
The device integrates with the Find My network for crowdsourced location reporting and introduces “Share Item Location,” allowing users to temporarily share an AirTag’s position with authorized partners (e.g., airlines). Early data from SITA indicate a 26 % drop in baggage delays and a 90 % reduction in permanently lost luggage when this feature is used.
Privacy remains protected through end‑to‑end encryption, frequent Bluetooth identifier rotation, and no on‑device storage of location history. The AirTag is built with 85 % recycled plastic, 100 % recycled rare‑earth magnets, and recycled gold on its PCB, and remains compatible with existing accessories such as the FineWoven Key Ring. Minimum requirements are iOS 26/iPadOS 26 and an iCloud‑linked Apple ID.
Read full article →
Community Discussion
Comments reflect mixed but generally favorable views of AirTags, highlighting their effectiveness in recovering lost or stolen items, ease of use, affordable price and recent environmental improvements. Recurrent criticisms include the anti‑stalking alerts limiting theft‑prevention usefulness, the unchanged small form factor restricting placement on cameras or wallets, lack of Android compatibility, and concerns about speaker permanence and privacy misuse. Users also request longer range, louder alerts, alternative shapes, and a dedicated anti‑theft mode, while noting variable police responsiveness and occasional annoyance from constant beeping.
RIP Low-Code 2014-2025
Summary
Low‑code platforms were created to let non‑technical users build production‑ready applications with minimal code, freeing developer capacity and accelerating delivery. Forrester projected the market to reach $50 billion by 2028, and adoption has been strong. The emergence of AI‑driven, “agentic” coding tools changes the ROI calculus: AI can generate functional code faster and cheaper than integrating and maintaining external low‑code solutions, while avoiding the extra total‑cost‑of‑ownership those platforms impose. Cloud Capital’s experience illustrates this shift—initially reliant on Retool for internal dashboards and workflows, the team prototyped a standalone tool using AI‑assisted coding, found it faster, more maintainable, and better aligned with their product UI, and subsequently migrated all admin tooling away from Retool within a few sprints, achieving cost and velocity gains. Low‑code vendors are responding by adding AI features, but it remains uncertain whether this will preserve market share. Build‑vs‑buy decisions now focus on speed, financial and maintenance costs, vendor lock‑in, and organizational complexity, with AI‑enabled in‑house development increasingly favored.
Read full article →
Community Discussion
Comments reflect a nuanced view of low‑code’s future. Many acknowledge its continued relevance for non‑technical users, citing visual guardrails, easier deployment, and reduced maintenance overhead. Others argue that AI‑driven agents will blur the line between low‑code and code generation, potentially diminishing the need for dedicated platforms while raising concerns about brittleness, token costs, and long‑term security. A common thread is that low‑code is expected to evolve rather than disappear, integrating with AI tools to improve productivity, yet its value will depend on how well it balances flexibility, stability, and operational simplicity.
People who know the formula for WD-40
Community Discussion
Comments converge on the view that WD‑40’s “secret” formula is essentially a known mixture of mineral oils and hydrocarbons, making its marketing of exclusivity appear as fluff. Users acknowledge it effectively displaces water and removes rust but criticize its short‑term lubrication, evaporation, and tendency to concentrate grime, recommending specialized lubricants such as silicone oil, lithium grease, PTFE‑based products, or penetrating oils for better results. The consensus also notes that the recipe can be reverse‑engineered, that brand trust drives sales, and that alternatives generally perform as well or better.