Sugar industry influenced researchers and blamed fat for CVD (2016)
Summary
- Internal sugar‑industry documents from the 1950s‑60s reveal a strategic effort to shift the narrative on coronary heart disease from sucrose to dietary fat and cholesterol.
- The industry anticipated that low‑fat diets would boost per‑capita sugar consumption by >33%, and therefore commissioned “Project 226,” a literature review conducted by Harvard School of Public Health nutritionists, including D. Mark Hegsted.
- Funded at the equivalent of $50,000 in 2016 dollars, the industry set the review’s objectives, supplied selected articles, and received drafts, yet the sponsorship went undisclosed in the 1967 New England Journal of Medicine publication.
- The review dismissed studies linking sucrose to heart disease, emphasized cholesterol as the sole risk factor, and minimized the role of triglycerides, thereby influencing public opinion and scientific consensus toward saturated‑fat reduction.
- UCSF researchers examined >340 documents (1,582 pages) involving Harvard scientists and chemist Roger Adams, concluding that industry‑driven bias underscores the need for conflict‑free reviews and full financial disclosure.
- Contemporary evidence now links added sugars to hypertension and cardiovascular disease, but health‑policy guidance remains inconsistent.
Read full article →
Community Discussion
Comments converge on skepticism toward industry‑funded nutrition research and the shifting focus from saturated fat to sugar, noting that both excess sugar and saturated fat are linked to cardiovascular risk. Many cite personal experiences of health improvement after reducing refined sugar or refined carbs, while others emphasize whole‑food patterns such as the Mediterranean diet and adequate protein. There is general agreement that transparent, multi‑source evidence is needed and that current dietary guidelines are viewed with mixed trust, prompting calls for balanced, evidence‑based recommendations.
Tailscale state file encryption no longer enabled by default
Summary
Tailscale 1.42.0 is the final release supporting Windows 7, Windows 8, Windows Server 2008, and Windows Server 2012; later versions will not install on these systems, though security updates for 1.42.0 are promised through at least 31 May 2024. macOS 10.13 users are advised to upgrade directly to version 1.44.0.
Key changes across platforms:
- **All platforms:** Added `tailscale serve reset` command to clear current serve configuration; internal DNS handling revised for mixed global and private DNS servers.
- **Linux:** Added SSH login support on systems lacking the `getent` utility.
- **Windows:** Switched to a new application‑signing certificate valid through 2025; notification icons refreshed.
- **macOS:** Updated Sparkle updater frequency; Taildrop now delivers incomplete files.
- **iOS:** “Delete Account” button now redirects to the admin panel; improved memory handling to stay below the 50 MiB limit.
- **Unraid:** Added support for Unraid as a NAS platform, comparable to Synology and QNAP.
- **Kubernetes:** Introduced support for `priorityClassName`.
Read full article →
Community Discussion
The discussion centers on Tailscale’s decision to disable node‑state encryption and TPM attestation by default. Engineers explain the change was driven by frequent TPM failures on diverse hardware, which created heavy support load and unexpected breakages. Users express disappointment, citing the feature’s security value and describing the off‑by‑default change as a foot‑gun, while seeking assurance that the underlying TPM issues will be resolved and the option re‑enabled by default. Overall sentiment combines technical justification with frustration over lost protection and implementation inconvenience.
Fighting back against biometric surveillance at Wegmans
Community Discussion
Comments express strong skepticism toward facial‑recognition and biometric tracking in grocery stores, arguing that cameras and data‑sharing are already pervasive and difficult to avoid even with masks or cash payments. Many cite broader surveillance contexts such as airports, law enforcement, and corporate data collection, and suggest legal limits on data retention rather than outright opt‑outs. Some note the shift to online shopping and remote work reducing in‑store visits, while others dismiss suggested privacy‑friendly alternatives as ineffective, highlighting the dominance of corporate and governmental adoption of the technology.
Eat Real Food
Summary
The “Eat Real Food” guidance emphasizes a diet centered on whole, nutrient‑dense foods. It calls for each meal to include high‑quality protein from animal (e.g., steak, chicken, salmon, ground beef, canned tuna, eggs) and plant sources, with a suggested intake of roughly 0.54–0.73 g protein per pound of body weight daily. Healthy fats should come from whole‑food sources such as eggs, seafood, meats, full‑fat dairy, nuts, seeds, olives, and avocados. The accompanying image list illustrates a broad range of food groups: proteins (meat, fish, dairy, legumes), fats (oil, nuts, seeds, butter), vegetables (broccoli, lettuce, tomatoes, peas, carrots, green beans, butternut squash), fruits (berries, bananas, apples, oranges, grapes), grains (bread, oats), and reference charts (food pyramid, dietary guidelines). The overall message is to prioritize real, minimally processed foods across all macronutrient categories.
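The suggested protein range above is simple arithmetic; a minimal Python sketch (the function name and rounding are illustrative, not from the article):

```python
def daily_protein_range(body_weight_lb: float) -> tuple[float, float]:
    """Return (low, high) grams of protein per day for a given body weight,
    using the article's suggested 0.54-0.73 g per pound of body weight."""
    LOW_G_PER_LB = 0.54   # lower bound from the guidance
    HIGH_G_PER_LB = 0.73  # upper bound from the guidance
    return (body_weight_lb * LOW_G_PER_LB, body_weight_lb * HIGH_G_PER_LB)

# Example: a 150 lb person lands at roughly 81-110 g of protein per day.
low, high = daily_protein_range(150)
print(f"{low:.0f}-{high:.0f} g protein/day")
```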
Read full article →
Community Discussion
Comments show a mixed but generally favorable reaction to the new dietary guidelines. Many appreciate the emphasis on protein, whole‑grain and vegetable intake, reduced allowances for sugary drinks, and clearer macro targets, noting personal health benefits and alignment with earlier medical advice. At the same time, criticism centers on perceived industry lobbying, the prominence of animal protein over plant alternatives, vague definitions of “highly processed” foods, and the complexity of the presentation. Several commenters urge stronger regulation of processed foods and express skepticism toward government nutrition advice given special‑interest influence. Overall sentiment leans positive but is tempered by concerns about bias and implementation.
Musashi: Motorola 680x0 emulator written in C
Summary
GitHub repository “kstenerud/Musashi” is a C implementation of a Motorola 680x0 CPU emulator. The page content is limited to a permission error message (“You can’t perform that action at this time”) and a series of image placeholders with alt text showing various usernames. No further technical details, code excerpts, or documentation are present in the provided text.
Read full article →
Play Aardwolf MUD
Summary
Aardwolf MUD is a free, text‑based role‑playing game set in the fantasy world of Andolor, featuring multiple continents, real geography and a real‑time line‑of‑sight overhead map. Players create characters from 28 classes—ranging from melee fighters (Soldiers, Knights, Barbarians) to magic users (Elementalists, Necromancers, Priests)—and choose race, guild, and profession. Gameplay supports solo or group play, offering hundreds of quests, puzzles, exploration, in‑game casino, clan participation, player‑vs‑player combat, global quests, crafting, enchantments, and private manors. An extensive in‑game help system and volunteer “helpers” guide newcomers, beginning with the “Aylorian Academy” starter area. Developers can use an embedded Lua interpreter to script area atmospheres, puzzles, and quests. Connection is available via the host aardwolf.org (23.111.142.226) on port 4000 (or port 23 if needed); a Java client is also provided. Support is reachable at [email protected].
Read full article →
Community Discussion
The comments express nostalgia for MUDs, crediting them for learning coding, typing, and community connections. Many recall specific games such as Abandoned Reality, MUME, Aardwolf, and others, highlighting positive experiences and lasting memories. Some note a decline after the early 2000s and criticize poor administrative changes that altered features or reduced player interaction. Overall sentiment remains supportive, urging preservation of the genre while acknowledging challenges and mixed experiences with recent management.
Shipmap.org
Summary
The Shipmap.org visualisation displays global merchant‑fleet movements for 2012 on a bathymetric base map. Data combine exactEarth AIS location/speed records with Clarksons Research static vessel information, cross‑checked by UCL Energy Institute to add engine, hull, and vessel characteristics. CO₂ emissions per hour are calculated using the IMO Greenhouse Gas Study 2014 methodology. Users can pan, zoom, and animate ship positions via a timeline, toggle layers (ports, routes, background), and filter or colour ships by five categories: Container (capacity in 20‑ft slots), Gas bulk (cubic metres), and Dry bulk, Tanker, and Vehicles (thousand tonnes). Counters show hourly CO₂ (thousands of tonnes) and freight capacity. The map uses WebGL, a custom GEBCO‑2014 bathymetry grid, and Natural Earth land/river data. First‑quarter data are incomplete, causing fewer early‑year ships. High‑resolution print versions are available, and embedding is permitted with attribution to Kiln and UCL. Funded by the European Climate Foundation.
Read full article →
Community Discussion
The comments are overwhelmingly positive, describing the visualization as striking, informative, and enjoyable, while highlighting its ability to reveal major shipping routes, seasonal port closures, geopolitical hubs, and economic impacts. Viewers note the usefulness of real‑time or newer data, suggestions for globe‑based projections, clearer land coloring, and integration with commodity or climate information. Several remarks discuss regulatory and environmental effects, such as fuel sulfur limits and emerging Arctic routes, and many request updated datasets or interactive features, but overall appreciation dominates.
NPM to implement staged publishing after turbulent shift off classic tokens
Community Discussion
The comments express widespread concern about npm’s current supply‑chain security model, highlighting limitations of Trusted Publishing such as narrow CI support, lack of first‑publish capability, and absent mandatory 2FA enforcement. Contributors report operational pain when adopting Trusted Publishers and call for human‑in‑the‑loop verification, better dependency analysis tools, and CLI updates to handle 2FA. Comparisons to Java’s namespace‑based ecosystem suggest a desire for stronger naming authority, while some advocate for larger, governed packages or a shift toward distro‑style package management to reduce reliance on numerous small, unaudited modules.
The Q, K, V Matrices
Summary
The attention mechanism in transformers relies on three learned linear projections of the input embedding matrix X: Query (Q = X·W_Q), Key (K = X·W_K), and Value (V = X·W_V). Each weight matrix W_Q, W_K, W_V has shape (d_model, d_k) (e.g., 4 × 3 in a toy example) and is initialized randomly but trained via back‑propagation. Q vectors encode “questions” for each token, K vectors act as searchable indices, and V vectors contain the information to be passed forward. Attention scores are computed as S = (Q · Kᵀ)/√d_k, yielding a (seq_len × seq_len) matrix that quantifies pairwise token relevance; after softmax, these scores weight the V matrix to produce the self‑attention output. The dimension d_k controls capacity: smaller d_k reduces compute and memory but limits expressiveness, while larger d_k (e.g., 64 or 512) enables richer relationships and is typical per head in multi‑head attention (e.g., BERT‑base’s 12 heads × 64 dims = 768 total). Separate weight matrices preserve distinct functional roles for queries, keys, and values, analogous to a database lookup system.
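The steps above can be sketched in a few lines of NumPy. The toy dimensions (seq_len = 4, d_model = 4, d_k = 3) echo the article’s example; the random weights and variable names are illustrative stand‑ins for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model, d_k = 4, 4, 3          # toy sizes from the article's example
X = rng.normal(size=(seq_len, d_model))  # token embeddings

# Learned projection weights (random here; trained via back-propagation in practice)
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V      # queries, keys, values

# Scaled dot-product scores: S = (Q · K^T) / sqrt(d_k), shape (seq_len, seq_len)
S = Q @ K.T / np.sqrt(d_k)

# Row-wise softmax turns scores into attention weights that sum to 1 per token
weights = np.exp(S - S.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

output = weights @ V                     # (seq_len, d_k) self-attention output
print(output.shape)
```

Subtracting the row maximum before exponentiating is a standard numerical-stability trick; it leaves the softmax result unchanged.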
Read full article →
Community Discussion
The comments express strong interest in understanding attention mechanisms, emphasizing that many machine‑learning concepts can be framed through kernel methods and recommending deeper resources such as Raschka’s book and archived tutorials. Readers commonly note difficulty grasping the “query‑key‑value” database‑lookup metaphor, preferring a linear‑algebraic view that treats attention as a quadratic transformation linking all tokens. There is a shared preference for cross‑attention explanations over self‑attention, and a general consensus that thorough study, including multiple chapters or detailed texts, is needed for solid comprehension.
The virtual AmigaOS runtime (a.k.a. Wine for Amiga:)
Summary
The referenced GitHub page “amitools/docs/vamos.md” could not be displayed; the site returned a “You can’t perform that action at this time” error, indicating that the content is unavailable or access-restricted. No additional information from the file is provided.
Read full article →
Community Discussion
The comment reveals a misunderstanding of the project’s purpose, interpreting “Wine for Amiga” as the ability to run Wine on Amiga hardware rather than the intended function of allowing Amiga applications to execute on a PC. The tone is inquisitive and seeks clarification, without expressing a strong positive or negative judgment about the software itself.