
100%

Writing such an article without mentioning nuclear power is a sign of dishonesty.

Wind and solar can't stand alone, since they only produce when nature allows. They're a perfect match for hydro, but we don't all live in the Himalayas. Most countries (e.g. Germany) burn gas and coal to supplement.

Nuclear is the only technology suited for decarbonization, and once you have it, you don't need solar and wind, because 95% of the cost is in the construction. Since you'll build it to sustain peak demand, wind and solar are just extra costs.


Among the "scads of utter online shit" is the submitted website, which it mostly agrees with. It also agrees with most of the comments here.

But the thing is, I used an LLM because I want to see the "scads of utter online shit". The "greatest intellectual achievement" is an opinion reflecting what people think matters most, not an empirical fact. And what I want is something approaching a global consensus, not what the HN bubble thinks matters. And for that, I think LLMs have value.

And anyway, what LLMs say generally matches what people are saying here, so unless you're implying that we're all talking shit, I don't really see the problem.


I think you underestimate the amount of knowledge needed to deal with the complexities of language in general as opposed to specific applications. We had algorithms to do complex mathematical reasoning before we had LLMs, the drawback being that they require input in restricted formal languages. Removing that restriction is what LLMs brought to the table.

Once the difficult problem of figuring out what the input is supposed to mean was somewhat solved, bolting on reasoning was easy in comparison. It basically fell out with just a bit of prompting, "let's think step by step."

If you want to remove that knowledge to shrink the model, we're back to contorting our input into a restricted language to get the output we want, i.e. programming.
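To make the "restricted formal language" point concrete, here is a toy example of my own (not from the comment): a tiny evaluator that reasons perfectly well over arithmetic, but only after a human has translated the question into its rigid prefix syntax. The natural-language-to-formal step is exactly the part the pre-LLM tools left to us.

```python
# A toy "reasoner" that only accepts a restricted formal language:
# nested prefix tuples like ("+", 2, ("*", 3, 4)). It computes correctly,
# but you must do the natural-language-to-formal translation yourself.
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    raise ValueError(f"unknown operator: {op}")

# "Three times four, plus two" -- the human does the parsing:
print(evaluate(("+", 2, ("*", 3, 4))))  # 14
```

Removing the LLM's language knowledge leaves you with this: a correct engine that is useless until you contort your question into its input format.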


ComputerPoker.ai is a website where users can play simulated poker tournaments against GTO Bots to learn GTO poker strategy in a fun and low-risk environment.

My motivation for creating ComputerPoker.ai was feeling a bit overwhelmed by some of the professional poker tools out there for learning GTO play. For some tools, learning how to simply operate the tool itself felt like a second job. With ComputerPoker.ai, players can play against bots that simulate GTO play to learn what it "feels like" to play GTO vs. GTO opponents without having to turn any knobs or dials (feedback is real-time as you play).

The Beta tester code for HN users is: HackerNews2026. All feedback is welcome! Please send suggestions for improvement or bug reports to contact@computerpoker.ai, or alternatively leave a comment below. I'll do my best to answer any questions.

As for the product offering: the website is designed to teach players optimal poker strategy (GTO) in simulated Texas Hold 'Em tournaments. Our value proposition is that if you can consistently beat the bots, you will fare well in live poker tournaments (adjusting for your opponents' play, of course).

In addition to GTO pre-flop quizzes and pre-flop charts, users have the ability to simulate poker tournaments from start-to-finish and get feedback on their decisions _in real-time_ in a fun and low-risk environment.

For those interested, the tech stack is Django deployed on AWS via Terraform and SaltStack, the database is a Postgres RDS backend, and the frontend uses HTMX with WebSockets via Django Channels and Redis (Nginx serves as reverse proxy, with Cloudflare DNS and SSL). During the project I used Claude Code to help with various boilerplate aspects of the code base, including building out the repos for Terraform and SaltStack, and of course speeding up Django development.

Users are graded pre-flop against the covered pre-flop scenarios (two-way pots only for now). Post-flop, users are graded by a residual MLP PyTorch model. We have built an in-house solver in Rust using the discounted CFR++ algorithm. The PyTorch model approximates GTO play post-flop (again, two-way pots only currently) based on training data with raises, EV, and realistic ranges for OOP and IP players. Because the post-flop decisions come from a model that will always be a work in progress, I refer to them as GTOA (or "GTO Approximate").
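Since the actual architecture isn't shown, here is the core idea of a residual MLP block, sketched in plain NumPy (all dimensions and weights are illustrative assumptions, not the site's real model): each block adds its input back to the transformed output, which helps deeper value networks train stably.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_mlp_block(x, w1, b1, w2, b2):
    """One residual block: out = relu(x + MLP(x)).

    The skip connection (adding x back in) is what makes it
    'residual'; without it this would be a plain two-layer MLP."""
    hidden = np.maximum(x @ w1 + b1, 0.0)          # linear + ReLU
    return np.maximum(x + hidden @ w2 + b2, 0.0)   # project back, add input

# Illustrative sizes only: 8-dim features, 16-dim hidden layer.
dim, hidden_dim = 8, 16
w1 = rng.normal(size=(dim, hidden_dim)) * 0.1
b1 = np.zeros(hidden_dim)
w2 = rng.normal(size=(hidden_dim, dim)) * 0.1
b2 = np.zeros(dim)

x = rng.normal(size=(2, dim))                      # a batch of 2 game states
out = residual_mlp_block(x, w1, b1, w2, b2)
print(out.shape)  # (2, 8)
```

A real post-flop grader would stack several such blocks and end with a head that emits action probabilities or EVs, but the skip-connection idea is the same.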

Version 8 of the PyTorch model is the first one that I am happy with, and I actually find it quite difficult to play against. If you manage to beat the bots, please do let me know how many tries it took! For those curious, the PyTorch params for the most recent run are below (I trained on a gaming PC under Linux WSL2 using an AMD GPU).

The website is live in Beta mode as I gather feedback on how things are structured and work out any bugs/kinks. If you have any suggestions for improvements I’d love to hear them. Subscriptions are live so if anyone wanted to test the Stripe payment processing flow I certainly wouldn’t mind! ;-)

p.s. This is a side gig for me. I am currently looking for full-time work, either fully remote or on-site in London, UK (the LLC that runs ComputerPoker.ai operates out of the USA, but I am based full-time in the UK and authorized to work in both the UK and the USA). If you or someone you know is looking for an SRE with strong software engineering skills, please let me know!


Java and Ruby were created in 1995.

Lua in 1993.

Python in 1991.

C in 1972.

Lisp in 1960.


At some point apps should just start pointing the finger at the cause of these problems. Linking users with Spanish IPs to a page explaining soccer internet censorship won't stop the bad reviews entirely but at least it'll be more useful than doing nothing.

Moving over to the management track and trying to not suck at it.

This is insane. I wanted this so, so much since the very first macOS that had Spaces. I am so grateful.

> Every ambient condition that you need to track adds mental load

Thus it's wise to limit the complexity of your code. If it starts getting difficult, it might be time to break it down into smaller, more understandable pieces.


I'm currently developing a dapo star. Trying to get fabrics, designs, and make a nice lil group where we can teach and learn tricks :)))

It's not software, but that's what I enjoy making right now


We're against chronological timelines composed of people you follow for being "group think" now?

Just in case it wasn't clear, what they described doesn't need extra tooling. You can write this in your CLI and it will easily cap a Max 20x plan in an hour: "we are converting this entire codebase from TS to C#. Following the guidelines I've written in MIGRATION.md, convert each file individually. Use up to 32 parallel subagents. Track your work for each file in a PROGRESS.md file, which you will update for each file starting and completing. Using an agent team, as a secondary step, add a verification layer where you verify each file individually for accurate migration following the instructions in VERIFICATION.md"

Yeah, there are other ways to do this; you can set up a separate harness to make it more efficient, sure, but just the above will also work. It's just text you paste into your CC terminal, and it will absolutely cap the largest subscription plan available, no problem.


We run everything through a custom wrapper that logs all shell invocations to a separate Vector pipeline before execution, helps with audit trails, but doesn't really solve the problem of "what if the model decides to rm -rf /". Are you planning any kind of capability-based sandboxing, or just hoping the model doesn't get weird with API credentials it has access to? fwiw that's the bigger risk in our setup.

How timely. I was just moving off Github to self-hosted docker registries.

Remember when Google added Car Crash Detection to Pixel in early 2020? Nobody does.

But when Apple added it in iPhone 14 (2022)...


What you will do:

- Work closely with several professions as an individual contributor: act as a bridge between data scientists, software engineers and product teams, ensuring alignment on recommendation goals and priorities
- Collaborate with stakeholders to understand product objectives, identify key performance indicators (KPIs), and translate them into measurable goals for recommendation performance
- Collaborate with our core shop analytics team on cross-team initiatives and jointly develop cross-stack data deliverables
- Develop and maintain tracking systems, dashboards, and tools to monitor recommendation system performance and user behavior
- Continuously analyze data and update relevant dashboards so that product managers have timely access to recommendation performance data
- Analyze user behaviour, recommendation success metrics, and click-through data to improve recommendation relevance, usability and overall performance
- Provide insights to Product Managers on how users interact with the recommendation function, highlighting areas where the user journey can be optimized for efficiency and satisfaction
- Identify areas where data processes can be automated or streamlined to increase team efficiency

Who you are:

- Advanced SQL (Python is a plus)
- Experience working with tracking data (GA4 a plus)
- Experience with version control and data warehouses (GitLab, GBQ or similar)
- Experience with data visualization tools (e.g. Looker Studio or similar)
- Experience with data mining and cleaning
- Knowledge of user behavior analysis and visualization
- Experience building interactive dashboards (e.g. using Streamlit, Voila, Gradio)
- Able to work on and drive topics independently
- Excellent analytical thinking, a passion for working with numbers, and the ability to deliver thoughtful solutions based on data analyses
- Excellent communication skills to convey complex analytical findings in a clear and actionable manner
- Self-driven, motivated and organized

Nice to have:

- Experience with BigQuery
- Familiarity with statistical testing and Gaussian Processes
- Knowledge of Computer Vision libraries (e.g. OpenCV, TensorFlow, PyTorch)
- Experience with GCP or AWS, including infrastructure-as-code and CI/CD pipelines
- Practical knowledge of Docker

Benefits:

- Hybrid working
- Fresh fruit every day
- Sports courses
- Free access to code.talks
- Exclusive employee discounts
- Free drinks
- Language courses
- Free Laracasts account
- Company parties
- Help with the relocation process
- Mobility subsidy
- State-of-the-art technology
- Central location
- Flexible working hours
- Company pension
- Professional training
- Dog-friendly office
- AY Academy
- Feedback culture
- Job bikes

YOU ARE THE CORE OF ABOUT YOU.

We take responsibility for creating an inclusive and exceptional environment where all genders, nationalities and ethnicities feel welcomed and accepted exactly as they are. We believe that a diverse workforce essentially contributes to the ABOUT YOU culture. In order to maintain talent and diversity, we emphasize the care for physical health, mental health and overall well-being. Our values and work ethics essentially contribute to our brand mission: empower acceptance and shape an inclusive, fair and circular fashion culture.

We look forward to receiving your application – preferably via our online application portal! This way we can ensure a faster process, and it is very easy for you to upload your application documents. :-)


Agents everywhere!!

Do you like what I've done with the place?


Adding a scheduler to my hobby kernel, with a full shell as the next goal, and writing an inference engine from scratch in C++. It's been fun.

On the one hand, you have money and famous footballers. On the other hand, you have a bunch of nerds whining about the internet being broken. The average voter (and politician) is out watching the soccer match, and doesn't care about the internet.

Tell your developers to start logging the exception, not just a hard-coded error message.
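A minimal sketch of the difference in Python (function and path names are illustrative): `logger.exception` records the actual exception and its traceback, where a hard-coded message hides the real cause.

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("app")

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        # Bad:    logger.error("Could not load config")  -- real cause is lost.
        # Better: logger.exception(...) appends the exception and traceback
        #         to the log record automatically.
        logger.exception("Could not load config from %r", path)
        raise
```

Now the log shows whether it was a missing file, a permission error, or something else entirely, instead of the same opaque message every time.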

You're right, there is plenty of space for features that require AI to work but that are indistinguishable from "classical" features. Better autocompletion is a proven example.

They're not on my For You feed, but anytime I watch a video on Twitter and it automatically jumps to a next video, it's always something that would never be on my feed. These days, it's usually bodycam footage from the United States (which I don't live in).

Apple's accidental moat now is that they can let the AI-driven rise in hardware prices eat into their margins and just expand the Mac user base.

Password manager is the right first step. A few others worth adding:

  - A "death folder" document (encrypted, in the password manager) listing every
    account, what's in it, and what to do with it. Google/Apple both have inactive
    account managers built in — most people don't know they exist.

  - For photos: make sure at least one copy lives somewhere your wife already has
    independent access to, not just through your credentials.

  - For things that matter: write it down in plain language. Not just passwords —
    context. Why you kept something, what it means.

  On the AI side: I've been thinking about this in the context of agent memory
  persistence (building Cathedral — a memory layer for AI agents). The same problem
  exists: identity and context that only lives in one place, not transferable.
  Structured memory exports are something we're working toward for exactly this reason.

I wasn't strictly speaking about HNers. Using NordVPN and the likes is already done by slightly savvier users. Just look at where those products are advertised.

Spinning up and provisioning a VPS to act as a VPN exit node in some other country raises the bar 10x or more.


I mean the app itself, not really the landing page if that's what you're referring to?

I appreciate your kindness. While I’ve got you, did you know that the Benny Hill show started in 1955 and a good chunk of what aired from then to 1969 was lost? There are a lot of fans that don’t even realize that what is sometimes labeled as season 1 is season 15! Crazy stuff!

> The finding I did not expect: model quality matters more than token speed for agentic coding.

I'm really surprised how that was not obvious.

Also, instead of limiting the context size to something like 32k, you can offload the MoE layers to the CPU with --cpu-moe, at the cost of roughly halving token generation speed.


I've seen evidence that reading a trigger warning and then consuming the content might be worse than just consuming the content without a trigger warning.

But is there any good reason to doubt that trigger warnings can be helpful in the obvious way: someone sees the trigger warning and makes an informed decision to avoid the content?


The Motorola chip was called the 68000.
