I think it’s binary. You’re either part of the “growing my personal brand”, self-aggrandising b**shit crowd or you’re not. If you’re in it for yourself, it’s all about your posts and your comments on other people’s, so that’s fine. That’s the ‘social network’ side of things.
There’s still a small residual function related to maintaining an online CV and supporting messaging between businesses, recruiters and individuals, but this is distinct from the ‘social’ feed.
Similar observation: sometimes when we get off the couch (which has a blanket made from artificial fibres on it), our TV goes black for a couple of seconds. The TV is wall mounted, a metre from the end of the couch, and about 3.5’ from where we’re sitting.
I suspect a possible future of local models is extreme specialisation - you load a Python-expert model for Python coding, do your shopping with a model focused just on this task, have a model specialised in speech-to-text plus automation to run your smart home, and so on. This makes sense: running a huge model for a task that uses only a small fraction of its ability is wasteful, and home hardware especially isn't suited to that wastefulness. I'd rather have multiple models, each with a deep, narrow ability in a particular area, than one general, wide, shallow, uncertain ability.
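(A minimal sketch of how that per-task routing might look, assuming specialist models served locally through Ollama's Python client; every model name below is hypothetical:)

    # Route each task to a small specialist model instead of one big generalist.
    # Assumes the official "ollama" Python package; all model names are made up.
    import ollama

    SPECIALISTS = {
        "code": "python-coder-7b",        # hypothetical Python-expert model
        "shopping": "shopping-agent-3b",  # hypothetical shopping model
        "home": "home-stt-1b",            # hypothetical STT + automation model
    }

    def ask(task: str, prompt: str) -> str:
        """Send the prompt to whichever local model specialises in this task."""
        model = SPECIALISTS.get(task, "general-fallback-8b")
        response = ollama.chat(model=model,
                               messages=[{"role": "user", "content": prompt}])
        return response["message"]["content"]

    print(ask("code", "Deduplicate a list while preserving order."))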
Anyway, is it possible that this may be what lies behind Gemma 4's "censoring"? As in, Google made a deliberate choice to focus its training on certain domains, and incorporated the censor to prevent it answering about topics it hasn't been trained on?
Or maybe they're just being sensibly cautious: asking even the top models for critical health advice is risky; asking a 32B model is probably orders of magnitude more so.
> I suspect a possible future of local models is extreme specialisation - you load a Python-expert model for Python coding, do your shopping with a model focused just on this task, have a model specialised in speech-to-text plus automation to run your smart home, and so on.
I'd find this very surprising, since a lot of cognitive skills are general. At least on the scale of "being trained on a lot of non-Python code improves a model's capabilities in Python", but maybe even "being trained on a lot of unrelated tasks that require perseverance improves a model's capabilities in agentic coding".
For this reason there are currently very few specialist models - training on specialized datasets just doesn't work all that well. For example, there are the tiny Jetbrains Mellum models meant for in-editor autocomplete, but even those are AFAIK merely fine-tuned on specific languages, while their pretraining dataset is mixed-language.
> is it possible that this may be what lies behind Gemma 4's "censoring"
Your explanation would make sense if various other rare domains were also censored, but they aren't, so it doesn't.
> asking even the top models for critical health advice is risky
Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model such as ChatGPT or Gemini would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer.
If you would, ignore health advice for a moment and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not high-stakes?! The logic is the same.
For the record, various open-source Asian models do not have any such problem, so I would rather use them.
> Not asking, and living in ignorance, is riskier. For high-stakes questions, of course I'd want references that only an online model such as ChatGPT or Gemini would be able to find. If I am asking a local model for health advice, odds are that it is because I am traveling and am temporarily offline, or am preparing off-grid infrastructure. In both cases I definitely require a best-effort answer. I also require the model to be able to tell when it doesn't know the answer.
If I were prepping, I’d want e.g. Wikipedia available offline and would default to human-assisted decision-making, and definitely not rely on a 32B-parameter model.
To be reductive, the ‘brain’ of any of these models is essentially a compression blob in an incomprehensible format. The bigger the gap between the size of the training data going in and the size of the model coming out, the lossier the compression must be (rough numbers below).
It therefore follows (for me at least) that there’s a correlation between the risk of the question and the size of model I’d trust to answer it. And health questions are arguably some of the most sensitive - lots of input data required for a full understanding, vs. big downsides of inaccurate advice.
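To put rough numbers on that compression picture (all figures below are assumptions, purely for illustration):

    # Back-of-envelope LLM "compression ratio". Assumed figures: ~15T
    # training tokens at ~2 bytes each, 32B parameters stored at 16 bits.
    training_tokens = 15e12
    bytes_per_token = 2
    params = 32e9
    bytes_per_param = 2  # 16-bit weights

    input_bytes = training_tokens * bytes_per_token  # ~30 TB of text in
    model_bytes = params * bytes_per_param           # ~64 GB of weights out

    print(f"input: {input_bytes / 1e12:.0f} TB, model: {model_bytes / 1e9:.0f} GB")
    print(f"ratio: ~{input_bytes / model_bytes:.0f}:1 - necessarily lossy")

At hundreds-to-one, the weights can't be storing the training text verbatim; the open question is which details got smoothed away.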
> If you would, ignore health advice for a moment and switch to electrical advice. Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire. Why is electrical advice not censored, and what makes it not high-stakes?! The logic is the same.
You’re correct that it’s possible to find other risky areas that aren’t currently censored. Maybe that’s deliberate (perhaps the input data needed for expertise in electrical engineering is smaller?), or maybe this is just an evolving area and human health questions are an obvious first domain to address?
Either way, I’m not trusting a small model with detailed health questions, detailed electrical questions, or the best way to fold a parachute for BASE jumping. :)
(Although, if in the future there’s a Gemma-5-Health 32B and a Gemma-5-Electricity 32B, and so on, then maybe this will change.)
> Imagine I am putting together electrical infrastructure, and the model gives me bad advice, risking electrocution and/or a serious fire
That's a weird demand to make of models. What next, "Imagine I'm doing brain surgery and the model gives me bad advice", "Imagine I'm a judge delivering a sentencing and the model gives me bad advice", ...
Requesting electrical advice is not a weird ask at all. If writing sophisticated code requires skill, then so does electrical work, and neither requires more skill than the other. I would expect the top-ranked thinking models to be wholly capable of offering correct advice on the topic. The issues arise more from the user's inability to supply all the applicable context that can affect the decision and output. All else being equal, bad electrical work is 10x more likely to result from not consulting AI adequately than from consulting it.
Secondly, the primary point was about censorship, not accuracy, so let's not get distracted.
Inequality was growing hugely (and still is) before the recent advent of LLMs.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the “billionaires”, but in reality broader than that), I wonder to what extent they are supporting the anti-AI message as deflection.
In reality, many lower-paid jobs are totally safe from this generation of AI (nurses, care workers, builders, plumbers - essential skilled manual workers), whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
Removing a white collar job from the economy puts a worker into the bottom tier _and_ reduces the wages of that bottom tier.
We are speeding towards a servant class. Uber was the first wave. Now it’s more mundane things like getting groceries. I doubt it will be long before we rip off the band-aid and make full-time servants more popular.
You're right, and I think we're slightly at cross purposes. I'm not disagreeing that AI will drive some major societal changes as you outline.
My point is that the current narrative of "AI will take our jobs" is too simplistic, and that it might even be a smokescreen against the rising inequality that is already fueling anger across the world and which is totally unrelated to AI. If you're struggling to pay your bills today, that's not AI's fault - it's years of bad politics and politicians, geopolitics, hyper-capitalism, supply-chain issues, inflation, and so on.
In the future, if/when AI decimates parts of the middle class and they’ve had a chance to retrain (most plausibly into those same manual trades), there will likely be a second-order impact on today’s skilled manual workers. But that’s years off, and not something I’ve seen discussed in detail in the mainstream.
I guess I just feel like your appeal to skilled manual workers is pointless. They’re not really the focal point. It’s the large masses of people being relegated to the bin labeled “effectively unskilled”.
Getting dumped from "upwardly mobile middle class" to "unemployable underclass" does seem likely to be radicalizing. It's not clear yet how much of it will actually happen, but it does challenge a lot of the traditional focus on blue-collar workers as the ones most up in arms about automation and labor.
Does it show your spoken words on the screen live (i.e. streaming) or does it wait until you’ve finished speaking?
I find it very helpful to see my words live - for some reason it helps my simple brain structure what I’m saying, and I’m much more fluent as a result.
I went on a mission a few weeks ago and tried every freely available macOS STT app I could find (and there are lots of them) - but none of the ones I tried had this feature while being otherwise satisfactory. (I vibe-coded a PoC which could do this, so it’s definitely possible.)
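(For what it's worth, one crude way to get live partial results, assuming the openai-whisper and sounddevice packages; a real app would use an incremental decoder rather than re-transcribing the whole buffer, but it shows the feature is feasible:)

    # Rolling-buffer "live" transcription sketch. Assumes:
    #   pip install openai-whisper sounddevice numpy
    # Re-decoding the full buffer every second is wasteful; it's a PoC.
    import numpy as np
    import sounddevice as sd
    import whisper

    model = whisper.load_model("base")
    chunks = []

    def on_audio(indata, frames, time_info, status):
        # sounddevice invokes this callback for each captured audio block.
        chunks.append(indata[:, 0].copy())

    with sd.InputStream(samplerate=16000, channels=1,
                        dtype="float32", callback=on_audio):
        while True:
            sd.sleep(1000)  # refresh the partial transcript ~once a second
            if not chunks:
                continue
            audio = np.concatenate(chunks)
            result = model.transcribe(audio, fp16=False)
            # Overwrite the current line with the latest partial text.
            print("\r" + result["text"].strip(), end="", flush=True)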
So you are stating that there has been no change in how clinical trials are required to be run, and the associated costs, since the changes immediately following the thalidomide catastrophe?
It seems to be more complicated (or unpredictable?) than that:
> In this study, rapid weight loss was associated with the loss of kidney function in males with normal weight, and with improvement of kidney function in overweight males.
> Our study showed that BMI and BMI change were not associated with eGFR change in females.
Indeed, but that’s not the point: many anti-vaxxers are against all vaccines, irrespective of how they were tested. (And will argue against e.g. the FDA approvals.)
Okay; noting that the argument has moved from "untested" to "relatively untested".
To clarify, is your concern the inadequacy of the approval process the FDA uses for (all) vaccines (noting that many vaccines, e.g. influenza, are refreshed on a fairly regular basis to account for new strains of viruses), or something specific to the approval of the mRNA vaccines?
Or is it that mRNA vaccines were a new approach to vaccines more generally, and so there wasn't/isn't the same long-term data that there was/is for multiple generations of vaccines based on older technologies (viral vector, toxoid, etc.)?
I disagree; "untested" is a very definitive statement. Not tested. Especially when it's in a thread discussing people using all manner of less tested or sometimes literally untested peptides. (Hence my initial thought that maybe you were aware of people taking a DIY route that I wasn't.)
Anyway, when discussing a subject so popularly controversial as vaccines, it's probably better to be precise.