No, you don't understand. The people at my company are auto-opt-in premium-communication value-add customer-relationship-establishment specialists. But otherwise, I agree with you: everyone else is a spammer.
It honestly is a bit disappointing that most of the internet's "infrastructure" is tied up in large corporations that just get money for free by being the only provider and face little to no backlash (because of their monopoly) when they neglect things like basic customer service.
Increasingly of the opinion that "free service with no support that's structurally essential for an economy" is some kind of trap. Possibly just the most comfortable kind of trap, a local optimum from which it's difficult to escape.
This is starting to become important as countries (very unwisely!) start tying things like national ID and banking to smartphones.
I don't know if it's that simple. As a litmus test, try to set up your own mail server. See how many milliseconds it takes for it to be blacklisted by gmail. And then observe the response time for their support, when you try to clear up the confusion that google has about your intentions.
I find there are three kinds of people who comment about hosting email. A small group like us who set it up correctly and never have problems. A larger group who set it up but got the DNS wrong and warn people not to try. And a third, bigger group who never tried but listen to the second group and always comment that you'll have 1% deliverability.
It was dead-nuts simple in the 1990s: Just learn enough about DNS to put in an MX record that points to an A record, get sendmail working, and have it begin delivering mail. The end. (Open relay? No spam filter? No virus scanning? No nothin'? Yeah, that kind of was the style at the time...)
It's got a lot more steps today, but it's still do-able. Operationally, keeping a mail server online and treated well just takes one or two people to spend a little bit of time occasionally to stay proactively ahead of new expectations and requirements instead of reacting to them after things change.
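To give a rough idea of what "the steps today" look like on the DNS side, here's a quick sketch (the domain and DKIM selector are placeholders, and it assumes the third-party dnspython package, so treat it as illustrative rather than a recipe):

    # Rough sketch: check the DNS records most receivers now expect before
    # they'll treat your mail kindly. Placeholder domain/selector; requires
    # the third-party dnspython package.
    import dns.resolver

    DOMAIN = "example.com"      # placeholder
    DKIM_SELECTOR = "mail"      # placeholder; depends on whatever signs your mail

    def lookup(name, rdtype):
        """Return records at `name` as text, or [] if nothing is published."""
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, rdtype)]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    print("MX   :", lookup(DOMAIN, "MX"))                                   # where inbound mail goes
    print("SPF  :", [t for t in lookup(DOMAIN, "TXT") if "v=spf1" in t])    # who may send as you
    print("DKIM :", lookup(f"{DKIM_SELECTOR}._domainkey.{DOMAIN}", "TXT"))  # signing key
    print("DMARC:", lookup(f"_dmarc.{DOMAIN}", "TXT"))                      # policy on failures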
It also helps if Carla, from marketing, doesn't wake up one day and decide to spam the entire customer list without asking for guidance first. Maybe I should have put some automatic mitigation into place for that, but whatever: We chatted about that and it never happened again.
(Or at least, I find that to be true with smaller companies. Bigger ones obviously may require more elaborate systems to handle more volume and/or provide better uptime. But the requirements of keeping the reputation up are about the same regardless of scale, and that still only takes one or two people to pay attention to things sometimes. [And the only reason two might be required is in case one of them gets hit by a bus.])
"Blacklisted" probably doesn't have a sufficiently clear definition. I don't even run my own server, just use a custom family domain that is served by protonmail, and discovered when trying to go through foster licensing that virtually all of the agencies were not reading my e-mails because Microsoft and Google alike were routing them into the spam folder, but they weren't being blocked or bounced. I wouldn't have even known if I hadn't called a few and asked them to check.
I am definitely not being flagged for any actual spam-like behavior. I might send out 40 e-mails a year, and even though it's a "family" domain, I'm the only one who has ever used it, ironically enough, as part of my decade-old effort to de-Google.
I've built mail servers before Gmail existed that lasted long enough to get blacklisted by Gmail.
Fixing it was always pretty simple -- or at least, non-mysterious. They'd bounce some things, I'd look at the headers of the bounced messages, and therein were links to instructions that showed how to resolve whatever issue it was that year.
Just follow the steps, implement the new thing, and stuff started flowing again in rather short order. Not so bad.
IIRC, the only time it ever cost us any money was when the RBLs started keeping track of dynamic IP pools and we needed to finally shift over to something actually-static.
Maybe it's only legacy, but Gmail brings customers to Google and their related services. Escalation then brings them on as paying Customers. As a loss leader it may make a loss if looked at in a bubble, but looked at as part of the "Customer Lifecycle", other areas of profit would likely be much smaller without the free gateway.
It takes me active resistance to avoid Google's paid services, and I'm staunchly independent in relatively rare air. The minor capitulation required to turn into a paying Customer would capture a good percentage of their erstwhile-free gmail users (I would think. Yes, conjecture, interested in explanations of alternative theories).
> How much customer support resources should someone reasonably expect
Zero. OTOH, I'm sure they are training on emails and archiving/profiling everything forever even if we delete messages... so those constant threats to become a paying customer before hitting some arbitrarily small quota are still villainous.
We might not be paying money, but we don't know what happens to our private data.
Maybe it's not used at all, maybe used just internally, maybe could be even sold.
Data of millions of users is very very valuable, even just thinking about how much targeted adverts could be placed with it.
That's helpful data, thank you. Sounds like it may depend on the service. (I'm genuinely shocked to see that many hotmail addresses, and can't help but wonder if there are correlations with other factors.)
Most people use Gmail because they want to, not because they have to. It's a free, superior product. Pretending voluntary preference is a monopoly is nonsense, but it is a very Mastodon-brained take.
One way monopolies form is by giving away something that others would have to charge money for.
Another way monopolies form is via exclusionary practices and the resulting impression that "things that aren't gmail are less reliable". (Anti-spam does not have to be exclusionary, and anti-spam is generally a good thing, but when it reliably sends smaller providers' mail to spam based solely on them being smaller providers, it is.)
Another way monopolies form is via social effects. "What's your gmail?", or people on first-tier technical support hearing you say an email address and assuming it's a gmail address and having to be corrected, and having never encountered one of those before.
Assuming any of those are "voluntary preference" is a take.
It's a figure of speech. I am not saying it is literally free; I'm being facetious. What I mean is they get money overwhelmingly because of their position in advertising, and through Android, which essentially allows them to never worry about losing users. Who is going to attempt to delete their Google account over poor customer service? You literally cannot access half of the internet today without a Google account.
Try running your own SMTP server for a while. Gmail holds what appears to be monopoly power and uses it quite readily. Even ISPs with "free" customer email addresses aren't nearly as onerous as google is.
There is a common misapprehension that the term "monopoly" can only be used when there is a single supplier.
Quoting https://en.wikipedia.org/wiki/Monopoly : "In law, a monopoly is a business entity that has significant market power, that is, the power to charge overly high prices, which is associated with unfair price raises."
Or from Milton Friedman: "Monopoly exists when a specific individual or enterprise has sufficient control over a particular product or service to determine significantly the terms on which other individuals shall have access to it". https://archive.org/details/capitalismfreedo0000frie/page/12...
In the post-Borkian interpretation of monopoly, adored by the rich and powerful because it enables market concentration which would otherwise be forbidden, consumer price is the main measure of control, hence free services can never be a monopoly.
Scholars have long pointed out Bork's view results from a flawed analysis of the intent of the Sherman Antitrust Act. For example, Sherman wrote "If we would not submit to an emperor, we should not submit to an autocrat of trade, with power to prevent competition and to fix the price of any commodity." (Emphasis mine. Widely quoted, original transcript at p2457 of https://www.congress.gov/bound-congressional-record/1890/03/... ). Friedman makes a similar point (see above) that a negative effect of a monopoly is to reduce access to alternatives.
In it she quotes Robert Pitofsky in "The Political Content of Antitrust":
"A third and overriding political concern is that if the free-market sector of the economy is allowed to develop under antitrust rules that are blind to all but economic concerns, the likely result will be an economy so dominated by a few corporate giants that it will be impossible for the state not to play a more intrusive role in economic affairs"
Even if you support the Borkian interpretation, you should still worry about the temptation for the US government to "play a more intrusive role" with GMail accounts. I strongly doubt Google will follow Lavabit's lead and shut down email should the feds come by with a gag order to turn over the company's private keys.
They aren't a monopoly, and especially not a monopoly on emails.
How did we get to the point where there can be 12 services, but the one with lots of customers is a "Monopoly"? It's a complete destruction of the word. They aren't killing their competitors, nor making it illegal to compete. Yeah, it's harder in the current era to run your own mail server, for a variety of reasons involving spam. But can we just cut the shit on calling literally every company with more than 100 employees a Monopoly?
Most of the problems people have spinning up their own email servers, like getting blacklisted by the big boys, are less bad societally than actually accepting and routing the quantity of spam they are blacklisting. Does it benefit them? Kind of. But it's not anticompetitive in any real sense. These restrictions are obvious and basic. If you really wanted to, you could spend a significant, but in the grand scheme of things small, amount of money to break into the same game.
I mean there's a non-zero chance that if Google, Microsoft and Amazon stopped being so damn picky, the government would turn around and regulate that they do exactly what they are doing now, to resist the plague of spam that would result.
It's like getting mad at Visa and Mastercard for insisting on the PCI DSS for people they transact with. If it wasn't mandated by Visa and Mastercard, it would become government regulation (and is already referenced by regulators in some jurisdictions).
"Ooooh no Visa is being anticompetitive making me secure my environment and prove that security to a trusted third party what a terrible monopoly they have".
The point is that they don't provide the level of services required by their position, which is dominant.
When you have a legitimate problem with Google, they don't reply to you. The news here is again an example of that. The only thing you can do is abide by their rules, which often requires you to subscribe to their services or be at their mercy.
That's the point? The point seems to dance around and shift every time I address it.
I have had this specific issue with an absolute laundry list of email providers and senders, including Google. Google's probably not even in the top ten worst offenders. Getting Sony to remove an IP from its PSN email blacklist was much more difficult.
So they are a monopoly in the sense that they aren't a monopoly, and just have massive corporate power, and that massive corporate power translates into them acting like every other email provider with a spam blacklist and that's uniquely bad somehow? Is that a good description?
Or will the point now shapeshift into something else?
Are you sure it's the point itself shapeshifting and not your responses to it?
> have massive corporate power, and that massive corporate power translates into them acting like every other [massive corporate] email provider with a spam blacklist
If that's how you want to sum it up, sure. Unaccountable corporate power is bad. That people instinctively reach for the "M-word" in response to this dynamic doesn't invalidate their criticisms. And no, I don't find your "if corpos didn't do this on their own then the government would force it" argument compelling. The problem isn't spam filtering (etc), but rather the details of how they're implemented.
No, they got it by Gmail being a loss leader paid for by Google AdSense in the search engine. Now they have AdSense in Gmail directly, so I guess it pays for itself.
AT&T was once broken up and then after that you could connect a modem to a phone line. The whole public use of the Internet is a consequence of breaking up a “superior product” that became a bloated market incumbent resting on its laurels.
No, we should be mad at Google or any other BigTech taking over a big enough chunk of a federated system to basically dictate what can be sent/received and what not. With no human in the loop if you don't agree with their decisions.
I don't mean to shit on their interesting result, but exp and ln are not really that elementary themselves... it's still an interesting result, but there's a reason that all approximations are done using series of polynomials (Taylor expansion).
In numerical analysis, elementary function membership, like special function membership, is ambiguous. In many circumstances, it’s entirely reasonable to describe the natural logarithm as a special function.
Sorry, re-reading this, I should have said "most". As the other reply mentions, Padé approximations are also well liked for numerical methods.
I personally mostly do my everyday work using Taylor expansion (mostly explicit numerical methods in comp. EM, because they're cheaper these days and simpler to write down), so it's what first comes to mind.
A quick meta-take here: it is hard to assess the level of expertise here on HN. Some might be just tangentially interested, others might have degrees in the specific topic. Others might maintain a scientific computing library. Domains vary too: embedded systems, robotics, spacecraft navigation, materials modeling, or physics simulation. Until/unless people step up and fill the gaps somehow, we have little notion of identity or credentialing, for better and for worse.*
So it really helps when people explain (1) their context** and (2) their reasoning. Communicating well is harder than people think. Many comments are read by hundreds or more (thousands?) of people, most of whom probably have no idea who we are, what we know, or what we do with our brains on a regular basis. It is generous and considerate to other people to slow down and really explain where we're coming from.
So, when I read "most people use Taylor approximations"...
1. my first question is "on what basis can someone say this?"
2. for what domains might this be somewhat true? False?
3. but the bigger problem is that claims like the above don't teach, i.e., when do Taylor series methods fall short? Why? When are the other approaches more useful?
Here's my quick take... Taylor expansions tend to work well when you are close to the expansion point and the function is analytic. They work less well when these assumptions don't hold, and more broadly they don't tend to give uniform accuracy across a range. So Taylor approximations are usually only local. Other methods (Padé, minimax, etc.) are worth reaching for when other constraints matter.
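As a quick, home-made illustration of the local-vs-uniform point (the degree, interval, and use of a plain least-squares fit as a stand-in for a minimax approximation are all my own arbitrary choices):

    # Degree-4 Taylor polynomial for exp(x) about 0 vs a degree-4 least-squares
    # fit over [0, 2]. The Taylor series is very accurate near the expansion
    # point but its error grows toward the end of the interval, while the
    # fitted polynomial spreads its error much more evenly.
    import numpy as np
    from math import factorial

    xs = np.linspace(0.0, 2.0, 1001)
    true = np.exp(xs)

    taylor = sum(xs**k / factorial(k) for k in range(5))        # expansion about x = 0
    fit = np.polyval(np.polyfit(xs, true, 4), xs)               # "uniform-ish" fit on [0, 2]

    print("max |error| on [0, 2]:")
    print("  Taylor about 0:", np.max(np.abs(taylor - true)))   # worst near x = 2
    print("  global fit    :", np.max(np.abs(fit - true)))      # far smaller, spread out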
* I think this is a huge area we're going to need to work on in the age where anyone can sound like an expert.
** In the case above, does "comp. EM" mean "computational electromagnetics" or something else? The paper talks about "EML" so it makes me wonder if "EM" is a typo. All of these ambiguities add up and make it hard for people to understand each other.
I do computational electromagnetism, specifically plasma simulation. In the field solver and particle pushers (I mainly do explicit codes, meaning we just approximate the derivatives numerically) we only do Taylor expansion to the point where the derivatives are essentially second-order accurate. We don't bother going further, although I could, because in my domain being more "accurate" as a function of step size (dx in approximating f(x) -> f(x+dx)) yields less of a profit than just decreasing step sizes and grid sizes (or increasing resolution), and even then the numerical accuracy pales in comparison to, say, setting up the physical problem wrong (the focus of a simulated laser pulse being ten wavelengths out of focus).
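For concreteness, this is roughly what "Taylor expansion so the derivatives are second-order accurate" looks like; a generic central-difference sketch, not our actual production code:

    # Generic sketch (not any particular code): the central-difference stencil.
    # Taylor-expanding f(x+dx) and f(x-dx) and subtracting cancels the even
    # terms, leaving an error of order dx^2.
    import numpy as np

    def central_diff(f_vals, dx):
        """Second-order accurate df/dx at interior points of a uniform grid."""
        return (f_vals[2:] - f_vals[:-2]) / (2.0 * dx)

    dx = 0.01
    x = np.arange(0.0, 2.0 * np.pi, dx)
    approx = central_diff(np.sin(x), dx)          # exact derivative is cos(x)
    print("max error:", np.max(np.abs(approx - np.cos(x[1:-1]))))   # on the order of dx^2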
Replying to some of your questions (1 and 2): this is from the perspective of a computational scientist, and a less theoretical type who works closely with experimentalists. Thus I am closer to a user of codes to model experiments than to someone who does a lot of analytic or fundamental theory, although my experience and perspective is probably close to that of others who are computational-ish in other domains like engineering, for the reasons I'll explain below.
For 3, most physically useful simulations that are not merely theoretical exercises (that is, simulations that are predictive or explanatory of actual experiments scientists want to do) will not consist of analytic functions you can write down. Suppose the initial conditions of a problem have an analytic aspect (me setting my laser profile as a Gaussian pulse): once the interaction with a plasma target occurs, the result you obtain (and thus the predictions a simulation will make that can be compared to experiment) will not be Gaussian but will evolve according to the complex physics modeled in the simulation. And a Gaussian as an initial condition is already an approximation to an actual experiment. An "easy sim" for me is doing a best fit to the waist from a profile the experimentalists read off a power meter and using a Gaussian that closely matches it, while a more realistic simulation would be me taking the data they have in an Excel sheet and feeding it into the simulation directly as an initial condition. In most real-world scenarios, most ICs already aren't analytic and must be solved numerically. By the way, this isn't that different from how engineers use computational codes: not many airplane wings are spheres or cylinders, so you'd likely have to import the design for a wing from a CAD file into, say, an aerodynamics fluid code.
So in all these cases, the bottleneck isn't really approximating analytic functions you can write down, either in closed form or in series form to the nth degree. Many people in the computational domain do not need accuracy beyond two or three terms of a Taylor series, because it is usually easier to just cut down dx and do more steps in total rather than use a large dx and require more terms... and this is before using any more sophisticated approximations. No code I know uses Padé approximations; I just know that some libraries for special functions (which may account for one or two function calls in a code I use) use them.
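A toy check of the "just cut down dx" point (my own throwaway example, the same kind of second-order stencil as above, nothing to do with any particular code):

    # With a second-order stencil, each halving of dx cuts the error by roughly
    # a factor of 4, with no change to the cheap two-point formula itself.
    import numpy as np

    def max_error(dx):
        x = np.arange(0.0, 2.0 * np.pi, dx)
        approx = (np.sin(x)[2:] - np.sin(x)[:-2]) / (2.0 * dx)   # central difference
        return np.max(np.abs(approx - np.cos(x[1:-1])))

    for dx in (0.1, 0.05, 0.025, 0.0125):
        print(f"dx = {dx:<7} max error = {max_error(dx):.2e}")
    # errors drop ~4x per halving, i.e. O(dx^2) convergence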
Also, just a quick example you can try. Let's look at exp for a small argument (this only really works for small arguments; you obviously can't do a Taylor expansion well for a large argument). Consider the following:
>>> np.exp(0.4231)
np.float64(1.5266869570289792)
I will see how many terms I need to get 4 digits of accuracy. (Note that I had four digits in my input, so even ignoring sig figs I probably shouldn't expect better than 4 digits in the result; numpy itself is numerical too, so it shouldn't be considered exact, although I'll trust its first four digits here as well.)
>>> x = 0.4231
>>> 1
1
>>> 1 + x
1.4231
>>> 1 + x + x**2/2
1.512606805
>>> 1 + x + x**2/2 + x**3/(3*2)
1.5252302480651667
>>> 1 + x + x**2/2 + x**3/(3*2) + x**4/24
1.5265654927553847
Note that with terms up to x**3 I'm still off by 1-2 in the fourth digit; with the x**4 term it has converged to the four digits I asked for. Given an fp register, you can reuse powers of x from the last term, so this is already dangerously cheap; why do you even need better than this in a practical sense? Most results I see in the wild do not even require this level of precision in a single simulation. I am a scientist, however; it's possible engineers need more precision.
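Spelling out the "reuse powers of x from the last term" bit as a quick sketch of my own (not from any library): each new term is just the previous one times x/n, so the whole thing is a handful of multiply-adds.

    # term_n = term_{n-1} * x / n, so each extra term costs one multiply,
    # one divide and one add.
    def exp_small(x, n_terms=5):
        total, term = 1.0, 1.0
        for n in range(1, n_terms):
            term *= x / n        # builds x**n / n! incrementally
            total += term
        return total

    print(exp_small(0.4231))     # ~1.5265654927..., matching the x**4/24 line above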
For us, more sophisticated methods at the "calculating transcendental functions" level are not really required, which is why they don't appear in the codes I usually see. What we need are things that make the actual elemental operations, like FMA and the like, faster. Things like AVX-512 are far more interesting to me, for example.
If I'm not mistaken, they were told the point of the experiment was supposed to be about "memory and learning". If a teacher was doing a "commission" as they put it, they aren't really following the purpose of the experiment any longer.
Context is important. Maybe that was said in the first 3 minutes of the briefing, and then came 30 minutes about the shocks. I would not assume the briefing was so thorough.
I do feel like the conclusion is a bit of a stretch, but there is a slight discrepancy where disobedient participants followed the rules more than the obedient ones, which is an interesting observation. It just feels a bit weak.
I think that is misinformation caused by circular logic. DDR prices stopped rising simply because supply reached equilibrium with demand and customers' willingness to overpay. The Micron stock price also had a minor correction.
Suddenly the internet is full of articles about how it is all caused by the TurboQuant release or OpenAI giving up on its huge wafer orders.
It looks very similar to attempts to explain random crypto price changes with any (un)related news.
The term I would use is "corner", as in "silver" and "onions". But there are a couple of distinctions:
- supposedly buying for their own use, rather than reselling
- bought as forward, rather than spot: much of what they've ""bought"" is a commitment to buy memory that has not yet been manufactured
> Will half the memory industry run into the ground because of the oversupply means their current production is unsellable?
They've seen that coming, this is why there isn't a massive expansion to meet the demand rise and instead they're letting "demand destruction" happen. A decision vindicated by the war, as well.
> supposedly buying for their own use, rather than reselling?
Do we know what they're using it for? Not reselling would imply the chips go into some OpenAI-specific proprietary hardware directly, rather than being sold on to OEMs from whom OpenAI would buy more GPUs or other off-the-shelf accelerators.
> They've seen that coming, this is why there isn't a massive expansion to meet the demand rise and instead they're letting "demand destruction" happen. A decision vindicated by the war, as well.
If you're a memory company, this sounds like making the best of a bad situation: not making more stuff despite demand far outstripping supply, just to prepare for the potential oversupply your customer can cause if they walk back their massive order.
This is why "sort by controversial" is such a good feature of Reddit. If you're not offending half the people, is it really worth saying anything at all?