
The key claim in the paper (https://arxiv.org/pdf/2204.12310.pdf) is that there are detectable changes in cosmic ray activity around 15 days before major seismic events. One posited mechanism is that stuff happens deep in the earth that affects the magnetic field first, and then kicks off some big earthquake. But they don't rule out loopier ideas (like sunspot activity driving changes in the Earth's dynamo that then kick off earthquakes).

One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way, with no physical theory in mind, just trying different binning techniques and delays until they got a signal. I'd love to hear the opinion of someone who knows this field on the soundness of this research.



> One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way, with no physical theory in mind

That seems like an extremely good thing to me. Looking for patterns and then figuring out causality later is a great way to solve real-world problems. Of course you can be led astray if your filters are too open, but if your work is rigorous and you still come out with a 6-sigma correlation then congratulations, you found a signal. What does it mean? How does it work? Who cares, there's plenty of time to figure that out. But in the meantime you can hypothesize that the connection exists, monitor it for a year or two, and if the correlation holds up and turns out to have some predictive value, then a winner is you.


If you dredge through enough random data, you will always find a six sigma correlation (or five, or however many sigmas you want), kind of by definition. This is why experimenters like particle physicists or gravitational wave astronomers who have petabytes of raw data have to define their criteria in advance, instead of just going to town looking for patterns.

I agree with you that if this observation proves to have predictive power, then the way it was found doesn't matter. But right now we're at the less reliable "we looked at all conceivable retroactive combinations of stuff and found this pattern" stage of the process.
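A quick toy sketch of why (Python; all numbers made up): generate pure noise, try enough candidate "predictors", and the best-looking correlation comes out impressively large anyway.

```python
import random
import statistics

random.seed(0)

n = 200  # length of each toy time series
target = [random.gauss(0, 1) for _ in range(n)]  # the "earthquake signal": pure noise

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Dredge: test 1,000 random, unrelated candidate "predictors" against the target
# and keep the best, standing in for trying many binning techniques and delays.
best = max(
    abs(correlation([random.gauss(0, 1) for _ in range(n)], target))
    for _ in range(1000)
)
print(f"best |r| found in pure noise: {best:.2f}")
```

The winning correlation is meaningless by construction, yet it would look "significant" under a naive single-test p-value.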


The abstract makes it clear this is part of the search for a global earthquake warning system. So in terms of motivation, why the link exists is secondary; if it can forewarn of significant earthquakes, that is all that matters.

Besides, the first "scientists" looked at the sky and just stared until patterns appeared.

You can't condemn scientists for looking at data. Not everything can be magicked up from first principles in a vacuum.

No scientist would claim that falsifiable hypotheses and empirical validation aren't needed. But I don't think any scientist would criticise someone for looking for patterns on a whim and pointing it out to others.

A favourite example[0].

[0] https://en.wikipedia.org/wiki/Ulam_spiral#History


both are valid strategies, what i think is bad is when groups of people claim that broad sweeping searches are all bogus and that you must have a priori knowledge to be relevant.

there should be people dredging through petabytes of cern data like this, exploratory analysis can give you insight into things you didn't know to look for.

even if you find things that are bogus you figure out why and learn how to avoid it given the context of the data.

to say otherwise becomes a hindrance to progress. plus it's not like we have any other way of predicting earthquakes; trying to read the magnetic field to predict events sounds really promising. i'd be willing to give it a shot even if it is bogus, that's just risk assessment.


It wasn't entirely clear to me, but seems they've taken care of the look-elsewhere effect[1]. If so, that's at least something, no?

[1]: https://en.wikipedia.org/wiki/Look-elsewhere_effect


This is literally the definition of p-hacking, and no, it is not a good thing.


Sincere question from a non-scientist who struggles with the idea of how to make use of existing data without accidentally P-hacking:

Is it still P-hacking if you stumble upon a correlation in the historical record (after stumbling around for a while), call it a hypothesis, and then stick with it long enough to gather a statistically significant amount of _new_ data to support it?

More broadly, are there ways to "go on a fishing expedition" that are still scientifically valid?


As long as you get new data in a way that can falsify your hypothesis that's fine. If you bias your data collection to favor your hypothesis that's still cheating.
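A toy sketch of that discipline (all names and numbers hypothetical): form the hypothesis on historical data, then score it only on data generated afterwards.

```python
import random

random.seed(1)

def simulate(n, link=0.2):
    """Toy world: a binary 'precursor seen' flag and a binary 'big quake soon'
    outcome, with a genuine weak link baked in."""
    data = []
    for _ in range(n):
        precursor = random.random() < 0.3
        p_quake = 0.1 + (link if precursor else 0.0)
        data.append((precursor, random.random() < p_quake))
    return data

historical = simulate(2000)  # used only to *notice* the correlation
future = simulate(2000)      # held out; used only to *test* it

def rates(data):
    with_p = [quake for pre, quake in data if pre]
    without_p = [quake for pre, quake in data if not pre]
    return sum(with_p) / len(with_p), sum(without_p) / len(without_p)

# The pattern spotted in the historical data has to hold up on the new data too.
future_with, future_without = rates(future)
print(f"quake rate with precursor: {future_with:.2f}, without: {future_without:.2f}")
```

If the link were spurious, the two future rates would tend to match; biasing which future data you collect would defeat the whole exercise.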


Inarguably. The folks arguing that it's p-hacking aren't taking the next step of treating the correlation as a hypothesis, testing it, and establishing causality.


Yeah, there are valid ways to use the data. Looking at the parent again perhaps I was misreading it; I was responding to the idea that you could find the correlation and just go straight from there.


>if the correlation holds up and turns out to have some predictive value


While it's not encouraged in publish-or-perish contexts (you know, because money), it is also a non-zero event: some of the quite important things in science have come to bear because of precisely this.

What's the old saying, "The best saying in science is not 'Eureka!', but rather 'Huh, that's weird...'"?


Or there's a common trigger event, like, say, a large gravitational wave or a passing cloud of weakly interacting matter(dark matter?) which triggers both increased cosmic ray emissions along with earthquakes.

Not that I believe any of this, but it seems we're just pulling stuff out of our asses so I figured I'd give it a shot.


>gravity wave

LIGO, the gravity wave detecting interferometer, has arms 4 kilometers long. Along that distance, it can detect a change in length one ten thousandth the diameter of a proton. With an instrument that sensitive, it can just barely detect a handful of gravity waves a year, out of the thousands? that occur in the observable universe. Gravity waves are subtle things.


There are different kinds of gravitational waves, LIGO doesn't see them all.

Related: https://cplberry.com/2015/01/10/1408-0740/


> ground-based detectors like LIGO measure the higher frequencies, pulsar timing arrays measure the lower frequencies, and space-borne detectors like eLISA measure stuff in the middle.

Does this data correlate with weather events, earthquakes or cosmic rays?


Weather events and earthquakes - no.

Cosmic rays - Astrophysical events that generate gravitational waves can also generate electromagnetic and particle emissions. If a black hole crashes into a neutron star, all kinds of stuff is going to come out.


Also, gravitational* waves. Gravity waves are something else: https://en.wikipedia.org/wiki/Gravity_wave


I guess the question becomes whether tectonic plates would be more or less sensitive?

They are significantly impacted by tides for example.


One aspect of gravity waves is that other than being extremely weak, they cannot perform work. So they don't cause mechanical forces.


This is not true, and in fact there is a very famous thought experiment called the "sticky bead argument" that was pivotal in developing the consensus that gravitational waves are a real, physical effect and not just a "gauge" effect (an artifact of the coordinate system):

https://en.wikipedia.org/wiki/Sticky_bead_argument

On the other hand, I definitely agree that gravitational waves are almost certainly too weak to cause any tectonic effects.


I had heard that somewhere before, but it looks like I'm wrong


Oh really? I didn't know that. If they cannot perform work, how can we detect them?


The OP might be making a more subtle point, but my understanding is they absolutely can do work. We can measure the deceleration of binary neutron stars with a rate predicted by the emission of gravitational waves. Deceleration == work.

However, we can also measure things that don’t do work, for example, a static magnetic field does no work on a charged particle (it cannot change its kinetic energy) but look at the swirls in a cloud chamber and it’s clear as day if there’s a magnetic field there.


I'm wrong on the work thing, but I don't think we can always regard deceleration as work, since that only works for Newtonian mechanics. Under GR there is an extra term in the equation allowing for acceleration without work.


Wait, really? Why not? That's fascinating

Edit: ah, you mean for any practical purposes. Duh.


It's a tangent, but gravimeters[1] are really impressive feats of engineering.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5713048/


we should definitely just reverse the shield polarity and be done by the third act. hopefully the robot can learn a lesson about humanity before the end credits.


I’m afraid only the power of love and empathy and togetherness can resolve this climax


Discovery will probably be the only Star Trek show that I never finish watching. So far season 1 of Strange New Worlds has been pretty good. (And each season of Picard was way better than the previous.)


Same. The first season was enough for me. Never again.


> looking for precursor patterns in an ad hoc way, with no physical theory in mind

A bit of skepticism is healthy but some of these threads are approaching downright dismissive.

There's a long history of research in this area to be aware of, though I know the paper in question doesn't elaborate on it.

Here's one physical theory for you, straight from NASA [1].

> In the ionosphere, the solar winds generate electrical currents. On the Earth surface, these currents cause magnetic field fluctuations. These fluctuations, penetrating the Earth interior, induce the electrical currents J, and, in the presence of the Earth magnetic field B, generate electromagnetic force, known as Lorentz force F = J × B. To study the relation of earthquakes and the Lorentz force, acting at the near onset times of strong earthquakes, we examine the Kp index, a logarithmic measure of the magnetic field deviation. The time varying Kp index gives us J, which in turn determines F.

> The Lorentz force tilts the subtle force balance in the earth crust towards triggering the release of stress strain energy, initiating an earthquake in a similar way as a mountain climber’s step can trigger the avalanches. The internal dynamics, however, are highly statistical.

> We find that the distinctive patterns of the Kp surges often strongly correlate to the onset of earthquake. This correlation depends on the seismic regions and the magnitudes of earthquakes. The stronger the earthquake is, more closely the Kp surge is associated. The statistical significance of nearly 100% is obtained for the Kp variations, synchronizing with more earthquakes in the Pacific Rim region.

[1] Geomagnetic Kp Index and Earthquakes (2018) https://www.scirp.org/journal/paperinformation.aspx?paperid=...
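The force in the quote is just a vector cross product; a minimal sketch with hypothetical directions (not real geophysical values):

```python
# Lorentz force F = J x B from the quote above, as a plain cross product.
def cross(a, b):
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

J = (1.0, 0.0, 0.0)  # induced current density (hypothetical direction)
B = (0.0, 1.0, 0.0)  # Earth's magnetic field (hypothetical direction)
F = cross(J, B)
print(F)  # (0.0, 0.0, 1.0): perpendicular to both J and B
```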


There have also been studies of cosmic rays inducing volcanic activity, as this study from Japan indicates:

https://www.researchgate.net/publication/234022172_Explosive...

"The strong negative correlation observed between the timing of silica-rich eruptions and solar activity can be explained by variations in cosmic-ray flux arising from solar modulation. Because silica-rich magma has relatively high surface tension (similar to 0.1 Nm(-1)), the homogeneous nucleation rate is so low that such magma exists in a highly supersaturated state without considerable exsolution, even when located relatively close to the surface, within the penetration range of cosmic-ray muons (1-10 GeV)."


It's fine to form a hypothesis from old data ... but it's not a theory yet until that hypothesis is challenged by (and isn't falsified by) future data :-)


This is doubly true when we're talking about predicting earthquakes, a practice that has had no successes and a LOT of notable failures.


Does old data discovered in the future count?


Yes it does. Because you didn't use it to overfit your model.

Same goes for data that you had in the past but decided to ignore while building the model.

In both cases there is a legitimate risk of cheating.

The only data that you cannot cheat about is future data.


This is good research.

>One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way, with no physical theory in mind, just trying different binning techniques and delays until they got a signal.

There is nothing wrong with this. In fact this is how most science is done. This is pure experiment - try things and see what comes up.

You're conflating this step with step three of the general way things have traditionally been done in physics:

1. An experiment shows a previously unexplained phenomenon.

2. A theory is made to explain the results and predict the results of a future experiment.

3. A future experiment is undertaken with this theory in mind, to see if it has predictive power. If the predictions are correct, it is a good theory.

Your comment is referring to step three. The experiment in the paper is step one.


While I don't disagree with your description, there is an awful lot of scientific output which is really just fishing for significance (i.e., running lots of tests without corrections), publication, claiming credit for discovering something, and it never actually gets followed up on to see if the claim generalizes.

https://en.wikipedia.org/wiki/Data_dredging

I think probably the most important thing is to get scientists to spend more time identifying and teasing out correlated variables to identify plausible mechanisms.


This problem is mostly a problem of the social sciences. In physics it is important to have a theoretic explanation; if there is no explanation then physicists become excited and start to dig really hard. They value theoretic explanations, not correlations. In contrast, the social sciences lack a good theory; they substitute quality of a theory with a quantity of theories. So even important correlations are often impossible to explain without resorting to ad hoc theories.

You can see in this article that the authors already suggest a theoretic explanation, and I do not doubt that we'll see follow-up studies trying to clarify the situation.


it's quite common in medical research, even highly quantitative work. And I've seen it in every field I've worked in, which spans biology, physics, chemistry, typically with a quantitative bent.

I once had an advisor edit my draft overnight and submit it as a paper with a bunch of juiced-up numbers that weren't true, but made sense to the advisor even if the underlying scripts I ran didn't support it. I complained to them and the paper was withdrawn before publication, and I immediately left their group. This was in quantitative biology: hard-core bioinformatics with very sophisticated modelling.

But yeah, real experimental physics is hard to fake since reproduction is usually more straightforward than in other fields.


> I once had an advisor edit my draft over night and submit it as a paper with a bunch of juiced up numbers that weren't true,

I'm stating the obvious here, but that is not a good advisor in any sense. It must have been difficult to leave, but it would be the only reasonable response.


Well, if I'd wanted a career in science and didn't have ethics, then they would have been a good advisor because they knew exactly how to ride their wave of falsehood to a professorship at Berkeley.

It wasn't hard to leave, I just contacted another professor at Berkeley and joined their lab the next day. The new advisor, while fairly dull, was methodical and pedantic, and the idea of faking or juicing results would probably never have occurred to him.

In short, in science if you're not a super-genius, it can be hard to compete with the super-geniuses and the cheaters. I found it easier to move to computer engineering than stay in science.


So, you're basically telling me that there is (or was) a bioinformatics prof @ Berkeley who was fucking with the data.

Yeeeesh.

I guess my science career was relatively clean. I knew a few fellow students who got screwed over by their advisors in the sense that the advisors demanded an excessive amount of publishable work to graduate.

And I saw plenty of personality conflicts, many of which could be laid squarely in the lap of the advisor.

But I never saw or heard of outright fraud, which makes me happy.

I'm not naïve. I know fraud is everywhere. And I know there's a lot of pressure to produce interesting results. I probably just got lucky.

edit: for anyone taking the plunge into grad school. I made my choice of advisor largely based on his reputation of looking out for his students ... and on his research as a secondary consideration. That may have helped me.


If I were to make a comprehensive list of everything I've seen, it would be depressing.

When I first got to grad school I immediately went to a group that had published a paper "solving parts of protein folding" using a lab-written code. About a year after the paper was written, the PI could not give me that code, "because it had been lost when an SGI was reinstalled". I don't really trust results in papers unless I can see and hold the code and reproduce the author's work, or a highly competent scientist implements their own version (I'm no good at reading papers, writing code to implement them, and then running through all the steps of reproducing the original paper).

Another enlightening moment was when a more senior grad student told me: make sure everything you do ties back to medical research, even if the relationship is extremely distant. You can get money from the NIH for curing senators' families' diseases (cancer and heart disease).

When I was finally starting to apply for funding on my own through the NIH R01 grant program, I was turned down, without a score (meaning it was worthless and never should be funded). The next year, I was on the study section for that grant section and saw several more experienced PIs submit proposals that were very similar to (likely copied from) mine, and they were funded. I later learned I needed to spend several years reviewing grants before I knew enough to write a successful grant (oh, and make friends with everybody else in the study section, too).

On another study section dedicated to funding moving academic data and compute to the cloud, I turned down several grants because they asked for money for closet (on-prem) clusters. I was not asked to return, because the people I turned down were influential.

Basically, as has been pointed out many times before, the incentive system in academia is perverse and does not help people like me who just want to do high quality research but take our time to get the details right, and not get in competitions with other, more aggressive scientists. Many of us self-select out of science and end up as computer engineers or ML engineers or whatever in industry.


> This is pure experiment

Pedantically speaking, it is not an experiment, it is an observation. An experiment is a kind of study where you control the independent variable. In this case neither cosmic radiation nor seismic activity was manipulated by the scientists. That is the reason why they speak about correlation but not causation.

https://en.wikipedia.org/wiki/Experiment#Observational_studi...


Randomly sifting through data in search of patterns is not an experiment in the usual sense. With a big enough data set, you're guaranteed to find one in a billion, one in a trillion events by random chance.


Yes, but you can test those signals against future data and see if they are accurate.


Finding correlations is not the same as finding one in a trillion events.


When you have a trillion possible correlations, it is.


It's fine if they account for the number of tests they have made when they calculate their significance levels. If they just kept on trying different options until they ended up with p < 0.05 it's almost guaranteed to be noise.


They used p<0.001. This is not the social sciences; the anti-noise filters are stricter.


Ah, that's not too bad. Though to be fair you also need the data size: p < 0.001 is only a one-in-1,000 chance per test. If their dataset is small or they automated testing of cofactors, there's still a decent chance of a false positive.
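The arithmetic behind "kept trying options until p < 0.05": with m independent tests on pure noise at level alpha, the chance of at least one spurious hit is 1 - (1 - alpha)^m.

```python
def familywise_error(alpha, m):
    """Probability of at least one false positive across m independent tests."""
    return 1 - (1 - alpha) ** m

for alpha in (0.05, 0.001):
    for m in (1, 20, 1000):
        print(f"alpha={alpha}, tests={m}: "
              f"P(at least one false positive) = {familywise_error(alpha, m):.3f}")
```

Even at p < 0.001, a thousand independent tries gives better-than-even odds (about 0.63) of a hit from noise alone.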


Fishing expeditions are fine; they are not wrong by themselves, they just require followup.

Now that somebody found a signal that seems to predict earthquakes, start trying to predict earthquakes. Dig into the signals and gather new ones to try to see if there is any more information available.


Agreed. And it seems so obvious that it makes me wonder why this doesn't include that. It says they started in 2016; they couldn't wait another year to say "we spent 7 years identifying this trend, and then a year predicting major earthquakes"?


Precursor signals for earthquakes is a field that has about as much credibility as astrology, from the few conversations I've had with professional geologists and planetary scientists. I had a minor and short-lived role in designing a sensor network that might be useful for these kinds of low-signal, high-noise precursors, and I got nothing but strange looks from actual geologists.


There are plenty of precursors for an earthquake. Or maybe there are none. But it doesn't much matter, because the timescales of earthquakes can be thousands of years or more, while our data spans only about a hundred years.


> as much credibility as astrology

If this sort of scientific astrology ends up providing precursor signals for future earthquakes, why would geologists have a problem with it?


You're assuming an a priori bias. Astrology didn't lose favor because of some societal decree, it lost favor because it had no predictive power. It seems to be the same with analysis of precursor signals, and the subfield apparently went a little nuts and started to sound like quackery and data fudging.


Were geologists skeptical of the actual data from your sensor network?


(editing to hit a moving target)

We started the study with a good-faith survey and the intent to use space and ground signals in sync to detect impending earthquakes on a nearish term. We had good technologists and scientists. As the study went on it became clear that all the scientists were completely unconvinced. Sure, we could have built out the network, but it wouldn't have done much. Most of the scientists either lost faith or wrote dissenting opinions as part of the report. We did our job putting out the report, but it was clear there was nothing to do after.


> One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way, with no physical theory in mind, just trying different binning techniques and delays until they got a signal.

I believe in particle physics this is known as the "look-elsewhere effect". Basically, if you look long and hard enough for a pattern, you will eventually find one if your parameter space is large enough: https://en.m.wikipedia.org/wiki/Look-elsewhere_effect


When I was taught statistics, this was regularly brought up as a big no-no in science. However, I read a guide to practical statistics that had a gem about predicting the stock market. If we discovered surges in the market correlated to newspaper sales, we wouldn't discard this as look-elsewhere. In fact, we'd follow newspaper sales very closely.

Predicting earthquakes has a big upside for humanity. If there is even a small correlation -- even if we don't yet understand it -- we can benefit from it.


It can be useful, it's just you need to keep monitoring it: if the effect seems to shrink or disappear on future tests, it's probably spurious.


In GWAS studies there's a nice visualization for dealing with this called Manhattan Plots. https://en.wikipedia.org/wiki/Manhattan_plot

Basically, if you test a lot of hypotheses, plot all the p-values and look to see whether there is a true outlier or whether it is expected given so many tests.
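A sketch of that idea (simulated p-values and a Bonferroni-style threshold; no plotting, just the numbers a Manhattan plot would show):

```python
import math
import random

random.seed(2)

m = 10_000
# Simulated p-values: uniform noise for null tests, plus one planted real signal.
pvals = [random.random() for _ in range(m - 1)] + [1e-12]

# The significance line drawn on a Manhattan plot: alpha divided by the test count.
alpha = 0.05
threshold = -math.log10(alpha / m)  # about 5.3 for 10,000 tests

# Points above the line after the -log10 transform are the true outliers.
outliers = [p for p in pvals if -math.log10(p) > threshold]
print(f"threshold: {threshold:.2f}, outliers: {len(outliers)}")
```

The planted 1e-12 signal sits far above the line; the ten thousand noise tests almost never cross it.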


Most of particle physics is people searching for jobs that give meaning to their life. This often biases them against finding results against meaning.


Not to mention that they are themselves made of the particles they study, which ineradicably biases their outlook.


Never be made out of particles. That's true.


This is related to the fact that in a higher-dimensional space, distances between points concentrate into a narrow range, compared to 2d/3d intuition. This is related to the way that e.g. embedding text into an AI for comparison purposes often produces surprisingly close vector distances for relatively unrelated strings.


The obligatory XKCD: https://xkcd.com/882/


Can we transfer this to the financial markets?


> One posited mechanism is that stuff happens deep in the earth that affects the magnetic field first, and then kicks off some big earthquake.

If this was real, would it not be easier to measure the perturbation of the magnetic field than measure the cosmic rays being let in by the perturbed magnetic field?


The relationship between cosmic rays and Earth's magnetic field is well studied, and actively monitored: http://www.geomag.bgs.ac.uk/education/earthmag.html

However, it's a more complex data set to work with, given the vast dimensional dynamics of the field itself vs measuring the flux of radiation.


This was my thought as well. Since it seems rather obvious, I wonder why it wasn't the approach taken.


>One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way, with no physical theory in mind

Isn't this the fundamental mystery about all science, in a way? You have to start with some kind of hypothesis, and that hypothesis likely comes from looking over various kinds of data and doing mysterious mental convolutions on them.


Hypotheses non fingo[1]. It's pretty cool how an extraordinary genius had extraordinary intellectual humility. And by humility I don't mean he was easy to get along with or self-deprecating, just that he knew his limitations. His groupies were another story of course.

[1] https://en.wikipedia.org/wiki/Hypotheses_non_fingo


Why is 'sunspot activity driving changes in the Earth's dynamo that then kick off earthquakes' a loopy idea?

We are hours away from coronal mass ejection catastrophe right now as solar cycle 20 builds in intensity. It seems reasonable to me that the complex relationship between the moon's gravitational pull and massive sun activity could affect our tiny little planet


The idea is loopy because the energy that reaches the earth from even a massive solar flare is orders of magnitude less than the energy released by a major earthquake, and that is orders of magnitude less than the energies that drive the dynamo in the Earth's core.

It's possible for a falling leaf to hit a mountain in just such a way that it dislodges a boulder balanced on top, but you need to tell a pretty compelling story about why this sensitive arrangement came about. Similarly, you'd need to explain how gigatons of molten iron sloshing around deep underground might feel the kiss of the Sun in just such a way that it levels San Francisco (for example).


In fact I hypothesise they've inverted cause and effect.

Tectonic movements absolutely have the energy to move the Earth's magnetic field, and the magnetic field blocks cosmic radiation. Something that moves the magnetic field would allow more cosmic radiation through.

There's only a handful of detectors outside of Earth's magnetic field; orbiting satellites are even within its field of influence. Comparing this data to the measure of cosmic radiation from a deep-space probe would be interesting, to rule out that it's the earth movement increasing detected radiation and not the reverse.

They may have simply discovered that tectonic movement changes how much cosmic radiation reaches our detectors.


From the article, that is in fact their explanation. There's just possibly some other data that could point the other way, but it's not heavily emphasized.

There have been other studies that showed weird correlations to ionospheric activity and earthquakes, but only ever in retrospect.


What you wrote is a good place to start future observations. There is a lot of unexplored dynamics to investigate in the earth. Roiling hot fluid and gaseous systems in flux, which are magnetic due to iron (and other minerals), is a fluid dynamics dream subject. But how to finance it, in today's research systems driven by funding to support specific answers instead of expanding knowledge just to see what we learn?

Ps. An attempt at modeling the inner Earth's systems and flows might be more useful for earthquake prediction.


I hypothesize you didn't read the paper, which you've instead derived from first principles.


Tectonic plates are nearly permanently in a "sensitive arrangement", as you say. Compressive, shear, and tensional stress is the norm. Plates accumulate more and more stress over time, until a small trigger causes the plates to slip and release all that energy at once (an earthquake).

In other words, tectonic plates are nearly permanently in a state similar to a boulder delicately balanced on top of a mountain.


Right, the idea would be that the potential energy due to stress in the crust could be released as kinetic energy by a falling feather or some cosmic rays. There doesn't need to be the same energy in the cosmic rays as is released by the earthquake.


A 4oz pull that moves a trigger a small fraction of an inch can release a thousand foot-pounds of energy.
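The back-of-the-envelope version of that (the trigger travel distance is my assumption; the rest is from the comment):

```python
# Work put in by the trigger finger vs energy released (illustrative numbers).
pull_lb = 4 / 16      # 4 oz pull, in pounds
travel_ft = 0.1 / 12  # assume ~0.1 inch of trigger travel, in feet
input_work = pull_lb * travel_ft  # foot-pounds supplied by the finger

released = 1000.0     # foot-pounds released, per the comment above
print(f"released/input ratio: {released / input_work:,.0f}")
```

Roughly half a million to one: the trigger supplies nearly six orders of magnitude less energy than it releases.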


Yes, but we're talking about a system in which thousands of 4oz pulls are happening in different directions at any given moment. Your model doesn't only have to explain why a particular pull triggered it, but also why the other thousand didn't.

Throwing a cocked handgun into the dryer is different than a leaf falling on it.


This guy put a cocked handgun into the dryer and it didn't go off: https://youtube.com/watch?v=7es3zYRYLTs


I think you can relate this to the dynamics of a Prince Rupert's drop.


Consider how an atomic bomb requires relatively little energy input compared to what is released. Combine that with how little practical observation we have of what's really going on in the Earth's core, and it is certainly possible some phenomenon is in play with earthquakes.


Atomic bombs don't simply assemble themselves naturally though.


Not exactly atomic bombs, but we do have evidence of naturally occurring, self sustaining nuclear reactors having operated for potentially hundreds of thousands of years in Earth's past (https://www.wikiwand.com/en/Natural_nuclear_fission_reactor)


Yes, and there is a nuclear fusion reactor above our heads right now, but that is similarly irrelevant when talking about bomb self-assembly.


That's a fair point, I just thought the natural fission reactors were cool while completely forgetting the one in the sky responsible for my ability to think that haha


I know. Are you suggesting we have already discovered everything there is to know about our planet / this universe?


I'm suggesting we know enough to rule out the spontaneous self assembly of nuclear bombs. And there's probably a few other things we can rule out by the same logic.


when people say "orders of magnitude" do they always mean base 10?

(i'm thinking of how decibels are a log scale sort of thing wrt power, where "orders of magnitude" used as a cliche probably does not mean what it would be read as.)


The Wikipedia article[1] was actually pretty interesting on this point. It suggests that 10 is commonly used, but other bases may be contextually relevant.

> An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10

I guess you could think of it like a _really_ low precision float or something.

1: https://en.wikipedia.org/wiki/Order_of_magnitude


I suspect the phrase is a cliche often used to sound scientific, sometimes by folks unaware, like how describing growth as “exponential” is a cliche in non-mathematical discussion.


I agree with this. As a layman, I've always understood "orders of magnitude larger" to just mean "way too big" and "exponential" growth to imply "out of control".


x^2 and 2^x manifestly both involve exponents, so I think it's valid - outside math class - to call anything involving accelerating growth "exponential".


I interpret it as saying: exponentially different, where the range in my uncertainty is comparable to the effect of the choice of base.

For instance if the range is '3-13' in base 2 it's similar to '1-4' in base 10, but either way I'm making up numbers so who cares what the base is.


> when people say "orders of magnitude" do they always mean base 10?

Yes, and that has always bothered me just a little bit, given that there is nothing intrinsically special about base 10 and yet the phrase seems to suggest something fundamental.


Base 10 is certainly the default. Decibels are base 10; more specifically, a bel is one factor of 10, and a decibel is one tenth of that, i.e. a ratio of 10^0.1.


fractional exponents don't work like that, for example x^½ is the square root of x. You probably meant 10^-1 !


No, 1 decibel is 10^(1/10). Bels are multiplicative, not additive. You multiply 10^0.1 by itself 10 times and you get 10^1. Similarly, if you multiply x^(1/2) by x^(1/2) you get x.
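To make the decibel arithmetic from this exchange concrete, here's a quick sketch (my own illustration, just restating the thread's math in plain Python):

```python
# One bel is a factor of 10 in power; a decibel is a tenth of a bel,
# so 1 dB corresponds to a power ratio of 10**(1/10) ≈ 1.259.
one_db_ratio = 10 ** (1 / 10)

# Decibels add while ratios multiply: ten 1 dB steps
# compose into one full bel, i.e. a factor of 10.
ten_db_ratio = one_db_ratio ** 10

print(one_db_ratio)  # ≈ 1.259
print(ten_db_ratio)  # ≈ 10.0
```

This is why "3 dB" is shorthand for "roughly double the power": 10^0.3 ≈ 2.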


I usually interpret it as "scales geometrically and not arithmetically".


That seems confusing. If I say my new service is orders of magnitude more efficient than previous services, I don't mean anything about scaling, just current performance. And I wouldn't say orders of magnitude if it was just twice as many calls/second/core, but more than ten, or really more than 30, halfway between ten and a hundred logarithmically.

Something that scales geometrically might well have some giant constant so it isn’t useful until a specific performance regime.


i think of "orders of magnitude" to mean "powers of ten", but I asked since i'm not sure i'm reading what folks think they are saying.


What do you mean by, "we are hours away from coronal mass ejection catastrophe right now as solar cycle 20 builds in intensity"?

Because I can't read that as anything other than: we are in solar cycle 20, and it's currently building such that I predict, within hours of right now, there will be a coronal mass ejection that will knock out the global power grid. But I've googled and we appear to currently be in solar cycle 25, with solar cycle 20 occurring in the 60s and 70s. Do you mean that a coronal mass ejection event takes hours to get to earth? Then why the "right now"? Does the delay of such events change on a sufficiently short time scale to warrant "right now"?

Nobody else seems confused so maybe I'm an idiot


Catastrophe as in many possible earthquakes, if the hypothesis is correct?


>Catastrophe as in many possible earthquakes, if the hypothesis is correct?

No, wiping out the entire electrical grid and all electronics

https://en.wikipedia.org/wiki/Carrington_Event


On certain days, I really think this would be not such a bad idea. I like to call those days Monday.


Reminds me of: https://www.tylervigen.com/spurious-correlations

(There seems to be a lot of death-related stats in there, so beware if that's triggering)


> One thing that troubles me in the paper is that the researchers appear to have gone looking for precursor patterns in an ad hoc way ... just trying different binning techniques and delays until they got a signal.

That's just p-hacking.


Thank you! That's the term I was looking for but couldn't remember.


From my reading, given these are ground-based stations, it seems the forming of a major earthquake causes changes to the magnetosphere, which makes cosmic ray activity appear to increase from the ground.


Newton observed that the apple fell from a tree.

He then noted that it was the apple that moved, not the Earth.


Given the planet is one big ol' magnet, it seems at least plausible to see interactions?


If the mechanism is true, could this information be used by humans to control earthquakes? There have always been a lot of theories around projects like HAARP that have the capability to send a lot of energy deep into the earth.


Pumping lots of water into the ground is a pretty reliable way to get earthquakes. Notice that Oklahoma, which used to be seismically inert, is now a bright red spot on the USGS map of the United States. This is due almost entirely to petrochemical-related human activity.


Oklahoma is not seismically inert. The region has active fault systems that have always generated earthquakes. Injecting fluids into faults can trigger earthquakes, essentially pulling future earthquakes forward in time.


I think you vastly underestimate the scale and energies involved in geological processes.

HAARP is a ~4 megawatt radio transmitter. That's it. It doesn't have mysterious unknown abilities to send energy deep underground.


Theorising I've seen on this - though I don't recall where - suggested targeting resonances with HAARP, in a similar way to how a human singer can shatter a wineglass: by projecting the glass's resonant frequency, causing it to oscillate itself to pieces.

That is, high-watt transmission power may not be required. And further, it was suggested it's not done in isolation by HAARP, but cooperatively, with various transmitters also transmitting the same frequency at the same target - using standing waves.

It makes sense conceptually as an idea, but I'm not sure if there's any evidence of it?


The HAARP conspiracy theories are fun and date back to the old days of conspiracy theories when people wore tin foil hats and worried about government mind rays. Not at all based in reality or sensible but at least those conspiracies were harmless and non-political.

Anyway... no there's no evidence. But i do miss these classic conspiracy theories! Anyone remember the X-Files episode featuring HAARP?


So, the thing is tin foil hats do protect against the government mind rays; but aluminum foil hats don't. And too many people are wearing aluminum foil hats, and then the government mind rays work, and then they spout different conspiracy theories, planted by the government mind rays. And those conspiracy theories get traction, because they're spouted by people in metal hats.

It all comes back to the mind rays.


Maybe not control but the ability to have an advance warning would be a huge gain in terms of public safety.


“There may or may not be an earthquake somewhere in the world, including oceans, in 15 days” is not a very useful warning though, if not harmful.


If the increase in cosmic ray detections is due to some change in Earth's magnetic field, presumably where and how the magnetic field changes would in some way correlate with the activity.


If the magnitude of the quake can be estimated from it, it can be somewhat useful for response agencies in earthquake and tsunami prone areas to make sure their ducks are in a row. Kind of like how storm season tends to make people make sure their emergency supplies are in order, even if their area doesn't tend to get hit by big storms.


It would be better than anything else we have at the moment


From https://en.wikipedia.org/wiki/List_of_earthquakes_2021%E2%80... there were 44 earthquakes with a magnitude >= 7.0 in the last 2.5 years. That is one every 17 days. The GP is almost correct: There is a high chance [1] of an earthquake somewhere in the world, including oceans, in 17 days.

[1] If you want to be fancy with probabilities: a 64% chance of at least one earthquake. In some 17-day periods you will get no earthquake and in others you will get more than one. On average you will get one.


Too late to edit: The correct number is 21 instead of 17.

The 64% is still 64%, because if N is big enough it almost doesn't matter whether you consider something with a probability of 1/17 over 17 days or something with a probability of 1/21 over 21 days.
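The 64% figure can be checked with a quick Poisson sketch (my own illustration, assuming quakes arrive independently at the average rate stated above):

```python
import math

# 44 magnitude >= 7.0 earthquakes in ~2.5 years (~912 days),
# i.e. roughly one every 21 days on average.
rate_per_day = 44 / (2.5 * 365)

# Modelling arrivals as a Poisson process, the expected count in a
# 21-day window is ~1, so P(at least one) = 1 - e^(-1) ≈ 0.63.
expected = rate_per_day * 21
p_at_least_one = 1 - math.exp(-expected)

print(round(p_at_least_one, 2))  # 0.64
```

As the parent says, the answer barely depends on whether you use 17 or 21 days, since the per-day rate rescales to keep the expected count near one.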


It really isn't? There is no specificity, it also doesn't state that earthquakes _only_ happen after solar activity. So this is equivalent to saying "There could be an earthquake somewhere at any time in the future".


With distributed ray detectors and suitable modeling of inner-earth processes, and assuming the premise is correct - it seems it may have potential to work?

I.e., it may be able to generate the level of specificity required.

EDIT: Also, there aren't that many places on earth at high risk of earthquake that also have poor construction, etc. Meaning any advance warning that a significant quake may hit somewhere can trigger "battening down the hatches".


You'd have to get a theory of why the rays correlate to earthquakes to have a usable model.

Realistically, if you live in a place with poor construction and relatively large earthquake risk, the sensible thing is to always have your emergency kit ready to go. Chances are, you have other infrastructure issues anyway, so keeping a week or so of emergency rations and water available might come in handy more often than somewhere that has tighter building codes (and enforcement). It would likely be hard for everyone to prep within the same two week window anyway. But, if it was quite specific and accurate, relief organizations could begin staging and start traveling to be closer and quicker to respond.

Doesn't seem like it's anywhere close to that from descriptions in this thread.


Personally, I prefer waiting for all of the animals to seek higher ground as my seismic activity indicator.


Seems figuring out what it is they're doing and duplicating it would be a straightforward thing to do. Surely there's a research group working on it?


“Fringe scientists” have been saying that cosmic and solar particles have been causing earthquakes for a long time.

It seems counter intuitive but an analogy is very simple:

A 15 mph breeze will do almost nothing to push a person, but will exert tremendous force on a sail of sufficient size.

Earth, in this analogy, is a really big sail.


The counter-argument is equally simple: a sail is a surface. The earth is a sphere. A sphere made of solid rock the size of a sail will be even less moved by a 15 mph breeze than this hypothetical person.


We aren't talking about the earth being blown off course though, just that there is a large amount of energy that was previously unaccounted for, and that this energy is large enough to trigger an earthquake at vulnerable points.


Do the back-of-the-envelope computation of how much energy reaches Earth from a coronal mass ejection and see how well your idea holds up. (a typical flare is 10^16 grams of matter moving at about 450 km/sec)
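Here's that back-of-the-envelope worked out (my own sketch, using the figures in the comment above plus standard values for Earth's radius and orbital distance):

```python
import math

# Figures from the comment: a typical flare is ~1e16 g moving at ~450 km/s.
mass_kg = 1e16 * 1e-3          # grams -> kilograms
speed_ms = 450e3               # km/s -> m/s

# Total kinetic energy of the ejection, assuming all mass moves at that speed.
ke_total = 0.5 * mass_kg * speed_ms**2          # ~1e24 J

# Only the solid angle subtended by Earth's disc at 1 AU actually arrives,
# assuming the ejection spreads over a full sphere (a generous simplification).
r_earth = 6.371e6              # m
au = 1.496e11                  # m
fraction = (math.pi * r_earth**2) / (4 * math.pi * au**2)

ke_at_earth = ke_total * fraction               # ~5e14 J
print(f"{ke_total:.1e} J total, {ke_at_earth:.1e} J intercepted by Earth")
```

For scale, a magnitude 7 earthquake radiates on the order of 10^15 J of seismic energy, so the intercepted energy is in the same ballpark as a single large quake, not orders of magnitude above it - and almost all of it is absorbed by the magnetosphere and atmosphere before reaching the ground.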


Freezing water (expanding into ice) shatters a cast-iron pipe. It's just about the creation of a crack to initiate stress relief.


You don't need a ton of energy to set off a possible chain reaction of events.


CMEs reaching earth likely account for a mere fraction of the energy flowing through the global electric circuit at any moment.

Moreover, your suggestion seems to indicate that you’re only considering kinetic energy, which would be a very myopic and reductionist take on these matters.


You're missing the analogy. The surface area of the sail is the entire volume of, e.g., piezoelectrically responsive rocks - so it's a gigantic sail.


>The earth is a sphere. A sphere made of solid rock [...]

The earth is NOT a solid rock. It's a very thin layer of rock, floating on top of a pool of magma, itself floating in space. The rock itself flows on all sorts of time scales as well.

A sphere will be forced by the wind, almost the same amount as any other object, regardless of its density or weight.


> It's a very thin layer of rock, floating on top of a pool of magma

it's not a pool of magma, the mantle is basically solid

(except on very large timescales)


So it will work for all historical earthquakes :) Sounds a bit like some climate research: tune model parameters until they perfectly match historical data. This should be possible to debunk in a shorter time, though.



