How to communicate with aliens through Twitter

Okay, I’m not exactly going to write about how to communicate with aliens, but rather, I want to discuss some estimates of how many potentially habitable planets might exist, and likewise, the number of alien civilizations with which we (humans) could potentially communicate. As for Twitter, you’ll see where I’m going with that shortly.

Let’s discuss the number of potentially habitable planets first. To begin, we have to recognize that the cosmic time scale is very large, and that over the galaxy’s lifespan so far, many potentially habitable planets may have already popped into and out of existence. We only want to count the ones that currently exist. To estimate how many exist at any given time, we’ll use this formula:

number of habitable planets at a given time = (rate of formation of planets)(average habitable lifetime of a planet)(% of planets that are habitable)

Okay, seems simple enough. All we have to do now is figure out what numbers the three parameters on the right should be. Let’s look at the rate of formation of planets first. Since astronomers haven’t been able to directly observe many exoplanets, there isn’t much data to provide an estimate. However, we can rewrite this parameter as:

rate of formation of planets=(rate of formation of stars)(number of planets per star)

The rate of formation of stars can be estimated as follows:

rate of formation of stars=(number of stars in galaxy)/(lifetime of the galaxy so far)

This of course assumes that the rate of formation of stars has been constant over time since the birth of the galaxy, which is not strictly true, but the difference shouldn’t throw off our estimate too much – at least, not compared to the margin of error on the other parameters we’re about to estimate (eg, the number of stars in the galaxy). Anyway, ballpark estimates for the number of stars in the galaxy and the lifetime of the galaxy both exist, so we can stop there.

As for the number of planets per star – this is more problematic. As mentioned earlier, exoplanets are difficult to observe, and not much empirical data exists on them yet. Certainly not enough to accurately estimate the number of planets per star. However, we do have one sample point – our solar system – from which we can extrapolate. Extrapolation is, of course, a risky enterprise, but in this case there isn’t much alternative. Our solar system has 8 planets (sorry Pluto), but unfortunately, not all solar systems have only one star, so even if we extrapolate that all solar systems with one star average 8 planets, we can’t say the same for solar systems with two or more stars. So we’re going to have to split our estimate for the number of habitable planets into two separate estimates: One for solar systems with one star, and one for binary systems (systems with more than two stars are rare enough that we’ll neglect them).

Our equation for the number of habitable planets at a given time now becomes slightly messier, as we include the fraction of non-binary systems and the fraction of binary systems. It’s estimated that about 2/3 of systems are single-star systems like ours, and about 1/3 are binary systems. We still don’t have any estimate of the number of planets per binary system or the fraction of binary planets which are habitable, but it is estimated that 50-60% of binary systems do allow planets to have stable orbits in a habitable zone, so let us assume (based on our own solar system) that every binary system that allows such planets has them. This allows us to conservatively replace these two parameters with the fraction ½.

So far we’ve only clarified the first of the three parameters that we’d originally set out to calculate. Fortunately, the second parameter, the average habitable lifetime of a planet, won’t be too difficult. Since there’s no data other than our own planet, again we’ll use the Earth and extrapolate. The Earth is about 4.5 billion years old (sorry creationists, using 6000 messes up my equations), life is thought to have started about 0.5 billion years in, and the Earth is expected to remain habitable (not necessarily for humans) for another 2.3 billion years. Thus the habitable lifetime of the Earth is 4.0 + 2.3 = 6.3 billion years – not too bad, considering the universe is only an estimated 13.7 billion years old, and our galaxy is only about 200 million years younger.

The third parameter – the fraction of planets that are habitable – is also relatively straightforward. Most of the solar systems in the galaxy are in a non-habitable zone (too much radiation from neighboring stars) – our own system remains in a habitable zone by staying between the spiral arms of the Milky Way as it revolves. It is estimated that only 1/10th of the solar systems are isolated enough to harbor life. For non-binary systems like our own, we’ll assume that the average number of planets is 8 and the fraction of habitable planets is 1/8 – multiplying them together gives us 1, which makes sense since as far as we know the Earth is the only habitable planet in our solar system. There is speculation about some of Jupiter’s moons, but if we want to include moons we have to go back and open a whole new can of worms. So forget moons for the moment. For binary systems, we already assumed that the number of planets per binary system times the fraction of habitable planets per binary system was ½, so we’ll stick with that.

Before we go any further, I want to address some potential criticisms of taking the fraction of habitable planets in single-star systems as 1/8. Obviously this estimate has quite a bit of uncertainty, since our solar system is the only data point in our sample. However, some might go even further and say that it is impossible to extrapolate from our own existence that any other planets with life exist, since if we did not exist then we would not be able to ponder such questions in the first place. I do not identify with such anthropocentric arguments – historically, they have been overturned again and again, and I have no inclination to assume that we occupy any privileged or unique position in the universe by mere virtue of our existence.

And now, the moment of truth – our estimate. Let’s be conservative about the number of stars in the galaxy, and say that there are 100 billion. In that case, our estimate for the number of habitable planets currently in existence is (100 billion/13.5 billion)(6.3 billion)(1/10)[(2/3)(8)(1/8)+(1/3)(1/2)]=3.9 billion worlds that could harbor life.
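
If you’d like to play with the numbers yourself, here’s a quick Python sketch of the same back-of-the-envelope calculation. Everything in it is just the rough figures quoted above (and the variable names are mine), so treat it as a toy, not a rigorous model:

# Rough estimate of potentially habitable planets currently in the galaxy,
# using the ballpark figures from the text above.
stars_in_galaxy      = 100e9    # conservative star count
galaxy_age_years     = 13.5e9   # approximate age of the galaxy
habitable_lifetime   = 6.3e9    # years a planet stays habitable (extrapolated from Earth)
isolated_fraction    = 1/10     # systems far enough from radiation to harbor life
single_star_fraction = 2/3      # single-star systems
binary_fraction      = 1/3      # binary systems
planets_term = single_star_fraction * 8 * (1/8) + binary_fraction * (1/2)

star_formation_rate = stars_in_galaxy / galaxy_age_years   # stars per year
habitable_planets = (star_formation_rate * habitable_lifetime
                     * isolated_fraction * planets_term)
print(f"{habitable_planets:.2e}")   # roughly 3.9e9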

Does that sound too high? If there are so many worlds that could harbor life, why haven’t the aliens made contact? Well, first of all, one should note that a world that could potentially harbor life may not necessarily do so. Second, even if it does harbor life, that life may be nothing more than rudimentary forms such as bacteria. Third, even if complex life exists, it may not necessarily be intelligent. Fourth, even if intelligent life exists, it may not be technologically advanced enough to communicate. Or, equally if not more likely, the intelligent life may be so technologically advanced that we lack the technology to receive its messages. For instance, a civilization moderately more advanced than ours might well send interstellar messages using neutrinos rather than electromagnetic radiation: neutrinos travel at very nearly the speed of light, and because they barely interact with matter, neutrino signals would not be absorbed or scattered nearly as much over interstellar distances.

Let’s try to quantify these possibilities a bit, and estimate the number of extraterrestrial civilizations that we could potentially communicate with. First, there isn’t any data on what fraction of worlds that could potentially harbor life actually do so, whether that life eventually becomes complex, whether that complex life eventually becomes intelligent, and so on. All we have to go on is what happened here on Earth, and we know that all of these things happened on Earth. So, despite the obvious uncertainty, I will extrapolate that any world that can potentially harbor life does, that rudimentary life eventually becomes complex life, that complex life eventually becomes intelligent, and that intelligent life will always leak electromagnetic radiation into space for some period of time. All of these fractions are therefore set to 1.

However, the human race lacks the technology to communicate with most of the galaxy – our strongest radio signals spread out in space and become too weak to pick up more than a hundred or so light years away. I won’t assume that an alien species will make more of an effort to pick up our signals than we do to pick up theirs – in fact it is rather arrogant to think that an alien civilization would undertake the expense of building arrays of kilometers-wide dishes on the off-chance that they would be able to watch some of our TV shows.

But even supposing that we effectively broadcast our messages up to 500 light years away – the galaxy is 100,000 light years across, so treating it as a flat disk 50,000 light years in radius, the fraction of it that we can communicate with is at most roughly (500^2)/(50,000^2) = 0.0001, or 0.01% of the galaxy. And even supposing that there were an alien civilization within our small broadcasting vicinity that wasn’t leaking radiation but was receiving our communications and responding – since we have only been broadcasting for about 70 years, they would have to be within 35 light-years for a round trip to fit, which is only about 0.5% of our broadcasting area. In fact, the percentage should be even smaller – for the galaxy-wide estimate I made the simplifying assumption that the galaxy is basically flat, but for the purposes of our tiny broadcasting range it makes more sense to think of it as a sphere, which means that the alien civilization would have to be within 35^3/500^3 = 0.0003, or 0.03% of the volume of our broadcasting sphere, for us to have received their response by now. For this reason it is far more likely that we would receive a transmission from an alien civilization sent to anyone capable of listening rather than a directed response.
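
Just to make the geometry above easy to check, here it is in a couple of lines of Python – the 500, 50,000, and 35 light-year figures are the same rough numbers used in the paragraph above:

broadcast_radius = 500      # light years at which we can (generously) be heard
galaxy_radius    = 50_000   # light years, treating the galaxy as a flat disk
reply_radius     = 35       # light years: 70 years of broadcasting / 2 for a round trip

print(broadcast_radius**2 / galaxy_radius**2)   # ~0.0001, i.e. 0.01% of the galaxy
print(reply_radius**3 / broadcast_radius**3)    # ~0.0003, i.e. 0.03% of our broadcast sphere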

Here on Earth, no systematic attempt at purposely directing more powerful transmissions to alien civilizations has been made (for reasons which I will soon elaborate), so I will assume that alien civilizations are not likely to expend much effort on such tasks either – that if we were going to receive any signals from alien civilizations they would most likely be unintentionally leaked broadcasts, like the vast majority of ours. Which in turn means that if we were going to pick up any such broadcasts, we almost certainly would have already.

Let’s examine the search for extraterrestrial life from the transmitter’s perspective. If we humans ever decided to construct a transmitter to contact extraterrestrials – one that could transmit messages that we ourselves could receive – the cost of such an enterprise would be, for lack of a better word, astronomical. At least for trying to send an actual message. Rather than sending actual messages, it would be more economical to construct a Benford beacon – that is, a transmitter that pings (ie, “tweets”) various star systems with short but steady bursts of electromagnetic radiation that are obviously not produced by any natural source, which is much cheaper than transmitting actual information. Then, in the event that an alien civilization actually picks up on it and focuses in, we could also send an actual message along with it on a sideband at lower power. Extraterrestrials focused on minimizing costs by using a Benford beacon would be transmitting at frequencies higher than what SETI (the Search for Extraterrestrial Intelligence) is currently examining.

It should be noted that even with a Benford beacon, it would cost about $200,000 per light year of coherent pinging – or $20 billion to send a single ping to a star across the galaxy. To send an actual message across the galaxy would cost $7 trillion. Regardless of the conversion to alien currency, somebody would be paying a lot of long-distance fees, and there certainly wouldn’t seem to be much return on an investment sent halfway across the galaxy – for closer distances, perhaps. In any case, a realistic transmitter certainly would not transmit the types of messages that SETI hopes to receive – nor would SETI find the type of message that we ourselves would transmit – which is a rather damning indictment of the SETI program.

The only other parameter that we must estimate is the length of time for which another civilization could communicate with us. We have only been able to broadcast messages to space for about 70 years – less than the blink of an eye from the cosmic perspective. In fact, even if you were watching from the cosmic perspective without blinking, you would still miss it.

One might argue that the human race could potentially be transmitting strong signals into space for centuries to come, but if you look at the trend of recent technologies, we are learning how to send messages more directly and efficiently, which means that the Earth is moving more and more toward radio silence. Furthermore, our future efforts at interstellar communication could very well rely solely on neutrinos or other exotic particles better suited to the task, rather than electromagnetic radiation. Plus there’s always the possibility that we’ll nuke ourselves to extinction. All of which means that 70 years isn’t an entirely unreasonable estimate of how long an extraterrestrial civilization might broadcast the types of messages which we are in a position to receive.

So, keeping the fleeting transience of our existence in mind, along with the depressing limitations of our technology, and applying our calculations from habitable planets to communicating civilizations, we get: (100 billion/13.5 billion)(70)(1/10000)(1/10)[(2/3)(8)(1/8)+(1/3)(1/2)]=0.004 civilizations that we are able to communicate with – which would explain the lack of interstellar conversation. The silver lining is that as the human race advances, people will invent and discover new ways (while perhaps rethinking the old ways) of communicating across the interstellar void, which will increase the possibility of making contact.
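
For completeness, here’s that final calculation in Python – again, every input is just one of the rough guesses above, so the output is illustrative at best:

star_formation_rate = 100e9 / 13.5e9   # stars per year
broadcast_lifetime  = 70               # years of detectable leakage
reachable_fraction  = 1/10_000         # ~0.01% of the galaxy within range of our signals
isolated_fraction   = 1/10
planets_term        = (2/3) * 8 * (1/8) + (1/3) * (1/2)

civilizations = (star_formation_rate * broadcast_lifetime
                 * reachable_fraction * isolated_fraction * planets_term)
print(f"{civilizations:.3f}")   # ~0.004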


Family history

The latter part of the 19th century saw the formulation of Maxwell’s equations in 1862, which completely described the behavior of electric and magnetic fields. In 1879 Edison developed his commercially viable incandescent light bulb. In 1893, Tesla demonstrated wireless communication for the first time. By 1900, with these mysterious forces finally understood and applied, it seemed that the physical world had been almost completely explained. The physicist Lord Kelvin is famously said to have remarked, “There is nothing new to be discovered in physics. All that remains is more and more precise measurement.”

Within five years, Lord Kelvin could not possibly have been more wrong. A revolution in physics began with a small puzzle left over from Maxwell’s equations: The equations predicted that light should have a constant speed. The question was, constant with respect to what? It was believed that there must be some sort of aether permeating space, through which light traveled. Unfortunately, attempts like the Michelson-Morley experiment of 1887 had failed to detect any such aether.

In 1905, Einstein published his theory of (special) relativity, which correctly answered the question: Light moves at the same speed relative to every observer, no matter how the observer is moving. This revelation had profound consequences: It meant that there was no absolute concept of space or time; everything was relative to one’s motion through space and time. Furthermore, it was impossible to quantify one’s motion through space without also quantifying one’s motion through time; space and time could only be understood as a single entity, spacetime, and motion through spacetime could be no faster than the speed of light. When applied to electricity and magnetism, relativity showed that electric forces could be seen as magnetic forces and vice versa, depending on one’s motion through spacetime, which meant that it also no longer made sense to speak of electricity and magnetism separately; the two forces were united as electromagnetism.

Later that year, Einstein also showed that his theory of relativity meant that mass could be measured in terms of energy, and vice versa. Because light moved at the cosmic speed limit, it could have no mass; thus it was now understood that light was pure energy, and that how much energy light carried depended only on its frequency. Einstein’s famous equation, E=mc², demonstrated that if matter were somehow converted to energy, enormous amounts of energy would be released. In 1920 Arthur Eddington (correctly) proposed that such a conversion might be the mechanism by which stars, including our sun, produce their energy. From there, one might naturally wonder whether such a conversion process could be replicated on Earth.

My grandfather was a semester away from graduating from the City College of New York during World War II with a degree in chemistry when he was drafted into the Army. He completed basic training at Fort Hood, Texas, and then one night was abruptly woken up in his room, told to pack his bags and be ready to board a train. On board the train he met a number of other people from the base, many of whom also had advanced educations in chemistry and physics. The men on board quickly surmised that they had been chosen for some sort of scientific undertaking (although there were jokes among themselves about having been picked for an elite fighting unit), but they had no idea what the nature of the project might be until they reached their destination at Los Alamos, New Mexico, where they were taken into an auditorium and briefed by Dr. Oppenheimer himself.

The project of course was the Manhattan Project, the effort that led to the creation of the first atomic bomb, and my grandfather was chosen to work on it partly because of his experience with photography, his minor in college. In order to achieve the explosion created by “Fat Man”, the bomb detonated over Nagasaki, its plutonium core needed to be compressed to a supercritical density, at which point neutrons released by fissioning plutonium nuclei would split other nuclei faster than they could escape, initiating a runaway chain reaction and releasing enormous amounts of energy in the process. The plutonium core had to be compressed in a perfectly symmetrical manner, however, or the core would never reach supercriticality and the full chain reaction would not take place. To achieve such a precise compression, conventional chemical explosives needed to be molded very carefully around the plutonium core. As it turned out, this was not a simple task, and a lot of prior testing was required to ensure a symmetrical implosion.

To run tests for symmetrical implosions, chemical explosives were wrapped around metal pipes prior to detonation, and the resulting deformation of the pipes was studied to determine whether the implosion was symmetrical. Of course, early tests did not produce symmetrical implosions. High-speed photography would have made it possible to determine the cause of the asymmetry; unfortunately, no camera in existence at the time had a shutter speed fast enough to catch the pipes in the process of deforming.

Film cameras work by exposing a light-sensitive film to light; the film is then developed in a darkroom to produce the final photograph. The shutter controls the length of time that light is allowed to strike the photographic film, and usually involves a mechanical apparatus that opens and closes. Prior to the 1940s, the fastest cameras had shutter speeds of about one thousandth of a second, which was far too slow for what was required at Los Alamos.

However, an engineer named Harold Edgerton had been independently working on a camera, now known as the Rapatronic camera, which could achieve shutter speeds on the order of 10 nanoseconds, one hundred thousand times faster than any predecessor. His camera used a shutter that relied on the polarization of light.

As a consequence of Maxwell’s equations, a light wave can be viewed as a pair of oscillating electric and magnetic fields. If the electric field oscillates in a particular direction when the wave is viewed head-on, the light is said to be polarized in that direction. It is possible to create a polarized beam of light by passing it through a filter of very thin conductive wires running in parallel. Any component of the light polarized along the wires drives a small electric current in them and is absorbed or reflected; thus only light that is polarized perpendicular to the wires passes through.

Certain liquids, such as nitrotoluene and nitrobenzene, change the polarization of light passing through them when a high voltage is applied across them (the Kerr effect). Edgerton’s camera used a cell of such a liquid – a Kerr cell – sandwiched between two crossed polarizing filters to achieve its high shutter speeds. With the two filters crossed, no light normally passes through. To take a picture, a voltage pulse would be applied to the liquid cell. While the pulse lasts, light passing through the first filter has its polarization rotated the necessary ninety degrees to pass through the second filter and strike the photographic film. The effect in the liquid lasts for approximately ten nanoseconds, after which the shutter is again closed.

My grandfather’s understanding of photography allowed him to participate in the process of replicating Edgerton’s camera and studying the deformation of pipes necessary to achieve a symmetrical implosion, which was critical for the development of the atomic bomb. Rapatronic cameras were also used to take pictures of the atomic bomb tested at Trinity in the first few milliseconds of its explosion. My grandfather has a number of such photographs in his possession.


Calculating retirement

Suppose you start with a principal P, which you place into an account with an interest rate of r per year, and you add x dollars to the account every year for n years.  Then we could write:

a(0) = P

a(1) = P(1+r)+x

a(2) = a(1)(1+r)+x

.

.

.

a(n) = a(n-1)(1+r)+x

By following the pattern backward, we can conclude that a(n)=P(1+r)^n+x[(1+r)^(n-1)+(1+r)^(n-2)+…+1], which makes sense since it’s just the principal times (1+r) for n years, plus the x dollars you put into the account each year, each of which gets compounded for a shorter and shorter number of years.

The series (1+r)^(n-1)+(1+r)^(n-2)+…+1 can be summed as follows:

S = (1+r)^(n-1)+(1+r)^(n-2)+…+1

(1+r)S = (1+r)^n+(1+r)^(n-1)+…+(1+r)

S-(1+r)S=S(1-(1+r))=S(-r)=1-(1+r)^n

=>S=((1+r)^n -1)/r

Thus a(n) = P(1+r)^n+x[((1+r)^n -1)/r], which is a nice explicit formula.
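
If you’d rather check the algebra numerically than trust my geometric series, here’s a small Python sketch comparing the closed form to the year-by-year recursion (the inputs at the bottom are just arbitrary example values):

def a_recursive(P, r, x, n):
    # Apply the recursion a(k) = a(k-1)(1+r) + x for n years.
    balance = P
    for _ in range(n):
        balance = balance * (1 + r) + x
    return balance

def a_closed(P, r, x, n):
    # The explicit formula a(n) = P(1+r)^n + x[((1+r)^n - 1)/r].
    return P * (1 + r)**n + x * ((1 + r)**n - 1) / r

print(a_recursive(10_000, 0.07, 5_000, 30))   # these two should agree
print(a_closed(10_000, 0.07, 5_000, 30))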

Now, when you retire, suppose you’d like to live on y dollars per year.  As you withdraw money, the remaining money in the account will still accrue interest, so we can write the years after retirement as:

b(0)=a(n)

b(1)=(a(n)-y)(1+r)

b(2)=(b(1)-y)(1+r)

.

.

.

b(n)=(b(n-1)-y)(1+r)

The series b(n) can be summed in basically the same way as a(n), so we conclude:

b(n) = a(n)(1+r)^n-y(1+r)[((1+r)^n -1)/r]

This is all very good; we can use a(n) to simulate your account building to retirement, and b(n) to simulate your account after retirement.  However, we can do even better.  After a certain point, your account will have enough money in it that the interest it accrues will be greater than the money you need to withdraw from it.  So you really only need to save enough money for the interest on your retirement account to be equal to the money you are withdrawing from it.  When you reach this equilibrium, b(1)=b(2)=…=b(n)=a(n).  So let’s look at b(1)=a(n):

(a(n)-y)(1+r) = a(n)

After a little bit of algebra, we see that y = a(n)/(1+1/r).  Since a(n)=P(1+r)^n+x[((1+r)^n -1)/r], we plug this in to obtain:

y = {P(1+r)^n+x[((1+r)^n -1)/r]}/(1+1/r), which tells you how much you can withdraw from your account, given x, r, P, and n.

From this equation we can also solve for x:

x = [r/((1+r)^n – 1)]*[y(1+1/r)-P(1+r)^n], which tells you how much you should put into your account, given y, r, P, and n.

Let’s do an example.  Suppose you are 22, have no savings (P=0), and you’d like to retire on $100k per year at age 65.  How much money do you need to put into your account per year?  I’ve read that the stock market has historically returned an average of 11 percent (not adjusted for inflation), but let’s be conservative and assume r=0.07.  Using n=43 and y=100,000, we obtain x=6169.15.  So to retire on $100k per year by age 65, you’d only need to save ~$6,200 per year.  Also, y varies directly with x (when P=0), so to retire on $50k you’d only have to save ~$3,100 per year, etc.
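
Here’s the same example in Python, in case you want to try other numbers (the inputs are the ones from the example above; the little loop at the end just confirms that withdrawing y each year really does leave the balance at equilibrium):

def required_contribution(y, r, P, n):
    # x = [r/((1+r)^n - 1)] * [y(1+1/r) - P(1+r)^n], as derived above.
    return (r / ((1 + r)**n - 1)) * (y * (1 + 1/r) - P * (1 + r)**n)

y, r, P, n = 100_000, 0.07, 0, 43
x = required_contribution(y, r, P, n)
print(round(x, 2))   # ~6169

balance = P
for _ in range(n):                  # save for n years
    balance = balance * (1 + r) + x
for _ in range(20):                 # then withdraw y per year in retirement
    balance = (balance - y) * (1 + r)
print(round(balance, 2))            # holds steady at ~1.53 million, i.e. y(1+1/r)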

Of course, this doesn’t consider taxes (which you can adjust for by subtracting them from y) or inflation, but I think it’s still a pretty good approximation.  A note about inflation:  In order to have the equivalent of $100,000 of spending power in 2053, you would have to increase your yearly contribution in line with inflation – that is, multiply the $6,169 by (1+I) each year, where I is the rate of inflation.  Historically, inflation has averaged around 3.4%, so by the end of the 43 years you would be putting ~$26,000 per year into the account to keep up.  However, if wages increase accordingly, $26,000 in 2053 will be the equivalent of ~$6,200 today.


Dark matter

Dark matter is posited to exist in order to explain observed discrepancies in the rotation of galaxies – basically, galaxies rotate faster than you would expect, considering the amount of matter contained in stars and black holes.  But I think that there are several problems with attributing this discrepancy to dark matter.
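
To get a feel for the size of the discrepancy, here’s a toy Python sketch of the Keplerian expectation.  The ~10^11 solar masses of enclosed luminous matter is just a rough, assumed figure, and real mass models are far more careful than this – the point is only that if the visible matter were all there is, orbital speeds should fall off roughly as 1/sqrt(r) in the outskirts, whereas measured rotation curves stay roughly flat:

import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
LY    = 9.461e15    # meters per light year
M_enclosed = 1e11 * M_SUN   # assumed luminous mass inside the orbit (rough figure)

for r_ly in (10_000, 20_000, 40_000):
    v = math.sqrt(G * M_enclosed / (r_ly * LY))   # Keplerian circular speed
    print(f"{r_ly} ly: {v / 1000:.0f} km/s")      # falls like 1/sqrt(r)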

First of all, as far as I can tell, the dark matter hypothesis doesn’t make any falsifiable predictions.  The best it can do is offer explanations for certain observations, but even then, if the observations don’t match current theories about dark matter, the nature of dark matter can simply be changed to reflect the new evidence.  So no matter how much evidence mounts against dark matter, it can simply take on more and more bizarre properties.

Also, every scientific effort involving dark matter seems to be aimed at proving its existence – which makes sense, considering that it is nearly impossible to prove that something doesn’t exist:  If I say that there are three-headed unicorns, how would you go about proving me wrong?  You can’t, but that doesn’t mean I’m right – the burden should be on me to prove that they exist!  So it is puzzling to me that there would be so much support for dark matter in the scientific community.

The methods used to calculate what the theoretical rotation of a galaxy should be are Kepler’s third law (which is only strictly valid for two-body systems – the Milky Way contains ~100 billion stars), the virial theorem from statistical physics (which may not give accurate results, depending on certain assumptions), or methods from fluid dynamics (which may not be valid, since a galaxy is not a fluid).  The reason physicists have to resort to statistical methods to calculate galactic rotation is that the computing power required to simulate an actual galaxy rotating is (for the time being) prohibitive – each star technically interacts with every other star.

Personally, while there are some alternative theories which require slight modifications to Newton’s laws at very low accelerations (known as MOND – modified Newtonian dynamics), I think that the rotational discrepancy would be explained if only galactic rotation could be modeled accurately enough.  Since the Milky Way is ~100,000 light-years across, I think that the gravitational time delay also plays a significant role in modeling.  MOND theories generally fail for strange cases involving colliding galaxies and whatnot.

I read an interesting paper (http://arxiv.org/abs/0707.2459) several days ago about how in certain globular clusters, which are expected not to contain dark matter (tidal forces from the host galaxy should have stripped it away), the same rotational discrepancies still exist.  Since these discrepancies were used to infer the existence of dark matter in the first place, I think that this is definitely a strike against the dark matter hypothesis.

One simple way of gathering evidence for my hypothesis would be to examine a number of different types of galaxies and see if there is a correlation between galaxy type and the fraction of dark matter inferred for each galaxy.  According to the dark matter hypothesis, one would naively expect there to be no such correlation (although I’m sure dark matter theorists could find a way of squirming around it if a correlation were actually found), while according to MOND theories, the same rotational discrepancies should appear in all galaxies.

Update:  I talked briefly to a professor in the astronomy department, and apparently there are correlations between the various galaxy types (spiral, elliptical, dwarf, etc) and the amounts of dark matter present.  He didn’t seem to think much of it.


Force fields, meteorology, and nuclear fusion

I came across a very interesting anecdote involving the accidental creation of a “force field” at a polypropylene factory in 1980 (http://www.esdjournal.com/articles/final/final.htm). I’m inclined to believe it, because it seems like a story which would be unreasonable to make up, ie, sometimes truth is stranger than fiction. You can read the story, but one of the comments below it set me thinking about exactly how such a force field would be created.

I think that the key factor is the fact that the polypropylene sheet was moving on the assembly line. Moving electric charges generate magnetic fields, which in turn can exert strong forces on various particles. When electrons move through wires, they drift at less than a millimeter per second, so even a slow-moving assembly line would be moving charges much more quickly. Also, since there was so much electric charge on the polypropylene, the resulting magnetic field underneath it would have caused ionized air particles to start moving in circles (as any charged particle moving through a magnetic field will begin to do), creating a whole bunch of tiny charged vortexes. These vortexes would be held together because their induced magnetic fields would have sent any stray particles back into the vortex, while electrostatic repulsion would have prevented them from collapsing. And since parallel currents attract, these vortexes would have created a sheath of compressed ionized air right underneath the center of the polypropylene sheet, where its magnetic field would be running exactly parallel to the sheet.

Anyway, this led me to read about vortex theory, and how it is hypothesized that similar charged vortexes can also form during storms, inside tornadoes and ball lightning. I also read the chapter on lightning in Feynman’s lectures, and surprisingly, it turns out the mechanism behind the origin of lightning is still unknown. Lightning occurs because the bottoms of storm clouds are negatively charged and the tops are positively charged, but no one knows why this should be, although there are several hypotheses. It turns out that meteorological phenomena are quite complicated, partly because the potential difference between the surface of the Earth and the top of the atmosphere is about 400,000 volts.

The charged vortex theory would supposedly explain the large debris ejection zones near the bottom of tornadoes (the negatively charged particles in the tornado would literally be ripping positively charged particles off the ground – there are other explanations for why this would happen, though). Now here’s a real kicker – if ball lightning is caused by rotating vortexes of plasma, then since plasma is such a great conductor, enough electric charge could be inside it to allow the magnetic fields generated inside these vortexes to wrap the vortexes tightly enough to set off nuclear fusion. Such an explosion, if it occurred in nature, would release enough neutrons to create large telltale concentrations of carbon 14. Interestingly enough, such telltale concentrations were actually found after a large mysterious explosion, which occurred after a prolonged period of unusual geophysical activity. The event: Tunguska.

This is definitely fringe science, but in the absence (or proliferation) of other possible explanations, it’s fun to think about.


An integral from Euler

[Image: an integral from Euler]
