There is now a relatively long introduction at the top of this blog, due to the present nuclear threat caused by disarmament and arms control propaganda, and the dire need to get the facts out past pro-Russian media influencers or loony mass media which has never cared about nuclear and radiation effects facts, so please scroll down to see blog posts. The text below in blue is hyperlinked (direct to reference source materials, rather than numbered and linked to reference at the end of the page) so you can right-click on it and open in a new tab to see the source. This page is not about opinions, it provides censored out facts that debunk propaganda.

Click here for the key declassified nuclear testing and capability documents compilation (EM-1 related USA research reports and various UK nuclear weapon test reports on blast and radiation), from nukegate.org

We also uploaded an online-viewable version of the full text of the 1982 edition of the UK Government's Domestic Nuclear Shelters - Technical Guidance, including secret UK and USA nuclear test report references and extracts proving protection against collateral damage, for credible deterrence (linked here).

https://hbr.org/1995/05/why-the-news-is-not-the-truth: "The news media and the government are entwined in a vicious circle of mutual manipulation, mythmaking, and self-interest. Journalists need crises to dramatize news, and government officials need to appear to be responding to crises. Too often, the crises are not really crises but joint fabrications. The two institutions have become so ensnared in a symbiotic web of lies that the news media are unable to tell the public what is true and the government is unable to govern effectively. That is the thesis advanced by Paul H. Weaver, a former political scientist (at Harvard University), journalist (at Fortune magazine), and corporate communications executive (at Ford Motor Company), in his provocative analysis entitled News and the Culture of Lying: How Journalism Really Works ... The news media and the government have created a charade that serves their own interests but misleads the public. Officials oblige the media’s need for drama by fabricating crises and stage-managing their responses, thereby enhancing their own prestige and power. Journalists dutifully report those fabrications. Both parties know the articles are self-aggrandizing manipulations and fail to inform the public about the more complex but boring issues of government policy and activity. What has emerged, Weaver argues, is a culture of lying."

This blog's URL is now "www.nukegate.org" (when this nuclear effects blog began in 2006, "glasstone.blogspot.com" was used since it briefly mentioned the key issue of Glasstone's obfuscating "Effects of Nuclear Weapons", specifically the final 1977 edition, which omitted not just the credible deterrent "use" of nuclear weapons but also the key final "Principles of protection" chapter that had been present in all previous editions, and which ignored the relatively clean neutron bombs developed in the intervening years as a credible deterrent to the concentrations of force needed for aggressive invasions, such as the 1914 invasion of Belgium and the 1939 invasion of Poland, both of which triggered world wars). Those editors themselves were not subversives, but both had nuclear weapons security clearances which constituted political groupthink censorship control regarding which designs of nuclear weapons they could discuss and the level of technical data (they include basically zero information on their sources, and the "bibliographies" in most cases point not to their classified nuclear testing sources but merely to further reading); the 1977 edition had been initially drafted in 1974 solely by EM-1 editor Dolan at SRI International, and was then submitted to Glasstone who made further changes. The persistent and hypocritical propaganda tactic of the Russian World Peace Council and also of hardline arms controllers - supported by some arms industry loons who have a vested interest in conventional war - has been to try to promote lies on nuclear weapons effects to get rid of credible Western nuclear deterrence of the provocations that start wars. Naturally, the Russians have now stocked 2,000+ tactical neutron weapons of the very sort they got the West to disarm.

This means that they can invade territory with relative impunity, since the West won't deter such provocations by flexible response - the aim of Russia is to push the West into a policy of massive retaliation of direct attacks only, and then use smaller provocations instead - and Russia can then use its tactical nuclear weapons to "defend" its newly invaded territories by declaring them to now be part of Mother Russia and under Moscow's nuclear umbrella. Russia has repeatedly made it clear - for decades - that it expects a direct war with NATO to rapidly escalate into nuclear WWIII and it has prepared civil defense shelters and evacuation tactics to enable it. Herman Kahn's public warnings of this date back to his testimony to the June 1959 Congressional Hearings on the Biological and Environmental Effects of Nuclear War, but for decades were deliberately misrepresented by most media outlets. President Kennedy's book "Why England Slept" makes it crystal clear how exactly the same "pacifist" propaganda tactics in the 1930s (that time it was the "gas bomb knockout blow has no defense so disarm, disarm, disarm" lie) caused war, by using fear to slow credible rearmament in the face of state terrorism. By the time democracies finally decided to issue an ultimatum, Hitler had been converted - by pacifist appeasement - from a cautious tester of Western indecision, into an overconfident aggressor who simply ignored last-minute ultimatums.

Glasstone and Dolan's 1977 Effects of Nuclear Weapons (US Government) is written in a highly ambiguous fashion (negating nearly every definite statement with a deliberately obfuscating contrary statement to leave a smokescreen legacy of needless confusion, obscurity and obfuscation), omits nearly all key nuclear test data and provides instead misleading generalizations of data from generally unspecified weapon designs tested over 60 years ago which apply to freefield measurements on unobstructed radial lines in deserts and oceans. It makes ZERO analysis of the overall shielding of radiation and blast by their energy attenuation in modern steel and concrete cities, and even falsely denies such factors in its discussion of blast in cities and in its naive chart for predicting the percentage of burns types as a function of freefield outdoor thermal radiation, totally ignoring skyline shielding geometry (similar effects apply to freefield nuclear radiation exposure, despite vague attempts to dismiss this by non-quantitative talk about some scattered radiation arriving from all angles). It omits the huge variations in effects due to weapon design e.g. cleaner warhead designs and the tactical neutron bomb. It omits quantitative data on EMP as a function of burst yield, height and weapon design.

It omits most of the detailed data collected from Hiroshima and Nagasaki on the casualty rates as a function of type of building or shelter and blast pressure. It fails to analyse overall standardized casualty rates for different kinds of burst (e.g. shallow underground earth penetrators convert radiation and blast energy into ground shock and cratering against hard targets like silos or enemy bunkers). It omits a detailed analysis of blast precursor effects. It omits a detailed analysis of fallout beta and gamma spectra, fractionation, specific activity (determining the visibility of the fallout as a function of radiation hazard, and the mass of material to be removed for effective decontamination), and data which does exist on the effect of crater soil size distribution upon the fused fallout particle size distribution (e.g. tests like Small Boy in 1962 on the very fine particles at Frenchman Flats gave mean fallout particle sizes far bigger than the pre-shot soil, proving that - as for Trinitite - melted small soil particles fuse together in the fireball to produce larger fallout particles, so the pre-shot soil size distribution is irrelevant for fallout analysis).

By generally (with few exceptions) lumping the "effects" of all types of bursts together into chapters dedicated to specific effects, it falsely gives the impression that all types of nuclear explosions produce similar effects with merely "quantitative differences". This is untrue because air bursts eliminate fallout casualties entirely, while slight burial (e.g. earth penetrating warheads) eliminates thermal effects (including fires and dust "climatic nuclear winter" BS), the initial radiation and severe blast effects, while massively increasing ground shock, and the same applies to shallow underwater bursts. So a more objective treatment to credibly deter all aggression MUST emphasise the totally different collateral damage effects, by dedicating chapters to different kinds of burst (high altitude/space bursts, free air bursts, surface bursts, underground bursts, underwater bursts), and would include bomb design implications on these effects in detail. A great deal of previously secret and limited-distribution nuclear effects data has been declassified since 1977, and new research has been done. Our objectives in this review are: (a) to ensure that an objective independent analysis of the relevant nuclear weapons effects facts is placed on the record in case the current, increasingly vicious Cold War 2.0 escalates into some kind of limited "nuclear demonstration" by aggressors to try to end a conventional war by using coercive threats, (b) to ensure the lessons of tactical nuclear weapon design for deterring large scale provocations (like the invasions of Belgium in 1914 and Poland in 1939 which triggered world wars) are re-learned, in contrast to Dulles' "massive retaliation" (incredible deterrent) nonsense, and finally (c) to provide some push to Western governments to "get real" with our civil defense, to try to make credible our ageing "strategic nuclear deterrent". We have also provided a detailed analysis of recently declassified Russian nuclear warhead design data, shelter data, effects data, tactical nuclear weapons employment manuals, and some suggestions for improving Western thermonuclear warheads to improve deterrence.

ABOVE: "missile gap" propaganda debunked by secret 1970s data; Kennedy relied on US nuclear superiority. Using a flawed analysis of nuclear weapons effects on Hiroshima - based on lying unclassified propaganda reports and ignorant dismissals of civil defense shelters in Russia (again based on Hiroshima propaganda by groves in 1945) - America allowed Russian nuclear superiority in the 1970s. Increasingly, the nuclear deterrent was used by Russia to stop the West from "interfering" with its aggressive invasions and wars, precisely Hitler's 1930s strategy with gas bombing knockout-blow threats used to engineer appeasement. BELOW: H-bomb effects and design secrecy led to tragic mass media delusions, such as the 18 February 1950 Picture Post claim that the H-bomb can devastate Australia (inspiring the Shute novel and movie "On the Beach" and also other radiation scams like "Dr Strangelove" to be used by Russia to stir up anti Western disarmament movement to help Russia win WWIII). Dad was a Civil Defense Corps Instructor in the UK when this was done (the civil defense effectiveness and weapon effects facts on shelters at UK and USA nuclear tests were kept secret and not used to debunk lying political appeasement propaganda tricks in the mass media by sensationalist "journalists" and Russian "sputniks"):

Message to mass-media journalists: please don't indulge in lying "no defence" propaganda as was done by most of the media in previous pre-war crises!

Above: Edward Leader-Williams on the basis for UK civil defence shelters in the SECRET 1949 Royal Society London symposium on the physical effects of atomic weapons, a study that was kept secret by the Attlee Government and subsequent UK governments, instead of being openly published to enhance public knowledge of civil defence effectiveness against nuclear attack. Leader-Williams also produced the vital civil defence report seven years later (published below for the first time on this blog), proving that civil defence sheltering and city centre evacuation are effective against 20 megaton thermonuclear weapons. Also published in the same secret symposium, which was introduced by Penney, was Penney's own Hiroshima visit analysis of the percentage volume reduction in overpressure-crushed empty petrol cans, blueprint containers, etc., which gave a blast partition yield of 7 kilotons (or 15.6 kt total yield, if the blast is taken as 45% of total yield, i.e. 7/0.45 = 15.6, as done in later AWRE nuclear weapons test blast data reports). Penney, in a 1970 updated paper, allowed for the blast reduction due to the damage done in the city bursts.

ABOVE: The 1996 Northrop EM-1 (see extracts below showing protection by modern buildings and also simple shelters very close to nuclear tests; note that Northrop's entire set of damage ranges as a function of yield for underground shelters, tunnels, silos are based on two contained deep underground nuclear tests of different yield scaled to surface burst using the assumption of 5% yield ground coupling relative to the underground shots; this 5% equivalence figure appears to be an exaggeration for compact modern warheads, e.g. the paper “Comparison of Surface and Sub-Surface Nuclear Bursts,” from Steven Hatch, Sandia National Laboratories, to Jonathan Medalia, October 30, 2000, shows a 2% equivalence, e.g. Hatch shows that 1 megaton surface burst produces identical ranges to underground targets as a 20 kt burst at >20m depth of burst, whereas Northrop would require 50kt) has not been openly published, despite such protection being used in Russia! This proves heavy bias against credible tactical nuclear deterrence of the invasions that trigger major wars that could escalate into nuclear war (Russia has 2000+ dedicated neutron bombs; we don't!) and against simple nuclear proof tested civil defence which makes such deterrence credible and of course is also of validity against conventional wars, severe weather, peacetime disasters, etc.
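For illustration of the coupling figures just quoted, here is a minimal Python sketch (the 5% and 2% factors are the Northrop and Hatch values cited above; the function name is ours, purely hypothetical):

```python
# Sketch: "equivalent contained underground yield" of a surface burst against
# buried targets, using the ground-coupling fractions quoted above
# (Northrop EM-1: ~5%; Hatch, Sandia, 2000: ~2%). Illustrative only.

def equivalent_underground_yield_kt(surface_yield_kt, coupling_fraction):
    """Yield of a fully coupled underground burst giving similar ground shock."""
    return surface_yield_kt * coupling_fraction

surface_kt = 1000.0  # a 1 megaton surface burst
print(equivalent_underground_yield_kt(surface_kt, 0.02))  # ~20 kt (Hatch's figure)
print(equivalent_underground_yield_kt(surface_kt, 0.05))  # ~50 kt (Northrop's figure)
```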

The basic fact is that nuclear weapons can deter/stop invasions unlike the conventional weapons that cause mass destruction, and nuclear collateral damage is eliminated easily for nuclear weapons by using them on military targets, since for high yields at collateral damage distances all the effects are sufficiently delayed in arrival to allow duck and cover to avoid radiation and blast wind/flying debris injuries (unlike the case for the smaller areas affected by smaller yield conventional weapons, where there is little time on seeing the flash to duck and cover to avoid injury), and as the original 1951 SECRET American Government "Handbook on Capabilities of Atomic Weapons" (limited report AD511880L, forerunner to today's still secret EM-1) stated in Section 10.32:

"PERHAPS THE MOST IMPORTANT ITEM TO BE REMEMBERED WHEN ESTIMATING EFFECTS ON PERSONNEL IS THE AMOUNT OF COVER ACTUALLY INVOLVED. ... IT IS OBVIOUS THAT ONLY A FEW SECONDS WARNING IS NECESSARY UNDER MOST CONDITIONS TO TAKE FAIRLY EFFECTIVE COVER. THE LARGE NUMBER OF CASUALTIES IN JAPAN RESULTED FOR THE MOST PART FROM THE LACK OF WARNING."

As for Hitler's stockpile of 12,000 tons of tabun nerve gas, whose strategic and also tactical use was deterred by proper defences (gas masks for all civilians and soldiers, as well as UK stockpiles of fully trial-tested deliverable biological agent anthrax and mustard gas retaliation capacity), it is possible to deter strategic nuclear escalation to city bombing, even within a world war with a crazy terrorist, if all the people are protected by both defence and deterrence.

J. R. Oppenheimer (opposing Teller), February 1951: "It is clear that they can be used only as adjuncts in a military campaign which has some other components, and whose purpose is a military victory. They are not primarily weapons of totality or terror, but weapons used to give combat forces help they would otherwise lack. They are an integral part of military operations. Only when the atomic bomb is recognized as useful insofar as it is an integral part of military operations, will it really be of much help in the fighting of a war, rather than in warning all mankind to avert it." (Quotation: Samuel Cohen, Shame, 2nd ed., 2005, page 99.)

‘The Hungarian revolution of October and November 1956 demonstrated the difficulty faced even by a vastly superior army in attempting to dominate hostile territory. The [Soviet Union] Red Army finally had to concentrate twenty-two divisions in order to crush a practically unarmed population. ... With proper tactics, nuclear war need not be as destructive as it appears when we think of [World War II nuclear city bombing like Hiroshima]. The high casualty estimates for nuclear war are based on the assumption that the most suitable targets are those of conventional warfare: cities to interdict communications ... With cities no longer serving as key elements in the communications system of the military forces, the risks of initiating city bombing may outweigh the gains which can be achieved. ...

‘The elimination of area targets will place an upper limit on the size of weapons it will be profitable to use. Since fall-out becomes a serious problem [i.e. fallout contaminated areas which are so large that thousands of people would need to evacuate or shelter indoors for up to two weeks] only in the range of explosive power of 500 kilotons and above, it could be proposed that no weapon larger than 500 kilotons will be employed unless the enemy uses it first. Concurrently, the United States could take advantage of a new development which significantly reduces fall-out by eliminating the last stage of the fission-fusion-fission process.’

- Dr Henry Kissinger, Nuclear Weapons and Foreign Policy, Harper, New York, 1957, pp. 180-3, 228-9. (Note that sometimes the "nuclear taboo" issue is raised against this analysis by Kissinger: if anti-nuclear lying propaganda on weapons effects makes it apparently taboo in the Western pro-Russian disarmament lobbies to escalate from conventional to tactical nuclear weapons to end war as on 6 and 9 August 1945, then this "nuclear taboo" can be relied upon to guarantee peace for our time. However, this was not only disproved by Hiroshima and Nagasaki, but by the Russian tactical nuclear weapons reliance today, by the Russian civil defense shelter system detailed on this blog which showed they believed a nuclear war survivable based on the results of their own nuclear tests, and by the use of Russian nuclear weapons years after Kissinger's analysis was published and criticised, for example their 50 megaton test in 1961 and their supply of IRBMs capable of reaching East Coast mainland USA targets to the fanatical Cuban dictatorship in 1962. So much for the "nuclear taboo" being any more reliable than Chamberlain's "peace for our time" document, co-signed by Hitler on 30 September 1938! We furthermore saw how Russia respected President Obama's "red line" for the "chemical weapons taboo": Russia didn't give a toss about Western disarmament thugs' prattle about what they think is a "taboo"; Russia used chlorine and sarin in Syria to keep Assad the dictator, and used Novichok to attack and kill in the UK in 2018, with only diplomatic expulsions in response. "Taboos" are no more valid to restrain madmen than peace treaties, disarmament agreements, Western CND books attacking civil defense or claiming that nuclear war is the new 1930s gas war bogeyman, or "secret" stamps on scientific facts. In a word, they're crazy superstitions.)

(Quoted in 2006 on this blog here.)

All of this data should have been published to inform public debate on the basis for credible nuclear deterrence of war and civil defense, PREVENTING MILLIONS OF DEATHS SINCE WWII, instead of DELIBERATELY allowing enemy anti-nuclear and anti-civil-defence lying propaganda from Russian-supporting evil fascists to fill the public data vacuum, killing millions by allowing civil defence and war deterrence to be dismissed by ignorant "politicians" in the West, so that wars triggered by invasions with mass civilian casualties continue today for no purpose other than to promote terrorist agendas of hate, evil arrogance and lying for war, falsely labelled "arms control and disarmament for peace":

"Controlling escalation is really an exercise in deterrence, which means providing effective disincentives to unwanted enemy actions. Contrary to widely endorsed opinion, the use or threat of nuclear weapons in tactical operations seems at least as likely to check [as Hiroshima and Nagasaki] as to promote the expansion of hostilities [providing we're not in a situation of Russian biased arms control and disarmament whereby we've no tactical weapons while the enemy has over 2000 neutron bombs thanks to "peace" propaganda from Russian thugs]." - Bernard Brodie, pvi of Escalation and the nuclear option, RAND Corp memo RM-5444-PR, June 1965.

ABOVE: Example of a possible Russian 1985 1st Cold War SLBM first strike plan. The initial use of Russian SLBM launched nuclear missiles from off-coast against command and control centres (i.e. nuclear explosions to destroy warning satellite communications centres by radiation on satellites as well as EMP against ground targets, rather than missiles launched from Russia against cities, as assumed by 100% of the Cold War left-wing propaganda) is allegedly a Russian "fog of war" strategy. Such a "demonstration strike" is aimed essentially at causing confusion about what is going on, who is responsible - it is not quick or easy to finger-print high altitude bursts fired by SLBM's from submerged submarines to a particular country because you don't get fallout samples to identify isotopic plutonium composition. Russia could immediately deny the attack (implying, probably to the applause of the left-wingers that this was some kind of American training exercise or computer based nuclear weapons "accident", similar to those depicted in numerous anti-nuclear Cold War propaganda films). Thinly-veiled ultimatums and blackmail follow. America would not lose its population or even key cities in such a first strike (contrary to left-wing propaganda fiction), as with Pearl Harbor in 1941; it would lose its complacency and its sense of security through isolationism, and would either be forced into a humiliating defeat or a major war.

Before 1941, many warned of the risks but were dismissed on the basis that Japan was a smaller country with a smaller economy than the USA and war was therefore absurd (similar to the way Churchill's warnings about European dictators were dismissed by "arms-race opposing pacifists" not only in the 1930s, but even before WWI; for example, Professor Cyril Joad documents in the 1939 book "Why War?" his first-hand witnessing of Winston Churchill's pre-WWI warning and call for an arms race to deter that war, dismissed by the sneering Norman Angell, who claimed an arms race would cause a war rather than avert one by bankrupting the terrorist state). It is vital to note that there is an immense pressure against warnings of Russian nuclear superiority even today, most of it contradictory. E.g. the left wing and Russian-biased "experts" whose voices are the only ones reported in the Western media (traditionally led by "Scientific American" and the "Bulletin of the Atomic Scientists") simultaneously claim Russia poses such a terrible SLBM and ICBM nuclear threat that we must desperately disarm now, while also claiming that Russian tactical nuclear weapons probably won't work so aren't a threat that needs to be credibly deterred! This only makes sense as Russian-siding propaganda. In a similar vein, Teller-critic Hans Bethe also used to falsely "dismiss" Russian nuclear superiority by claiming (with quotes from Brezhnev about the peaceful intentions of Russia) that Russian delivery systems are "less accurate" than Western missiles (as if accuracy has anything to do with high altitude EMP strikes, where the effects cover huge areas, or with large city targets). Such claims would then be repeated endlessly in the Western media by Russian-biased "journalists" or agents of influence, and any attempt to point out the propaganda (i.e. the real world asymmetry: Russia uses cheap countervalue targeting on folk that don't have civil defense, whereas we need costly, accurate counterforce targeting because Russia has civil defense shelters that we don't have) became a "Reds under beds" argument, implying that the truth is dangerous to "peaceful coexistence"!

“Free peoples ... will make war only when driven to it by tyrants. ... there have been no wars between well-established democracies. ... the probability ... that the absence of wars between well-established democracies is a mere accident [is] less than one chance in a thousand. ... there have been more than enough to provide robust statistics ... When toleration of dissent has persisted for three years, but not until then, we can call a new republic ‘well established.’ ... Time and again we observe authoritarian leaders ... using coercion rather than seeking mutual accommodation ... Republican behaviour ... in quite a few cases ... created an ‘appeasement trap.’ The republic tried to accommodate a tyrant as if he were a fellow republican; the tyrant concluded that he could safely make an aggressive response; eventually the republic replied furiously with war. The frequency of such errors on both sides is evidence that negotiating styles are not based strictly on sound reasoning.” - Spencer Weart, Never at War: Why Democracies Will Not Fight One Another (Yale University Press)

The Top Secret American intelligence report NIE 11-3/8-74, "Soviet Forces for Intercontinental Conflict", warned on page 6: "the USSR has largely eliminated previous US quantitative advantages in strategic offensive forces." Page 9 of the report estimated that Russia's ICBM and SLBM launchers exceeded the USA's 1,700 during 1970, while Russia's on-line missile throw weight had exceeded the USA's one thousand tons back in 1967! Because the USA had more long-range bombers which can carry high-yield bombs than Russia (bombers are more vulnerable to air defences so were not Russia's priority), it took a little longer for Russia to exceed the USA in equivalent megatons, but the 1976 Top Secret American report NIE 11-3/8-76 at page 17 shows that in 1974 Russia exceeded the 4,000 equivalent-megatons payload of USA missiles and aircraft (with less vulnerability for Russia, since most of Russia's nuclear weapons were on missiles, not in SAM-vulnerable aircraft), and by 1976 Russia could deliver 7,000 tons of payload by missiles compared to just 4,000 tons on the USA side. These reports were kept secret for decades to protect the intelligence sources, but they were based on hard evidence. For example, in August 1974 the Hughes Aircraft Company used a specially designed ship (Glomar Explorer, 618 feet long, developed under a secret CIA contract) to recover nuclear weapons and their secret manuals from a Russian submarine which sank in 16,000 feet of water, while in 1976 America was able to take apart the electronics systems in a state-of-the-art Russian MIG-25 fighter which was flown to Japan by defector Viktor Belenko, discovering that it used exclusively EMP-hard miniature vacuum tubes with no EMP-vulnerable solid state components.

There are four ways of dealing with aggressors: conquest (fight them), intimidation (deter them), fortification (shelter against their attacks; historically used as castles, walled cities and even walled countries in the case of China's 1,100 mile long Great Wall and Hadrian's Wall, while the USA has used the Pacific and Atlantic as successful moats against invasion, at least since Britain attacked Washington D.C. in the War of 1812), and friendship (which, if you are too weak to fight, means appeasing them, as Chamberlain shook hands with Hitler for worthless peace promises). These are not mutually exclusive: you can use combinations. If you are very strong in offensive capability and also have walls to protect you while your back is turned, you can - as Teddy Roosevelt put it (quoting a West African proverb) - "Speak softly and carry a big stick." But if you are weak, speaking softly makes you a target, vulnerable to coercion. This is why we don't send troops directly to Ukraine. When elected in 1960, Kennedy introduced "flexible response" to replace Dulles' "massive retaliation", by addressing the need to deter large provocations without being forced to decide between the unwelcome options of "surrender or all-out nuclear war" (Herman Kahn called this flexible response "Type 2 Deterrence"). This was eroded by both Russian civil defense and Russia's emerging superiority in the 1970s: a real missile and bomber gap emerged in 1972 when the USSR reached and then exceeded the USA's 2,200 missiles and bombers, while in 1974 the USSR achieved parity at 3,500 equivalent megatons (then exceeded the USA), and finally today Russia has over 2,000 dedicated clean enhanced-neutron tactical nuclear weapons and we have none (except low-neutron-output B61 multipurpose bombs). (Robert Jastrow's 1985 book How to Make Nuclear Weapons Obsolete was the first to have graphs showing the downward trend in nuclear weapon yields created by the development of miniaturized MIRV warheads for missiles and tactical weapons: he shows that the average size of US warheads fell from 3 megatons in 1960 to 200 kilotons in 1980, and the total from 12,000 megatons in 1960 to 3,000 megatons in 1980.)

The term "equivalent megatons" roughly takes account of the fact that the areas of cratering, blast and radiation damage scale not linearly with energy but as something like the 2/3 power of energy release; but note that close-in cratering scales as a significantly smaller power of energy than 2/3, while blast wind drag displacement of jeeps in open desert scales as a larger power of energy than 2/3. Comparisons of equivalent megatonnage shows, for example, that WWII's 2 megatons of TNT in the form of about 20,000,000 separate conventional 100 kg (0.1 ton) explosives is equivalent to 20,000,000 x (10-7)2/3 = 431 separate 1 megaton explosions! The point is, nuclear weapons are not of a different order of magnitude to conventional warfare, because: (1) devastated areas don't scale in proportion to energy release, (2) the number of nuclear weapons is very much smaller than the number of conventional bombs dropped in conventional war, (3) because of radiation effects like neutrons and intense EMP, it is possible to eliminate physical destruction by nuclear weapons by a combination of weapon design (e.g. very clean bombs like 99.9% fusion Dominic-Housatonic, or 95% fusion Redwing-Navajo) and burst altitude or depth for hard targets, and create a weapon that deters invasions credibly (without lying local fallout radiation hazards), something none of the biased "pacifist disarmament" lobbies (which attract Russian support) tell you, and (4) people at collateral damage distances have time to take cover from radiation and flying glass, blast winds, etc from nuclear explosions (which they don't in Ukraine and Gaza where similar blast pressures arrive more rapidly from smaller conventional explosions). There's a big problem with propaganda here.

(These calculations, showing that even if strategic bombing had worked in WWII - and the US Strategic Bombing Survey concluded it failed, thus the early Cold War effort to develop and test tactical nuclear weapons and train for tactical nuclear war in Nevada field exercises - you would need over 400 one-megaton weapons to give the equivalent of WWII city destruction in Europe and Japan, are often inverted by anti-nuclear bigots to try to obfuscate the truth. What we're driving at is that nuclear weapons give you the ability to DETER the invasions that set off such wars, regardless of whether they escalate from poison gas - as feared in the 20s and 30s, thus appeasement and WWII - or nuclear. Escalation was debunked in WWII, where the only use of poison gases was in "peaceful" gas chambers, not dropped on cities. Rather than justifying appeasement, the "peaceful" massacre of millions in gas chambers justified war. But evil could and should have been deterred. The "anti-war" propagandists like Lord Noel-Baker and pals who guaranteed immediate gas knockout blows in the 30s if we didn't appease evil dictators were never held to account and properly debunked by historians after the war, so they converted from gas liars to nuclear liars in the Cold War and went on winning "peace" prizes for their lies, which multiplied up over the years, to keep getting news media headlines and Nobel Peace Prizes for starting and sustaining unnecessary wars and massacres by dictators. There's also a military side to this, with Lord Mountbatten, Lord Carver and Lord Zuckerman in the 70s arguing for UK nuclear disarmament and a re-introduction of conscription instead. These guys were not pacifist CND thugs who wanted Moscow to rule the world, but CND quoted them attacking the deterrent without, of course, calling for conscription instead. The abolition of UK conscription for national service in 1960 was due to the H-bomb, and was a political money-saving plot by Macmillan. If we disarmed our nuclear deterrent and spent the money on conscription plus underground shelters, we might well be able to resist Russia as Ukraine does, until we run out of ammunition etc. However, the cheapest and most credible deterrent is tactical nuclear weapons to prevent the concentration of aggressive force by terrorist states.)

Britain was initially in a better position with regards to civil defense than the USA, because in WWII Britain had built sufficient shelters (of various types, but all tested against blast intense enough to demolish brick houses, and later also tested at various nuclear weapon trials at Monte Bello and Maralinga, Australia) and respirators for the entire civilian population. However, Britain also tried to keep the proof testing data secret from Russia (which tested its own shelters at its own nuclear tests anyway) and this meant it appeared that civil defense advice was unproved and would not work, an illusion exploited especially for communist propaganda in the UK via CND. To give just one example, CND and most of the UK media still rely on Duncan Campbell's pseudo-journalism book War Plan UK, since it is based entirely on fake news about UK civil defense, nuclear weapons, Hiroshima, fallout, blast, etc. He takes for granted that - just because the UK Government kept the facts secret - the facts don't exist, and to him any use of nuclear weapons which spreads any radioactivity whatsoever will make life totally impossible: "What matters 'freedom' or 'a way of life' in a radioactive wasteland?" (Quote from D. Campbell, War Plan UK, Paladin Books, May 1983, p387.) The problem here is the well known fallout decay rate: Trinity nuclear test ground zero was reported by Glasstone (Effects of Atomic Weapons, 1950) to be at 8,000 R/hr at 1 hour after burst, yet just 57 days later, on September 11, 1945, General Groves, Robert Oppenheimer, and a large group of journalists safely visited it and took their time inspecting the surviving tower legs, when the gamma dose rate was down to little more than 1 R/hr! So fission products decay fast: 1,000 R/hr at 1 hour decays to 100 at 7 hours, 10 at 2 days, and just 1 at 2 weeks. So the "radioactive wasteland" is just as much a myth as any other nuclear "doomsday" fictional headline in the media. Nuclear weapons effects have always been fake news in the mainstream media: editors have always regarded facts as "boring copy". Higher yield tests showed that even the ground zero crater "hot spots" were generally lower, due to dispersal by the larger mushroom cloud. If you're far downwind, you can simply walk cross-wind, or prepare an improvised shelter while the dust is blowing. But point any such errors out to fanatical bigots and they will just keep making up more nonsense.
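The decay figures just quoted (1,000 R/hr at 1 hour falling to roughly 100 at 7 hours, 10 at 2 days and 1 at 2 weeks) follow the standard t^-1.2 approximation for mixed fission products; a minimal Python sketch:

```python
# Standard t**-1.2 approximation for mixed fission-product decay, matching the
# figures quoted above. Real fallout deviates somewhat from this idealized law,
# especially at late times; illustrative only.

def dose_rate(r1, hours):
    """Dose rate (R/hr) at 'hours' after burst, given r1 = dose rate at 1 hour."""
    return r1 * hours ** -1.2

for t in (1, 7, 49, 14 * 24):  # 1 hour, 7 hours, ~2 days, 2 weeks
    print(t, round(dose_rate(1000.0, t), 1))
# -> roughly 1000, 97, 9.4 and 0.9 R/hr

# Trinity example from the text: ~8,000 R/hr at 1 hour, visited 57 days later:
print(round(dose_rate(8000.0, 57 * 24), 2))  # ~1.4 R/hr
```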

Duncan Campbell's War Plan UK relies on the contradiction of claiming that the deliberately exaggerated UK Government worst-case civil defense "exercises" for training purposes are "realistic scenarios" (e.g. 1975 Inside Right, 1978 Scrum Half, 1980 Square Leg, 1982 Hard Rock planning), while simultaneously claiming the very opposite about reliable UK Government nuclear effects and sheltering effectiveness data, and hoping nobody would spot his contradictory tactics. He quotes extensively from these lurid worst-case scenario UK civil defense exercises, as if they are factually defensible rather than imaginary fiction to put planners under the maximum possible stress (standard UK military policy of "Train hard to fight easy"), while ignoring the far more likely limited nuclear use scenario of Sir John Hackett's Third World War. His real worry is the 1977 UK Government Training Manual for Scientific Advisers, which War Plan UK quotes on p14: "a potential threat to the security of the United Kingdom arising from acts of sabotage by enemy agents, possibly assisted by dissident groups. ... Their aim would be to weaken the national will and ability to fight. ... Their significance should not be underestimated." On the next page, War Plan UK quotes J. B. S. Haldane's 1938 book Air Raid Precautions (ARP) on the terrible destruction Haldane witnessed on unprotected people in the Spanish civil war, without even mentioning that Haldane's point is pro-civil defense, pro-shelters, and anti-appeasement of dictatorship, the exact opposite of War Plan UK, which wants Russia to run the world. On page 124 of War Plan UK the false assertion is made that USA nuclear casualty data is "widely accepted" and true (declassified Hiroshima casualty data for people in modern concrete buildings proves it to be lies) while the correct UK nuclear casualty data is "inaccurate", and on page 126 Duncan Campbell simply lies that the UK Government's Domestic Nuclear Shelters - Technical Guidance "ended up offering the public a selection of shelters half of which were invented in the Blitz ... None of the designs was ever tested." In fact, Frank Pavry (who studied similar shelters surviving near ground zero at Hiroshima and Nagasaki in 1945 with the British Mission to Japan) and George R. Stanbury tested 15 Anderson shelters at the first UK nuclear explosion, Operation Hurricane in 1952, together with concrete structures, and many other improvised trench and earth-covered shelters were nuclear tested by the USA and UK at trials in 1955, 1956, 1957, and 1958, and later at simulated nuclear explosions by Cresson Kearny of Oak Ridge National Laboratory in the USA, having also earlier been exposed to early Russian nuclear tests (scroll down to see the evidence of this): improved versions of war-tested and nuclear-tested shelters! So War Plan UK makes no effort whatsoever to dig up the facts, and instead falsely claims the exact opposite of the plain unvarnished truth! War Plan UK shows its hypocrisy on page 383 in enthusiastically praising Russian civil defense:

"Training in elementary civil defence is given to everyone, at school, in industry or collective farms. A basic handbook of precautionary measures, Everybody must know this!, is the Russian Protect and Survive. The national civil defence corps is extensive, and is organized along military lines. Over 200,000 civil defence troops would be mobilized for rescue work in war. There are said to be extensive, dispersed and 'untouchable' food stockpiles; industrial workers are issued with kits of personal protection apparatus, said to include nerve gas counteragents such as atropine. Fallout and blast shelters are provided in the cities and in industrial complexes, and new buildings have been required to have shelters since the 1950s. ... They suggest that less than 10% - even as little as 5% - of the Soviet population would die in a major attack. [Less than Russia's loss of 12% of its population in WWII.]"

'LLNL achieved fusion ignition for the first time on Dec. 5, 2022. The second time came on July 30, 2023, when in a controlled fusion experiment, the NIF laser delivered 2.05 MJ of energy to the target, resulting in 3.88 MJ of fusion energy output, the highest yield achieved to date. On Oct. 8, 2023, the NIF laser achieved fusion ignition for the third time with 1.9 MJ of laser energy resulting in 2.4 MJ of fusion energy yield. “We’re on a steep performance curve,” said Jean-Michel Di Nicola, co-program director for the NIF and Photon Science’s Laser Science and Systems Engineering organization. “Increasing laser energy can give us more margin against issues like imperfections in the fuel capsule or asymmetry in the fuel hot spot. Higher laser energy can help achieve a more stable implosion, resulting in higher yields.” ... “The laser itself is capable of higher energy without fundamental changes to the laser,” said NIF operations manager Bruno Van Wonterghem. “It’s all about the control of the damage. Too much energy without proper protection, and your optics blow to pieces.” ' - https://lasers.llnl.gov/news/llnls-nif-delivers-record-laser-energy

NOTE: the "problem" very large lasers "required" to deliver ~2MJ (roughly 0.5 kg of TNT energy) to cause larger fusion explosions of 2mm diameter capsules of frozen D+T inside a 1 cm diameter energy reflecting hohlraum, and the "problem" of damage to the equipment caused by the explosions, is immaterial to clean nuclear deterrent development based on this technology, because in a clean nuclear weapon, whatever laser or other power ignition system is used only has to be fired once, so it needs to be less robust than the NIF lasers which are used repeatedly. Similarly, damage done to the system by the explosion is also immaterial for a clean nuclear weapon, in which the weapon is detonated once only! This is exactly the same point which finally occurred during a critical review of the first gun-type assembly nuclear weapon, in which the fact it would only ever be fired once (unlike a field artillery gun) enabled huge reductions in the size of the device, into a practical weapon, as described by General Leslie M. Groves on p163 of his 1962 book Now it can be told: the story of the Manhattan Project:

"Out of the Review Committee's work came one important technical contribution when Rose pointed out ... that the durability of the gun was quite immaterial to success, since it would be destroyed in the explosion anyway. Self-evident as this seemed once it was mentioned, it had not previously occurred to us. Now we could make drastic reductions in ... weight and size."

This principle also applies to weaponizing NIF clean fusion explosion technology. General Groves' book was reprinted in 1982 with a useful Introduction by Edward Teller on the nature of nuclear weapons history: "History in some ways resembles the relativity principle in science. What is observed depends on the observer. Only when the perspective of the observer is known, can proper corrections be made. ... The general ... very often managed to ignore complexity and arrive at a result which, if not ideal, at least worked. ... For Groves, the Manhattan project seemed a minor assignment, less significant than the construction of the Pentagon. He was deeply disappointed at being given the job of supervising the development of an atomic weapon, since it deprived him of combat duty. ... We must find ways to encourage mutual understanding and significant collaboration between those who defend their nation with their lives and those who can contribute the ideas to make that defense successful. Only by such cooperation can we hope that freedom will survive, that peace will be preserved."

General Groves similarly comments in Chapter 31, "A Final Word" of Now it can be told:

"No man can say what would have been the result if we had not taken the steps ... Yet, one thing seems certain - atomic energy would have been developed somewhere in the world ... I do not believe the United States ever would have undertaken it in time of peace. Most probably, the first developer would have been a power-hungry nation, which would then have dominated the world completely ... it is fortunate indeed for humanity that the initiative in this field was gained and kept by the United States. That we were successful was due entirely to the hard work and dedication of the more than 600,000 Americans who comprised and directly supported the Manhattan Project. ... we had the full backing of our government, combined with the nearly infinite potential of American science, engineering and industry, and an almost unlimited supply of people endowed with ingenuity and determination."

Update: Lawrence Livermore National Laboratory's $3.5 billion National Ignition Facility, NIF, using ultraviolet wavelength laser beam pulses of 2 MJ on to a 2 mm diameter spherical beryllium shell of frozen D+T inside a 1 cm-long hollow gold cylinder "hohlraum" (which is heated to a temperature at which it re-radiates the energy at much higher frequency, as x-rays, on to the surface of the beryllium ablator of the central fusion capsule; the ablator blows off, causing the capsule to recoil inward so that, as for the 1962 Ripple II nuclear weapon's secondary stage, the capsule is compressed efficiently, mimicking the isentropic compression mechanism of a miniature Ripple II clean nuclear weapon secondary stage), has now repeatedly achieved nuclear fusion explosions of over 3 MJ, equivalent to nearly 1 kg of TNT explosive. According to a Time article (linked here) about fusion system designer Annie Kritcher, the recent breakthrough was in part due to using a ramping input energy waveform: "success that came thanks to tweaks including shifting more of the input energy to the later part of the laser shot", a feature that minimises the rise in entropy due to shock wave generation (which heats the capsule, causing it to expand and resist compression) and increases isentropic compression, which was the principle used by LLNL's J. H. Nuckolls to achieve the 99.9% clean Ripple II 9.96 megaton nuclear test success in Dominic-Housatonic on 30 October 1962. Nuckolls in 1972 published the equation for the idealized input power waveform required for isentropic, optimized compression of fusion fuel (Nature, v239, p139): P ~ (1 - t)^(-1.875), where t is time in units of the transit time (the time taken for the shock to travel to the centre of the fusion capsule), and the exponent -1.875 is a constant based on the specific heat of the ionized fuel (Nuckolls has provided the basic declassified principles, see extract linked here). To be clear, the energy reliably released by the 2 mm diameter capsule of fusion fuel was roughly a 1 kg TNT explosion. 80% of this is in the form of 14.1 MeV neutrons (ideal for fissioning lithium-7 in LiD to yield more tritium), and 20% is the kinetic energy of fused nuclei (which is quickly converted into x-ray radiation energy by collisions). Nuckolls' 9.96 megaton Housatonic (10 kt Kinglet primary and 9.95 Mt Ripple II 100% clean isentropically compressed secondary) of 1962 proved that it is possible to use multiplicative staging whereby lower yield primary nuclear explosions trigger off a fusion stage 1,000 times more powerful than its initiator. Another key factor, as shown on our graph linked here, is that you can use cheap natural LiD as fuel once you have a successful D+T reaction, because naturally abundant, cheap Li-7 more readily fissions to yield tritium with the 14.1 MeV neutrons from D+T fusion than expensively enriched Li-6, which is needed to make tritium in nuclear reactors, where the fission neutron energy of around 1 MeV is too low to fission Li-7. It should also be noted that, despite Jon Grams' openly published 2021 paper on Nuckolls' Ripple II success, the subject is still being covered up/ignored by the anti-nuclear biased Western media! Grams' article fails to contain the design details, such as the isentropic power delivery curve etc., from Nuckolls' declassified articles that we include in the latest blog post here.
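A minimal numerical sketch of the published Nuckolls pulse-shape formula quoted above, P ~ (1 - t)^(-1.875), with t in units of the shock transit time; the normalization and the cut-off just short of t = 1 are arbitrary choices for illustration, not design data:

```python
# Nuckolls' idealized isentropic-compression drive pulse, P(t) ~ (1 - t)**-1.875
# (Nature, v239, p139, 1972), with t in units of the shock transit time.
# Normalization and the cut-off short of t = 1 are arbitrary choices here.

def drive_power(t, p0=1.0, exponent=-1.875):
    """Relative drive power at normalized time t (0 <= t < 1)."""
    return p0 * (1.0 - t) ** exponent

for t in (0.0, 0.5, 0.9, 0.99):
    print(f"t = {t:4.2f}  relative power = {drive_power(t):8.1f}")
# The power ramps up steeply toward the end of the pulse - the "shifting more of
# the input energy to the later part of the laser shot" tweak described above.
```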
One problem regarding "data" causing continuing confusion about the Dominic-Housatonic 30 October 1962 Ripple II test at Christmas Island is made clear in the DASA-1211 report's declassified summary of the sizes, weights and yields of those tests: Housatonic was Nuckolls' fourth and final isentropic test, with the nuclear system inserted into a heavy steel Mk36 drop case, making the overall size 57.2 inches in diameter, 147.9 inches long and 7,139.55 lb in mass, i.e. a 1.4 kt/lb (about 3 kt/kg) yield-to-mass ratio for 9.96 Mt yield, which is not impressive for that yield range until you consider (a) that it was 99.9% fusion and (b) that the isentropic design required a heavy hohlraum around the large Ripple II fusion secondary stage to confine x-rays for a relatively long time, during which a slowly rising pulse of x-rays was delivered from the primary to the secondary via very large areas of foam elsewhere in the weapon, to produce isentropic compression.
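A quick check of the quoted yield-to-mass figures, using only the DASA-1211 numbers given above (drop-case mass 7,139.55 lb, yield 9.96 Mt):

```python
# Yield-to-mass check for the Housatonic figures quoted above (illustrative).
yield_kt = 9960.0            # 9.96 Mt
mass_lb = 7139.55            # Mk36 drop case, per the DASA-1211 summary cited above
mass_kg = mass_lb * 0.4536   # pounds to kilograms

print(round(yield_kt / mass_lb, 1))  # ~1.4 kt/lb
print(round(yield_kt / mass_kg, 1))  # ~3.1 kt/kg, i.e. about 3 kt/kg
```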

Additionally, the test was made in a hurry before an atmospheric test ban treaty, and this rushed use of a standard air-drop steel casing made the tested weapon much heavier than a properly weaponized Ripple II. The key point is that a 10 kt fission device set off a ~10 Mt fusion explosion, a very clean deterrent. Applying this Ripple II 1,000-factor multiplicative staging figure directly to this technology for clean nuclear warheads, a 0.5 kg TNT D+T fusion capsule would set off a 0.5 ton TNT 2nd stage of LiD, which would then set off a 0.5 kt 3rd stage "neutron bomb", which could then be used to set off a 500 kt 4th stage or "strategic nuclear weapon". In practice, this multiplication factor of 1,000, given by Ripple II in 1962 from 10 kt to 10 Mt, may not be immediately achievable in getting from ~1 kg TNT yield to 1 ton TNT, so a few more tiny stages may be needed at the lower yields. But there is every reason to forecast that with enough research, improvements will be possible and the device will become a reality. It is therefore now possible, not just in "theory" or in principle but with evidence obtained from practical experimentation, using staging systems already successfully proved in 1960s nuclear weapon tests, to design 100% clean fusion nuclear warheads! Yes, the details have been worked out; yes, the technology has been tested in piecemeal fashion. All that is now needed is a new, but quicker and cheaper, Star Wars program or Manhattan Project style effort to pull the components together. This will constitute a major leap forward in the credibility of the deterrence of aggressors.
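A toy Python illustration of the ~1,000-fold per-stage multiplication argued above; the stage count and yields are simply the hypothetical figures from this paragraph, not a design:

```python
# Toy arithmetic for the multiplicative staging chain suggested above: each
# stage assumed to yield ~1,000 times its trigger (the Ripple II ratio quoted
# in the text, 10 kt primary -> ~10 Mt secondary). Hypothetical figures only.

stage_yield_tons = 0.0005   # stage 1: ~0.5 kg of TNT (the NIF-scale D+T capsule)
for stage in (2, 3, 4):
    stage_yield_tons *= 1000.0   # assumed ~1,000x multiplication per stage
    print(f"stage {stage}: about {stage_yield_tons:,.1f} tons of TNT")
# stage 2: ~0.5 t (LiD), stage 3: ~500 t = 0.5 kt ("neutron bomb"),
# stage 4: ~500,000 t = 500 kt ("strategic nuclear weapon")
```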

ABOVE: as predicted, the higher the input laser pulse for the D+T initiator of a clean multiplicatively-staged nuclear deterrent, the lower the effect of plasma instabilities and asymmetries and the greater the fusion burn. To get ignition (where the x-ray energy injected into the fusion hohlraum by the laser is less than the energy released in the D+T fusion burn) they have had to use about 2 MJ delivered in 10 ns or so, equivalent to 0.5 kg of TNT. But for deterrent use, why use such expensive, delicate lasers? Why not just use one-shot miniaturised x-ray tubes with megavolt electron acceleration, powered by a suitably ramped current pulse from a chemical-explosive magnetic flux compression generator? At 10% efficiency, you need 0.5 x 10 = 5 kg of TNT! Even at 1% efficiency, 50 kg of TNT will do. Once the D+T gas capsule's hohlraum is well over 1 cm in size, to minimise the risk of imperfections that cause asymmetries, you no longer need focussed laser beams to enter tiny apertures. You might even be able to integrate many miniature flash x-ray tubes (each designed to burn out when firing one pulse of a MJ or so) into a special hohlraum. Humanity urgently needs a technological arms race akin to Reagan's Star Wars project, to deter the dictators from invasions and WWIII. In the conference video above, a question was asked about the real efficiency of the enormous repeat-pulse capable laser system (a capability not required for a nuclear weapon, whose components only need to work once, unlike lab equipment): the answer is that 300 MJ was required by the lab lasers to fire a 2 MJ pulse into the D+T capsule's x-ray hohlraum, i.e. their lasers are only about 0.7% efficient! So why bother? We know - from the practical use of incoherent fission primary stage x-rays to compress and ignite fusion capsules in nuclear weapons - that you simply don't need coherent photons from a laser for this purpose. The sole reason they are approaching the problem with lasers is that they began their lab experiments decades ago with microscopic sized fusion capsules, and for those you need a tightly focussed beam to insert energy through a tiny hohlraum aperture. But now they are finally achieving success with much larger fusion capsules (to minimise the instabilities that caused the early failures), it may be time to change direction. A whole array of false "no-go theorems" can and will be raised by ignorant charlatan "authorities" against any innovation; this is the nature of the political world. There is some interesting discussion of why clean bombs aren't in existence today: basically, the idealized theory (which works fine for big H-bombs but ignores small-scale asymmetry problems which are important only at low ignition energy) underestimated the input energy required for fusion ignition by a factor of some 20,000 (from ~1 kJ to 20-100 MJ):
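The driver-energy arithmetic above, as a minimal Python sketch using only the figures quoted (about 2 MJ delivered, about 300 MJ drawn by the NIF lasers, and 1 kg of TNT taken as 4.184 MJ):

```python
# Driver-energy arithmetic from the paragraph above (figures as quoted there).
MJ_PER_KG_TNT = 4.184

delivered_mj = 2.0   # energy actually delivered into the hohlraum
print(round(delivered_mj / MJ_PER_KG_TNT, 2), "kg TNT equivalent delivered")  # ~0.5 kg

for efficiency in (0.10, 0.01):
    source_mj = delivered_mj / efficiency
    print(f"{efficiency:.0%} efficient driver needs ~{source_mj / MJ_PER_KG_TNT:.0f} kg TNT of source energy")
# 10% -> ~5 kg of TNT; 1% -> ~48 kg (rounded to 50 kg in the text)

# NIF lasers, per the conference answer quoted above: ~300 MJ drawn for 2 MJ delivered
print(f"NIF laser efficiency ~{2.0 / 300.0:.1%}")  # ~0.7%
```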

"The early calculations on ICF (inertial-confinement fusion) by John Nuckolls in 1972 had estimated that ICF might be achieved with a driver energy as low as 1 kJ. ... In order to provide reliable experimental data on the minimum energy required for ignition, a series of secret experiments—known as Halite at Livermore and Centurion at Los Alamos—was carried out at the nuclear weapons test site in Nevada between 1978 and 1988. The experiments used small underground nuclear explosions to provide X-rays of sufficiently high intensity to implode ICF capsules, simulating the manner in which they would be compressed in a hohlraum. ... the Halite/Centurion results predicted values for the required laser energy in the range 20 to 100MJ—higher than the predictions ..." - Garry McCracken and Peter Stott, Fusion, Elsevier, 2nd ed., p149.

In the final diagram above, we illustrate an example of what could very well occur in the near future, just to really poke a stick into the wheels of "orthodoxy" in nuclear weapons design: is it possible to just use a lot of (perhaps hardened for higher currents, perhaps not) pulsed-current-driven microwave tubes from kitchen microwave ovens, channelling their energy using waveguides (simply metal tubes, i.e. electrical Faraday cages, which reflect and thus contain microwaves) into the hohlraum, and to make the pusher out of dipole molecules (like common salt, NaCl) which are good absorbers of microwaves (as everybody knows from cooking in microwave ovens)? It would be extremely dangerous, not to mention embarrassing, if this worked but nobody had done any detailed research into the possibility due to groupthink orthodoxy and conventional boxed-in thinking! Remember, the D+T capsule just needs extreme compression, and this can be done by any means that works. Microwave technology is now very well-established. It's no good trying to keep anything of this sort "secret" (either officially or unofficially) since, as history shows, dictatorships are the places where "crackpot"-sounding ideas (such as double-primary Project "49" Russian thermonuclear weapon designs, Russian Sputnik satellites, Russian Novichok nerve agent, Nazi V1 cruise missiles, Nazi V2 IRBMs, etc.) can be given priority by loony dictators. We have to avoid, as Edward Teller put it (in his secret commentary debunking Bethe's false history of the H-bomb, written AFTER the Teller-Ulam breakthrough), "too-narrow" thinking (which Teller said was still in force on H-bomb design even then). Fashionable hardened orthodoxy is the soft underbelly of "democracy" (a dictatorship by the majority, which is always too focussed on fashionable ideas and dismissive of alternative approaches in science and technology). Dictatorships (minorities ruling majorities) have repeatedly demonstrated a lack of concern for the fake "no-go theorems" used by Western anti-nuclear "authorities" to ban anything but fashionable groupthink science.

ABOVE: a 1944-dated film of the Head of the British Mission to Los Alamos, neutron discoverer James Chadwick, explaining in detail to Americans how hard it was for him to discover the neutron, taking 10 years on a shoe-string budget, mostly due to having insufficiently strong sources of alpha particles to bombard nuclei in a cloud chamber! The idea of the neutron came from his colleague Rutherford. Chadwick reads his explanation while rapidly rotating a pencil in his right hand, perhaps indicating the stress he was under in 1944. In 1946, when British participation at Los Alamos ended, Chadwick wrote the first detailed secret British report on the design of a three-stage hydrogen bomb, another project that took over a decade. In the diagram below, it appears that the American Mk17 had only a single secondary stage, like the similar-yield 1952 Mike design. The point here is that popular misunderstanding of the simple mechanism of x-ray energy transfer for higher yield weapons may be creating a dogmatic attitude even in secret nuclear weapon design labs, where orthodoxy is followed too rigorously. The Russians (see quotes in the latest blog post here) state they used two entire two-stage thermonuclear weapons, with a combined yield of 1 megaton, to set off their 50 megaton test in 1961. If true, you can indeed use two-stage hydrogen bombs as an "effective primary" to set off another secondary stage of much higher yield. Can this be reversed, in the sense of scaling it down, so that you have several bombs-within-bombs, all triggered by a really tiny first stage? In other words, can it be applied to neutron bomb design?

ABOVE: a 16 kt nuclear explosion at 600 m altitude over a city: Hiroshima ground zero (in the foreground), showing modern concrete buildings surviving nearby (unlike the wooden buildings, most of which burned at the peak of the firestorm 2-3 hours later, after survivors had evacuated); people inside those concrete buildings were shielded from most of the radiation and blast winds, as were people in simple shelters.

The 1946 Report of the British Mission to Japan, The Effects of the Atomic Bombs at Hiroshima and Nagasaki, compiled by a team of 16 in Hiroshima and Nagasaki during November 1945, which included 10 UK Home Office civil defence experts (W. N. Thomas, J. Bronowski, D. C. Burn, J. B. Hawker, H. Elder, P. A. Badland, R. W. Bevan, F. H. Pavry, F. Walley, O. C. Young, S. Parthasarathy, A. D. Evans, O. M. Solandt, A. E. Dark, R. G. Whitehead and F. G. S. Mitchell) found: "Para. 26. Reinforced concrete buildings of very heavy construction in Hiroshima, even when within 200 yards of the centre of damage, remained structurally undamaged. ... Para 28. These observations make it plain that reinforced concrete framed buildings can resist a bomb of the same power detonated at these heights, without employing fantastic thicknesses of concrete. ... Para 40. The provision of air raid shelters throughout Japan was much below European standards. ... in Hiroshima ... they were semi-sunk, about 20 feet long, had wooden frames, and 1.5-2 feet of earth cover. ... Exploding so high above them, the bomb damaged none of these shelters. ... Para 42. These observations show that the standard British shelters would have performed well against a bomb of the same power exploded at such a height. Anderson shelters, properly erected and covered, would have given protection. Brick or concrete surface shelters with adequate reinforcement would have remained safe from collapse. The Morrison shelter is designed only to protect its occupants from the refuge load of a house, and this it would have done. Deep shelters such as the refuge provided by the London Underground would have given complete protection. ... Para 60. Buildings and walls gave complete protection from flashburn."

Glasstone and Dolan's 1977 Effects of Nuclear Weapons, in Table 12.21 on p547, fails to make this point because it gives the data without citing the source that would make it credible to readers: it correlates 14% mortality (106 killed out of 775 people in Hiroshima's Telegraph Office) with "moderate damage" at 500 m in Hiroshima (the uncited "secret" source was NP-3041, Table 12, which applies to unwarned people inside modern concrete buildings).

"A weapon whose basic design would seem to provide the essence of what Western morality has long sought for waging classical battlefield warfare - to keep the war to a struggle between the warriors and exclude the non-combatants and their physical assets - has been violently denounced, precisely because it achieves this objective." - Samuel T. Cohen (quoted in Chapman Pincher, The secret offensive, Sidgwick and Jackson, London, 1985, Chapter 15: The Neutron Bomb Offensive, p210).

The reality is that dedicated enhanced-neutron tactical nuclear weapons credibly deterred the concentrations of force required to trigger WWIII during the first Cold War, and the thugs who support Russian propaganda for Western disarmament got rid of them on our side, but not on the Russian side. Whether air burst, or used as subsurface earth penetrators of relatively low fission yield (where the soil converts energy that would otherwise escape as blast and radiation into ground shock for destroying buried tunnels - new research on cratering shows that a 20 kt subsurface burst creates similar effects on buried hard targets to a 1 Mt surface burst), neutron bombs cause none of the vast collateral damage to civilians that we see now in Ukraine and Gaza, or that we saw in WWII and the wars in Korea and Vietnam. This is 100% contrary to CND propaganda, which is a mixture of lying about nuclear explosion collateral damage, escalation/"knockout blow" propaganda (of the type the appeasers used in the run-up to WWII), and lying about the designs of nuclear weapons, in order to ensure the Western side (but not the thugs) gets only incredible "strategic deterrence" that can't deter the invasions that start world wars (e.g. Belgium in 1914 and Poland in 1939). "Our country entered into an agreement in Budapest, Hungary when the Soviet Union was breaking up that we would guarantee the independence of Ukraine." - Tom Ramos. There really is phoney nuclear groupthink left agenda politics at work here: credible, relatively clean tactical nuclear weapons are banned in the West but stocked by Russia, which has civil defense shelters to make its threats far more credible than ours! We need low collateral damage enhanced-neutron and earth-penetrator options for the new Western W93 warhead, or we remain vulnerable to aggressive coercion by thugs, and invite invasions. Ambiguity, the current policy ("justifying" secrecy on just what we would do in any scenario), actually encourages experimental provocations by enemies to test what we are prepared to do (if anything), just as it did in 1914 and the 1930s.

ABOVE: 0.2 kt (tactical yield range) Ruth nuclear test debris, with the lower 200 feet of the 300 ft steel tower surviving, Nevada, 1953. Note that the yield of the tactical invasion-deterrent Mk54 Davy Crockett was only 0.02 kt, ten times less than the 0.2 kt Ruth.

It should be noted that cheap and naive "alternatives" to credible deterrence of war were tried in the 1930s, during the Cold War and afterwards, with disastrous consequences. Heavy "peaceful" oil sanctions and other embargoes against Japan for its invasion of China between 1931-7 resulted in the plan for the Pearl Harbor surprise attack of 7 December 1941, with subsequent escalation to incendiary city bombing followed by nuclear warfare against Hiroshima and Nagasaki. Attlee's pressure on Truman to guarantee no use of tactical nuclear weapons in the Korean War (leaked straight to Stalin by the Cambridge spy ring) led to an escalation of that war and the total devastation of Korea's cities by conventional bombing (a sight witnessed by Sam Cohen, which motivated his neutron bomb deterrent of invasions), until Eisenhower was elected and reversed Truman's decision, leading not to the "escalatory Armageddon" asserted by Attlee, but instead to a peaceful armistice! Similarly, as Tom Ramos argues in From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Kennedy's advisers convinced him to go ahead with the moonlit 17 April 1961 Bay of Pigs invasion of Cuba without any USAF air support, which led to precisely what they claimed they would avoid: an escalation of Russian aggression in Berlin, with the Berlin Wall going up on 13 August 1961, because showing weakness to an enemy, as in the bungled invasion of Cuba, is always a green light to dictators to go ahead with revolutions, invasions and provocations everywhere else. Rather than the widely hyped claims from disarmers and appeasers that "weakness brings peace by demonstrating to the enemy that they have nothing to fear from you", the opposite result always occurs: the paranoid dictator seizes the opportunity to strike first. Similarly, withdrawing from Afghanistan in 2021 was a clear green light to Russia to go ahead with a full-scale invasion of Ukraine, reigniting the Cold War. Von Neumann and Morgenstern's minimax theorem for winning games - minimise the maximum possible loss - fails for offensive action in war because it sends a signal of weakness to the enemy, which does not treat war as a game with rules to be obeyed. Minimax is only valid for defense, such as the civil defense shelters used by Russia to make its threats more credible than ours. The sad truth is that cheap fixes don't work, no matter how much propaganda is behind them. You either need to militarily defeat the enemy or at least economically defeat them using proven Cold War arms race techniques (not merely ineffective sanctions, which they can bypass by making alliances with Iran, North Korea, and China). Otherwise, you are negotiating peace from a position of weakness, which is called appeasement, or collaboration with terrorism.

"Following the war, the Navy Department was intent to see the effects of an atomic blast on naval warships ... the press was invited to witness this one [Crossroads-Able, 23.5 kt at 520 feet altitude, 1 July 1946, Bikini Atoll]. ... The buildup had been too extravagant. Goats that had been tethered on warship decks were still munching their feed, and the atoll's palm trees remained standing, unscathed. The Bikini test changed public attitudes. Before July 1, the world stood in awe of a weapon that had devastated two cities and forced the Japanese Empire to surrender. After that date, the bomb was still a terrible weapon, but a limited one." - Tom Ramos (LLNL nuclear weaponeer and nuclear pumped X-ray laser developer), From Berkeley to Berlin: How the Rad Lab Helped Prevent Nuclear War, Naval Institute Press, 2022, pp43-4.

ABOVE: the 16 February 1950 Daily Express editorial on the H-bomb problem, caused by the fact that the UN is another virtue-signalling but really war-mongering League of Nations (which oversaw Nazi appeasement and the outbreak of WWII); however, Fuchs had attended the April 1946 Super Conference, at which the Russian version of the H-bomb, involving isentropic radiation implosion of a separate low-density fusion stage (unlike Teller's later dense-metal ablation-rocket implosion secondaries, the TX14 Alarm Clock and the Sausage designs), was discussed and then given to Russia. The media was made aware only that Fuchs had given the fission bomb to Russia. The FBI later visited Fuchs in a British jail, showed him a film of Harry Gold (whom Fuchs identified as his contact while at Los Alamos) and also gave Fuchs a long list of secret reports to mark off individually, so that they knew precisely what Stalin had been given. Truman didn't order H-bomb research and development because Fuchs gave Stalin the A-bomb, but because he gave them the H-bomb. The details of the Russian H-bomb are still being covered up by those who want a repetition of 1930s appeasement, or indeed of the deliberate ambiguity of the UK Cabinet in 1914, which made it unclear what the UK would do if Germany invaded Belgium, allowing the enemy to exploit that ambiguity and start a world war. The key fact usually covered up (Richard Rhodes, Chuck Hansen, and the whole American "expert nuclear arms community" all misleadingly claim that Teller's Sausage H-bomb design, with a single primary and a dense ablator - uranium, lead or tungsten - around a cylindrical secondary stage, is "the hydrogen bomb design") is that two attendees of the April 1946 Super Conference - the report author Egon Bretscher and the radiation implosion discoverer Klaus Fuchs - were British, and both contributed key H-bomb design principles to the Russian and British weapons (discarded for years by America). Egon Bretscher, for example, wrote up the Super Conference report, during which attendees suggested various ways to try to achieve isentropic compression of low-density fusion fuel (a concept discarded by Teller's 1951 Sausage design, but used by Russia and re-developed in America in Nuckolls' 1962 Ripple tests); after Teller left Los Alamos, Bretscher took over work on Teller's Alarm Clock layered fission-fusion spherical hybrid device, before himself leaving Los Alamos to become head of nuclear physics at Harwell, UK, and submitting a UK report together with Fuchs (head of theoretical physics at Harwell) which led to Sir James Chadwick's UK paper on a three-stage thermonuclear Super bomb, which in turn formed the basis of Penney's work at the UK Atomic Weapons Research Establishment. While Bretscher had worked on Teller's hybrid Alarm Clock (which originated two months after Fuchs left Los Alamos), Fuchs co-authored a hydrogen bomb patent with John von Neumann in which radiation implosion and ionization implosion were used. Between them, Bretscher and Fuchs had all the key ingredients. Fuchs leaked them to Russia, and the problem persists today in international relations.

ILLUSTRATION: the threat of WWII and the need to deter it was massively derided by popular pacifism, which tended to make "jokes" of the Nazi threat until too late (an example of 1938 UK fiction on this is shown above; Charlie Chaplin's film "The Great Dictator" is another example). So, three years after the Nuremberg Laws and five years after the Nazis began illegal rearmament, crowds of UK "pacifists" in Downing Street, London, supported friendship with the top racist, dictatorial Nazis in the name of "world peace". The Prime Minister used underhand techniques to try to undermine appeasement critics like Churchill, and later to get W. E. Johns fired from the editorships of both Flying (weekly) and Popular Flying (monthly), to make it appear that everybody "in the know" agreed with his actions; hence the contrived "popular support" for collaborating with terrorists depicted in these photos. The same thing persists today; the 1920s and 1930s "pacifist" was likewise driven by "escalation" and "annihilation" claims that explosions, fire and WMD poison gas would kill everybody in a "knockout blow" immediately any war broke out.

Update (4 January 2024): on the important world crisis, https://vixra.org/abs/2312.0155 gives a detailed review of "Britain and the H-bomb" (linked here), and of why the "nuclear deterrence issue" isn't about "whether we should deter evil", but about precisely what design of nuclear warhead we should have in order to do that cheaply, credibly, safely, and efficiently, without guaranteeing either escalation or the failure of deterrence. When we disarmed our chemical and biological weapons, it was claimed that the West could easily deter those weapons by using strategic nuclear weapons to bomb Moscow (which has shelters, unlike us). That failed when Putin used sarin and chlorine to prop up Assad in Syria, and Novichok in the UK to kill Dawn Sturgess in 2018. So it's just not a credible deterrent to say you will bomb Moscow if Putin invades Europe or uses his 2000 tactical nuclear weapons. An even more advanced deterrent, the 100% clean, very low yield (or any yield), multiplicatively-staged design without any fissile material whatsoever, is just around the corner. Clean secondary stages have been proof-tested successfully, for example the 100% clean Los Alamos Redwing Navajo secondary and the 100% clean Ripple II secondary tested on 30 October 1962; and laser ignition of a very tiny fusion capsule to yield more energy than supplied was achieved on 5 December 2022, when a NIF test delivered 2.05 MJ (the energy of about 0.5 kg of TNT) to a fusion capsule which yielded 3.15 MJ. So all that is needed is to combine both ideas in a system whereby suitably sized second stages - ignited in the first place by a capacitor-charged circuit sending a pulse of energy to a suitable laser system (the schematic shown is just a sketch of principle; more than one laser would probably be required for reliability of fusion ignition) acting on a tiny fusion capsule as shown - are encased as two-stage "effective primaries", each of which becomes the effective primary of a bigger system, giving a geometric series of multiplicative staging until the desired yield is reached. Note that the actual tiny first T+D capsule can be compressed by one-shot lasers - compact lasers driven far beyond their traditional upper power limit and burned out in firing a single pulse - in the same way the gun assembly of the Hiroshima bomb was based on a one-shot gun. In other words, forget all about textbook gun design. The Hiroshima bomb's gun assembly only had to be fired once, unlike a field artillery piece which has to be ready to fire many thousands of times (before metal fatigue and cracks set in). Thus, by analogy, the lasers for use in a clean bomb - which can be powered by ramped current pulses from magnetic flux compressor systems - will be much smaller and lighter than current lab gear, which is designed to be used thousands of times in repeated experiments. The diagram below shows cylindrical Li6D stages throughout, for a compact bomb shape, but spherical stages can be used, and once a few stages have fired, the flux of 14 MeV neutrons is sufficient to switch to cheap natural LiD. To fit it into a MIRV warhead, the low density of LiD constrains such a clean warhead to a low nuclear yield, which means a tactical neutron deterrent of the invasions that cause big wars: a conversion of incredible strategic deterrence into a more credible combined strategic-tactical deterrent of major provocations, not just direct attacks.
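To make the arithmetic of such multiplicative staging concrete, here is a purely illustrative Python sketch: the 2.05 MJ and 3.15 MJ figures are the NIF numbers quoted above, but the per-stage energy multiplication factor and the number of stages are assumed round numbers for illustration only, not data from any weapon or test.

MJ_PER_KG_TNT = 4.184            # 1 kg of TNT = 4.184 MJ
E_IN, E_OUT = 2.05, 3.15         # MJ in/out of the 5 December 2022 NIF capsule (quoted above)

print(round(E_IN / MJ_PER_KG_TNT, 2))   # ~0.49 kg TNT equivalent delivered to the capsule
print(round(E_OUT / E_IN, 2))           # ~1.54 = fusion energy gain of that shot

def staged_yield_tons(spark_mj, gain_per_stage, stages):
    # Yield in tons of TNT if each stage multiplies the previous output by gain_per_stage.
    energy_mj = spark_mj * gain_per_stage ** stages
    return energy_mj / (MJ_PER_KG_TNT * 1000.0)

# e.g. an assumed 30-fold energy multiplication per stage, over 5 stages:
print(round(staged_yield_tons(3.15, 30, 5)))    # ~18,000 tons, i.e. ~18 kt from a 3.15 MJ spark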
It should also be noted that in 1944 von Neumann suggested that T + D inside the core of the fission weapon would be compressed by "ionization compression" during fission (where a higher density ionized plasma compresses a lower density ionized plasma, i.e. the D + T plasma), an idea that was - years later - named the Internal Booster principle by Teller; see Frank Close, "Trinity", Allen Lane, London, 2019, pp158-159 where Close argues that during the April 1946 Superbomb Conference, Fuchs extended von Neumann's 1944 internal fusion boosting idea to an external D + T filled BeO walled capsule:

"Fuchs reasoned that [the very low energy, 1-10 kev, approximately 10-100 lower energy than medical] x-rays from the [physically separated] uranium explosion would reach the tamper of beryllium oxide, heat it, ionize the constituents and cause them to implode - the 'ionization implosion' concept of von Neumann but now applied to deuterium and tritium contained within beryllium oxide. To keep the radiation inside the tamper, Fuchs proposed to enclose the device inside a casing impervious to radiation. The implosion induced by the radiation would amplify the compression ... and increase the chance of the fusion bomb igniting. The key here is 'separation of the atomic charge and thermonuclear fuel, and compression of the latter by radiation travelling from the former', which constitutes 'radiation implosion'." (This distinction between von Neumann's "ionization implosion" INSIDE the tamper, of denser tamper expanding and thus compressing lower density fusion fuel inside, and Fuchs' OUTSIDE capsule "radiation implosion", is key even today for isentropic H-bomb design; it seems Teller's key breakthroughs were not separate stages or implosion but rather radiation mirrors and ablative recoil shock compression, where radiation is used to ablate a dense pusher of Sausage designs like Mike in 1952 etc., a distinction not to be confused for the 1944 von Neumann and 1946 Fuchs implosion mechanisms!

It appears Russian H-bombs used von Neumann's "ionization implosion" and Fuchs's "radiation implosion" for RDS-37 on 22 November 1955 and also in their double-primary 23 February 1958 test and subsequently, where their fusion capsules reportedly contained a BeO or other low-density outer coating, which would lead to quasi-isentropic compression, more effective for low density secondary stages than purely ablative recoil shock compression. This accounts for the continuing classification of the April 1946 Superbomb Conference (the extract of 32 pages linked here is so severely redacted that it is less helpful than the brief but very lucid summary of its technical content, in the declassified FBI compilation of reports concerning data Klaus Fuchs sent to Stalin, linked here!). Teller had all the knowledge he needed in 1946, but didn't go ahead because he made the stupid error of killing progress off by his own "no-go theorem" against compression of fusion fuel. Teller did a "theoretical" calculation in which he claimed that compression has no effect on the amount of fusion burn because the compressed system is simply scaled down in size so that the same efficiency of fusion burn occurs, albeit faster, and then stops as the fuel thermally expands. This was wrong. Teller discusses the reason for his great error in technical detail during his tape-recorded interview by Chuck Hansen at Los Alamos on 7 June 1993 (C. Hansen, Swords of Armageddon, 2nd ed., pp. II-176-7):

"Now every one of these [fusion] processes varied with the square of density. If you compress the thing, then in one unit's volume, each of the 3 important processes increased by the same factor ... Therefore, compression (seemed to be) useless. Now when ... it seemed clear that we were in trouble, then I wanted very badly to find a way out. And it occurred to be than an unprecedentedly strong compression will just not allow much energy to go into radiation. Therefore, something had to be wrong with my argument and then, you know, within minutes, I knew what must be wrong ... [energy] emission occurs when an electron and a nucleus collide. Absorption does not occur when a light quantum and a nucleus ... or ... electron collide; it occurs when a light quantum finds an electron and a nucleus together ... it does not go with the square of the density, it goes with the cube of the density." (This very costly theoretical error, wasting five years 1946-51, could have been resolved by experimental nuclear testing. There is always a risk of this in theoretical physics, which is why experiments are done to check calculations before prizes are handed out. The ban on nuclear testing is a luddite opposition to technological progress in improving deterrence.)

(This 1946-51 theoretical "no-go theorem" anti-compression error of Teller's, which was contrary to the suggestion of compression at the April 1946 Superbomb Conference - as Teller himself noted on 14 August 1952 - and which was corrected only in February 1951 by comparison with the known benefit of compression in pure fission cores, after Ulam's argument that month for fission core compression by lens-focussed primary stage shock waves, did not merely lead to Teller's dismissal of vital compression ideas. It also led to his false equations - exaggerating the cooling effect of radiation emission - causing underestimates of fusion efficiency in all theoretical calculations of fusion done until 1951! For this reason, Teller later repudiated the calculations that allegedly showed his Superbomb would fizzle; he argued that if it had been tested in 1946, the detailed data obtained - regardless of whatever happened - would at least have tested the theory, which would have led to rapid progress, because the theory was wrong. The entire basis of the cooling of fusion fuel by radiation leaking out was massively exaggerated until Lawrence Livermore weaponeer John Nuckolls showed that there is a very simple solution: use baffle re-radiated, softened x-rays for isentropic compression of low-density fusion fuel, e.g. very cold 0.3 keV x-rays rather than the usual 1-10 keV cold-warm x-rays emitted directly from the fission primary. Since the radiation losses are proportional to the fourth power of the x-ray energy or temperature, losses are virtually eliminated, allowing very efficient staging as in Nuckolls' 99.9% clean 10 Mt Ripple II, detonated on 30 October 1962 at Christmas Island. Teller's classical Superbomb was actually analyzed by John C. Solem in a 15 December 1978 report, A modern analysis of Classical Super, LA-07615, requested under the Freedom of Information Act by mainstream historian Alex Wellerstein (FOIA 17-00131-H, 12 June 2017), according to a list of FOIA requests at https://www.governmentattic.org/46docs/NNSAfoiaLogs_2016-2020.pdf. However, a google search for the documents Dr Wellerstein requested shows only a few are at the US Government DOE OpenNet OSTI database or otherwise online yet, e.g. LA-643 by Teller, On the development of Thermonuclear Bombs, dated 16 Feb. 1950. The page linked here stating that report was "never classified" is mistaken! One oddity about Teller's anti-compression "no-go theorem" is that even if fusion rates were independent of density, you would still want compression of the fissile material in a secondary stage such as a radiation imploded Alarm Clock, because the whole basis of implosion fission bombs is the benefit of compression; another issue is that even if fusion rates were unaffected by density, inward compression would still help to delay the expansion of the fusion system which leads to cooling and quenching of the fusion burn.)
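A quick check of the fourth-power scaling just cited, in a few lines of Python (the 3 keV reference temperature is an assumed mid-range value for primary-stage x-rays, used only to form a ratio; only the fourth-power law itself comes from the text above):

def relative_radiation_loss(kev, reference_kev=3.0):
    # Relative radiation loss ~ (x-ray temperature)^4, per the scaling stated above.
    return (kev / reference_kev) ** 4

for t in (0.3, 1.0, 3.0, 10.0):   # keV: Nuckolls' soft 0.3 keV versus the usual 1-10 keV primary x-rays
    print(t, relative_radiation_loss(t))
# 0.3 keV gives a loss factor of about 1e-4 relative to 3 keV, i.e. radiation losses
# are virtually eliminated, which is the point of the baffle-softened x-ray approach.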

ABOVE: the FBI file on Klaus Fuchs contains a brief summary of the secret April 1946 Super Conference at Los Alamos which Fuchs attended, noting that compression of fusion fuel was discussed by Lansdorf during the morning session on 19 April, attended by Fuchs, and that: "Suggestions were made by various people in attendance as to the manner of minimizing the rise in entropy during compression." This fact is vitally interesting, since it proves that an effort was already being made in April 1946 to secure isentropic compression of low-density fusion fuel, sixteen years before John H. Nuckolls tested the isentropically compressed Ripple II device on 30 October 1962, giving a 99.9% clean 10 megaton real H-bomb! So the Russians were given a massive head start on this isentropic compression of low-density fusion fuel for hydrogen bombs, used (according to Trutnev) both in the single-primary tests like RDS-37 in November 1955 and also in the double-primary designs, 2.5 times more efficient on a yield-to-mass basis, tested first on 23 February 1958! According to the FBI report, the key documents Fuchs gave to Russia were LA-551, Prima facie proof of the feasibility of the Super, 15 Apr 1946, and LA-575, Report of conference on the Super, 12 June 1946. Fuchs also handed over to Russia his own secret Los Alamos reports, such as LA-325, Initiator Theory, III. Jet Formation by the Collision of Two Surfaces, 11 July 1945; Jet Formation in Cylindrical Implosion with 16 Detonation Points, Secret, 6 February 1945; and Theory of Initiators II, Melon Seed, Secret, 6 January 1945. Note the reference to Bretscher attending the Super Conference with Fuchs; Teller, in a classified 50th anniversary conference on the H-bomb at Los Alamos, claimed that after he (Teller) left Los Alamos for the University of Chicago in 1946, Bretscher continued work on Teller's 31 August 1946 "Alarm Clock" nuclear weapon (precursor of the Mike Sausage concept, etc.) at Los Alamos; it was this layered uranium and fusion fuel "Alarm Clock" concept which led to the departure of Russian H-bomb design from American H-bomb design, simply because Fuchs left Los Alamos in June 1946, well before Teller invented the Alarm Clock concept on 31 August 1946. (Teller remembered the date precisely because he invented the Alarm Clock on the day his daughter was born, 31 August 1946! Teller and Richtmyer also developed a variant called "Swiss Cheese", with small pockets or bubbles of expensive fusion fuels dispersed throughout cheaper fuel, in order to kindle a more cost-effective thermonuclear reaction; this later inspired the fission and fusion boosted "spark plug" ideas in later Sausage designs; e.g. security-cleared Los Alamos historian Anne Fitzpatrick stated during her 4 March 1997 interview with Robert Richtmyer, who co-invented the Alarm Clock with Teller, that the Alarm Clock evolved into the spherical secondary stage of the 6.9 megaton Castle-Union TX-14 nuclear weapon!)

In fact (see Lawrence Livermore National Laboratory nuclear warhead designer Nuckolls' explanation in report UCRL-74345): "The rates of burn, energy deposition by charged reaction products, and electron-ion heating are proportional to the density, and the inertial confinement time is proportional to the radius. ... The burn efficiency is proportional to the product of the burn rate and the inertial confinement time ...", i.e. the fusion burn rate is directly proportional to the fuel density, which (for a fixed fuel mass) is inversely proportional to the cube of its radius. But the inertial confinement time for fusion to occur is proportional to the radius, so the fusion stage efficiency in a nuclear weapon is the product of the burn rate (proportional to 1/radius^3) and the confinement time (proportional to radius), so efficiency ~ radius/(radius^3) ~ 1/radius^2. Therefore, for a given fuel temperature, the total fusion burn, or the efficiency of the fusion stage, is inversely proportional to the square of the compressed radius of the fuel! (Those condemning Teller's theoretical errors or "arrogance" should be aware that he pushed hard all the time for experimental nuclear tests of his ideas, to check whether they were correct - exactly the right thing to do scientifically, and others who read his papers had the opportunity to point out any theoretical errors - but he was rebuffed by those in power, who used a series of contrived arguments to deny progress, based upon what Harry would call "subconscious bias", if not arrogant, damning, overt bigotry against the kind of credible, overwhelming deterrence which had proved lacking a decade earlier, leading to WWII. This callousness towards human suffering in war and under dictatorship existed in some UK physicists too: Joseph Rotblat's hatred of anything Western that might deter Russia, be it civil defense or tactical neutron bombs - he had no problem smiling and patting Russia's neutron bomb when visiting their labs during cosy, groupthink-deluded Pugwash campaigns for Russian-style "peaceful collaboration" - came from deep family communist convictions: his brother was serving in the Red Army in 1944 when Rotblat, by his own account, heard General Groves declare that the bomb must deter Russia; Rotblat stated he left Los Alamos as a result. The actions of these groups are analogous to the "Cambridge Scientists Anti-War Group" of the 1930s. After Truman ordered the H-bomb, Bradbury at Los Alamos had to start a "Family Committee" because Teller had a whole "family" of H-bomb designs, ranging from the biggest, "Daddy", through various "Alarm Clocks", all the way down to small internally-boosted fission tactical weapons. From Teller's perspective, he wasn't putting all his eggs in one basket.)
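A short sketch of that scaling argument, for a fixed fuel mass and temperature (the numbers are relative burn-up factors only, not absolute efficiencies):

def relative_burn(radius_compression):
    # radius_compression = initial radius / compressed radius, for a fixed fuel mass.
    r = 1.0 / radius_compression    # compressed radius in units of the initial radius
    burn_rate = 1.0 / r ** 3        # burn rate ~ density ~ 1/r^3
    confinement_time = r            # inertial confinement time ~ r
    return burn_rate * confinement_time   # ~ 1/r^2, as stated above

for x in (1, 2, 3, 10):
    print(x, relative_burn(x))
# Halving the compressed radius quadruples the fractional burn-up; a ten-fold radial
# compression gives a hundred-fold gain, which is why compression matters so much.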

Above: declassified illustration from a January 1949 secret report by the popular physics author and Los Alamos nuclear weapons design consultant George Gamow, showing his suggestion of using x-rays from both sides of a cylindrically imploded fission device to expose two fusion capsules, to test whether compression (fusion fuel in a BeO box on the right side) helps, or is unnecessary (bare capsule on the left side). Neutron counters detect 14.1 MeV T+D neutrons using the time-of-flight method (higher energy neutrons travel faster than ~1 MeV fission stage neutrons, arriving at the detectors first, allowing discrimination of the neutron energy spectrum by time of arrival). It took over two years to actually fire this 225 kt shot (8 May 1951)! No wonder Teller was outraged. A few interesting reports by Teller, and also Oppenheimer's secret 1949 report opposing the H-bomb project as it then stood on the grounds of low damage per dollar - precisely the exact opposite of the "interpretation" the media and gormless fools will assert until the cows come home - are linked here. The most interesting is Teller's 14 August 1952 Top Secret paper debunking Hans Bethe's propaganda, by explaining that, contrary to Bethe's claims, Stalin's spy Klaus Fuchs had the key "radiation implosion" secret of the H-bomb - see the second para on p2 - because he attended the April 1946 Superbomb Conference, which was not even attended by Bethe! It was this very fact, noted by two British attendees of the April 1946 Superbomb Conference before collaboration was ended later that year by the 1946 Atomic Energy Act, that led to Sir James Chadwick's secret use of "radiation implosion" for stages 2 and 3 of his three-stage H-bomb report the following month, "The Superbomb", a still-secret document that inspired Penney's original Tom/Dick/Harry staged and radiation-imploded H-bomb thinking, which is summarized in security-cleared official historian Lorna Arnold's Britain and the H-Bomb. Teller's 24 March 1951 letter to Los Alamos director Bradbury was written just 15 days after his historic Teller-Ulam 9 March 1951 report on radiation coupling and "radiation mirrors" (i.e. a plastic casing lining to re-radiate soft x-rays on to the thermonuclear stage to ablate and thus compress it), and states: "Among the tests which seem to be of importance at the present time are those concerned with boosted weapons. Another is connected with the possibility of a heterocatalytic explosion, that is, implosion of a bomb using the energy from another, auxiliary bomb. A third concerns itself with tests on mixing during atomic explosions, which question is of particular importance in connection with the Alarm Clock."
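Returning to the time-of-flight neutron discrimination mentioned in the Gamow diagram caption above, here is a small sketch using standard relativistic kinematics; the 100 m detector line-of-sight distance is an assumed illustrative value, not a figure from the test report.

import math

M_N = 939.565    # neutron rest mass-energy, MeV
C = 2.998e8      # speed of light, m/s

def neutron_speed(kinetic_mev):
    # Relativistic speed of a neutron of the given kinetic energy.
    gamma = 1.0 + kinetic_mev / M_N
    return C * math.sqrt(1.0 - 1.0 / gamma ** 2)

DISTANCE = 100.0   # metres from burst to detector: assumed for illustration only
for e in (14.1, 1.0):                     # D+T fusion neutrons versus typical fission neutrons
    v = neutron_speed(e)
    print(e, round(v, -5), round(1e6 * DISTANCE / v, 2))
# ~5.1e7 m/s (arriving ~1.9 microseconds after emission) for 14.1 MeV neutrons, versus
# ~1.4e7 m/s (~7.2 microseconds) for 1 MeV fission neutrons, so the fusion neutrons
# arrive first and the two components of the spectrum are cleanly separated in time.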

There is more to Fuchs' influence on the UK H-bomb than I go into in that paper; Chapman Pincher alleged that Fuchs was treated with special leniency at his trial and was later given early release in 1959 because of his contributions and help with the UK H-bomb, as author of the key Fuchs-von Neumann x-ray compression mechanism patent. For example, Penney visited Fuchs in June 1952 in Stafford Prison; see pp309-310 of Frank Close's 2019 book "Trinity". Close argues that Fuchs gave Penney a vital tutorial on the H-bomb mechanism during that prison visit. That wasn't the last help, either, since the UK Controller for Atomic Energy, Sir Freddie Morgan, wrote to Penney on 9 February 1953 that Fuchs was continuing to help. Another gem: Close gives, on p396, the story of how the FBI became suspicious of Edward Teller, after finding a man of that name teaching at the NY Communist Workers School in 1941 - the wrong Edward Teller, of course - yet Teller's wife was indeed a member of the Communist-front "League of Women Shoppers" in Washington, DC.

Chapman Pincher, who attended the Fuchs trial, writes about Fuchs hydrogen bomb lectures to prisoners in chapter 19 of his 2014 autobiography, Dangerous to know (Biteback, London, pp217-8): "... Donald Hume ... in prison had become a close friend of Fuchs ... Hume had repaid Fuchs' friendship by organising the smuggling in of new scientific books ... Hume had a mass of notes ... I secured Fuchs's copious notes for a course of 17 lectures ... including how the H-bomb works, which he had given to his fellow prisoners ... My editor agreed to buy Hume's story so long as we could keep the papers as proof of its authenticity ... Fuchs was soon due for release ..."

Chapman Pincher wrote about this as the front page exclusive of the 11 June 1952 Daily Express, "Fuchs: New Sensation", the very month Penney visited Fuchs in prison to receive his H-bomb tutorial! The UK media insisted this was evidence that UK security still wasn't really serious about deterring further nuclear spies, and the revelations finally culminated in the allegations that the MI5 chief of 1956-65, Roger Hollis, was a Russian fellow-traveller (Hollis was descended from Peter the Great, according to his elder brother Chris Hollis' 1958 book Along the Road to Frome) and a GRU agent of influence, codenamed "Elli". Pincher's 2014 book, written at the age of 100, explains that former MI5 agent Peter Wright suspected Hollis was Elli after evidence collected by MI6 agent Stephen de Mowbray was reported to the Cabinet Secretary. Hollis is alleged to have deliberately fiddled his report of interviewing GRU defector Igor Gouzenko on 21 November 1945 in Canada. Gouzenko had exposed the spy and Groucho Marx lookalike Dr Alan Nunn May (photo below), and also a GRU spy in MI5 codenamed Elli, who used only duboks (dead letter boxes), but Gouzenko told Pincher that when Hollis interviewed him in 1945 he wrote up a lengthy false report claiming to discredit many statements by Gouzenko: "I could not understand how Hollis had written so much when he had asked me so little. The report was full of nonsense and lies. As [MI5 agent Patrick] Stewart read the report to me [during the 1972 investigation of Hollis], it became clear that it had been faked to destroy my credibility so that my information about the spy in MI5 called Elli could be ignored. I suspect that Hollis was Elli." (Source: Pincher, 2014, p320.) Christopher Andrew claimed Hollis couldn't have been GRU spy Elli because KGB defector Oleg Gordievsky suggested it was the KGB spy Leo Long (sub-agent of KGB spy Anthony Blunt). However, Gouzenko was GRU, not KGB like Long and Gordievsky! Gordievsky's claim that "Elli" was on the cover of Long's KGB file was debunked by KGB officer Oleg Tsarev, who found that Long's codename was actually Ralph! Another declassified Russian document, from General V. Merkulov to Stalin dated 24 Nov 1945, confirmed Elli was a GRU agent inside British intelligence, whose existence was betrayed by Gouzenko. In Chapter 30 of Dangerous to Know, Pincher related how he was given a Russian suitcase-sized microfilm enlarger by Michael J. Butt, a doorman for secret communist meetings in London and an eyewitness to Hollis' spying in 1959. According to Butt, Hollis delivered documents to Brigitte Kuczynski, younger sister of Klaus Fuchs' original handler, the notorious Sonia aka Ursula. Hollis allegedly handed Minox films to Brigitte discreetly while walking through Hyde Park at 8pm after work. Brigitte gave her Russian-made Minox film enlarger to Butt to dispose of, but he kept it in his loft as evidence. (Pincher later donated it to King's College.) Other, more circumstantial, evidence is that Hollis recruited the spy Philby, Hollis secured the spy Blunt immunity from prosecution, Hollis cleared Fuchs in 1943, and MI5 allegedly destroyed Hollis' 1945 interrogation report on Gouzenko, to prevent the airing of the scandal that it was fake after checking it with Gouzenko in 1972.

It should be noted that the very small number of Russian GRU illegal agents in the UK, and the very small communist party membership, had a relatively large influence on nuclear policy via infiltration of unions which had block votes in the Labour Party, as well as via the indirect CND and "peace movement" lobbies saturating the popular press with anti-civil defence propaganda to make the nuclear deterrent totally incredible for any provocation short of a direct all-out countervalue attack. Under such pressure, UK Prime Minister Harold Wilson's government abolished the UK Civil Defence Corps in March 1968, making the UK nuclear deterrent totally incredible against major provocations. While there was some opposition to Wilson, it was focussed on his profligate nationalisation policies, which were undermining the economy and thus destabilizing military expenditure for national security. Peter Wright's 1987 book Spycatcher and various other sources, including Daily Mirror editor Hugh Cudlipp's book Walking on the Water, documented that on 8 May 1968 Cecil King - a director of the Bank of England who was also Chairman of the Daily Mirror newspaper group - together with Mirror editor Cudlipp and the UK Ministry of Defence's anti-nuclear Chief Scientific Adviser, Sir Solly Zuckerman, met at Lord Mountbatten's house in Kinnerton Street, London, to discuss a coup d'état to overthrow Wilson and make Mountbatten the UK President, a new position. King's position, according to Cudlipp - quite correct, as revealed by the UK economic crises of the 1970s when the UK was effectively bankrupt - was that Wilson was setting the UK on the road to financial ruin and thus military decay. Zuckerman and Mountbatten refused to take part in a revolution; however, Wilson's government was attacked by the Daily Mirror in a front page editorial by Cecil King two days later, on 10 May 1968, headlined "Enough is enough ... Mr Wilson and his Government have lost all credibility, all authority." According to Wilson's secretary Lady Falkender, Wilson was only told of the coup discussions in March 1976.

CND and the UK communist party alternately tried to claim, in a contradictory way, that (a) they were too small in numbers to have any influence on politics, and (b) they were leading the country towards utopia via unilateral nuclear disarmament and saturation propaganda about nuclear annihilation (totally ignoring essential data on different nuclear weapon designs, yields, heights of burst, the "use" of a weapon as a deterrent to PREVENT an invasion by concentrated force, etc.) via the infiltrated BBC and most other media. Critics pointed out that Nazi Party membership in Germany was only 5% when Hitler became dictator in 1933, while in Russia there were only 200,000 Bolsheviks in September 1917, out of 125 million, i.e. 0.16%. The whole threat of such dictatorships is therefore that of a minority seizing power beyond its justifiable numbers, and controlling a majority which has different views. Traditional democracy itself is a dictatorship of the majority (via the ballot box, a popularity contest); minority-dictatorship, by contrast, is dictatorship by a fanatically motivated minority using force and fear (coercion) to control the majority. The coercion tactics used by foreign dictators to control the press in free countries are well documented, but never publicised widely. Hitler put pressure on Nazi-critics in the UK "free press" via UK Government appeasers Halifax, Chamberlain and particularly the loathsome UK ambassador to Nazi Germany, Sir Nevile Henderson, for example trying to censor or ridicule appeasement critics such as the cartoonist David Low, to get Captain W. E. Johns fired (as editor of both Flying and Popular Flying, which had huge circulations and attacked appeasement as a threat to national security, since it was being used to justify reduced rearmament expenditure), and to get Winston Churchill deselected. These were all sneaky "back door" pressure-on-publishers tactics, dressed up as efforts to "ease international tensions"! The same occurred during the Cold War, with personal attacks in Scientific American and the Bulletin of the Atomic Scientists, and by fellow travellers, on Herman Kahn, Eugene Wigner, and others who warned that we need civil defence to make a deterrent against large provocations credible in the eyes of an aggressor.

Chapman Pincher summarises the vast, hypocritical Russian expenditure on anti-Western propaganda against the neutron bomb in Chapter 15, "The Neutron Bomb Offensive", of his 1985 book The Secret Offensive: "Such a device ... carries three major advantages over Hiroshima-type weapons, particularly for civilians caught up in a battle ... against the massed tanks which the Soviet Union would undoubtedly use ... by exploding these warheads some 100 feet or so above the massed tanks, the blast and fire ... would be greatly reduced ... the neutron weapon produces little radioactive fall-out so the long-term danger to civilians would be very much lower ... the weapon was of no value for attacking cities and the avoidance of damage to property can hardly be rated as of interest only to 'capitalists' ... As so often happens, the constant repetition of the lie had its effects on the gullible ... In August 1977, the [Russian] World Peace Council ... declared an international 'Week of action' against the neutron bomb. ... Under this propaganda Carter delayed his decision, in September ... a Sunday service being attended by Carter and his family on 16 October 1977 was disrupted by American demonstrators shouting slogans against the neutron bomb [see the 17 October 1977 Washington Post] ... Lawrence Eagleburger, when US Under Secretary of State for Political Affairs, remarked, 'We consider it probable that the Soviet campaign against the neutron bomb cost some $100 million'. ... Even the Politburo must have been surprised at the size of what it could regard as a Fifth Column in almost every country." [Unfortunately, Pincher himself had contributed to the anti-nuclear nonsense in his 1965 novel "Not with a Bang", in which small amounts of radioactivity from nuclear fallout combine with a medicine to exterminate humanity! The allure of anti-nuclear propaganda extends to all who wish to sell "doomsday fiction", not just Russian dictators but mainstream media storytellers in the West. By contrast, Glasstone and Dolan's 1977 Effects of Nuclear Weapons doesn't even mention the neutron bomb, so there was no scientific and technical effort whatsoever by the West to make it a credible deterrent even in the minds of the public it had to protect from WWIII!]

"The Lance warhead is the first in a new generation of tactical mini-nukes that have been sought by Army field leading advocates: the series of American generals who have commanded the North Atlantic Treaty organization theater. They have argued that the 7,000 unclear warheads now in Europe are old, have too large a nuclear yield and thus would not be used in a war. With lower yields and therefore less possible collateral damage to civilian populated areas, these commanders have argued, the new mini-nukes are more credible as deterrents because they just might be used on the battlefield without leading to automatic nuclear escalation. Under the nuclear warhead production system, a President must personally give the production order. President Ford, according to informed sources, signed the order for the enhanced-radiation Lance warhead. The Lance already has regular nuclear warheads and it deployed with NATO forces in Europe. In addition to the Lance warhead, other new production starts include: An 8-inch artillery-fired nuclear warhead to replace those now in Europe. This shell had been blocked for almost eight years by Sen. Stuart Symington (D-Mo.), who had argued that it was not needed. Symington retired last year. The Pentagon and ERDA say the new nuclear 8-inch warhead would be safer from stealing by terrorists. Starbird testified. It will be "a command disable system" to melt its inner workings if necessary. ... In longer-term research, the bill contains money to finance an enhanced-radiational bomb to the dropped from aircraft." - Washington post, 5 June 1977.

This debunks the fake news that Teller's and Ulam's 9 March 1951 report LAMS-1225 itself gave Los Alamos the Mike H-bomb design, ready for testing! Teller was proposing a series of nuclear tests of the basic principles, not the 10 Mt Ivy-Mike, which was based on a report the next month by Teller alone, LA-1230, "The Sausage: a New Thermonuclear System". When you consider that, what did Ulam actually contribute to the hydrogen bomb? Nothing about implosion, compression or separate stages - all already done by von Neumann and Fuchs five years earlier - and just a lot of drivel about trying to channel material shock waves from a primary to compress another fissile core, a real dead end. What Ulam did was to kick Teller out of his self-imposed mental objection to compression devices. Everything else was Teller's: the radiation mirrors, the Sausage with its outer ablation pusher and its inner spark plug. Note also that, contrary to official historian Arnold's book (which claims, due to a misleading statement by Dr Corner, that all the original 1946 UK copies of the Superbomb Conference documentation were destroyed after being sent from AWRE Aldermaston to London between 1955-63), the documents did exist in the AWRE TPN series (theoretical physics notes, 100% of which have been preserved) and are at the UK National Archives, e.g. AWRE-TPN 5/54 is listed in the National Archives Discovery catalogue under ref ES 10/5: "Miscellaneous super bomb notes by Klaus Fuchs"; see also the 1954 report AWRE-TPN 6/54, "Implosion super bomb: substitution of U235 for plutonium", ES 10/6; the 1954 report AWRE-TPN 39/54, "Development of the American thermonuclear bomb: implosion super bomb", ES 10/39; and also ES 10/21 "Collected notes on Fermi's super bomb lectures", ES 10/51 "Revised reconstruction of the development of the American thermonuclear bombs", ES 1/548 and ES 1/461 "Superbomb Papers", etc. Many reports are secret and retained, despite containing "obsolete" designs (although UK report titles are generally unredacted, such as: "Storage of 6kg Delta (Phase) -Plutonium Red Beard (tactical bomb) cores in ships")! It should also be noted that the Livermore Laboratory's 1958 TUBA spherical secondary, with an oralloy (enriched U235) outer pusher, was just a reversion from Teller's 1951 spark plug idea (fissile material in the middle of the fusion fuel) back to the 1944 von Neumann scheme of having fission material surrounding the fusion fuel. In other words, the TUBA was just a radiation and ionization imploded, internally fusion-boosted, second fission stage which could have been accomplished a decade earlier if the will had existed, when all of the relevant ideas were already known. The declassified UK spherical secondary-stage alternatives linked here (tested as Grapple X, Y and Z with varying yields but similar size, since all used the 5 ft diameter Blue Danube drop casing) clearly show that a far more efficient fusion burn occurs by minimising the mass of the hard-to-compress U235 (oralloy) sparkplug/pusher, while maximising the amount of lithium-7, not lithium-6. Such a secondary with minimal fissionable material also automatically has minimal neutron ABM vulnerability (i.e., "Radiation Immunity", RI). This is the current cheap Russian neutron weapon design, but not the current Western design of warheads like the W78, W88 and the B61 bomb.

So why on earth doesn't the West take the cheap, efficient option of cutting expensive oralloy and maximising cheap natural (mostly lithium-7) LiD in the secondary? Even Glasstone's 1957 Effects of Nuclear Weapons on p17 (para 1.55) states that "Weight for weight ... fusion of deuterium nuclei would produce nearly 3 times as much energy as the fission of uranium or plutonium"! The sad answer is "density"! Natural LiD (containing 7.42% Li6 abundance) is a low density white/grey crystalline solid, like salt, that actually floats on water (lithium deuteroxide would be formed on exposure to water), since its density is just 820 kg/m^3. Since the ratio of the molecular mass of Li6D to Li7D is 8/9, the expected density of highly enriched (95%) Li6D is 739 kg/m^3, while for 36% enriched Li6D it is 793 kg/m^3. Uranium metal has a density of 19,000 kg/m^3, i.e. 25.7 times greater than 95% enriched Li6D or 24 times greater than 36% enriched Li6D. Compactness, i.e. volume, is more important in a Western MIRV warhead than mass/weight! In the West, it's best to have a tiny-volume, very heavy, very expensive warhead. In Russia, cheapness outweighs volume considerations. The Russians in some cases simply allowed their more bulky warheads to protrude from the missile bus (see photo below), or compensated for the lower yields at the same volume using clean LiD by using the cost savings to build more warheads. (The West doubles the fission yield/mass ratio of some warheads by using U235/oralloy pushers in place of U238, which suffers from the problem that about half the neutrons it interacts with result in non-fission capture, as explained below. Note that the 720 kiloton UK nuclear test Orange Herald device contained a hollow shell of 117 kg of U235 surrounded by what Lorna Arnold's book quotes John Corner as calling a "very thin" layer of high explosive, and was compact, unboosted - the boosting failed to work - and gave 6.2 kt/kg of U235, whereas the first version of the 2-stage W47 Polaris warhead contained 60 kg of U235 which produced most of the secondary stage yield of about 400 kt, i.e. 6.7 kt/kg of U235. Little difference - but because perhaps 50% of the total yield of the W47 was fusion, its efficiency of use of U235 must actually have been less than the Orange Herald device, around 3 kt/kg of U235, which indicates design efficiency limits to "hydrogen bombs"! Yet anti-nuclear charlatans claimed that the Orange Herald bomb was a con!)
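The density and yield-per-kilogram arithmetic above can be checked in a few lines of Python (assuming, as the text implicitly does, that Li6 enrichment changes only the mean molecular mass of LiD and not its molar volume):

RHO_NATURAL = 820.0    # kg/m^3, natural LiD (7.42% Li-6)

def mean_molecular_mass(li6_fraction):
    return li6_fraction * 8.0 + (1.0 - li6_fraction) * 9.0   # Li6D ~ 8, Li7D ~ 9 (approx.)

m_nat = mean_molecular_mass(0.0742)
for frac in (0.95, 0.36):
    print(frac, round(RHO_NATURAL * mean_molecular_mass(frac) / m_nat))
# ~740 and ~794 kg/m^3, matching the ~739 and ~793 kg/m^3 quoted above.

print(round(19000 / 739, 1), round(19000 / 793, 1))   # uranium is ~25.7 and ~24 times denser

# Yield per kilogram of U235 for the two devices discussed above:
print(round(720 / 117, 1))   # Orange Herald: ~6.2 kt per kg of U235
print(round(400 / 60, 1))    # early W47 (if most of ~400 kt came from its 60 kg of U235): ~6.7 kt/kg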

ABOVE: USA nuclear weapons data declassified by the UK Government in 2010 (the information was originally acquired under the 1958 UK-USA Agreement for Cooperation on the Uses of Atomic Energy for Mutual Defense Purposes, in exchange for UK nuclear weapons data), as published at http://nuclear-weapons.info/images/tna-ab16-4675p63.jpg. This single table summarizes the key tactical and strategic nuclear weapons secrets from 1950s testing! (In order to analyze the warhead pusher thicknesses and very basic schematics from this table it is necessary to supplement it with the 1950s warhead design data declassified in other documents, particularly some of the data from Tom Ramos and Chuck Hansen, quoted in some detail below.) The data on the mass of special nuclear materials in each of the different weapons argue strongly that the entire load of Pu239 and U235 in the 1.1 megaton B28 was in the primary stage, so that weapon could not have had a fissile spark plug in the centre of the secondary, let alone a fissile ablator (unlike Teller's Sausage design of 1951), and so it appears the B28 had no need whatsoever of a beryllium neutron radiation shield to prevent pre-initiation of the secondary stage prior to its compression (on the contrary, such neutron exposure of the lithium deuteride in the secondary stage would be VITAL to produce some tritium in it prior to compression, to spark fusion when it was compressed). Arnold's book indeed explains that UK AWE physicists found the B28 to be an excellent, highly optimised, cheap design, unlike the later W47, which was extremely costly. The masses of U235 and Li6 in the W47 show the difficulties of trying to maintain efficiency while scaling down the mass of a two-stage warhead for SLBM delivery: much larger quantities of Li6 and U235 must be used to achieve a LOWER yield! To achieve thermonuclear warheads of low mass at sub-megaton yields, both the outer bomb casing and the pusher around the fusion fuel must be reduced:

"York ... studied the Los Alamos tests in Castle and noted most of the weight in thermonuclear devices was in their massive cases. Get rid of the case .... On June 12, 1953, York had presented a novel concept ... It radically altered the way radiative transport was used to ignite a secondary - and his concept did not require a weighty case ... they had taken the Teller-Ulam concept and turned it on its head ... the collapse time for the new device - that is, the amount of time it took for an atomic blast to compress the secondary - was favorable compared to older ones tested in Castle. Brown ... gave a female name to the new device, calling it the Linda." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp137-8. (So if you reduce the outer casing thickness to reduce warhead weight, you must complete the pusher ablation/compression faster, before the thinner outer casing is blown off, and stops reflecting/channelling x-rays on the secondary stage. Making the radiation channel smaller and ablative pusher thinner helps to speed up the process. Because the ablative pusher is thinner, there is relatively less blown-off debris to block the narrower radiation channel before the burn ends.)

"Brown's third warhead, the Flute, brought the Linda concept down to a smaller size. The Linda had done away with a lot of material in a standard thermonuclear warhead. Now the Flute tested how well designers could take the Linda's conceptual design to substantially reduce not only the weight but also the size of a thermonuclear warhead. ... The Flute's small size - it was the smallest thermonuclear device yet tested - became an incentive to improve codes. Characteristics marginally important in a larger device were now crucially important. For instance, the reduced size of the Flute's radiation channel could cause it to close early [with ablation blow-off debris], which would prematurely shut off the radiation flow. The code had to accurately predict if such a disaster would occur before the device was even tested ... the calculations showed changes had to be made from the Linda's design for the Flute to perform correctly." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp153-4. Note that the piccolo (the W47 secondary) is a half-sized flute, so it appears that the W47's secondary stage design miniaturization history was: Linda -> Flute -> Piccolo:

"A Division's third challenge was a small thermonuclear warhead for Polaris [the nuclear SLBM submarine that preceeded today's Trident system]. The starting point was the Flute, that revolutionary secondary that had performed so well the previous year. Its successor was called the Piccolo. For Plumbbob [Nevada, 1957], the design team tested three variations of the Piccolo as a parameter test. One of the variants outperformed the others ... which set the stage for the Hardtack [Nevada and Pacific, 1958] tests. Three additional variations for the Piccolo ... were tested then, and again an optimum candidate was selected. ... Human intuition as well as computer calculations played crucial roles ... Finally, a revolutionary device was completed and tested ... the Navy now had a viable warhead for its Polaris missile. From the time Brown gave Haussmann the assignment to develop this secondary until the time they tested the device in the Pacific, only 90 days had passed. As a parallel to the Robin atomic device, this secondary for Polaris laid the foundation for modern thermonuclear weapons in the United States." - Dr Tom Ramos (Lawrence Livermore National Laboratory nuclear weapon designer), From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, Naval Institute press, 2022, pp177-8. (Ramos is very useful in explaining that many of the 1950s weapons with complex non-spherical, non-cylindrical shaped primaries and secondaries were simply far too complex to fully simulate on the really pathetic computers they had - Livermore got a 4,000 vacuum tubes-based IBM 701 with 2 kB memory in 1956, AWRE Aldermaston in the Uk had to wait another year for theirs - so they instead did huge numbers of experimental explosive tests. For instance, on p173, Ramos discloses that the Swan primary which developed into the 155mm tactical shell, "went through over 100 hydrotests", non-nuclear tests in which fissile material is replaced with U238 or other substitutes, and the implosion is filmed with flash x-ray camera systems.)

"An integral feature of the W47, from the very start of the program, was the use of an enriched uranium-235 pusher around the cylindrical secondary." - Chuck Hansen, Swords 2.0, p. VI-375 (Hansen's source is his own notes taken during a 19-21 February 1992 nuclear weapons history conference he attended; if you remember the context, "Nuclear Glasnost" became fashionable after the Cold War ended, enabling Hansen to acquire almost unredacted historical materials for a few years until nuclear proliferation became a concern in Iraq, Afghanistan, Iran and North Korea). The key test of the original (Robin primary and Piccolo secondary) Livermore W47 was 412 kt Hardtack-Redwood on 28 June 1958. Since Li6D utilized at 100% efficiency would yield 66 kt/kg, the W47 fusion efficiency was only about 6%; since 100% fission of u235 yields 17 kt/kg, the W47's Piccolo fission (the u235 pusher) efficiency was about 20%; the comparable figures for secondary stage fission and fusion fuel burn efficiencies in the heavy B28 are about 7% and 15%, respectively:

ABOVE: the heavy B28 gave a very "big bang for the buck": it was cheap in terms of expensive Pu, U235 and Li6, and this was the sort of deterrent wanted by General LeMay for the USAF, which sought as many weapons as possible within the context of Eisenhower's budgetary concerns. But its weight (not its physical size) made it unsuitable for SLBM Polaris warheads. The first SLBM warhead, the W47, was almost the same size as the B28 weapon package, but much lighter due to having a much thinner "pusher" on the secondary, and a thinner casing. But this came at a large financial cost in terms of the quantities of special nuclear materials required to get such a lightweight design to work, and also a large loss of total yield. The fusion fuel burn efficiency ranges from 6% for the 400 kt W47 to 15% for the 1.1 megaton B28 (note that for the very heavily cased 11-15 megaton yield tests at Castle, up to 40% fusion fuel burn efficiency was achieved), whereas the secondary stage ablative pusher fission efficiency ranged from 7% for a 1.1 inch thick natural uranium (99.3% U238) ablator to 20% for a 0.15 inch thick highly enriched oralloy (U235) ablator. From the brief description of the design evolution given by Dr Tom Ramos (Lawrence Livermore National Laboratory), it appears that when the x-ray channelling outer case thickness of the weapon is reduced to save weight, the duration of the x-ray coupling is reduced, so the dense metal pusher thickness must also be reduced if the same compression factor (approximately 20) for the secondary stage is to be accomplished (lithium deuteride, being of low density, is far more compressible by a given pressure than dense metal). In both examples, the secondary stage is physically a boosted fission stage. (If you are wondering why the hell the designers don't simply use a hollow core U235 bomb like Orange Herald instead of bothering with such inefficient x-ray coupled two-stage designs as these, the answer is straightforward: the risk of a large fissile core being melted down by the neutrons from Moscow's defensive ABM nuclear warheads, i.e. neutron bombs.)
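As a sanity check on the burn-efficiency arithmetic quoted above, here is a minimal Python sketch using the 66 kt/kg (100% Li6D fusion) and 17 kt/kg (100% U235 fission) specific yields given in the text. The 60 kg U235 pusher mass is the figure from the declassified table quoted further below, but the W47 fission/fusion yield split and the Li6D mass in this sketch are purely illustrative assumptions chosen to reproduce the quoted ~20% and ~6% efficiencies; they are not declassified values.

```python
# Burn-efficiency check for the W47 figures quoted above (illustrative sketch).
KT_PER_KG_LI6D_FUSION = 66.0    # kt/kg for 100% burn of Li6D (figure quoted in the text)
KT_PER_KG_U235_FISSION = 17.0   # kt/kg for 100% fission of U235 (figure quoted in the text)

def burn_efficiency(actual_yield_kt, fuel_mass_kg, kt_per_kg_at_100_percent):
    """Fraction of fuel burned = achieved yield / theoretical 100%-burn yield."""
    return actual_yield_kt / (fuel_mass_kg * kt_per_kg_at_100_percent)

# Hardtack-Redwood W47 prototype: 412 kt total (text). The 60 kg U235 pusher mass
# is quoted from the declassified table; the fission/fusion split and the Li6D
# mass below are ASSUMPTIONS chosen only to reproduce the ~20% / ~6% figures.
w47_fission_yield_kt = 205.0    # assumed share of the 412 kt total
w47_fusion_yield_kt = 207.0     # assumed share of the 412 kt total
u235_pusher_mass_kg = 60.0      # from the declassified table (quoted below)
li6d_mass_kg = 52.0             # assumed, for illustration only

print(f"W47 U235 pusher fission efficiency ~ "
      f"{burn_efficiency(w47_fission_yield_kt, u235_pusher_mass_kg, KT_PER_KG_U235_FISSION):.0%}")
print(f"W47 Li6D fusion burn efficiency    ~ "
      f"{burn_efficiency(w47_fusion_yield_kt, li6d_mass_kg, KT_PER_KG_LI6D_FUSION):.0%}")
```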

The overall weight of the W47 was minimized by replacing the usual thick layer of U238 pusher with a very thin layer of fissile U235 (supposedly Teller's suggestion), which is more efficient for fission but is limited by critical mass issues. The W47 used a 95% enriched Li6D cylinder with a 3.8mm thick U235 pusher; the B28 secondary was 36% enriched Li6D, with a very heavy 3cm thick U238 pusher. As shown below, it appears the B28 was related to the Los Alamos clean design of the TX21C, tested as the 95% clean, 4.5 megaton Redwing-Navajo in 1956, and did not have a central fissile spark plug. From the declassified fallout composition, it is known the Los Alamos designers replaced the outer U238 pusher of the Castle secondaries with lead in Navajo. Livermore did the same for their 85% clean, 3.53 megaton Redwing-Zuni test, but Livermore left in the central fission spark plug, which contributed 10% out of its 15% total fission yield, instead of removing the neutron shield, using foam channel filler to slow down the x-ray compression, and thereby using primary stage neutrons to split lithium-6 into tritium prior to compression. Our point is that Los Alamos got it wrong in sticking too conservatively to ideology: for clean weapons they should have got rid of the dense lead pusher and gone for John H. Nuckolls' idea (also used by Fuchs in 1946 and the Russians in 1955 and 1958) of a low-density pusher for isentropic compression of low-density fusion fuel. This error is the reason why those early cleaner weapons were extremely heavy, due to unnecessary 2" thick lead or tungsten pushers around the fusion fuel, which massively reduced their yield-to-weight ratios, so that LeMay rejected them!

Compare these data for the 20 inch diameter, 49 inch long, 1600 lb, 1.1 megaton B28 bomb to the 18 inch diameter, 47 inch long, 700 lb, 400 kt Mk47/W47 Polaris SLBM warhead (this is the correct yield for the first version of the W47, confirmed by UK data in Lorna Arnold's Britain and the H-bomb, 2001, and AB 16/3240; Wikipedia wrongly gives the 600 kt figure in Hansen, which was a speculation or a later upgrade). The key difference is that the W47 is much lighter, and thus suitable for the Polaris SLBM, unlike the heavier, higher yield B28. Both the B28 and W47 used cylindrical "Sausage" secondaries, but they are very different in composition; the B28 used a huge mass of U238 in its ablative sausage outer shell or pusher, while the W47 used oralloy/U235 in the pusher. The table shows the total amounts of Pu, Oralloy (U235), Lithium-6 (excluding cheaper lithium-7, which is also present in varying amounts in different thermonuclear weapons), and tritium (which is used for boosting inside fissile material, essentially to reduce the amount of Pu and therefore the vulnerability of the weapon to Russian enhanced neutron ABM warhead meltdown). The B28 also has an external dense natural uranium (99.3% U238) "ablative pusher shell" whose mass is not listed in this table. The table shows that the 400 kt W47 Polaris SLBM warhead contains 60 kg of U235 (nearly as much as the 500 kt pure fission Mk18), which is in an ablative pusher shell around the lithium deuteride, so that the cylinder of neutron-absorbing lithium-6 deuteride within it keeps that mass of U235 subcritical until compressed. So the 400 kt W47 contains far more Pu, U235, Li6 and T than the higher yield 1.1 megaton B28: this is the big $ price you pay for reducing the mass of the warhead; the total mass of the W47 is reduced to 44% of the mass of the B28, since the huge mass of cheap U238 pusher in the B28 is replaced by a smaller mass of U235, which is more efficient because (as Dr Carl F. Miller reveals in USNRDL-466, Table 6) about half of the neutrons hitting U238 don't cause fission but instead undergo non-fission capture reactions which produce U239, plus the n,2n reaction that produces U237, emitting a lot of very low energy gamma rays in the fallout. For example, in the 1954 Romeo nuclear test (which, for simplicity, we quote since it used entirely natural LiD, with no expensive enrichment of the Li6 isotope whatsoever), the U238 jacket fission efficiency was reduced by capture as follows: 0.66 atoms of U239, 0.10 atoms of U237 and 0.23 atoms of U240 were produced per fission, a total of 0.66 + 0.10 + 0.23 ~ 1 non-fission event per fission, i.e. roughly 50% fission in the U238 pusher versus 50% non-fission neutron captures. So by using U235 in place of U238, you virtually eliminate the non-fission capture (see the UK Atomic Weapons Establishment graph of fission and capture cross-sections for U235, shown below), which roughly halves the mass of the warhead for a given fission yield. This same principle of using an outer U235/oralloy pusher instead of U238 to reduce mass - albeit with the secondary's cylindrical "Sausage" shape now changed to a sphere - applies to today's miniaturised, high yield, low mass "MIRV" warheads. Just as the lower-yield W47 counter-intuitively used more expensive ingredients than the bulkier higher-yield B28, modern compact, high-yield, oralloy-loaded warheads literally cost a bomb, just to keep the mass down! There is evidence Russia uses alternative ideas.
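To make the Romeo neutron bookkeeping explicit, here is a minimal sketch using only the per-fission figures quoted above from Carl F. Miller's USNRDL-466, Table 6; the simplification that a U235 pusher is "nearly all fission" is an assumption for illustration, consistent with the argument in the text.

```python
# Romeo (1954) U238-pusher neutron bookkeeping, using the per-fission figures
# quoted above from Carl F. Miller's USNRDL-466, Table 6.
non_fission_events_per_fission = {
    "U239 (radiative capture)": 0.66,
    "U237 (n,2n)":              0.10,
    "U240":                     0.23,
}
total_non_fission = sum(non_fission_events_per_fission.values())   # ~0.99 per fission

# Of all U238 atoms transmuted in the pusher, the fraction that actually fissioned:
fission_share = 1.0 / (1.0 + total_non_fission)

print(f"Non-fission events per fission in the U238 pusher: {total_non_fission:.2f}")
print(f"Share of reacted U238 atoms that fissioned:        {fission_share:.0%}")
# Treating a U235 pusher as nearly all-fission (an assumption), the fission energy
# released per kilogram of reacted U238 pusher is only about this fraction of the
# U235 case:
print(f"U238 pusher energy density relative to U235:       ~{fission_share:.0%}")
```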

This is justified by the data given for a total U238 capture-to-fission ratio of 1 in the 11 megaton Romeo test, and also by the cross-sections for U235 capture and fission on the AWE graph over the relevant neutron energy range of about 1-14 MeV. If half the neutrons are captured in U238 without fission, then the maximum fission yield you can possibly get from "x" kg of U238 pusher is HALF the energy obtained from 100% fission of "x" kg of U238. Since with U238 only about half the atoms can undergo fission by thermonuclear neutrons (because the other half undergo non-fission capture), the energy density (i.e. the Joules/kg produced by the fission explosion of the pusher) reached by an exploding U238 pusher is only half that reached by U235 (in which there is less non-fission capture of neutrons, which would double the pusher mass without doubling the fission energy release). So a U235 pusher will reach twice the temperature of a U238 pusher, doubling its material heating of the fusion fuel within, prolonging the fusion burn and thus increasing fusion burn efficiency. The 10 MeV neutron energy is important since it allows for the likely average scattering of 14.1 MeV D+T fusion neutrons, and it is also the energy at which the important (n,2n) cross-section peaks for both U235 (peak of 0.88 barn at 10 MeV) and U238 (peak of 1.4 barns at 10 MeV). For 10 MeV neutrons, U235 and U238 have fission cross-sections of 1.8 and 1 barn, respectively. For 14 MeV neutrons, U238 has an (n,2n) cross-section of 0.97 barn for U237 production. So, ignoring non-fission captures, you need a 1.8/1 = 1.8 times greater thickness of pusher for U238 than for U235 to achieve the same amount of fission. But this simple consideration ignores the x-ray ablation requirement of the exploding pusher, so there are several factors requiring detailed computer calculations, and/or nuclear testing.
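The thickness argument in the last paragraph amounts to a one-line cross-section ratio; here it is as a trivial sketch, using only the 10 MeV fission cross-sections quoted above and ignoring non-fission capture and ablation, exactly as the text itself cautions.

```python
# First-order pusher thickness comparison from the 10 MeV fission cross-sections
# quoted above (1.8 barns for U235, 1.0 barn for U238), ignoring non-fission
# capture and the x-ray ablation requirement, as noted in the text.
sigma_fission_u235_barns = 1.8
sigma_fission_u238_barns = 1.0

thickness_ratio = sigma_fission_u235_barns / sigma_fission_u238_barns
print(f"A U238 pusher needs ~{thickness_ratio:.1f}x the U235 thickness "
      f"for the same number of fissions per unit area (crude first-order estimate).")
```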

Note: there is an extensive collection of declassified documents released after Chuck Hansen's final edition, Swords 2.0, which are now available at https://web.archive.org/web/*/http://www.nnsa.energy.gov/sites/default/files/nnsa/foiareadingroom/*, being an internet-archive back-up of a now-removed US Government Freedom of Information Act Reading Room. Unfortunately they were only identified by number sequence, not by report title or content, in that reading room, and so they failed to achieve wide attention when originally released! (This includes extensive "Family Committee" H-bomb documentation and many long-delayed FOIA requests submitted originally by Hansen, but not released in time for inclusion in Swords 2.0.) As the extract below - from declassified document RR00132 - shows, some declassified documents contained very detailed information, or typewriter-spaced gaps that could only be filled by a single specific secret word (in this example, details of the W48 linear implosion tactical nuclear warhead, including the fact that it used PBX9404 plastic bonded explosive glued to the brittle beryllium neutron reflector around the plutonium core using Adiprene L100 adhesive!).

ABOVE: Declassified data on the radiation flow analysis for the 10 megaton Mike sausage: http://nnsa.energy.gov/sites/default/files/nnsa/foiareadingroom/RR00198.pdf Note that the simplistic "no-go theorem" given in this extract, against any effect from varying the temperature to help the radiation channelling, was later proved false by John H. Nuckolls (just as Teller's anti-compression "no-go theorem" was later proved false), since lowered temperature delivers energy where it is needed while massively reducing radiation losses (which go as the fourth power of the temperature, i.e. of the x-ray energy in keV).

ABOVE: Hans A. Bethe's disastrous back-of-the-envelope nonsense "no-go theorem" against lithium-7 fission into tritium by 14.1 MeV D+T neutrons in Bravo (which contained 40% lithium-6 and 60% lithium-7, unnecessarily enriched - at great expense and effort - from the natural 7.42% lithium-6 abundance). It was Bethe's nonsense "physics" speculation, unbacked by serious calculation, that caused Bravo to go off at 2.5 times the expected 6 megatons, and therefore caused the Japanese Lucky Dragon tuna trawler crew in the maximum fallout hotspot area 80 miles downwind to be contaminated by fallout, and also Rongelap's people to be contaminated ("accidents" that inevitably kickstarted the originally limited, early 1950s USSR-funded Communist Party anti-nuclear deterrence movements in the West into mainstream media and thus politics). There was simply no solid basis for assuming that the highly penetrating 14.1 MeV neutrons would be significantly slowed by scattering in the fuel before hitting lithium-7 nuclei. Even Teller's 1950 report LA-643, at page 17, estimated that in a fission-fusion Alarm Clock the ratio of 14 MeV to 2.5 MeV neutrons was 0.7/0.2 = 3.5. Bethe's complacently bad guesswork-based physics also led to the EMP fiasco for high altitude bursts, after he failed to predict the geomagnetic field deflection of Compton electrons at high altitude in his secret report “Electromagnetic Signal Expected from High-Altitude Test”, Los Alamos report LA-2173, October 1957, Secret. He repeatedly caused nuclear weapons effects study disasters. For the true utility of lithium-7, which is actually BETTER than lithium-6 at tritium production when struck by 14.1 MeV D+T fusion neutrons, and its consequences for cheap isentropically compressed fusion capsules in Russian neutron bombs, please see my paper here, which gives a graph of lithium isotopic cross-sections versus neutron energy, plus the results when Britain used cheap lithium-7 in Grapple Y to yield 3 megatons (having got lower yields with costly lithium-6 in previous tests!).

Update (15 Dec 2023): PDF uploaded of UK DAMAGE BY NUCLEAR WEAPONS (linked here on Internet Archive) - a secret 1,000-page UK and USA nuclear weapon test effects analysis, with protective measures determined at those tests (not guesswork), relevant to escalation threats by Russia of EU invasion (linked here at wordpress) in response to Ukraine potentially joining the EU (this is now fully declassified without deletions, and is in the UK National Archives at Kew):

Hiroshima and Nagasaki terrorist liars debunked by secret American government evidence that simple shelters worked, REPORT LINKED HERE (this was restricted from public view and never published by the American government, and Glasstone's lying Effects of Nuclear Weapons book reversed its evidence for propaganda purposes, a fact still covered up by all the lying cold war pseudo "historians" today), Operation Hurricane 1952 declassified nuclear weapon test data (here), declassified UK nuclear tested shelter research reports (here), declassified EMP nuclear test research data (here), declassified clandestine nuclear bombs in ships attack on Liverpool study (here), declassified fallout decontamination study for UK recovery from nuclear attack (here), declassified Operation Buffalo surface burst and near surface burst fallout patterns, water decontamination, initial radiation shielding at the Antler nuclear tests, and resuspension of deposited fallout dust into the air (inhalation hazard) at different British nuclear tests, plus Operation Totem nuclear test crater region radiation surveys (here), declassified Operation Antler nuclear blast precursor waveforms (here), declassified Operation Buffalo nuclear blast precursor waveforms (here), declassified UK Atomic Weapons Establishment nuclear weapons effects symposium (here), and declassified UK Atomic Weapons Establishment paper on the gamma radiation versus time at Crossroads tests Able and Baker (here, a paper by the inventor of lenses in implosion weapons, James L. Tuck of the British Mission to Los Alamos and Operation Crossroads, clearly showing how initial gamma shielding in an air burst can be achieved with a few seconds warning, and giving the much greater escape times available before residual radiation doses accumulate in an underwater burst; key data debunking anti-nuclear hysteria, kept covered up by Glasstone and the USA book Effects of Nuclear Weapons), and Penney and Hicks paper on the base surge contamination mechanism (here), and Russian nuclear warhead design evidence covered up by both America and the so-called arms control and disarmament "experts" who always lie and distort the facts to suit their own agenda, to try to start a nuclear war (linked here). If they wanted "peace" they'd support the proved facts, available on this blog nukegate.org since 2006, and seek international agreement to replace the incredible, NON-war-deterring strategic nuclear weapons with safe tactical neutron warheads which avert collateral damage and deter invasions (thus deterring war in all its forms, not only nuclear), plus civil defence against all forms of collateral damage from war, which reduces escalation risks during terrorist actions, as proved in wars which don't escalate because of effective civil defence and credible deterrence (see below). Instead, they support policies designed to maximise civilian casualties and to deliberately escalate war, to profit "politically" from the disasters caused, which they falsely blame on nuclear weapons, as if deterrence causes war! (Another lie believed by mad/evil/gullible mainstream media/political loons in "authority".) A good summary of the fake news basis of "escalation" blather against credible tactical nuclear deterrence of the invasions that set off wars is inadvertently provided by Lord David Owen's 2009 "Nuclear Papers" (Liverpool Uni Press), compiling his declassified nuclear disarmament propaganda reports written while he was UK Foreign Secretary 1977-9. It's all Carter era appeasement nonsense.
For example, on pp158-8 he reprints his Top Secret 19 Dec 1978 "Future of the British Deterrent" report to the Prime Minister, which states that "I am not convinced by the contention ... that the ability to destroy at least 10 major cities, or inflict damage on 30 major targets ... is the minimum criterion for a British deterrent." (He actually thinks this is too strong a deterrent, despite the fact that it is incredible against the realpolitik tactics of dictators who make indirect provocations like invading their neighbours!) The reality Owen ignores is that Russia had and still has civil defence shelters and evacuation plans, so threatening some damage in retaliation is not a credible deterrent against the invasions that set off both world wars. On page 196, he gives a Secret 18 April 1978 paper stating that NATO then had 1000 nuclear artillery pieces (8" and 155mm), 200 Lance and Honest John tactical nuclear missile systems, and 135 Pershing missiles; all now long ago disarmed and destroyed, while Russia now has over 2000 dedicated tactical nuclear weapons of high neutron output (unlike EM1's data for the low yield option of the multipurpose NATO B61). Owen proudly congratulates himself on his Brezhnev-supporting, anti-neutron bomb ranting in his 1978 book, "Human Rights", pp. 136-7. If Owen really wants "Human Rights", he needs to back the neutron bomb now, to deter the dictatorships which destroy human rights! His 2009 "Nuclear Papers" at p287 gives the usual completely distorted analysis of the Cuban missiles crisis, claiming that despite the overwhelming American tactical and strategic nuclear superiority for credible deterrence in 1962, the world came "close" to a nuclear war. It's closer now, mate, when thanks to your propaganda we no longer have a credible deterrent, civil defence, or tactical neutron warheads. Pathetic.

ABOVE: secret reports on the Australian-British nuclear test operations at Maralinga in 1956 and 1957, Buffalo and Antler, proved that even at 10 psi peak overpressure in the 15 kt Buffalo-1 shot, the dummy lying prone facing the blast was hardly moved, due to the low cross-sectional area exposed to the blast winds, relative to standing dummies which were severely displaced and damaged. The value of trenches in protecting personnel against blast winds and radiation was also proved in tests (gamma radiation shielding of trenches had been proved at an earlier nuclear test in Australia, Operation Hurricane in 1952). (Antler report linked here; Buffalo report linked here.) This debunks the US Department of Defense models claiming that people will automatically be blown out of the upper floors of modern city buildings at very low pressures, and killed by the gravitational impact with the pavement below! In reality, tall buildings mutually shield one another from the blast winds, not to mention the radiation (proven in the latest post on this blog), and on seeing the flash most people will have time to lie down on typical surfaces like carpet, which give a frictional resistance to displacement ignored in fiddled models which assume surfaces have less friction than a skating rink; all of this was omitted from the American 1977 Glasstone and Dolan book "The Effects of Nuclear Weapons". As Tuck's paper below on the gamma radiation dose rate measurements on ships at the Operation Crossroads nuclear tests of July 1946 proved, contrary to Glasstone and Dolan, scattered radiation contributions are small, so buildings or ships' gun turrets provided excellent radiation "shadows" to protect personnel. This effect was then calculated by UK civil defence weapons effects expert Edward Leader-Williams in his paper presented at the UK's secret London Royal Society Symposium on the Physical Effects of Atomic Weapons, but the nuclear test data, as always, was excluded from the American Glasstone book published the next year, The Effects of Atomic Weapons, in deference to lies about the effects in Hiroshima, including an "average" casualty curve which deliberately obfuscated huge differences in survival rates in different types of buildings and shelters, or simply in shadows!

Note: the DELFIC, SIMFIC and other computer-predicted fallout area comparisons for the 110 kt Bikini Atoll Castle-Koon land surface burst nuclear test are false, since the distance scale of Bikini Atoll is massively exaggerated on many maps, e.g. in the Secret January 1955 AFSWP "Fall-out Symposium", the Castle fallout report WT-915, and the fallout patterns compendium DASA-1251! The western side of the Bikini Atoll reef is at 165.2 degrees East, while the most eastern island in the Bikini Atoll, Enyu, is at 165.567 degrees East: since there are 60 nautical miles per degree of latitude (and very nearly that per degree of longitude at Bikini's low latitude), the width of Bikini Atoll is therefore about (165.567 - 165.2)(60) = 22 nautical miles, approximately half the distance shown in the Castle-Koon fallout patterns. Since area is proportional to the square of the distance scale, this constitutes a serious exaggeration in fallout casualty calculations, before you even get into the issue of the low energy (0.1-0.2 MeV) gamma rays from neutron-induced Np239 and U237 in the fallout enhancing the protection factor of shelters (usually calculated assuming hard 1.17 and 1.33 MeV gamma rays from Co60), during the sheltering period of approximately 1-14 days after detonation.
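For anyone wanting to reproduce the distance-scale check, here is a minimal sketch. The approximate 11.6 degrees North latitude of Bikini is an assumption added here only to show the small cos(latitude) correction that strictly applies to longitude degrees (about a 2% effect), and the factor-of-two map exaggeration is the text's claim, used only to show how the area error scales.

```python
import math

# Width of Bikini Atoll implied by the longitudes quoted above.
lon_west_deg = 165.2     # western edge of the reef (text)
lon_east_deg = 165.567   # Enyu island, eastern end of the atoll (text)
lat_bikini_deg = 11.6    # approximate latitude of Bikini (assumption, for the correction)

# 60 nautical miles per degree holds exactly for latitude; a degree of longitude
# is shorter by a factor cos(latitude), which is only ~2% at Bikini's low latitude.
width_simple_nmi = (lon_east_deg - lon_west_deg) * 60.0
width_corrected_nmi = width_simple_nmi * math.cos(math.radians(lat_bikini_deg))

print(f"Simple estimate:          {width_simple_nmi:.1f} nautical miles")     # ~22.0
print(f"cos(latitude) corrected:  {width_corrected_nmi:.1f} nautical miles")  # ~21.6

# If a fallout map roughly doubles this distance scale, plotted areas (and hence
# area-based casualty estimates) are exaggerated by the square of that factor:
scale_factor = 2.0   # the text's "approximately half the distance shown"
print(f"Area exaggeration factor: {scale_factor ** 2:.0f}x")
```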

"Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr Dulles calls 'brinkmanship'. This is a policy adopted from a sport ... called 'Chicken!' ... If one side is unwilling to risk global war, while the other side is willing to risk it, the side which is willing to run the risk will be victorious in all negotiations and will ultimately reduce the other side to complete impotence. 'Perhaps' - so the practical politician will argue - 'it might be ideally wise for the sane party to yield to the insane party in view of the dreadful nature of the alternative, but, whether wise or not, no proud nation will long acquiesce in such an ignominious role. We are, therefore, faced, quite inevitably, with the choice between brinkmanship and surrender." - Bertrand Russell, Common Sense and Nuclear Warfare, George Allen and Unwin, London, 1959, pp30-31.

Emphasis added. Note that Russell accepts the lying about nuclear weapons, just as gas weapons had been lied about in the 1920s-30s by "arms controllers" to start WWII; then he simply falls into the 1930s Cambridge Scientists Antiwar Group delusional propaganda fraud of assuming that any attempt to credibly deter fascism is immoral because it will automatically result in escalatory retaliation, with Hermann Goering's Luftwaffe drenching London with "overkill" by poison gas WMDs etc. In particular, he forgets that the general disarmament pursued in the West until 1935 - when Baldwin suddenly announced that the Nazis had secretly produced a massive, unstoppable warmachine in two years - encouraged aggressors to first secretly rearm, then coerce and invade their neighbours while signing peace promises purely to buy more time for rearmament, until a world war resulted. Not exactly a great result for disarmament propaganda. So after obliterating from his mind what Reagan used to call (to the horror of commie "historians") the "true facts of history", he advocates some compromise with the aggressors of the 30 September 1938 Munich Agreement peace-in-our-time sort, the historically proved sure-fire way to really escalate a crisis into a major war by showing the green lamp to a loon, to popular media acclaim and applause for a fairy tale utopian fantasy; just as the "principled" weak, rushed, imbecile withdrawal from Afghanistan in 2021 encouraged Putin to invade Ukraine in 2022, and also gave the green lamp for Hamas to invade Israel in 2023.

"... deterrence ... consists of threatening the enemy with thermonuclear retaliation should he act provocatively. ... If war is 'impossible', how can one threaten a possible aggressor with war? ... The danger, evoked by numerous critics, that such research will result in a sort of resigned expectation of the holocaust, seems a weak argument ... The classic theory of Clausewitz defines absolute victory in terms of disarmament of the enemy ... Today ... it will suffice to take away his means of retaliation to hold him at your mercy." - Raymond Aron, Introduction to Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfield and Nicholson, London, pp. 9-12. (This is the commie support for arms control and disarmament has achieved, precisely the weakening of the West to take away credible deterrence.)

"75 years ago, white slavery was rampant in England. ... it could not be talked about openly in Victorian England, moral standards as to the subjects of discussion made it difficult to arouse the community to necessary action. ... Victorian standards, besides perpetuating the white slave trade, intensified the damage ... Social inhibitions which reinforce natural tendencies to avoid thinking about unpleasant subjects are hardly uncommon. ... But when our reluctance to consider danger brings danger nearer, repression has gone too far. In 1960, I published a book that attempted to direct attention to the possibility of a thermonuclear war ... people are willing to argue that it is immoral to think and even more immoral to write in detail about having to fight ... like those ancient kings who punished messengers who brought them bad news. That did not change the news; it simply slowed up its delivery. On occasion it meant that the kings were ill informed and, lacking truth, made serious errors in judgement and strategy. ... We cannot wish them away. Nor should we overestimate and assume the worst is inevitable. This leads only to defeatism, inadequate preparations (because they seem useless), and pressures toward either preventative war or undue accommodation." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfield and Nicholson, London, pp. 17-19. (In the footnote on page 35, Kahn notes that original nuclear bullshitter, the 1950 creator of fake cobalt-60 doomsday bomb propaganda, Leo Szilard, was in the usual physics groupthink nutters club: "Szilard is probably being too respectful of his scientific colleagues who also seem to indulge in ad hominem arguments - especially when they are out of their technical specialty.")

"Ever since the catastropic and disillusioning experience of 1914-18, war has been unthinkable to most people in the West ... In December 1938, only 3 months after Munich, Lloyd's of London gave odds of 32 to 1 that there would be no war in 1939. On August 7, 1939, the London Daily Express reported the result of a poll of its European reporters. 10 out of 12 said, 'No war this year'. Hitler invaded Poland 3 weeks later." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfield and Nicholson, London, p. 39. (But as the invasion of Ukraine in 2022 proved, even the label "war" is now "controversial": the aggressor now simply declares they are on a special operation of unifying people under one flag to ensure peace! So the reason why there is war in Ukraine is that Ukraine is resisting. If it waved a white flag, as the entire arms control and disarmament lobby insists is the only sane response to a nuclear-armed aggressor, there would be "peace," albeit on Russia's terms: that's why they disarmed Ukraine in 1994. "Peace propaganda" of "disarmers"! Free decent people prefer to fight tyranny. But as Kahn states on pp. 7-9:

"Some, most notably [CND's pseudo-historian of arms race lying] A. J. P. Taylor, have even said that Hitler was not like Hitler, that further appeasement [not an all-out arms race as was needed but repeatedly rejected by Baldwin and Chamberlain until far too late; see discussion of this fact which is still deliberately ignored or onfuscated by "historians" of the A. J. P. Taylor biased anti-deterrence left wing type, in Slessor's The Central Blue, quoted on this blog] would have prevented World War II ... If someone says to you, 'One of us has to be reasonable and it is not going to be me, so it has to be you', he has a very effective bargaining advantage, particularly if he is armed with thermonuclear bombs [and you have damn all civil defense, ABM, or credible tactical deterrent]. If he can convince you he is stark, staring mad and if he has enough destructive power ... deterrence alone will not work. You must then give in or accept the possibility of being annihilated ... in the first instance if we fight and lose; in the second if we capitulate without fighting. ... We could still resist by other means ranging from passive resistance of the Gandhi type to the use of underground fighting and sabotage. All of these alternatives might be of doubtful effectiveness against [the Gulag system, KGB/FSB torture camps or Siberian salt mines of] a ruthless dictatorship."

Sometimes people complain that Hitler, and the most destructive and costly war and only nuclear war of history, WWII, are given undue attention. But WWII is a good analogy to the danger precisely because of the lying WMD gas war propaganda-based disarmament of the West which allowed the war, because of the attacks by Hitler's fans on civil defense in the West to make even the token rearmament after 1935 ineffective as a credible deterrent, and because Hitler has mirrors in Alexander the Great, Attila the Hun, Genghis Khan, Tamerlane, Napoleon and Stalin. Kahn explains on p. 173: "Because history has a way of being more imaginative and complex than even the most imaginative and intelligent analysts, historical examples often provide better scenarios than artificial ones, even though they may be no more directly applicable to current equipment, postures, and political situations than the fictional plot of the scenario. Recent history can be especially useful.")

"One type of war resulting at least partly from deliberate calculation could occur in the process of escalation. For example, suppose the Soviets attacked Europe, relying upon our fear of their reprisal to deter a strategic attack by us; we might be deterred enough to pause, but we might evacuate our cities during this pause in the hope we could thereby convince the Soviets we meant business. If the Soviets did not back down, but continued their attack upon Europe, we might decide that we would be less badly off if we proceeded ... The damage we would receive in return would then be considerably reduced, compared with what we would have suffered had we not evacuated. We might well decide at such a time that we would be better off to attack the Soviets and accept a retalitory blow at our dispersed population, rather than let Europe be occupied, and so be forced to accept the penalty of living in the hostile and dangerous world that would follow." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfield and Nicholson, London, pp. 51-2.

"We must recognise that the stability we want in a system is more than just stability against accidental war or even against an attack by the enemy. We also want stability against extreme provocation [e.g. invasion of allies, which then escalates as per invasion of Belgium 1914, or Poland 1939]." - Herman Kahn's 1962 Thinking About the Unthinkable, Weidenfield and Nicholson, London, p. 53(footnote).

Note: this 1962 book should not be confused with Kahn's 1984 "updated" Thinking About the Unthinkable in the 1980s, which omits the best material in the 1962 edition (in the same way that the 1977 edition of The Effects of Nuclear Weapons omits the entire civil defense chapter, which was the one decent thing in the 1957 and 1962/4 editions!) and thus shows a reversion to the less readable and less helpful style of his 1960 On Thermonuclear War, which severely fragmented and jumbled up all the key arguments, making it easy for critics to misquote or quote out of context. For example, Kahn's 1984 "updated" book starts on the first page of the first chapter with the correct assertion that Jonathan Schell's Fate of the Earth is nonsense, but doesn't say why it's nonsense, and you have to read through to the final chapter - pages 207-8 of chapter 10 - to find Kahn writing in the most vague way possible, without a single specific example, that Schell is wrong because of "substantive inadequacies and inaccuracies", without listing a single example such as Schell's lying that the 1954 Bravo nuclear test blinded everyone well beyond the range of Rongelap, and that it was impossible to easily shield the radiation from the fallout or evacuate the area until it decays, which Schell falsely attributed to Glasstone and Dolan's nonsense in the 1977 Effects of Nuclear Weapons! Kahn eventually, in the footnote on page 208, refers readers to an out-of-print article for facts: "These criticisms are elaborated in my review of The Fate of the Earth, see 'Refusing to Think About the Unthinkable', Fortune, June 28, 1982, pp. 113-6." Kahn does the same for civil defense in the 1984 book, referring in such general, imprecise and vague terms to Russian civil defence, with no specific data, that it is a waste of time, apart possibly from one half-baked sentence on page 177: "Variations in the total megatonnage, somewhat surprisingly, do not seem to affect the toll nearly as much as variations in the targetting or the type of weapon bursts." Kahn on page 71 quotes an exchange between himself and Senator Proxmire during the US Congressional Hearings of the Joint Committee on Defense Production, Civil preparedness and limited nuclear war, where on page 55 of the hearings Senator Proxmire alleges America would escalate a limited conflict to an all-out war because: "The strategic value and military value of destroying cities in the Soviet Union would be very great." Kahn responded: "No American President is likely to do that, no matter what the provocation." Nuclear war will be limited, according to Herman Kahn's analysis, despite the bullshit from nutters to the contrary.

Kahn on page 101 of Thinking About the Unthinkable in the 1980s correctly and accurately condemns President Carter's 1979 State of the Union Address, which claimed falsely that just a single American nuclear submarine is required by America and provides an "overwhelming" deterrent against "every large and medium-sized city in the Soviet Union". Carter ignored Russian retaliation on American cities if you bomb theirs: America has never matched the intense Russian protection efforts that make the Russian nuclear threat credible, namely civil defense shelters and evacuation plans, and he also ignored the realpolitik of deterrence of world wars, which so far have only been triggered by invasions of third parties (Belgium '14, Poland '39). Did America strategically nuke every city in Russia when it invaded Ukraine in 2022? No, debunking Proxmire and the entire Western pro-Russian "automatic escalation" propaganda lobby; and it didn't even have tactical neutron bombs to help deter the Russians, as Reagan did in the 1980s, because in the 1990s America ignored Kahn's argument and went in for MINIMAL deterrence of the least credible sort (abolishing the invasion-deterring dedicated neutron tactical nuclear stockpile entirely; the following quotation is from p101 of Kahn's Thinking About the Unthinkable in the 1980s):

"Minimum deterrence, or any predicated on an escessive emphasis on the inevitably of mutual homocide, is both misleading and dangerous. ... MAD principles can promote provocation - e.g. Munich-type blackmail on an ally. Hitler, for example, did not threaten to attack France or England - only Austria, Czechoslovakia, and Poland. It was the French and the British who finally had to threaten all-out war [they could only do this after rearmament and building shelters and gas masks to reduce the risk of reprisals in city bombing, which gave more time for Germany to prepare since it was rearming faster than France and Britain which still desperately counted on appeasement and peace treaties and feared provoking a war by an arms-race due to endless lying propaganda from Lord Grey that his failure to deter war in 1914 had been due to an arms-race rather than the incompetence of the procrastination of his anti-war Liberal Party colleagues in the Cabinet] - a move they would not and could not have made if the notion of a balance of terror between themselves and Germany had been completely accepted. As it was, the British and French were most reluctant to go to war; from 1933 to 1939 Hitler exploited that reluctance. Both nations [France and Britain] were terrified by the so-called 'knockout blow', a German maneuver that would blanket their capitals with poison gas ... The paralyzing effect of this fear prevented them from going to war ... and gave the Germans the freedom to march into the Ruhr, to form the Anschluss with Austria, to force the humiliating Munich appeasement (with the justification of 'peace in our time'), and to take other aggressive actions [e.g. against the Jews in the Nuremberg Laws, Kristallnacht, etc.] ... If the USSR were sufficiently prepared in the event a war did occur, only the capitalists would be destroyed. The Soviets would survive ... that would more than justify whatever sacrifice and destruction had taken place.

"This view seems to prevail in the Soviet military and the Politburo even to the present day. It is almost certain, despite several public denials, that Soviet military preparations are based on war-fighting, rather than on deterrence-only concepts and doctrines..." - Herman Kahn, Thinking About the Unthinkable in the 1980s, 1984, pages 101-102.

Kahn adds, in his footnote on p111, that "Richard Betts has documented numerous historical cases in which attackers weakened their opponents' defenses through the employment of unanticipated tactics. These include: rapid changes in tactics per se, false alarms and fluctuating preparations for war ... doctrinal innovations to gain surprise. ... This is exactly the kind of thing which is likely to surprise those who subscribe to MAD theories. Those who see a need for war-fighting capabilities expect the other side to try to be creative and use tactical innovations such as coercion and blackmail, technological surprises, or clever tactics on 'leverage' targets, such as command and control installations. If he is to adhere to a total reliance on MAD, the MADvocate has to ignore these possibilities." See Richard Betts, "Surprise Despite Warning: Why Sudden Attacks Succeed", Political Science Quarterly, Winter 1980-81, pp. 551-572.

Compare two situations: (1) Putin explodes a 50 megaton nuclear "test" of the warhead for his new nuclear reactor powered torpedo, Poseidon, a revamped 1961 Tsar Bomba, or detonates a high-altitude nuclear EMP "test" over neutral waters but within the thousands of miles range of USA or UK territory; (2) Putin invades Poland using purely conventional weapons. Our point here is that both nuclear AND conventional weapons trigger nuclear threats and the risk of nuclear escalation, as indeed they have done (for Putin's nuclear threats, scroll down to the videos with translations below). So the fashionable CND style concept that only nuclear weapons can trigger nuclear escalation is bullshit, and is designed to help Russia start and win WWIII to produce a world government, by getting us to undertake further unilateral (not multilateral) disarmament, just as evolved in the 1930s, setting the scene for WWII. Japan, for example, did not have nuclear weapons in August 1945, yet it triggered tactical nuclear war (both cities had some military bases and munitions factories, as well as enormous numbers of civilians); and the decision to attack cities, rather than just "test" a weapon above Tokyo bay as Teller demanded but Oppenheimer rejected, in order to get maximum impact from a very small supply of nuclear weapons, showed some strategic nuclear war thinking. Truman was escalating to try to shock Japan into rapid surrender emotionally (many cities in Japan had already been burned out in conventional incendiary air raids, and the two nuclear attacks, while horrible for civilians in those cities, contributed only a fraction of the millions killed in WWII, despite anti-nuclear propaganda lies to the contrary). Truman's approach of escalating to win is the opposite of the "Minimax game theory" (von Neumann's maths and Thomas Schelling's propaganda) gradual escalation approach that's currently the basis of nuclear deterrence planning, despite its failure wherever it has been tried (Vietnam, Afghanistan, etc). Gradual escalation is supposed to minimise the maximum possible risk (hence the "minimax" name), but it guarantees failure in the real world (unlike rule-bound games) by maximising the build up of resentment. E.g. Schelling/Minimax say that if you gradually napalm civilians day after day (because they are the unprotected human shields used by terrorists/insurgents; the Vietcong hid in underground tunnels, exactly like Hamas today, and like the Putin regime's Metro-2 shelter tunnels under Russia), you somehow "punish the enemy" (although they don't give a toss about the lives of kids, which is why you're fighting them!) and force them to negotiate for peace in good faith, so that you can then pose for photos with them sharing a glass of champagne and there is "world peace". That's a popular fairy tale, like Marxist mythology.

Once you grasp this fact, that nuclear weapons have been and will again be "used" explosively without automatic escalation, for example in provocative testing as per the 1961 Russian 50 megaton bomb test, or the 1962 high altitude EMP bursts, you should be able to grasp the fact that the "escalation" deception used to dismiss civil defense and tactical nuclear deterrence against limited nuclear war is fake news from Russian fellow-travellers like Corbyn. Once you assign a non-unity probability to "escalation", you're into conventional war territory: if you fight a conventional war, it can "escalate" to nuclear war as on 6 August 1945. Japan did not avoid nuclear attack by not having nuclear weapons on 6 August 1945. If it had possessed nuclear weapons ready to be delivered, a very persuasive argument could be made that, unless Truman wanted to invite retaliation, World War II would have remained strategically non-nuclear: no net strategic advantage would have been achieved by nuclear city bombing, so only war-ending tactical nuclear threats could have prevailed in practice. But try explaining this to the groupthink pseudosocialist bigoted mass murderers who permeate fake physics with crap; it's no easier to explain to them the origins of particle masses or even dark energy/gravitation; in both cases groupthink lying hogwash persists because statements of proved facts are hated and rejected if they debunk the religious style fairy tales the mass media loves. There were plenty of people warning that mass media gas war fear mongering was disguised Nazi-supporting propaganda in the 1930s, but the public listened to that crap then, just as it accepted without question the "eugenics" (anti-diversity evolution crap of Sir Francis Galton, cousin of Darwin) basis for Hitler's Mein Kampf, and just as it accepted the lying propaganda from the UK "Cambridge Scientists Anti-War Group", which, like CND and all other arms control and disarmament lobbies supporting terrorist states today, did more than even Hitler to deliberately lay the foundations for the Holocaust and World War II, while never being criticised in the UK media! Thus, it's surely time for people to oppose evil lying about civil defence, which saves lives in all disasters, from storms to conventional war to the collateral damage risks of nuclear terrorism by mad enemies. At some point, the majority has to decide either to defend itself honestly and decently against barbarism, or to be consumed by it as the price of believing bullshit. It's time for decent people to oppose lying evil regarding the necessity of having credible tactical (not incredible strategic) nuclear weapons, as Oppenheimer called for in his 1951 speech, to deter invasions.

Democracy can't function when secrecy is used to deliberately cover-up vital data from viewing by Joe Public. Secrecy doesn't protect you from enemies who independently develop weapons in secret, or who spy from inside your laboratories:

"The United States and Great Britain resumed testing in 1962, and we spared no effort trying to find out what they were up to. I attended several meetings on that subject. An episode related to those meetings comes to mind ... Once we were shown photographs of some documents ... the photographer had been rushed. Mixed in with the photocopies was a single, terribly crumpled original. I innocently asked why, and was told that it had been concealed in panties. Another time ... questions were asked along the following lines: What data about American weapons would be most useful for your work and for planning military technology in general?"

- Andrei Sakharov, Memoirs, Hutchinson, London, 1990, pp225-6.

ABOVE: The British government has now declassified detailed summary reports giving secret original nuclear test data on the EMP (electromagnetic pulse) damage due to numerous nuclear weapons, data which is still being kept under wraps in America and which hasn't been superseded, because Western atmospheric nuclear tests were stopped late in 1962 and never resumed - even though the Russians have even more extensive data - completely debunking Glasstone and Dolan's disarmament propaganda nonsense in the 1962, 1964 and 1977 Effects of Nuclear Weapons, which ignores EMP piped far away from low altitude nuclear tests by power and communications cables, and falsely claims instead that such detonations don't produce EMP damage outside the 2 psi blast radius! For a discussion of the new data and also a link to the full 200+ page version (in addition to useful data, inevitably like all official reports it also contains a lot of "fluff" padding), please see the other (physics) site: https://nige.wordpress.com/2023/09/12/secret-emp-effects-of-american-nuclear-tests-finally-declassified-by-the-uk-and-at-uk-national-archives/ (by contrast, this "blogspot" uses old non-smartphone-proof coding, no longer properly indexed by "google's smartphone bot"). As long ago as 1984, Herman Kahn argued on page 112 of his book Thinking About the Unthinkable in the 1980s: "The effects of an EMP attack are simply not well understood [in the West, where long powerlines were never exposed on high altitude nuclear tests, unlike the Russians' 1962 Operation K, so MHD-EMP or E3 damage wasn't even mentioned in the 1977 Glasstone and Dolan Effects of Nuclear Weapons], but the Soviets seem to know - or think they know - more than we do."

BELOW: declassified British nuclear war planning blast survival data showing that, even without special Morrison table shelters, the American assumption that nobody can survive in a demolished house is false. This is based on detailed WWII British data (the majority of people in houses flattened within 77 ft of V1 Nazi cruise missile bursts survived!), and secret American reports (contradicting their unclassified propaganda) proved that blast survival occurred at 16 psi overpressure in Hiroshima's houses, e.g. see the limited distribution Dikewood Corporation report DC-P-1060 for Hiroshima, also the secret 1972 Capabilities of Nuclear Weapons DNA-EM-1, table 10-1, and the WWII report RC-450, table 8.2, p145 (for determining survival of people sheltered in brick houses, the WWII A, B, C, and D damage versus casualty data from V1 blast was correlated to similar damage from nuclear blast as given in Glasstone's 1957 Effects of Nuclear Weapons, page 249, Fig. 6.41a, and page 109, Fig. 3.94a, which show that A, B, C, and D damage to brick houses from nuclear weapons occurs at peak overpressures of 9, 6, 3 and 0.5 psi, respectively; the longer blast duration from higher yields blows the debris over a wider area, reducing the load per unit area falling on to people sheltered under tables etc), and the declassified UK government assessment of a clandestine nuclear terrorist attack on a port or harbour, as well as the confidential classified UK Government analysis of the economic and social effects of WWII bombing (e.g. the recovery times for areas as a function of the percentage of houses destroyed):
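(Aside, before the linked data below: a minimal lookup sketch of the A-D brick-house damage thresholds just quoted from Glasstone's 1957 figures. The category descriptions in the code are paraphrases of the standard WWII damage classes for readability; the 9, 6, 3 and 0.5 psi thresholds are the quoted values.)

```python
# Lookup of the A-D brick-house damage categories versus peak overpressure, using
# the 9, 6, 3 and 0.5 psi thresholds quoted above from Glasstone (1957), Figs.
# 6.41a and 3.94a. Category descriptions are paraphrases for readability.
DAMAGE_THRESHOLDS_PSI = [
    ("A (house demolished)",      9.0),
    ("B (damaged beyond repair)", 6.0),
    ("C (seriously damaged)",     3.0),
    ("D (light damage)",          0.5),
]

def brick_house_damage(peak_overpressure_psi):
    """Return the damage category implied by the quoted thresholds."""
    for category, threshold_psi in DAMAGE_THRESHOLDS_PSI:
        if peak_overpressure_psi >= threshold_psi:
            return category
    return "below D (no significant structural damage)"

for psi in (16, 10, 5, 1, 0.3):
    print(f"{psi:>4} psi -> {brick_house_damage(psi)}")
```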

Unofficial Russian video on the secret Russian nuclear shelters from Russian Urban Exploration, titled "Проникли на секретный Спецобъект Метро!" = "We infiltrated a secret special facility of the Metro!":

ABOVE: Moscow Metro and Metro-2 (secret nuclear subway) horizontally swinging blast doors take only 70 seconds to shut, whereas their vertically rising blast doors take 160 seconds to shut; both times are, however, far shorter than the arrival time of Western ICBMs or even SLBMs, which take 15-30 minutes, by which time the Russian shelters are sealed against blast and radiation! In times of nuclear crisis, Russia planned to evacuate from cities those who could not be sheltered, and for the remainder to be based in shelters (similarly to the WWII British situation, when people slept in shelters of one kind or another while there was a large risk of being bombed without notice, particularly in supersonic V2 missile attacks where little warning time was available).


ABOVE: originally SECRET diagrams showing the immense casualty reductions from simple shelters and local (not long distance, as in 1939) evacuation, from a UK Home Office Scientific Advisers’ Branch report, CD/SA 72 (UK National Archives document reference HO 225/72), “Casualty estimates for ground burst 10 megaton bombs”, which exposed the truth behind UK Cold War civil defence (contrary to Russian propaganda against UK defence, which still falsely claims there was no scientific basis for anything, playing on the fact that the data was classified SECRET). Evacuation plus shelter eliminates huge casualties for limited attacks; notice that for the 10 megaton bombs (more than 20 times the typical yield of today’s compact MIRV warheads!), you need 20 weapons, i.e. a total of 10 x 20 = 200 megatons, to kill 1 million people, if civil defence is in place for 45% of people to evacuate a city and the rest to take shelter. Under civil defence, therefore, you get 1 million killed per 200 megatons. This proves that civil defence works to make deterrence more credible in Russian eyes. For a discussion of the anti-civil defence propaganda scam in the West, led by Russian agents for Russian advantage in the new cold war, just read the posts on this blog, started in 2006 when Putin's influence became clear. You can read the full PDF by clicking the link here. Or see the files here.
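The arithmetic behind that 1-million-per-200-megatons figure is trivial but worth making explicit; in this sketch the 20 x 10 megaton attack and the roughly 1 million deaths with evacuation-plus-shelter are the report's figures as quoted above, and nothing else is assumed.

```python
# Arithmetic behind the CD/SA 72 figure quoted above: with 45% of a city evacuated
# and the rest sheltered, about twenty 10-megaton ground bursts (200 Mt in total)
# were estimated to cause roughly 1 million deaths.
weapons = 20
yield_per_weapon_mt = 10.0
deaths_with_civil_defence = 1_000_000   # the report's estimate, as quoted above

total_yield_mt = weapons * yield_per_weapon_mt           # 200 Mt
deaths_per_megaton = deaths_with_civil_defence / total_yield_mt

print(f"Total attack size:  {total_yield_mt:.0f} megatons")
print(f"Deaths per megaton with evacuation + shelter: {deaths_per_megaton:,.0f}")
```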

ABOVE: the originally CONFIDENTIAL classified document chapters of Dr D.G. Christopherson’s “Structural Defence 1945, RC450”, giving low cost UK WWII shelter effectiveness data, which should also have been published to prove the validity of civil defence countermeasures in making deterrence of future war more credible by allowing survival of “demonstration” strikes and “nuclear accidents / limited wars” (it’s no use having weapons but no civil defence, since then you can’t credibly deter aggressors; the disaster of Munich appeasement gave Hitler a green light on 30 September 1938, when Anderson shelters were only issued the next year, 1939!). For the original WWII UK Government low cost sheltering instruction books issued to the public (for a small charge!), please click here (we have uploaded them to internet archive), and for further evidence of the effectiveness of indoor shelters during WWII from Morrison shelter inventor Baker's analysis, please click here (he titled his book about WWII shelters "Enterprise versus Bureaucracy", which tells you all you need to know about the problems his successful innovations in shelter design experienced; his revolutionary concept was that the shelter should be damaged in order to protect the people inside, because of the vast energy absorption soaked up in the plastic deformation of steel - something which naive fools can never appreciate - by analogy, if your car bumper is perfectly intact after an impact, you're unlikely to be, because it has not absorbed the impact energy, which has instead been passed on to you!). We have also placed useful declassified UK government nuclear war survival information on internet archive here and here. There is also a demonstration of how proof-tested WWII shelters were tested in 1950s nuclear weapon trials and adapted for use in Cold War nuclear civil defence, here, thus permanently debunking the somewhat pro-dictatorship/anti-deterrence Jeremy Corbyn/Matthew Grant/Duncan Campbell anti-civil defence propaganda rants, which pretend to be based on reality but obviously just ignore the hard, yet secret, nuclear testing facts upon which UK government civil defence was based, as my father (a Civil Defence Corps instructor) explained here back in 2006. The reality is that the media follows herd fashion to sell paper/airtime; it doesn't lead it. This is why it backed Nazi appeasement (cheering Chamberlain's 1938 handshakes with Hitler, for instance) and only switched tune when it was too late to deter Nazi aggression in 1939; it made the most money that way. We have to face the facts!

NUKEGATE - Western tactical neutron bombs were disarmed after Russian propaganda lie. Russia now has over 2000... "Disarmament and arms control" charlatans, quacks, cranks, liars, mass murdering Russian affiliates, and evil genocidal Marxist media exposed for what it is, what it was in the 1930s when it enabled Hitler to murder tens of millions in war. Glasstone's and Dolan's 1977 Effects of Nuclear Weapons deceptions totally disproved. Professor Brian Martin, TRUTH TACTICS, 2021 (pp45-50): "In trying to learn from scientific publications, trust remains crucial. The role of trust is epitomised by Glasstone’s book The Effects of Atomic Weapons. Glasstone was not the author; he was the editor. The book is a compilation of information based on the work of numerous contributors. For me, the question was, should I trust this information? Was there some reason why the editors or authors would present fraudulent information, be subject to conflicts of interest or otherwise be biased? ... if anything, the authors would presumably want to overestimate rather than underestimate the dangers ... Of special interest would be anyone who disagreed with the data, calculations or findings in Glasstone. But I couldn’t find any criticisms. The Effects of Nuclear Weapons was treated as the definitive source, and other treatments were compatible with it. ... One potent influence is called confirmation bias, which is the tendency to look for information that supports current beliefs and dismiss or counter contrary information. The implication is that changing one’s views can be difficult due to mental commitments. To this can be added various forms of bias, interpersonal influences such as wanting to maintain relationships, overconfidence in one’s knowledge, desires to appear smart, not wanting to admit being mistaken, and career impacts of having particular beliefs. It is difficult to assess the role of these influences on yourself. "

Honest Effects of Nuclear Weapons!

ABOVE (VIDEO CLIP): Russian State TV Channel 1 war-inurer and enabler, NOT MERELY MAKING "INCREDIBLE BLUFF THREATS THAT WE MUST ALL LAUGH AT AND IGNORE, LIKE DR GOEBBELS' THREATS TO GAS JEWS AND START A WORLD WAR", AS ALMOST ALL THE BBC SCHOOL OF "JOURNALISM" (to which we don't exactly belong!) LIARS CLAIM, but instead preparing Russians mentally for nuclear war (they already have nuclear shelters and a new Putin-era tactical nuclear war civil defense manual from 2014, linked and discussed in blog posts on the archive above), arguing in 2023 for the use of nuclear weapons in the Ukraine war: "We should not be afraid of what it is unnecessary to be afraid of. We need to win. That is all. We have to achieve this with the means we have, with the weapons we have. I would like to remind you that a nuclear weapon is not just a bomb; it is the heritage of the whole Russian people, suffered through the hardest times. It is our heritage. And we have the right to use it to defend our homeland [does he mean the liberated components of the USSR that gained freedom in 1991?]. Changing the [nuclear use] doctrine is just a piece of paper, but it is worth making a decision."

NOTE: THIS IS NOT ENGLISH LANGUAGE "PROPAGANDA" SOLELY ADDRESSED AS A "BLUFF" TO UK AND USA GOV BIGOTED CHARLATANS (those who have framed photos of Hitler, Stalin, Chamberlain, Baldwin, Lloyd George, Eisenhower, et al., on their office walls), BUT IS ADDRESSED AT MAKING RUSSIAN FOLK PARTY TO THE NEED FOR PUTIN TO START A THIRD WORLD WAR! Duh!!!!! SURE, PUTIN COULD PRESS THE BUTTON NOW, BUT THAT IS NOT THE RUSSIAN WAY, ANY MORE THAN HITLER SET OFF WWII BY DIRECTLY BOMBING LONDON! HE DIDN'T. THESE PEOPLE WANT TO CONTROL HISTORY, TO GO DOWN AS THE NEXT "PUTIN THE GREAT". THEY WANT TO GET THEIR PEOPLE, AND CHINA, NORTH KOREA, IRAN, et al., AS ALLIES, BY APPEARING TO BE DEFENDING RATIONALITY AND LIBERTY AGAINST WAR-MONGERING WESTERN IMPERIALISM. For the KGB mindset here, please read Chapman Pincher's book "The Secret Offensive" and Paul Mercer's "Peace of the Dead - The Truth Behind the Nuclear Disarmers". Please note that the analysis of the secret USSBS report 92, The Effects of the Atomic Bomb on Hiroshima, Japan (which Google fails to appreciate is a report with the OPPOSITE conclusions on fire to those of the lying unclassified reports and Glasstone's book), is on internet archive in the PDF documents list, at the page "The effects of the atomic bomb on Hiroshima, Japan" (the secret report 92 of the USSBS, not the lying unclassified version or the Glasstone book series). If you don't like the plain layout of this blog, you can change it into a "fashionable" one with smaller photos you can't read by adding ?m=1 to the end of the URL, e.g. https://glasstone.blogspot.com/2022/02/analogy-of-1938-munich-crisis-and.html?m=1

PLEASE BEAR WITH US - THIS SITE WAS DEVELOPED IN 2006, BEFORE GOOGLE SMARTPHONE BOT CACHING (GOOGLE BOTS CAN'T INDEX THIS FORMAT ANY MORE, AS IT IS SIMPLY UNSUITABLE FOR SMARTPHONES, WHICH DIDN'T EXIST BACK IN 2006), SO WE WILL MOVE TO A NEW DOMAIN SOON TO OVERCOME THIS. (HOPEFULLY THE TEXT WILL ALSO BE EDITED AND RE-WRITTEN TO TAKE OUT TYPING ERRORS AND DEAD LINKS DATING BACK TO 2006, WHEN THE BLOG BEGAN - A LOT HAS CHANGED SINCE THEN!)

Glasstone's Effects of Nuclear Weapons exaggerations completely undermine credible deterrence of war: Glasstone exaggerates urban "strategic" nuclear weapons effects by using effects data taken from unobstructed terrain (without the concrete-jungle shielding of blast winds and radiation by cities!), and omits the most vital uses and most vital effects of nuclear weapons: to DETER world war credibly by negating the concentrations of force used to invade Belgium in 1914 (thus WWI) and Poland in 1939 (WWII). The facts from Hiroshima and Nagasaki on the shielding of blast and radiation effects by modern concrete buildings support the credible nuclear deterrence of invasions (click here for data), which - unlike the countervalue drivel that failed to prevent WW2 costing millions of human lives - worked in the Cold War despite the Western media's obsession with treating as Gospel truth the lying anti-nuclear propaganda from Russia's World Peace Council and its allies (intended to make the West disarm to allow Russian invasions without opposition, as worked in Ukraine recently)! If we had credible W54 and W79 tactical nukes to deter invasions, as we did in the Cold War, pro-Russian World Peace Council inspired propaganda says: "if you use those, we'll bomb your cities" - but they can bomb our cities with nuclear weapons if we use conventional weapons, or even if we fart, if they want to; we don't actually control what thugs in dictatorships do. It is like saying that because Hitler had 12,000 tons of tabun nerve agent by 1945, we had to surrender for fear of it. Actually, he had to blow his brains out, because his was an incredible (non-credible) deterrent: the risk of retaliation, plus defence (gas masks), negated it!

Credible deterrence necessitates simple, effective protection against concentrated and dispersed invasions and bombing. The facts can debunk the massively inaccurate, deliberately misleading CND "disarm or be annihilated" pro-dictatorship ("communism" scam) political anti-nuclear deterrence dogma. Hiroshima and Nagasaki anti-nuclear propaganda lies on blast and radiation effects in modern concrete cities are debunked by solid factual evidence kept from public sight for political reasons by the Marx-media, which is not opposed by the remainder of the media, so the completely fake "nuclear effects data" sneaks into "established pseudo-wisdom" by the back door. Another trick is hate attacks on anyone telling the truth: this is a repeat of lies from Nobel Peace Prize winner Angell and pals before WWI (when long-"outlawed" gas was used by all sides, contrary to claims that paper agreements had "banned" it somehow) and WWII (when pre-war gas bombing lies by Angell, Noel-Baker, Joad and others were used as an excuse to "make peace deals" with the Nazis - again, not worth the paper they were printed on). Mathematically, the subset of all States which keep agreements (disarmament and arms control, for instance) is identical to the subset of all States which are stable Democracies (i.e., those tolerating dissent for the past several years), but this subset is - as Dr Spencer Weart's statistical evidence of war proves in his book Never at War: Why Democracies Won't Fight One Another - not the bloody war problem! Because none of the disarmers grasp set theory, or bother to read Dr Weart's book, they can never understand that disarmament of Democracies doesn't cause peace but causes millions of deaths.
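The set-theoretic point can be made explicit with a toy sketch (the state names are invented placeholders, not claims about any real country):

stable_democracies = {"StateA", "StateB", "StateC"}   # tolerate dissent, don't fight each other
keeps_treaties     = {"StateA", "StateB", "StateC"}   # premise from the text: the same set
war_starters       = {"StateX", "StateY"}             # the aggressors lie outside both sets

assert keeps_treaties == stable_democracies           # the subset identity claimed above
print(keeps_treaties & war_starters)                  # set(): treaties only bind the states
                                                      # that were never the war problem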

PLEASE CLICK HERE for the truth from Hiroshima and Nagasaki on the shielding of blast and radiation effects by modern concrete buildings in the credible nuclear deterrence of invasions, which - unlike the countervalue drivel that failed to prevent WW2 costing millions of human lives - worked in the Cold War despite the Western media's obsession with treating as Gospel truth the lying anti-nuclear propaganda from Russia's World Peace Council and its allies (intended to make the West disarm to allow Russian invasions without opposition, as worked in Ukraine recently)! Realistic effects and credible nuclear weapon capabilities are needed for deterring or stopping aggressive invasions and attacks which could escalate into major conventional or nuclear wars. Credible deterrence comes through simple, effective protection against concentrated and dispersed invasions and aerial attacks, debunking the inaccurate, misleading CND "disarm or be annihilated" left-political anti-nuclear deterrence dogma. Hiroshima and Nagasaki anti-nuclear propaganda lies on blast and radiation effects in modern concrete cities are debunked by solid factual evidence kept from public sight for political reasons by the Marx-media.

Glasstone's and Nukemap's fake Effects of Nuclear Weapons effects data for unobstructed deserts, rather than the realistic blast- and radiation-shielding concrete jungles which mitigate countervalue damage, as proved in Hiroshima and Nagasaki by Penney and Stanbury, undermine credible world war deterrence - just as Philip Noel-Baker's 1927 BBC radio propaganda lies about a gas-war knock-out blow were used by Nazi-propaganda-distributing "pacifist disarmers" to undermine deterrence of Hitler's war, murdering tens of millions through lies (e.g. that effective gas masks don't exist) that were easy to disprove but were supported by the mainstream, fascist-leaning press in the UK. There is not just one country, Russia, which could trigger WW3, because we know from history that the world forms alliances once a major war breaks out, apart from a few traditionally neutral countries like Ireland and Switzerland, so a major US-China war over Taiwan could draw in support from Russia and North Korea, just as the present Russian invasion of and war against Ukraine has drawn in Iranian munitions support for Russia. So it is almost certain that a future East-versus-West world war would involve an alliance of Russia, China, North Korea and Iran fighting on multiple fronts, with nuclear weapons being used carefully for military purposes (not in the imaginary 1930s-style massive "knockout blow" gas/incendiary/high explosive raids against cities which the UK media used to scare the public into appeasing Hitler, thus enabling him to trigger world war; Chamberlain had read Mein Kampf and crazily approved Hitler's plans to exterminate Jews and invade Russia, starting a major war - a fact censored out of biased propaganda hailing Chamberlain as a peacemaker).

Realistic effects and credible nuclear weapons capabilities are VITAL for deterring or stopping aggressive invasions and attacks which could escalate into major conventional or nuclear wars. These facts debunk the Marx-media propagandists who obfuscate because they don't want you to know the truth, so activism is needed to get the message out against lying frauds and open fascists in the Russian-supporting Marx mass media, which sadly includes government officialdom (still infiltrated by reds under beds - sorry, Joe McCarthy haters, but admit it as a hard fact that nuclear bomb labs in the West openly support Russian fascist mass murders; I PRAY THIS WILL SOON CHANGE!).

ABOVE: Tom Ramos of Lawrence Livermore National Laboratory (quoted at length on the development details of compact MIRV nuclear warhead designs in the latest post on this blog) explains how the brilliantly small primary stage, the Robin, was developed and properly proof-tested in time to act as the primary stage for a compact thermonuclear warhead to deter Russia in the 1st Cold War - something now made impossible by Russia's World Peace Council propaganda campaigns. (Note that Ramos has a new book published, From Berkeley to Berlin: How the Rad Lab Helped Avert Nuclear War, which describes in detail in chapter 13, "First the Flute and Then the Robin", how caring, dedicated nuclear weapons physicists in the 1950s and 1960s actually remembered the lesson of the disarmament disaster in the 1930s, and so WORKED HARD to develop the "Flute" secondary and the "Robin" primary to enable a compact, light thermonuclear warhead to help deter WWIII! What a difference from today, when all we hear from such "weaponeers" is evil lying about nuclear weapons effects on cities, and against Western civil defence and credible deterrence, on behalf of the enemy.)

ABOVE: Star Wars filmmaker Peter Kuran has at last released his lengthy (90 minute) documentary on the neutron bomb. Unfortunately, it is not yet being widely screened in cinemas or released on DVD or Blu-ray disc, so you have to stream it (if you have fast broadband internet hooked up to a decent telly). At least Peter managed to interview Samuel Cohen, who developed the neutron bomb out of the cleaner Livermore devices Dove and Starling in 1958. (Ramos says Livermore's director, who invented a wetsuit, is now trying to say Cohen stole the neutron bomb idea from him! Not so: as Cohen's RAND colleague and 1993 Effects Manual EM-1 editor Dr Harold L. Brode explains in his recent brilliant book on the history of nuclear weapons in the 1st Cold War (reviewed in detail in a post on this blog), Cohen was after the neutron bomb for many years before Livermore was even built as a rival to Los Alamos. Cohen had been into neutrons when working in the Los Alamos Efficiency Group of the Manhattan Project on the very first nuclear weapons - used, with neutron effects on people, by Truman back in 1945 to end a bloody war - while the Livermore director was in short pants.)

For the true effects in modern city concrete buildings in Hiroshima and Nagasaki, disproving the popular lies for nudes in open deserts used as the basis for the blast and radiation calculations by Glasstone and Nukemap, please click here. The deceptive bigots portraying themselves as the Federation of American Scientists - genuine communist disarmers - and the Marx media, including TV scammers, have been suppressing the truth to sell fake news since 1945, in a repetition of the 1920s and 1930s gas-war media lying for disarmament and horror-news scams that caused disarmament and thus encouraged Hitler to initiate the invasions that set off WWII!

Glasstone's Effects of Nuclear Weapons exaggerations completely undermine credible deterrence of war: Glasstone exaggerates urban "strategic" nuclear weapons effects by using effects data taken from unobstructed terrain (without the concrete-jungle shielding of blast winds and radiation by cities!), and omits the most vital uses and most vital effects of nuclear weapons: to DETER world war credibly by negating the concentrations of force used to invade Belgium in 1914 (thus WWI) and Poland in 1939 (WWII). Disarmament and arms control funded propaganda lying says any deterrent which is not actually exploded in anger is a waste of money since it isn't being "used" - a fraud apparently due to the title and content of Glasstone's book, which omits the key use and effect of nuclear weapons: to prevent world wars. This is because Glasstone and Dolan don't even bother to mention the neutron bomb, or the 10-fold reduced fallout in the Los Alamos 95% clean Redwing-Navajo test of 1956, despite the neutron bomb's effects being analysed in detail for enhanced radiation and reduced thermal and blast yield in the 1972 edition of Dolan's edited secret U.S. Department of Defense Effects Manual EM-1, "Capabilities of Nuclear Weapons" - data now declassified yet still being covered up by "arms control and disarmament" liars today, to try to destroy credible deterrence of war in order to bolster their obviously pro-Russian, anti-peace political agenda. "Disarmament and arms control" charlatans, quacks, cranks, liars, mass murdering Russian affiliates, and the evil genocidal Marxist media stand exposed for what they are: what they were in the 1930s, when they enabled Hitler to murder tens of millions in war.

ABOVE: 11 May 2023, a Russian state TV Channel 1 loon openly threatens nuclear tests and the bombing of the UK. Seeing how the Russian media is under the control of Putin, this is like Dr Goebbels' rantings of 80 years ago. But that doesn't disprove the world war threat, any more than it did with Dr Goebbels. These people, like the BBC here, don't just communicate "news" but do so selectively, with interpretations and opinions that set the stage for a pretty obviously hate-based political agenda with their millions of viewers - a trick that worked in the 1st Cold War despite Orwell's attempts to lampoon it in books about Big Brother like "1984" and "Animal Farm". When in October 1962 the Russians put nuclear weapons into Cuba in secret, without any open "threats", and with a MASSIVELY inferior overall nuclear stockpile to the USA's (the USA had MORE nuclear weapons, more ICBMs, etc.), the media made a big fuss, even though Kennedy went on TV on 22 October and ensured no nuclear "accidents" in Cuba by telling Russia that any single accidentally launched missile from Cuba against any Western city would result in a FULL RETALIATORY STRIKE ON RUSSIA. There was no risk of nuclear war then except by accident, and Kennedy, in his 25 May 1961 speech on "Urgent National Needs" a year and a half before, had instigated NUCLEAR SHELTERS in public basement buildings to help people in cities survive (modern concrete buildings survived near ground zero in Hiroshima, as proved by declassified USSBS reports kept covered up by Uncle Sam). NOW THAT THERE IS A CREDIBLE THREAT OF NUCLEAR TESTS AND HIROSHIMA-TYPE INTIMIDATION STRIKES, THE BBC FINALLY DECIDES TO SUPPRESS NUCLEAR NEWS, EFFECTIVELY HELPING "ANTI-NUCLEAR" RUSSIAN PROPAGANDA TO PREVENT US FROM GETTING CREDIBLE DETERRENCE OF INVASIONS, AS WE HAD WITH THE W79 UNTIL DISARMERS REMOVED IT IN THE 90s! This stinks of prejudice, the usual sort of hypocrisy from the 1930s "disarmament heroes" who lied their way to Nobel Peace Prizes by starting a world war!


Realistic effects and credible nuclear weapon capabilities are required now for deterring or stopping aggressive invasions and attacks which could escalate into major conventional or nuclear wars. Credible deterrence necessitates simple, effective protection against concentrated and dispersed invasions and bombing. The facts can debunk the massively inaccurate, deliberately misleading CND "disarm or be annihilated" pro-dictatorship ("communism" scam) political anti-nuclear deterrence dogma. Hiroshima and Nagasaki anti-nuclear propaganda lies on blast and radiation effects in modern concrete cities are debunked by solid factual evidence kept from public sight for political reasons by the Marx-media, which is not opposed by the fashion-obsessed remainder of the media, and so myths sneak into "established pseudo-wisdom" by the back door.

Tuesday, March 28, 2006

Welcome to the science of the Big Bang...

Quantum Gravity Successes





Above: a small scale supernova over Johnston Island, 9 July 1962. 1.4 Mt, 400 km altitude. Filmed from a mountain top 1,300 km to the east of the detonation.






Above: Checkmate nuclear test, 0.32 second after burst, 7 kilotons, 147 km altitude (photo taken from Johnston Island by a camera facing upwards)
Above: Checkmate nuclear test, 0.52 second
Above: Checkmate, 1.0 second
Above: Checkmate, 2.5 seconds. Dark zig-zag rocket trails are visible. (Because you are looking upward, you are seeing the fireball through the rocket exhaust of the missile which carried the bomb up to 147 km altitude, which took sufficient time for the different winds at all the altitudes from the surface to outer space to blow the exhaust around a little. This is explained in discussion of Checkmate and the other American high altitude 1958-62 tests: see chapter 1 of Philip J. Dolan's manual, Capabilities of Nuclear Weapons, U.S. Department of Defense, Defense Nuclear Agency, Washington, D.C., 1972, with revisions.)



Above: the CHECKMATE (7 kt, 147 km detonation altitude, 19 October 1962) fireball after it has just started to become striated along the natural magnetic field (vertically aligned in this photo taken from below the detonation). The dotty lines near the middle are the wind-blown exhaust trails left behind, below the fireball, by the rocket delivery system that carried the warhead up to the detonation altitude.


Above: CHECKMATE detonation horizontal view (seen from a distant aircraft) compared to the view looking upwards from Johnston Island. The analysis of CHECKMATE on the right was done by the Nuclear Effects Group at the Atomic Weapons Establishment, Aldermaston, and was briefly published on their website, with the following discussion of the 'UV fireball' regime which applies to bursts at altitudes of 100-200 km: 'the debris blast wave expands and sweeps up air which becomes very hot. This then radiates UV, which is readily absorbed by the cold air in front of the blast wave, resulting in ionised air which is approximately transparent to further UV radiation from the blast wave. These bursts are therefore characterised by two "fireballs" - the debris air blast wave expansion is preceded by a radiation/ionisation front. The radiation front will be up/down asymmetric since mean free paths are longer in the less dense air above the detonation altitude. An example is the CHECKMATE event where both fronts are clearly visible in the photograph taken from Johnston Island.'
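The up/down asymmetry of the radiation front described in the AWE discussion follows from the roughly exponential fall of air density with altitude: the photon mean free path is inversely proportional to density. A minimal sketch in Python, assuming an illustrative local density scale height (the 30 km value is an assumption for illustration only, not a figure from the AWE analysis):

import math

H_km = 30.0           # ASSUMED local density scale height near 150 km altitude (illustration only)
burst_alt_km = 147.0  # CHECKMATE detonation altitude, from the caption above

def relative_mean_free_path(alt_km):
    # Photon mean free path relative to its value at the burst altitude,
    # for air density falling off as exp(-altitude / scale height).
    return math.exp((alt_km - burst_alt_km) / H_km)

print(relative_mean_free_path(177.0))  # 30 km above the burst: ~2.7x longer mean free path
print(relative_mean_free_path(117.0))  # 30 km below the burst: ~0.37x, i.e. much shorter
# Hence the radiation/ionisation front reaches farther upward than downward, as described.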

When CHECKMATE was detonated during the Cuban missiles crisis: 'Observers on Johnston Island saw a green and blue circular region surrounded by a blood-red ring formed overhead that faded in less than 1 minute. Blue-green streamers and numerous pink striations formed, the latter lasting for 30 minutes. Observers at Samos saw a white flash, which faded to orange and disappeared in about 1 minute.' (Defense Nuclear Agency report DNA-6040F, AD-A136820, p. 241.)

How would a 10^55 megaton hydrogen bomb explosion differ from the big bang? Ignorant answers biased in favour of curved spacetime (ignoring quantum gravity!) abound, such as claims that explosions can’t take place in ‘outer space’ (disagreeing with the facts from nuclear space bursts by Russia and America in 1962, not to mention natural supernova explosions in space!) and that explosions produce sound waves in air by definition! There are indeed major differences in the nuclear reactions between the big bang and a nuclear bomb. But it is helpful to notice the solid physical fact that implosion systems suggest the mechanism of gravitation: in implosion, TNT is well-known to produce an inward force on a bomb core, but Newton's 3rd law says there is an equal and opposite reaction force outward. In fact, you can’t have a radially outward force without an inward reaction force! It’s the rocket principle. The rocket accelerates (with force F = ma) forward by virtue of the recoil from accelerating the exhaust gas (with force F = -ma) in the opposite direction! Nothing massive accelerates without an equal and opposite reaction force. Applying this fact to the measured 6 x 10^-10 m/s^2 ~ Hc cosmological acceleration of matter radially outward from observers in the universe, which was predicted accurately in 1996 and later observationally discovered in 1999 (by Perlmutter, et al.), we find an outward force F = ma and inward reaction force by the 3rd law. The inward force allows quantitative predictions, and is mediated by gravitons, predicting gravitation in a checkable way (unlike string theory, which is just a landscape of 10^500 different perturbative theories and so can’t make any falsifiable predictions about gravity). So it seems as if nuclear explosions do indeed provide helpful analogies to natural features of the world, and the mainstream lambda-CDM model of cosmology - with its force-fitted unobserved ad hoc speculative ‘dark energy’ - ignores and sweeps under the rug major quantum gravity effects which increase the physical understanding of particle physics, particularly force unification and the relation of gravitation to the existing electroweak SU(2) x U(1) section of the Standard Model of fundamental forces.
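The quoted acceleration of ~6 x 10^-10 m/s^2 is simply the product Hc; here is a quick numerical check in Python, assuming a round Hubble parameter of 70 km/s/Mpc (an assumed input for illustration, not a measured value endorsed here):

H0_km_per_s_per_Mpc = 70.0        # assumed round value of the Hubble parameter
Mpc_in_m = 3.0857e22              # metres per megaparsec
c = 2.998e8                       # speed of light, m/s

H0_si = H0_km_per_s_per_Mpc * 1e3 / Mpc_in_m   # ~2.3e-18 per second
a = H0_si * c                                  # ~6.8e-10 m/s^2, the figure quoted in the text
print(f"H = {H0_si:.2e} /s, a = Hc = {a:.1e} m/s^2")
# Newton's 2nd law then gives an outward force F = ma for a mass m receding with this
# acceleration, and his 3rd law an equal inward reaction force, as argued above.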

Richard Lieu, Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462.

Even Einstein grasped the possibility that general relativity's lambda-CDM model is at best just a classical approximation to quantum field theory, at the end of his life when he wrote to Besso in 1954:

‘I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included ...’

‘Science is the organized skepticism in the reliability of expert opinion.’ - Professor Richard P. Feynman (quoted by Professor Lee Smolin, The Trouble with Physics, Houghton-Mifflin, New York, 2006, p. 307).






In later posts, I'll review all declassified nuclear test data related to the space bursts in 1962, particularly the 7 kiloton Checkmate shot 147 km above Johnston Island. I'll also review books and manuals by Glasstone and Dolan, the reports by Brode, and many others, not only related to fireballs in nuclear space burst tests, but also the more complicated surface bursts with their intricate initial radiation, EMP, cratering, blast wave, thermal flash, and residual fallout.

To begin with, my interest in the big bang stems from cosmology. I have been interested in the big bang since 1982, when I was 10 and wanted to know what evidence there was that time and space had been created in the big bang. Any such question was answered merely by ignoring it and restating the evidence for the big bang, which I was not disputing anyway! It is telling that the greatest expert on general relativity, Sir Roger Penrose, has recently proposed a theory of what happened before the big bang; this shows that general relativity is a mathematical statement of physical facts, not a prohibition on asking what came before.

Without a full theory of quantum gravity, there are still uncertainties. Anybody making statements like 'there was no spacetime fabric and no time before the big bang' is actually just plain ignorant of general relativity: see Penrose's statement at http://news.bbc.co.uk/2/hi/programmes/hardtalk/4631138.stm

Eventually I was given a book on general relativity, which disproved John Gribbin's and his friend Chown's popularising claims that the big bang is not a real explosion in a spacetime fabric. Their claims seem to stem from ignorance.

http://www.math.columbia.edu/~woit/wordpress/?p=273#comment-5322:

For those who cry "big bang religion", please note that Erasmus Darwin (1731-1802), father of Charles the evolutionist, first defended the big bang seriously in his 1790 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

Darwin was trying to apply science to Genesis. The big bang has never been taken seriously as a real explosion by cosmologists, because they have assumed that curved spacetime makes the universe boundless, and the like. So a kind of belief system, rooted in a vague approach to general relativity, has blocked consideration of the big bang as a 10^55 megaton space explosion. Some popular books even claim falsely that things can’t explode in space, and so on.



In reality, because all gravity effects and light come to us at light speed, the recession of galaxies is better seen as a recession speed varying with known time past than as varying with apparent distance. Individual galaxies may not be accelerating, but what we see, and the gravity effects we receive at light speed, come from both distance and time past, and in that reference frame the universe is accelerating.

The implication of this comes when you know the mass of the universe, m, because then Newton's 2nd law, F = ma, gives you an outward force. His 3rd law then tells you there's an equal inward force (graviton mediated). We get gravity within 1.7%. This is censored by the string-theory-dominated arXiv.org.

http://www.math.columbia.edu/~woit/wordpress/?p=289#comment-5846

November 8th, 2005 at 11:44 am :

“Data is the only judge in the court of science. Principles, no matter how sounding, can not be used to make a judgement regarding a theory.” - Quantoken

What about the principles of Copenhagen quantum mechanics, like Bohr’s beloved Correspondence and Complementarity? Or Einstein’s principles? The maths of Bohr and Einstein is very elegant and of course consistent numerically with reality. What people argue is that there are other ways of getting the same results without using the same principles, at least regarding SR and Copenhagen principles.

It seems that Bohr and Einstein didn’t notice that there were other ways of getting the same maths without using the same philosophy. FitzGerald, Lorentz and Larmor had the testable formulae of SR. Einstein still hadn’t noticed this when he gave his “Ether and Relativity” lecture at Leyden in 1920. His biographer Pais wrote that he (Pais) first gave Einstein Poincare’s relativity papers of 1904 in the early 1950s. Einstein asked Born to acknowledge Poincare’s work. Pais says he (Pais) was angry with Born for praising Poincare too highly, since Poincare used 3 postulates and Einstein used only 2. The whole story makes me nauseous. The love of monk Ockham’s razor is just absurd. You don’t find biologists or chemists banning biological or chemical mechanisms as superfluous or unnecessary difficulties. The lack of a mechanism for forces allows string theorists to claim they are copying the guessed-principle philosophy of Bohr and Einstein.

November 7th, 2005 at 12:07 pm
The “flood of experimental data” still exists and remains to be analysed: the mass ratio of the muon to the electron and other particles, and the coupling constants of the electroweak, strong nuclear and gravitational interactions.

Theories were not “rapidly tested and those found wanting rejected”.

Take the case of Gell-Mann’s quarks versus Zweig’s aces. Zweig wrote a more detailed paper and was suppressed by a big American journal while he was in Europe at CERN, while Gell-Mann, in America, was shrewd enough from experience to submit his briefer and less substantiated paper to a small European journal, which printed it. (Let’s not get involved in the issue of Zweig never getting a Nobel Prize, as Gell-Mann officially got his for symmetry work.)

Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. This is the fate of any revolutionary idea, which by definition (being revolutionary) is in conflict with preconceived ideas like string theory. It is very easy to weed out reality, to flush the baby away with the bath water. It is a different matter to take a heretical idea seriously. Everyone can see it is absurd and obviously wrong. I think this is why the Soviet Union, having lost an enormous amount in WWII, still managed to beat America into space with Sputnik. The mainstream is always too prejudiced in favour of yesterday’s methods to be really serious about science.

Links to more info:

Quantum Field Theory domain
Quantum Gravity blog
Los Alamos Science journal
Excellent particle physics gauge theory (fundamental force interaction) issue of Los Alamos Science journal

Background info:

http://www.math.columbia.edu/~woit/wordpress/?p=273#comment-5322
http://www.math.columbia.edu/~woit/wordpress/?p=353&cpage=1#comment-8728
http://www.math.columbia.edu/~woit/wordpress/?p=215#comment-4082

----------------------

The political corruption of scientific journalism

Spiked Science
Article, 26 April 2001
Eco-evangelism

by Helene Guldberg

When I returned home from the horrendous event that was the New Scientist's UK Global Environment Roadshow, I got very little sympathy from my flat-mates. 'But what on earth did you expect?', they retorted. 'Look at the leaflet. It says it all.'

They had a point. The headline read 'New Scientist presents: Judgement Day - the Global Environment Roadshow'. It went on: 'Find out how wholly unexpected forces, such as global warming, pollution, ozone-layer destruction, water shortages and soil degradation could combine in new and terrifying ways to produce global nightmares nobody predicted.' And the blurb finished with the call to: 'measure your contribution to human survival on Earth by going through a personal evaluation to find out how much damage you do to the environment. Discover if you are an Angel or a Devil.'

But despite this, I had a notion - since the debate was organised by the New Scientist for an adult audience - that this would at least be open to rational discussion. Not a chance. Rather than an appraisal of scientific evidence about climate change, pollution, soil erosion and water shortages, we were presented with the absolutely worst-case scenarios for what could happen to us - as if they were fact - and the very grave possibility that we could self-destruct.

Jeremy Webb, editor of the New Scientist, started by emphasising that human beings have 'as much destructive potential' as that which brought about former mass extinctions - where up to 90 percent of species were wiped out. Just look at BSE (What? How many bovine species have gone extinct as a result of BSE?), HIV (Again, what does this tell us about the human destructive potential?) and global warming (But, Jeremy, the history of the planet has been one of far greater temperature fluctuations than those predicted for the coming century).

We were then given three presentations of possible doomsday scenarios.

First, global warming (we were given a weather forecast for 2050, with the UK in a mini Ice-Age, the east coast of the USA flooded, the west coast swamped by malaria-carrying mosquitoes, South America wreaked by forest fires, and so on); second, pollution; and third, overpopulation.

Webb asked - after the presentations - whether there was anybody who still was not worried about the future. In a room full of several hundred people, only three of us put our hands up. We were all asked to justify ourselves (which is fair enough). But one woman, who believed that even if some of the scenarios are likely, we should be able to find solutions to cope with them, was asked by Webb whether she was related to George Bush!

When I pointed out that none of the speakers had presented any of the scientific evidence that challenged their doomsday scenarios, Webb just threw back at me, 'But why take the risk?' What did he mean: 'Why take the risk of living?' You could equally say 'Why take the risk of not experimenting? Why take the risk of not allowing optimum economic development?' But had I been able to ask these questions, I suppose I would have been accused of being in bed with Dubya.

One of the speakers responded to my point, that there is evidence that the air is cleaner today than several decades ago (rather than 'being turned into a sewer'). Yes, she said, that may be true, but clean air can also be a problem.

The experience was like attending a religious meeting - a mass confessional, in fact, where 'we are all sinners', but some of us sin more than others. The exercise at the end of the event (which dragged on for what seemed like an eternity) set out to assess our individual footprints on planet Earth. One woman was presented with a halo (literally) for scoring less than 200 points on her evaluation (equivalent to about a quarter of the European average); and a poor unfortunate boy was presented with horns (he was the devil) for scoring almost twice the European average.

This lad was rather bemused as to how he had managed to amass such a high score - and, to be honest, he did not look like a particularly extravagant character. But I suppose it is easy to tot up the points - and maybe he was just more honest than the rest of the audience (like a spoil-sport, I refused to participate). All you need to do is travel to Australia (which he had) and you have already totted up 300 points. Then if you bath every day (and do not share it with others in your household), travel to work by car (tut, tut), do not buy locally produced fresh products, and do not recycle your paper, then your points will quickly rocket.

If this event had been organised by Greenpeace or Friends of the Earth, I suppose I would not have got so agitated. But this kind of moralism dressed up as science - without any opportunity for rational debate - makes my blood boil.


For more information about Jeremy Webb, see the encyclopedia article linked here.

A dark ideology is driving those who deny climate change

People who claim that climate science is a conspiracy or the work of charlatans are talking rubbish

Robin McKie, The Observer, Sunday 1 August 2010

Our planet may burn, millions may die, and cities such as Moscow and New York may smoulder, but at least we will be free of petty regulation and bureaucracy.


--------------------------------

Sunday, August 01, 2010
Guardian: Devil is driving climate deniers
Dr Lubos Motl
The Reference Frame

The Observer has printed a rather incredible piece that makes most of the propaganda pieces of Nazism and communism look like friendly fairy-tales for children:

A dark ideology is driving those who deny climate change

The deniers are wrong because 2,000 mostly drunk people around Moscow drowned in the rivers and lakes and because it was warm weather in Moscow. Why were they drunk? Because you're deniers, dear TRF readers!

The author of the piece, Robin McKie, uses an amusing collection of irrational and downright bizarre sources to sling mud on the climate skeptics. One of them is a book called Merchants of Doubt by Naomi Oreskes and Erik Conway.

Click the link for a review of the book.


These two "pundits" have raised one of the most serious accusations ever invented against the climate skeptics: some of the oldest or dead climate skeptics have actually dared to oppose the Soviet communism! Wow.

The most radical ones even didn't want to allow the Soviet Union to exterminate the evil U.S. capitalists by nukes which would be fair - and the real fringe has even supported the Star Wars that helped the Devil named Reagan to beat the best social system in the world.

This is quite an explosive accusation! ;-) Imagine Naomi Oreskes who is having sex with an inflatable Stalin 16 hours a day as she manages to discover this genuine bombshell that must surely put the final nail in the deniers' coffin. ;-)




Reagan, the denier-in-chief, was caught spreading jokes about the USSR.

More seriously, I find it kind of amazing what kind of arguments these folks are using these days. The legitimate concern is that those panicked people are fanatical defenders of the extreme forms of the "updated" communism - and what their arguments are actually doing is to provide us with a more robust proof of this observation than anything that could actually be made up.

Fortunately, a vast majority of the readers don't buy these articles and would-be arguments anymore: check the discussion in the Guardian. The AGW movement is returning to the status it deserves - and it surely deserves to be on par with the fringe nationalist or Marxist groups.


In the case of global warming, the reality is that the temperature on this planet is never constant! It is always either increasing or decreasing! Hence, at any random time in history you can predict a 50% chance that the temperature will be rising, and a 50% chance that it will be falling. It's always one or the other.

Moreover, the temperature has been almost continuously rising for 18,000 years, since the last ice age started to thaw, so over this period the expectation of warming is higher than 50%. Over the past 18,000 years, global warming has caused sea levels to rise by 120 metres, a mean rise of 0.67 cm/year, with even higher rates of rise during parts of this time. In the century 1910-2010, sea levels rose linearly by a total of 20 cm, a mean rate of rise of 0.2 cm/year.

Hence, the current rate of rise of the oceans (0.2 cm/year) is less than one third of the average rate which occurred naturally over the past 18,000 years (0.67 cm/year). This tells you that the current rate of sea-level rise, and hence of climate-change flooding risk, is not record-breaking and is not unprecedented in history.
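These rates are simple divisions of the figures quoted above, and can be checked in a few lines of Python:

post_glacial_rise_cm = 120 * 100      # 120 metres over the post-glacial period, in cm
post_glacial_years   = 18000
century_rise_cm      = 20             # 1910-2010 rise quoted above
century_years        = 100

mean_post_glacial = post_glacial_rise_cm / post_glacial_years   # ~0.67 cm/year
mean_last_century = century_rise_cm / century_years             # 0.20 cm/year
print(f"Post-glacial mean: {mean_post_glacial:.2f} cm/year")
print(f"1910-2010 mean:    {mean_last_century:.2f} cm/year")
print(f"Ratio:             {mean_last_century / mean_post_glacial:.2f}")  # ~0.30, under one third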

Likewise, the political lying and scientific deception about the effects of nuclear weapons during the Cold War, used to support communist efforts to end the arms race before it bankrupted communism, continues today.

Update (26 Feb 2011):

The Standard Model and Quantum Gravity: Identifying and Correcting Errors



Above: spin-1 quantum gravity illustration from the old 2009 version of quantumfieldtheory.org (a PDF linked here, containing useful Feynman quotations about this). To hear a very brief, tongue-in-cheek Feynman talk on spin-1 graviton mechanism problems, please click here.

[youtube=http://www.youtube.com/watch?v=tcOHcAVpd8M&w=480&h=390]
Above: the dilemma of "looking clever" or being humble and honestly searching for the facts, no matter how "heretical" or unexpected they turn out to be. This review of Surely You're Joking, Mr. Feynman! is a lot better than the autobiography itself, which rambles on a lot and needs severe editing for busy readers, like all of Feynman's books. Feynman does relate several incidents that led him to the conclusion that a major source of error in fashionable consensus is groupthink. Working on the bomb at Los Alamos, he found he could break into any secret safe very easily. People left the last digit of their combination on the lock dial, and he could extrapolate the other digits using logic about the simple mind of the physicist or mathematician. E.g., a 3 digit combination safe showing 7 implies the combination 137, 4 implies the combination 314, 1 implies 271, and so on (digits of familiar physical and mathematical constants). When a very complex safe belonging to a military top brass was opened very quickly by the locksmith at Los Alamos, Feynman spent weeks getting to know the guy to find out the "secret". It turned out that there was no magic involved: the combination that opened the safe was simply the safe manufacturer's default, which the top brass hadn't got around to changing! Feynman was then told by a painter that he made yellow by mixing white and red paint, which sounded like "magic". After a mishap (pink), he went back to the painter, who informed him he added yellow to the mixture to give it the right tint. Another time, he was falsely accused of being a magician because he fixed radios by switching over valves/vacuum tubes (they used the same kind of vacuum tube in different circuits, so an old output amplifier tube which was failing under high current could be switched for a similar valve used at lower currents in a pre-amplifier circuit, curing the problem). In a later book, What Do You Care What Other People Think?, Feynman's time on the Presidential investigation into NASA's January 1986 Challenger explosion is described. Upon close inspection, Challenger was blown up not by some weird mathematical error of the fashionable, magical "rocket science" that is supposedly beyond mortal understanding, but by regular groupthink delusion: the low-level engineers and technicians in charge of O-rings knew that rubber turns brittle at low temperatures in cold weather, that brittle rubber O-rings sealing the Challenger booster rockets would leak fuel as the rocket vibrated, and that gravity and air drag would cause the leaking fuel to run towards the rocket flames, blowing it up.

However, those technicians who knew the facts had Orwellian doublethink and crimestop: if they made a big scene in order to insist that the Challenger space shuttle launch be postponed until warmer weather, when the rubber O-ring seals in the boosters would be flexible and work properly, they would infuriate their NASA bosses at launch control and all the powerful senators who had turned up to watch the Challenger take off, so the NASA bigwigs might give contracts to other contractors in future. They would be considered un-American fear-mongers, decrepit incompetent fools with big egos. It was exactly the same for the radar operators and their bosses at Pearl Harbor. There are no medals given out for preventing disasters that aren't obvious threats splashed over the front pages of the Washington Post. It was not 100% certain the shuttle would explode anyway. So they crossed their fingers, said little, and nervously watched Challenger blow up on TV. Feynman was told the truth not by fellow committee investigator Neil Armstrong, nor by any NASA contractor (they were just as good at covering up afterwards as at keeping quiet beforehand), but by the military missile expert who investigated the 1980 Arkansas Titan missile explosion. Feynman used a piece of rubber and a plastic cup of iced water to expose the cause at a TV news conference, but the media didn't want to know about the corruption of science and the peer-reviewed risk-prediction rubbish in NASA's computers and groupthink lies. His written report was nearly censored out, despite the committee chairman being a former student! It was included as a minority report, Appendix F, which concluded that NASA safety analyses were a confidence trick for public relations:

"... reality must take precedence over public relations, for Nature cannot be fooled."


Nobel Laureate Professor Brian Josephson emailed me (exchanged email PDFs are located here and here) that he used 2nd quantization in his Nobel Prize QM calculations, but he is still stuck in 1st quantization groupthink when it comes to "wavefunction collapse" in the EPR paradox! Er, Brian, nobody has ever seen an epicycle or a wavefunction! Nobody has ever measured an epicycle or a wavefunction! Schrodinger guessed i*h-bar*d{Psi}/dt = H*Psi. This is a complex transmogrification of Maxwell's displacement current law, {energy transfer rate} = constant*dE/dt (for energy transfer via "electric current" flowing through the vacuum via an electric field E effect, akin to half a cycle of a radio wave). Note that i*h-bar*d{Psi}/dt = H*Psi is a relativistic equation (it is only non-relativistic when Schrodinger's non-relativistic Hamiltonian H for energy is inserted; Dirac's equation is no different from Schrodinger's except in replacing H with a relativistic spinor Hamiltonian in which particle spin is included, hence making the law relativistic). Dirac later showed that i*h-bar*d{Psi}/dt = H*Psi has the solution Psi_t/Psi_0 = exp(-iHt/h-bar), which Feynman modified by putting -Ht = S, with the action S defined in units of h-bar, so the "wavefunction" (epicycle) varies in direct proportion to exp(iS). This creates the complex circle (rotation of a unit length vector on an Argand diagram, as a cyclic function of S). Feynman in his 1985 book QED reduced this exp(iS), using Euler's "jewel", to simply cos S, where the lagrangian for S is expressed so that the direction of the vector is fixed as the relativistic axis (the relativistic axis is simply the direction of the arrow for the path of zero action, S = 0, because the "relativistic action" is actually defined as that action which is invariant to a change of coordinates!). So we now have the "reinvented wheel" called a Euclidean circle, whose resultant along the on-shell or relativistic axis is simply the scalar amount cos S for each path. This gets rid of complex Hilbert space and, with it, Haag's theorem as an objection to the mathematical self-consistency of renormalized QFT. All photons have 4 polarizations (like virtual or off-shell photons), not just 2 polarizations (as presumed from direct measurements). The extra 2 polarizations determine the cancellations and additions of phases: there is no "wavefunction collapse upon measurement". The photon goes through both slits in Young's experiment and interferes with itself, with no need for an observer. As Feynman writes in QED (1985), we don't "need" the 1st quantization "uncertainty principle" if we sum the paths.
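A toy sketch in Python of the path-phasor arithmetic described above: each path contributes a unit arrow exp(iS), with S in units of h-bar, and the squared length of the resultant gives the relative probability. The action values below are arbitrary illustrative numbers, not a calculation for any real slit geometry:

import cmath

def relative_probability(actions):
    # Sum a unit phasor exp(iS) for each path (S in units of h-bar), then square the length.
    amplitude = sum(cmath.exp(1j * S) for S in actions)
    return abs(amplitude) ** 2

print(relative_probability([0.0, 0.0]))        # two in-phase paths: 4.0 (constructive)
print(relative_probability([0.0, cmath.pi]))   # paths half a cycle apart: ~0 (destructive)
# Taking only the real part of each arrow along the zero-action (relativistic) direction is
# the "cos S" reduction attributed above to Feynman's 1985 QED book.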

[youtube=http://www.youtube.com/watch?v=wMFPe-DwULM&hl=en_US&feature=player_embedded&version=3]
Above: here we have Feynman pushed to explain why similar poles of magnets repel, using it as an excuse to talk about why ice is slippery and why good husbands call an ambulance for their wives who slip on the ice and break their hip, unless they are drunk and violent. He does end up saying that he can't explain why magnets repel in terms of anything else with which the non-mathematician is familiar. However, in his 1985 book QED he explains that virtual photons are exchanged between magnets, and this process creates the magnetic force field. The problem for Feynman was knowing what the virtual photon wavefunction means physically. In the 1985 book, he draws pictures of a rotating arrow accompanying each virtual photon, which rotates in step with the frequency of oscillation of the photon, i.e. each oscillation of the virtual photon is accompanied by a full rotation of the phase factor (which is the "hidden variable" behind the so-called "wavefunction", itself just an epicycle from 1st quantization with no direct physical reality behind it, despite obfuscation efforts from the "nobody understands quantum mechanics" Gestapo and the "parallel worlds" fantasy of Hugh Everett III and, with varying laws of nature, the 10^500 parallel universes of the superstring theory Gestapo/arXiv "peer"-reviewers).

[youtube=http://www.youtube.com/watch?v=lytxafTXg6c&hl=en_US&feature=player_embedded&version=3]

Above: like Dr Zaius said to Charlton Heston in 1968, don't search for the facts if you have a weak stomach. It might turn out that a "unified theory" is analogous to merely a bunch of bananas, so many groupthink "bury my head in the sand" simplicity-deniers will feel sub-ape because they can't, don't, and won't be assed to put two sticks together to reach the facts. A pretty good example, discussed in detail in one way later in this post and in other ways two posts back, is Einstein's relativity, which has multiple levels of explanation. The strongest formulation of relativity is the statement that our laws of motion must give the same predictions regardless of the chosen reference frame, i.e. we get the same prediction whether the reference frame is that of the earth or that of the sun. This makes the laws "invariant" with respect to the selected reference frame. Then there are progressively weaker formulations of relativity, used in "simplified" explanations for the layman, such as "there is no spacetime fabric, there is nothing in space which can produce forces", or "relativity doesn't just say an absolute reference frame is unnecessary for doing our sums, relativity actually disproves the existence of any absolute reference frame!"

These "simplified" relativism "explanations" are a continuation of the best traditions of Egyptian priesthood and the Pythagorean mathematical cult. The objective of science is to act as a magician, to make the masses of the people believe whatever you say, "trust me with political power, I'm a scientist!" Then you trust them and you get mishaps, because they turn out to be humans, or more often than not, subhumans, even subape! Hence the mishaps of caloric, phlogiston, Maxwell's mechanical gear cog aether, Kelvin's stable vortex atom, Piltdown Man, peer-review, unprecedented climate change, nuclear winter theory, lethal cobalt bomb theory, superstring, etc. Groupthink science is not the kind of thing Newton and Darwin were doing, or Feynman was doing before and during the 1948 Pocono conference. Groupthink science education doesn't train people to put up with the taunts for doing unorthodox revolutionary work, so it progresses only slowly and haltingly, the "alternative ideas" are developed slowly, with the mainstream ignoring it.

[youtube=http://www.youtube.com/watch?v=3Un7u2AZnjw&w=480&h=390]

Above: Dr Zaius is alive and well, ensuring that consensus censors facts, as shown in this BBC propaganda programme, Horizon: Science Under Attack, where groupthink pseudophysics is labelled "science" and the facts are dismissed because they have been censored out by "peer"-review pseudoscientific bigotry. Telegraph online journalist James Delingpole, who exposed to the world the "hide the decline" climategate email of Dr Phil Jones, is dismissed by Dr Zaius on the pretext that people must define science as the consensus of "peer"-reviewed literature. Great. So we can go on pretending that there is nothing to worry about, and using "peer"-review to prevent human progress. Ah, if only it were that easy to sweep the facts under the carpet or wallpaper over them. A PDF version of the errors in the BBC Horizon: Science Under Attack episode is located here, with additional relevant data (20 pages, 2 MB download). To read the 1960s background about Dr Zaius, see the wikipedia page linked here: "Zaius serves a dual role in Ape society, as Minister of Science in charge of advancing ape knowledge, and also as Chief Defender of the Faith. In the latter role, he has access to ancient scrolls and other information not given to the ape masses. [Dr Phil Jones and the FOIA/Freedom of Information Act "harassment" controversy.] Zaius ... blames human nature for it all. Zaius seems to prefer an imperfect, ignorant ape culture that keeps humans in check, to the open, scientific, human-curious one ... The idea of an intelligent human ... threatening the balance of things frightens him deeply."

"The common enemy of humanity is man. In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then, is humanity itself." - Club of Rome, The First Global Revolution (1993). (That report is available here, a site that also contains a very similar but less fashionable pseudoscientific groupthink delusion on eugenics.)

The error in the Club of Rome's groupthink approach is the lie that the common enemy is humanity. This lie is the dictatorial approach taken by paranoid fascists, both right wing and left wing, such as Stalin and Hitler. (Remember that the birthplace of fascism was not Hitler's Germany, but Italy in October 1914, when the left-wing, ex-socialist Mussolini joined the new Revolutionary Fascio for International Action after World War I broke out.) The common enemy of humanity is not humanity but fanaticism, defined here by the immoral code: “the ends justify the means”. It is this fanaticism that is used to defend exaggerations and lies for political ends. Exaggerating and lying about weapons effects in the hope that it will be justified by ending war is also fanaticism. Weapons effects exaggerations both motivated aggression in 1914 and prevented early action against Nazi aggression in the mid-1930s.

From: Phil Jones

To: "Michael E. Mann"
Subject: HIGHLY CONFIDENTIAL
Date: Thu Jul 8 16:30:16 2004

... I didn't say any of this, so be careful how you use it - if at all. Keep quiet also that you have the pdf. ... I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is! ...

- Dr Phil Jones to Dr Michael Mann, Climategate emails, July 8th 2004



For NASA’s “peer-review” suppression of its own climate research contractor, please see:

http://nige.files.wordpress.com/2011/02/dr-miskolczi-nasa-resignation-letter-2005.pdf


“Since the Earth’s atmosphere is not lacking in greenhouse gases, if the system could have increased its surface temperature it would have done so long before our emissions. It need not have waited for us to add CO2: another greenhouse gas, H2O, was already to hand in practically unlimited reservoirs in the oceans. … The Earth’s atmosphere maintains a constant effective greenhouse-gas content [although the percentage contributions to it from different greenhouse gases can vary greatly] and a constant, maximized, “saturated” greenhouse effect that cannot be increased further by CO2 emissions (or by any other emissions, for that matter). ... During the 61-year period, in correspondence with the rise in CO2 concentration, the global average absolute humidity diminished about 1 per cent. This decrease in absolute humidity has exactly countered all of the warming effect that our CO2 emissions have had since 1948. ... a hypothetical doubling of the carbon dioxide concentration in the air would cause a 3% decrease in the absolute humidity, keeping the total effective atmospheric greenhouse gas content constant, so that the greenhouse effect would merely continue to fluctuate around its equilibrium value. Therefore, a doubling of CO2 concentration would cause no net “global warming” at all.”

- http://nige.files.wordpress.com/2011/02/saturated-greenhouse-effect-fact.pdf page 4.


CO2 only drives climate change in NASA and IPCC computer climate fantasies when positive feedback from H2O water vapour is assumed. In the real world, there is negative feedback from H2O which cancels out the small effect of CO2 rises: the hot moist air rises to form clouds, so less sunlight gets through to surface air. Homeostasis! All changes in the CO2 levels are irrelevant to temperature variations. CO2 doesn't drive temperature; it is balanced by cloud cover variations. Temperature rises in the geological record have increased the rate of growth of tropical rainforests relative to animals, causing a fall in atmospheric CO2, while temperature falls kill off rainforests faster than animals (since rainforests can't migrate like animals), thus causing a rise in atmospheric CO2. These mechanisms for CO2 variations are being ignored. Cloud cover variations prevent useful satellite data on global mean temperature, the effects of cloud cover on tree growth obfuscate the effects of temperature, and the effects of upwind city heat output obfuscate CO2 temperature data at weather stations. Thus we have to look to sea level rise rates to determine global warming.

We've been in a global warming phase for 18,000 years, during which time the sea level has risen 120 metres (a mean of 0.67 cm/year, often faster than this mean rate). Over the past century, sea level has risen at an average rate of 0.20 cm/year, and even the maximum rate of nearly 0.4 cm/year recently is less than the rates humanity has adapted to and flourished with in the past. CO2 annual output limits and wind farms etc. are no use in determining the ultimate amount of CO2 in the atmosphere anyway: if you supplement fossil fuels with wind farms, the same CO2 simply takes longer to be emitted, maybe 120 years instead of 100 years. The money spent on lying "green" eco-fascism carbon credit trading bonuses could be spent on humanity instead.
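As a quick arithmetic cross-check of the rates just quoted, here is a minimal Python sketch using only the figures in the paragraph above (120 m over roughly 18,000 years, and about 20 cm over the past century):

```python
# A minimal arithmetic cross-check of the sea-level figures quoted above
# (assumed inputs taken straight from the text: 120 m rise over ~18,000
# years; ~20 cm rise over the past century).

post_glacial_rise_cm = 120 * 100      # 120 metres, in centimetres
post_glacial_years = 18_000
recent_rise_cm = 20.0
recent_years = 100

mean_post_glacial = post_glacial_rise_cm / post_glacial_years   # cm/year
mean_recent = recent_rise_cm / recent_years                     # cm/year

print(f"Mean post-glacial rate: {mean_post_glacial:.2f} cm/year")   # ~0.67
print(f"Mean 20th-century rate: {mean_recent:.2f} cm/year")         # 0.20
print(f"Ratio (post-glacial / recent): {mean_post_glacial / mean_recent:.1f}")
```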


For background information on how the H2O cloud cover feedback that cancels the effect of CO2 variations on temperature has been misrepresented by IPCC and NASA "peer"-review bigotry, see http://www.examiner.com/civil-rights-in-portland/blacklisted-scientist-challenges-global-warming-orthodoxy and http://nige.files.wordpress.com/2011/02/the-saturated-greenhouse-effect-theory-of-ferenc-miskolczi.pdf:

(1) increased cloud cover doesn't warm the earth. True, cloud cover prevents rapid cooling at night. But it also reduces the sunlight energy received in the day, which is the source of the heat emitted during the night. Increase cloud cover, and the overall effect is a cooling of air at low altitudes.

(2) rainfall doesn't carry latent heat down to be released at sea level. The latent heat of evaporation is released in rain as soon as the droplets condense from vapour, at high altitudes in clouds. Air drag rapidly cools the drops as they fall, so the heat is left at high altitudes in clouds, and the only energy you get when the raindrops land is the kinetic energy (from their trivial gravitational potential energy).

Obfuscation of the fact that hot moist air rises and condenses to form clouds from oceans that cover 70% of the earth (UNLIKE any "greenhouse"!) caused this whole mess. IPCC models falsely assume that H2O vapour doesn't rise and condense into clouds high above the ground: they assume hot air doesn't rise! That's why they get the vital (FALSE) conclusion that H2O vapour doubles (amplifies) projected temperature rises from CO2, instead of cancelling them out! Their models are plain wrong.
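To put rough numbers on point (2) above, here is a small sketch comparing the latent heat released aloft when vapour condenses with the gravitational energy the raindrops can deliver at the surface; the latent heat value and the 2 km cloud height are assumed round figures for illustration, not taken from the text:

```python
# Rough energy comparison for point (2): latent heat released when water
# vapour condenses high up in clouds, versus the maximum fall energy the
# drops can deliver at ground level. Assumed round numbers for illustration.

L_condensation = 2.26e6   # J/kg, approximate latent heat of condensation of water
g = 9.81                  # m/s^2
cloud_height = 2000.0     # m, assumed typical condensation altitude

fall_energy_per_kg = g * cloud_height          # J/kg, gravitational PE of the rain
ratio = fall_energy_per_kg / L_condensation

print(f"Latent heat released aloft: {L_condensation:.2e} J/kg")
print(f"Fall energy at the ground:  {fall_energy_per_kg:.2e} J/kg")
print(f"Fall energy / latent heat:  {ratio:.1%}")   # roughly 1 per cent
```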





[youtube=http://www.youtube.com/watch?v=bOhGrO7zi4Y&w=480&h=390]
Above: electric current is essentially displacement current in disguise. The juice in joules coming out of wires isn't due to the 1 mm/second drift of conduction band electrons, so much as to the Heaviside energy current. Moreover, charge up a capacitor which has a vacuum for its "dielectric", and energy flows in at light velocity, has no mechanism to slow down, and when discharged flows out at light velocity in a pulse lasting twice the light-transit time of its length and with just half the voltage (PD) of its static charged state. It turns out that the simplest way to understand electricity is as electromagnetic energy, so we're studying the off-shell field quanta of QED, which make the slow electron drift current more of a side-show than the main show. So by looking at IC's published cross-talk experiments, we can learn about how the phase cancellations work. E.g., Maxwell's wave theory of light can be improved upon by reformulating it in terms of path integrals.
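As a small illustrative sketch of the discharge behaviour described above, treating the charged vacuum-dielectric capacitor as a transmission line discharged into a matched load (the 1 m length and 10 V static charge are assumed example values, not figures from the text):

```python
# Toy numbers for the charged-line discharge described above: a line of
# length L charged to V0 and discharged into a matched load delivers a flat
# pulse at half the static voltage, lasting two light-transit times of the
# line (the energy has no mechanism to slow down, so it exits at c).
# Assumed example values: 1 m line, vacuum dielectric, 10 V static charge.

c = 299_792_458.0    # m/s, propagation speed for a vacuum dielectric
L = 1.0              # m, assumed line length
V0 = 10.0            # V, assumed static potential difference

pulse_voltage = V0 / 2.0          # half the static PD
pulse_duration = 2.0 * L / c      # seconds: the pulse is twice the transit time

print(f"Static PD {V0:.0f} V -> output pulse {pulse_voltage:.0f} V")
print(f"Pulse duration: {pulse_duration * 1e9:.2f} ns (= 2L/c)")
```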


Above: Dr Robert D. Klauber in 2010 accidentally misrepresented Feynman's 1985 book QED in terms of Feynman's earlier (complex) phase factor! The actual idea of Feynman's 1985 book dispenses with the Argand diagram and converts exp(iS) into cos S (where S is in units of h-bar, of course), as shown above. Notice that the path integral of cos S gives the resolved component of the resultant (final arrow) which lies in the x-direction only. To find the total magnitude (length) of the final arrow we simply have to choose the x-axis to be the direction of the resultant arrow, which is easy: the direction of the resultant is always that of the classical action, because the contributions to the resultant are maximized by the coherent summation of paths with the least amounts of action (the classical laws correspond to least action!). In other words, we don't need to find the direction of the quantum field theory resultant arrow in the path integral, we only need to find its length (scalar magnitude). We easily know the arrow's direction from the principle of least action, so the work of doing the path integral is then just concerned with finding the length, not the direction, of the resultant arrow. In practice, this is done automatically by the relativistic formulation of the Lagrangian for action S. The definition of the path of least action as the real or "on shell" relativistic path automatically sets up the path integral coordinate system correctly.
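A toy numerical version of this "arrow summation" may help make the point concrete. The sketch below (the geometry, the 1 mm wavelength and the 3 cm "central region" cut are all assumed purely for illustration) sums the real phase factor cos S over reflection paths off a flat mirror; paths near the classical least-action (specular) point add coherently, while the far-off paths largely cancel among themselves:

```python
import numpy as np

# Toy numerical "arrow summation" using the real phase factor cos(S) instead
# of exp(iS), for light reflecting off a flat mirror. All numbers here are
# assumed purely for illustration; the 1 mm wavelength is chosen so the
# numerical phase sampling stays well resolved.

wavelength = 1e-3                          # metres
A = np.array([-0.5, 0.2])                  # source position (m), above the mirror
B = np.array([+0.5, 0.2])                  # detector position (m)
x = np.linspace(-0.5, 0.5, 20001)          # candidate reflection points along the mirror

# Path length and phase (action in units of h-bar) for each reflection point.
path_length = np.hypot(x - A[0], A[1]) + np.hypot(B[0] - x, B[1])
phase = 2 * np.pi * path_length / wavelength
arrows = np.cos(phase)                     # real "arrow" contribution of each path

central = np.abs(x) < 0.03                 # paths near the classical (specular) point
print(f"Sum over all {x.size} paths:   {arrows.sum():8.1f}")
print(f"Sum over central paths only:  {arrows[central].sum():8.1f}")
print(f"Sum over the remaining paths: {arrows[~central].sum():8.1f}")
# The central (least-action) paths supply almost the whole resultant, while
# the many remaining paths nearly cancel among themselves.
```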

"... every particle is associated with waves and these waves may be considered as a field. ... very close to the charges that are producing the fields, one may have to modify Maxwell's field theory so as to make it a non-linear electrodynamics. ... with field theory, we have an infinite number of degrees of freedom, and this infinity may lead to trouble [Haag's theorem implies that the renormalization process for taking account of field polarization is ambiguous and flawed if done in the complex, infinite dimensional "Hilbert space"]. We have to solve equations in which the unknown ... involves an infinite number of variables [i.e., an infinite number of Feynman diagrams for a series of ever more complicated quantum interactions, which affect the classical result by an ever increasing amount if the field quanta are massive and significantly charged, compared to the charges of the on-shell particles whose fields they constitute]. The usual method ... is to use perturbative methods in which ... one tries to get a solution step by step [by adding only the first few terms of the increasingly complicated infinite number of terms in the perturbative expansion series to the path integral]. But one usually runs into the difficulty that after a certain stage the equations lead to divergent integrals [thus necessitating an arbitrary "cutoff" energy to prevent infinite field quanta momenta occurring, as you approach zero distance between colliding fundamental particles]."

- Paul A. M. Dirac, Lectures on Quantum Mechanics, Dover, New York, 2001, pages 1, 2, and 84.



Above: Feynman's 1985 book QED is actually an advanced and sophisticated treatment of path integrals without mathematics (replacing complex space with real plane rotation of a polarization plane during motion along every possible path for virtual photons, as shown for reflection and refraction of light in the "sum over histories" given graphically above), unlike his 1965 co-authored Quantum Mechanics and Path Integrals. The latter however makes the point in Fig. 7-1 on page 177 (of the 2010 Dover reprint) that real particles only follow differentiable (smooth) "classical" paths when seen on macroscopic scales (where the action is much larger than h-bar): "Typical paths of a quantum-mechanical particle are highly irregular on a fine scale ... Thus, although a mean velocity can be defined, no mean-square velocity exists at any point. In other words, the paths are nondifferentiable." The fact that real paths are actually irregular and not classical when looked at closely is what leads Feynman away from the belief in differential geometry, the belief for instance that space is curved (which is what Einstein argued in his general relativity tensor analysis of classical motion). The idea that real (on shell) particle paths are irregular on very small scales was suggested by Schroedinger in 1930, when arguing (from an analysis of Dirac's spinor) that the half integer spin of a fermion moving or stationary in free space can be modelled by a "zig-zag" path which he called "zitterbewegung", which is a light-velocity oscillation with a frequency of 2mc2/h-bar or about 1021 Hz for an electron. Zitterbewegung suggests that an electron is not a static particle but is trapped light-velocity electromagnetic energy, oscillating very rapidly.
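For concreteness, a short check of the zitterbewegung figure quoted above, assuming standard values for the electron mass, the speed of light and the reduced Planck constant:

```python
# Check of the zitterbewegung angular frequency 2*m*c^2/h-bar quoted above
# for the electron (standard physical constants assumed).

m_e = 9.1093837015e-31     # electron mass, kg
c = 299_792_458.0          # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J s

zbw = 2 * m_e * c**2 / hbar
print(f"2*m*c^2/hbar = {zbw:.2e} s^-1")   # ~1.6e21, i.e. of order 10^21 as stated
```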

My recent comment to Dr Woit's blog:

"One of my criticisms of the two organizations would be that they don’t support research of the sort that Witten has had success with, at the intersection of mathematics and quantum field theory."

Do you think it possible that the future of quantum field theory could lie in a completely different direction, namely Feynman's idea of greater mathematical simplicity. E.g. the path integral sums many virtual particle path phase amplitudes, each of the form of Dirac's exp (-iHt) -> exp(iS). The sum over these histories is the path integral: the real resultant path is that for small actions S. Feynman in his 1985 book QED shows graphically how the summation works: you don't really need complex Hilbert space from the infinite number of Argand diagrams.

To plot his non-mathematical (visual) path integral for light reflecting off a mirror, Feynman shows that you can simply have a phase polarization rotate (in real, not complex, space) in accordance with the frequency of the light, e.g. what he does is to take Euler's exp(iS) = (i*sin S) + cos S and drop the complex term, so the phase factor exp(iS) is replaced with cos S, which is exactly the same periodic circular oscillation function as exp(iS), but with the imaginary axis replaced by a second real axis. E.g., a spinning plane of polarization for a photon! This gets rid of the objection of Haag's theorem, since you get rid of Hilbert space when you dump the imaginary axis for every history!

Feynman makes the point in his Lectures on Physics that the origin of exp(iS) is the Schroedinger/Dirac equation for energy transfer via a rate of change of a wavefunction (Dirac's of course has a relativistic spinor Hamiltonian), which just "came out of the mind of Schroedinger". It's just an approximation, a guess. Dirac solved it to get exp(-iHt) which Feynman reformulated to exp(iS). The correct phase amplitude is indeed cos S (S measured in units of h-bar, of course). Small actions always have phase amplitudes of ~1, while large actions have phase amplitudes that vary periodically in between +1 and -1, and so on average cancel out.

Are graphs mathematics? Are Feynman diagrams mathematics? Is mathematical eliticism (in the mindlessly complexity-loving sense) obfuscating a simple truth about reality?


Feynman points out in his 1985 book QED that Heisenberg's and Schroedinger's intrinsic indeterminacy is just the old QM theory of 1st quantization, which is wrong because it assumes a classical Coulomb field, with randomness attributed to intrinsic (direct) application of the uncertainty principle, which is non-relativistic (Schroedinger's 1st quantization Hamiltonian treats space and time differently) and is unnecessary since Dirac's 2nd quantization shows that it is the field that is quantized, not the classical Coulomb field. Dirac's theory is justified by predicting magnetic moments and antimatter, unlike 1st quantization. The annihilation and creation operators of the quantized field only arise in 2nd quantization, not in Schroedinger's 1st quantization, where indeterminacy has no physical explanation in chaotic field quanta interactions:

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’


- Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

“… Bohr [at Pocono, 1948] … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …”


- Richard P. Feynman, quoted in Jagdish Mehra’s biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248.

Bohr and other 1st quantization people never learned that uncertainty is caused by field quanta acting on fundamental particles like Brownian motion of air molecules acting on pollen grains. Feynman was censored out at Pocono in 1948 and only felt free to explain the facts after winning the Nobel Prize. In the Preface to his co-authored 1965 book Quantum Mechanics and Path Integrals he describes his desire to relate quantum to classical physics via the least action principle, making classical physics appear for actions greater than h-bar. But he couldn't make any progress until a visiting European physicist mentioned Dirac's solution to Schroedinger's equation, namely that the wavefunction's change over time t is directly proportional to, and therefore (in Dirac's words) "analogous to" the complex exponent, exp(-iHt). Feynman immediately assumed that the wavefunction change factor indeed is equal to exp(-iHt), and then showed that -Ht -> S, the action for the path integral (expressed in units of h-bar).

Hence, Feynman sums path phase factors exp(iS), which is just a cyclic function of S on an Argand diagram. In his 1985 book QED, Feynman goes further still and uses what he called in his Lectures on Physics (vol. 1, p. 22–10) the "jewel" and "astounding" (p. 22-1) formula of mathematics, Euler's equation exp(iS) = i (sin S) + cos S to transfer from the complex to the real plane by dropping the complex term, so the simple factor cos S replaces exp (iS) on his graphical version of the path integral. He explains in the text that the cos S factor works because it's always near +1 for actions small compared to h-bar, allowing those paths near (but not just at) least action to contribute coherently to the path integral, but varies cyclically between +1 and -1 as a function of the action for actions large compared to h-bar, so those paths in average will cancel out each other's contribution to the path integral. The advantage of replacing exp (iS) with cos S is that it gets rid of the complex plane that makes renormalization mathematically inconsistent due to the ambiguity of having complex infinite dimensional Hilbert space, so Haag's theorem no longer makes renormalization a difficulty in QFT.

The bottom line is that Feynman shows that QFT is simple, not amazingly complex mathematics: Schroedinger's equation "came out of the mind of Schroedinger" (Lectures on Physics). It's just an approximation. Even Dirac's equation is incorrect in assuming that the wavefunction varies smoothly with time, which is a classical approximation: quantum fields ensure that a wavefunction changes in a discontinuous (discrete) manner, merely when each quantum interaction occurs:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’


- R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

"When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn't enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be."


- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

"You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences."


- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

"Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come."


- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.



Above: the path integral interference for light relies on the cancelling of photon phase amplitudes with large actions, but for the different case of the fundamental forces (gravitation, electromagnetism, weak and strong), the path integral for the virtual or "gauge" bosons involves a geometrical cancellation. E.g., an asymmetry in the isotropic exchange can cause a force! The usual objections against virtual particle path integrals of this sort are the kind of mindless arguments that would apply equally to any quantum field theory, not specifically to this predictive one. E.g., physicists are unaware that the event horizon size for a black hole electron is smaller than the Planck length, and thus (by Planck's own style of dimensional argument) a radius of 2GM/c^2 is more physically meaningful as the basis for the grain-size cross-section for fundamental particles than Planck's ad hoc formulation of his "Planck length" from dimensional analysis. Quantum fields like the experimentally-verified Casimir radiation which pushes metal plates together don't cause drag or heating; they just deliver forces. "Critics" are thus pseudo-physicists!
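To make the size comparison explicit, a short sketch (standard constants assumed) comparing the 2GM/c^2 horizon radius for an electron mass with the Planck length:

```python
import math

# Comparison referred to above: the event horizon radius 2*G*M/c^2 for an
# electron mass versus the Planck length sqrt(hbar*G/c^3). Standard
# constants assumed.

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 299_792_458.0          # m/s
hbar = 1.054571817e-34     # J s
m_e = 9.1093837015e-31     # kg

r_horizon = 2 * G * m_e / c**2            # ~1.35e-57 m
l_planck = math.sqrt(hbar * G / c**3)     # ~1.6e-35 m

print(f"2GM/c^2 for an electron: {r_horizon:.2e} m")
print(f"Planck length:           {l_planck:.2e} m")
print(f"Horizon / Planck length: {r_horizon / l_planck:.1e}")   # ~1e-22
```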

My great American friend Dr Mario Rabinowitz brilliantly points out the falsehood of Einstein's general relativity "equivalence principle of inertial and gravitational mass" as the basis for mainstream quantum gravity nonsense in his paper Deterrents to a Theory of Quantum Gravity, pages 1 and 7 (18 August 2006), http://arxiv.org/abs/physics/0608193. General relativity is based on the equivalence principle of inertial and gravitational mass, and Einstein assumed that Galileo's law for falling bodies is exact. It is not, because the mass in any falling body is not accelerated purely by the Earth's mass: it also "pulls" the Earth upwards (albeit by a negligibly small amount when one of the two masses, such as an apple or a human, is tiny compared to the mass of the Earth). But for equal masses of fundamental particles (e.g. for the simplest gravitational interaction of two similar masses), this violation of Galileo's law due to mutual attraction violates Einstein's equivalence principle, as explained below by Dr Rabinowitz. Einstein overlooked the approximate nature of Galileo's principle when formulating general relativity on the basis of the equivalence principle (note that genuine errors are not a crime; the real crime against progress in science was the arrogant use of media celebrity worship as a sword to "defend" the errors of general relativity against genuine, competent critics for another 40 years, a practice which continues to this day no matter how factually wrong the physics is):

"As shown previously, quantum mechanics directly violates the weak equivalence principle in general and in all dimensions, and thus violates the strong equivalence principle in all dimensions. ...

"Most bodies fall at the same rate on earth, relative to the earth, because the earth's mass M is extremely large compared with the mass m of most falling bodies for the reduced mass ... for M [much bigger than] m. The body and the earth each fall towards their common center of mass, which for most cases is approximately the same as relative to the earth. ... When [heavy relative to earth's mass] extraterrestrial bodies fall on [to] earth, heavier bodies fall faster relative to the earth [because they "attract" the earth towards them, in addition to the earth "attracting" them; i.e., they mutually shield one another from the surrounding inward-converging gravity field of distant immense masses in the universe] making Aristotle correct and Galileo incorrect. The relative velocity between the two bodies is vrel = [2G(m + M)(r2-1 - r1-1)]1/2, where r1 is their initial separation, and r2 is their separation when they are closer.

"Even though Galileo's argument (Rabinowitz, 1990) was spurious and his assertion fallacious in principle - that all bodies will fall at the same rate with respect to the earth in a medium devoid of resistance - it helped make a significant advance [just like Copernicus's solar system with incorrect circular orbits and epicycles prior to Kepler's correct elliptical orbits, or Lamarke's incorrect early theory of "acquired characteristic" evolution pathing the way for Darwin's later genetic theory of evolution] in understanding the motion of bodies. Although his assertion is an excellent approximation ... it is not true in general. Galileo's alluring assertion that free fall depends solely and purely on the milieu and is entirely independent of the properties of the falling body, led Einstein to the geometric concept of gravity. [Emphasis added to key, widely censored, facts against GR.]"


“Einstein and his successors have regarded the effects of a gravitational field as producing a change in the geometry of space and time. At one time it was even hoped that the rest of physics could be brought into a geometric formulation, but this hope has met with disappointment, and the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like “metric,” “affine connection,” and “curvature,” but is not otherwise very useful. The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effect of gravitational fields on the motion of planets and photons or to a curvature of space and time.”

- Professor Steven Weinberg, Gravitation and Cosmology, Wiley, New York, 1972, p. 147.




Above: "Could someone please explain how or why, if, as SR tells us, c is the ceiling velocity throughout the Universe, and thus gravity presumably cannot propagate at a speed faster than the ceiling velocity, the Earth is not twice as far away from the Sun every thousand years or so which is the obvious consequence of gravity propagating at such a low speed as c and not, as everyone since Newton had always supposed, near-instantaneously?" - James Bogle (by email). Actually this supposed problem is down to just ignoring the facts: gravity isn't caused by gravitons between Earth and Sun; it's caused instead by exchange of gravitons between us and the surrounding immense distant masses isotropically distributed around us, with particles in the Sun acting as a slight shield.

If you have loudspeakers on your PC and use an operating system that supports sound files in websites, turn up the volume and visit www.quantumfieldtheory.org. The gravitons are spin-1 not spin-2, hence they are not being exchanged between the sun and earth; rather, the sun is an asymmetry: the speed at which "shadows" move is not light speed but infinite. This is because shadows don't exist physically as light-velocity moving radiation! The sun causes a pre-existing shadowing of gravitons ahead of any position that the earth moves into, so the speed of the gravitons has nothing to do with the speed at which the earth responds to the sun's gravity. The sun sets up an anisotropy in the graviton field of space in all directions around it, in advance of the motion of the earth. The mainstream “gravity must go at light speed” delusions are based purely on the ignorant false assumption that the field only exists between earth and sun. Wrong. The sun's "gravity field" (an anisotropy in the graviton flux in space from distant immense masses) is pre-existing in the space ahead of the motion of the planet, so the speed of gravitational effects is instant, not delayed.

The exchange of gravitons between masses (gravitational charges) has a repulsive-only effect. The Pauli-Fierz spin-2 graviton is a myth; see the following two diagrams. The spin-1 gravitons we're exchanging with big distant masses produce a bigger repulsion from those massive distant masses (which are very isotropically distributed around us, in all directions) than from nearby masses, so the nearby masses have an asymmetric LeSage shadowing effect: we're pushed towards them with the very accurately predicted coupling G (see the diagrams below for quantitative proof) by the same particles that cause the measured cosmological acceleration ~Hc. So we have a quantitative prediction connecting the observed cosmological acceleration with the gravitational coupling G; this connection was made in May 1996 and was published two years before the cosmological acceleration was even discovered! Notice that the acceleration and expansion of the universe is not an effective expansion! I.e., as discussed later in this post, all the fundamental forces have couplings (e.g. G, alpha_EM, etc.) that are directly proportional to the age of the universe. This is implied by the formula derived below, which is the correct quantum gravity proof for Louise Riofrio's empirical equation GM = tc^3, where M is the mass of the universe and t is its age.
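As an order-of-magnitude check of the two relations just mentioned (a ~ Hc and GM = tc^3), here is a sketch with assumed round inputs for the Hubble parameter and the age of the universe; both inputs are assumptions for illustration, not values derived in the text:

```python
# Order-of-magnitude check of a ~ H*c and GM = t*c^3, using assumed round
# inputs: H ~ 70 km/s/Mpc and t ~ 13.8 billion years.

c = 299_792_458.0                  # m/s
G = 6.674e-11                      # m^3 kg^-1 s^-2
Mpc = 3.0857e22                    # metres per megaparsec
H = 70e3 / Mpc                     # Hubble parameter, s^-1 (~2.3e-18)
t = 13.8e9 * 3.156e7               # age of the universe, seconds (~4.4e17)

a_cosmo = H * c                    # predicted cosmological acceleration
M_implied = t * c**3 / G           # mass M implied by GM = t*c^3

print(f"a ~ H*c       = {a_cosmo:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"M = t*c^3 / G = {M_implied:.1e} kg")    # ~2e53 kg
```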

Edward Teller in 1948 made the erroneous claim that any variation of G (a variation had been predicted, in an error-filled guesswork way, by Dirac) is impossible because it would alter the fusion rate in the big bang or in a star. Actually a variation of G does not have the effect Teller calculated because (as shown in this and several earlier posts) all fundamental couplings are varying, electromagnetic as well as gravitational! Teller was incorrect in claiming that a doubling of G increases fusion via proton-proton gravitational compression in a star or in big bang fusion: it can't do that, because the force of electromagnetic repulsion between colliding protons is increased by exactly the same factor as the gravitational force. Therefore, fusion rates are essentially unaffected (uncharged contributions like radiation pressure have a relatively small effect). Louise's investigation of a presumed variation of c as the reciprocal of the cube-root of the age of the universe in her equation is spurious: her GM = tc^3 is instead evidence of a direct proportionality between G and the age of the universe t (the full reasons are explained in earlier posts). Now, galaxies, solar systems, atoms and nuclei are all orbital systems of stars, planets, shells of electrons and shells of nucleons, respectively, with their size controlled by fundamental forces through equations like F = m_1 m_2 G/r^2 = m_2 v^2/r, i.e. m_1 G/r = v^2. So if the masses and v are constants while the coupling G (or a Standard Model force coupling like alpha_EM) is directly proportional to the age of the universe, it follows that G must be directly proportional to radius r, so that the radius of a galaxy, solar system, atom or nucleus is directly proportional to the age of the universe t. If the horizon radius of the flat spacetime universe and the radii of galaxies, solar systems, atoms and nuclei are all directly proportional to t, it follows that although distant matter is receding from us, the expansion of all objects prevents any relative change in the overall (scaled) universe: rulers, people, planets, etc. expand at the same rate, so receding galaxy clusters will not appear smaller. You might think that this is wrong, and that increasing G and alpha_EM should pull the Earth's orbit in closer to the sun, and do the same for the electron. However, this more obvious solution of increasing orbital velocities is not necessarily consistent with the very slow rate of increase of G, so it is more consistent to think of a scaling up of the sizes of everything as force strengths increase: a stronger gravitational field can stabilize a galaxy of larger radius containing the same mass! It is possible that a stronger alpha_EM can stabilize a larger electron ground state radius; whether this is the case depends on whether or not the orbital velocity is altered as the electromagnetic coupling is varied.

However, of course, the Lambda-CDM cosmological model, basically a Friedmann-Robertson-Walker metric from general relativity which implicitly assumes constant G, is totally incorrect when viewed from the new quantum gravity theory. Is spacetime really flat? From a naive extrapolation using the false old framework of cosmology, hyped by Sean Carroll and others who refuse to accept these facts of quantum gravity proved over a decade ago, you might expect that the linear increase of G with the age of the universe will cause the universe to eventually collapse. However, remember that the cosmological acceleration (a repulsion that supposedly flattens out the spacetime curvature on cosmological distance scales by opposing gravitation) is itself a quantum gravity effect: on the largest scales, mutual repulsion of masses predominates over LeSage shadowing and its pseudo-attraction.

Nevertheless, back in 1996 we predicted the same cosmological acceleration using two completely different calculations, only one of which was from the quantum gravity theory. The other prediction of the a ~ Hc cosmological acceleration came simply from the effect of spacetime on the observed Hubble recession rate law v = HR (see proof linked here). That analysis is complementary to the quantum gravity calculation: the cosmological acceleration should be viewed as an artifact of the fact that the receding galaxies we see are being seen at times in our past which are related to distances by R = ct. If we represent time since the big bang by t, then the time T in our past of a supernova apparently at distance R is related to t simply by t + T = 1/H. So the cosmological acceleration is just a result of the fact that radiation reaches us at light velocity, not instantly. If there were no time delay, we wouldn't see any cosmological acceleration: the acceleration is physically caused by the effective reference frame in which greater distances correspond to looking further back in time. The universe's horizon radius expands at the velocity of light, a linear expansion. This produces cosmological acceleration forces, and thus gravitation, due to the increasing time-lag for the exchange of all forms of radiation, including gravitons. At the same time, masses and rulers expand by the mechanism already explained, so the relative scale of the universe remains constant while gravitation and cosmological acceleration operate.

Correction of mainstream errors in Electroweak Symmetry

Over Christmas, Dr Dorigo kindly permitted some discussion and debate over electroweak symmetry in the comments section of his blog post, http://www.science20.com/quantum_diaries_survivor/blog/rumors_about_old_rumor, which helped to clarify some of the sticking points in the mainstream orthodoxy and possibly to highlight the best means of overcoming them in a public arena.

Some arguments against electroweak symmetry follow, mostly from replies to Dr Dorigo and Dr Rivero. The flawed logic of the "Higgs boson" assumption is based on the application of gauge theory for symmetry breaking to the supposed "electroweak symmetry" (never observed in nature). Only broken "electroweak symmetry", i.e. an absence of symmetry and thus separate electromagnetic and weak interactions, has actually been observed in nature. So the Higgs boson required to break the "electroweak symmetry" is an unobserved epicycle required to explain an unobserved symmetry! What's interesting is the nature of the groupthink "electroweak symmetry". Above the "electroweak unification" energy, there is supposed to be equality of electromagnetic and weak forces in a single electroweak force. Supposedly, this is where the massive weak bosons lose their mass and thus gain light velocity, long range, and stronger coupling, equal in strength to the electromagnetic field.

This unification guess has driven other possibilities out of sight. There are two arguments for it. First, the breaking of Heisenberg's neutron-proton SU(2) chiral "isospin symmetry" leads to pions as Nambu-Goldstone bosons; so by analogy you can argue for Higgs bosons from breaking electroweak symmetry. This is unconvincing because, as stated, there is no electroweak symmetry known in nature; it's just a guess. (It's fine to have a guess. It's not fine to have a guess, and use the guess as "evidence" for "justifying" another guess! That's just propaganda or falsehood.) Secondly, the supposed "electroweak theory" of Weinberg and others. Actually, that theory is better called a hypercharge-weak theory, since U(1) in the standard model is hypercharge, which isn't directly observable. The electromagnetic theory is produced by an adjustable epicycle (the Weinberg angle) that is forced to make the hypercharge and weak theories produce the electromagnetic field by ad hoc mixing. The prediction of the weak boson masses from the Weinberg angle isn't proof of the existence of an electroweak symmetry, because the weak bosons only have mass when the "symmetry" is broken. All evidence to date suggests that electroweak symmetry (like aliens flying around in UFOs) is just a fiction, the Higgs is a fiction, and mass is not generated through symmetry breaking. Yet so much hype based on self-deception continues.

The funny thing about the Glashow-Weinberg-Salam model is that it was formulated in 1967-8, but was not well received until its renormalizability had been demonstrated years later by ‘t Hooft. The electroweak theory they formulated was perfectly renormalizable prior to the addition of the Higgs field, i.e. it was renormalizable with massless SU(2) gauge bosons (which we use for electromagnetism), because the lagrangian had a local gauge invariance. ‘t Hooft’s trivial proof that it was also renormalizable after “symmetry breaking” (the acquisition of mass by all of the SU(2) gauge bosons, a property again not justified by experiment because the weak force is left-handed so it would be natural for only half of the SU(2) gauge bosons to acquire mass to explain this handedness) merely showed that the W-boson propagator expressions in the Feynman path integral are independent of mass when the momentum flowing through the propagator is very large. I.e., ‘t Hooft just showed that for large momentum flows, mass makes no difference and the proof of renormalization for massless electroweak bosons is also applicable to the case of massive electroweak bosons.

‘t Hooft plays down the trivial physical nature of his admittedly mathematically impressive proof since his personal website makes the misleading claim: “…I found in 1970 how to renormalize the theory, and, more importantly, we identified the theories for which this works, and what conditions they must fulfil. One must, for instance, have a so-called Higgs-particle. These theories are now called gauge theories.”

That claim that he has a proof that the Higgs particle must exist is totally without justification. He merely showed that if the Higgs field provides mass, the electroweak theory is still renormalizable (just as it is with massless bosons). He did not disprove all hope of alternatives to the Higgs field, so he should not claim that! He just believes in electroweak theory and won a Nobel Prize for it, and is proud. Similarly, the string theorists perhaps are just excited and proud of the theory they work on, and they believe in it. But the result is misleading hype!

I'm not denying that the interaction strengths run with energy and may appear to roughly converge when extrapolated towards the Planck scale. You get too much noise from hadron jets in such collisions to get an unambiguous signal. Even if you just collide leptons at such high energy, hadrons are created in the pair production, and then you're reliant on extremely difficult QCD jet calculations to subtract the gluon field "noise" before you can see any signals clearly from the relatively weak (compared to QCD) electromagnetic and weak interactions.

I'm simply pointing out that there is no evidence given for electroweak symmetry, by which I refer not to the weak bosons losing their mass at high energy. I don't accept as evidence for electroweak symmetry a mere (alleged) similarity of the weak and electromagnetic cross-sections at very high energy (differing rates of running with energy in different couplings due to unknown vacuum polarization effects could cause apparent convergence simply by coincidence, without proving a Higgs field mechanism or the existence of electroweak symmetry). It's hard to interpret the results of high energy collisions because you create hadronic jets which add epicycles into the calculations needed to deduce the relatively small electromagnetic and weak interactions. The energies needed to try to test for electroweak symmetry are so high they cause a lot of noise which fogs the accuracy of the data. If you wanted to use these HERA data to prove the existence of electroweak symmetry (massless weak bosons), you would need to do more than show convergence in the cross-sections.

"I am talking about very clean events of hard deep inelastic scattering, where the bosons are seen with great clarity due to their leptonic decays."

You're thinking possibly about weak SU(2) symmetry and electromagnetic symmetry, and you think of these two separate symmetries together as "electroweak symmetry". I'm 100% behind the extensive evidence for gauge theory of weak interactions and 100% behind gauge theory of electromagnetic interactions. These separate symmetries, produced in the "electroweak theory" by mixing the U(1) hypercharge boson with the SU(2) bosons, are not however "electroweak symmetry", which only exists if massless weak bosons exist at very high energy. The Higgs field is supposed to give mass to those bosons at low energy, breaking the symmetry. At high energy, the weak bosons are supposed to lose mass, allowing symmetry of weak isospin and electromagnetic interactions by making the range of both fields the same.

I really need to find any alleged evidence for "electroweak symmetry" in my research for a paper, so if you ever recall the paper with the HERA data which you say contains evidence for electroweak symmetry, please let me know! So far I've read all the QFT books I can get (Weinberg, Ryder, Zee, etc.) and electroweak theory papers on arXiv, and I have not found any evidence for electroweak symmetry.

My understanding (correct me if I'm wrong here) is that if you collide protons and electrons at TeV energies, you knock free virtual quarks from the sheer energy of the collision? These virtual quarks gain the energy to become real (onshell) quarks, forming hadron jets. These jets are difficult to accurately predict because they are dominated by QCD/strong forces and the perturbative expansion for QCD is divergent, so you need lattice calculations which are inaccurate. So you can't compare what you see with a solid prediction. You can measure what you see, but you can't analyze the data very accurately. The color charge of the QCD jets can't interact with the weak bosons, but the jets also have electromagnetic and weak charges which do interact with weak bosons. So you cannot do a precise theoretical analysis of the entire event. All you can really do is to produce particles and see what they are and how they interact. You can't do a complete theoretical analysis that's accurate enough to deduce electroweak symmetry.

Yes, definitely SU(2) weak symmetry is based on an enormous amount of good empirical evidence: what I'm questioning is "electroweak symmetry". Evidence for the broken and mixed U(1) symmetry and SU(2) symmetry is not at issue. What should be regarded as an open question is whether electroweak symmetry exists. The simplest default alternative to the Higgs-electroweak theory is to have a mixed but broken "electroweak symmetry", i.e. no electroweak symmetry. This is precisely what Feynman argued in the 1980s. Instead of having a Higgs field which makes weak field quanta massive at low energy but massless at high energy, you instead add a quantum gravity gauge theory to the standard model, which gives mass to the weak field quanta at all energies (as well as giving masses to other massive particles). The quantum gravity gauge theory has mass-energy as its charge and it has gravitons as its bosons. In other words, the Higgs/electroweak symmetry theory is a complete red-herring. If its advocates are allowed to continue their propaganda, then there will be no well-developed alternative to the Higgs/electroweak symmetry when the LHC rules out the Higgs. The result will be the usual last-minute panic with a consensus of ill-informed opinions promoting new epicycles to prop up nonsense (save face).

Feynman's opposition to "electroweak symmetry" is in Gleick's biography of Feynman:

When a historian of science pressed him on the question of unification in his Caltech office, he resisted. “Your career spans the period of the construction of the standard model,” the interviewer said.

” ‘The standard model,’ ” Feynman repeated dubiously. . . .

The interviewer was having trouble getting his question onto the table. “What do you call SU(3) X SU(2) X U(1)?”

“Three theories,” Feynman said. “Strong interactions, weak interactions, and the electromagnetic. . . . The theories are linked because they seem to have similar characteristics. . . . Where does it go together? Only if you add some stuff we don’t know. There isn’t any theory today that has SU(3) X SU(2) X U(1) — whatever the hell it is — that we know is right, that has any experimental check. . . . "


Virtual quarks form in pairs due to pair production around the proton. The pairs get knocked free in high energy collision. I do know that individual quarks can't exist by themselves. I wrote that the quarks are produced in pair production, and get knocked free of the field of the proton in a high energy inelastic collision. I didn't write that individual quarks exist alone.

The mass term in the lagrangian always exists, but it doesn't have the same value. If m = 0, that is the same as getting rid of the mass term. Reference is for instance Zee's QFT book. You can't formulate a QFT very conveniently without the field having mass. Sidney Coleman is credited by Zee with the trick of adding a mass term for the massless QED virtual photon field, for example. You have to have a mass term in the field to get the gauge theory lagrangian, but at the end you can set the mass equal to zero. It's a mathematical trick. It's not physics, just math.

The precise reference is Zee, 1st ed., 2003, pp 30-31: "Calculate with a photon mass m and set m = 0 at the end ... When I first took a field theory course as a Student of Sidney Coleman this was how he treated QED in order to avoid discussing gauge invariance." He ends up with an electromagnetic potential of (e^{-mr})/(4 Pi r). The exponential part of this, e^{-mr}, is due to the mass term. Setting m = 0 gives e^{-mr} = 1, so the mass term has no effect, and you get the expected potential for a massless field. By exactly the same argument, mass terms in the weak field need to be eliminated for "electroweak symmetry" by making m = 0 where such symmetry exists. Otherwise, you end up with a weak field potential which has an exponential term (reducing the range and field strength) due to the mass of the weak field quanta. To get "electroweak symmetry", the weak field potential must become similar to the electromagnetic field potential at unification energy. That's the definition of this "symmetry".
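A minimal sketch of that trick, using the potential exp(-mr)/(4*pi*r) quoted from Zee and letting the mass term go to zero at the end (units and the sample radii are arbitrary, chosen only for illustration):

```python
import numpy as np

# The Coleman/Zee trick described above: compute the massive-field potential
# exp(-m*r)/(4*pi*r) and let m -> 0 at the end, recovering the long-range
# 1/(4*pi*r) form. Arbitrary units; radii chosen only for illustration.

r = np.array([0.5, 1.0, 2.0, 5.0, 10.0])

def potential(r, m):
    """Static potential of a field whose quanta have mass term m."""
    return np.exp(-m * r) / (4 * np.pi * r)

for m in (1.0, 0.1, 0.0):                  # progressively remove the mass term
    print(f"m = {m}:", np.round(potential(r, m), 5))
# As m -> 0 the exponential factor tends to 1 at every radius, so the
# potential smoothly goes over to the massless 1/(4*pi*r) result.
```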

Pauli first applied Weyl’s gauge theory to electrodynamics and was well aware that for electromagnetic interactions it really doesn’t matter if you have a mass term in the propagator like 1/[(k^2)-(m^2)], because it just represents the momentum delivered by the field boson in the Feynman diagram. You can treat the relativistic field quanta (moving with velocity c) as non-relativistic, allow the rest mass momentum in the propagator to represent the relativistic momentum of photons, and then simply edit out the problem of field quanta mass in the field potential by letting m = 0 in the final stage. This math trick complements the physics of gauge invariance, so there is no problem. Pauli however knew that the mass in the propagator is a real problem for non-Abelian fields that carry electric charge, so he objected to the Yang-Mills theory when Yang gave his lecture in 1954. Yang and Mills could not treat the mass of the field, and Pauli made such a fuss that Yang had to sit down. Electrically charged field quanta can’t propagate without rest mass (their magnetic self-inductance opposes their motion), so they must really have a mass in the propagator, as far as Pauli was concerned. This doesn’t apply to uncharged field quanta like photons, where you don’t need a massive propagator. Now the problem is: how do you get electroweak symmetry with electrically charged, massless SU(2) quanta at electroweak unification energy? As far as I can see, most of the authors of modern physics textbooks ignore or obfuscate the physics (which they mostly disrespect or frankly hate as being a trivial irrelevance in “mathematical physics”). But Noether makes all of the “mathematical symmetries” simple physical processes:

Noether’s theorem: every conservation law corresponds to an invariance or symmetry.

Gauge symmetry: conservation of charge (electric, weak, or color).

Electroweak symmetry: equality of couplings (strengths) of electromagnetic and weak interactions at electroweak unification energy.

Langrangian symmetry or local phase invariance: produced by a lagrangian that varies with changes in the wavefunction, so that emission of field quanta compensate for the energy used to change the wavefunction.

When you switch from describing massive to massless field quanta in electromagnetism, the equation for field potential loses its exponential factor and thus ceases to have short range and weak strength. However, the field quanta still carry momentum because they have energy, and energy has momentum. So there is no problem. Contrast this to the problems with getting rid of mass for SU(2) electrically charged W bosons!

“... concentrate on the hard subprocess, where the real (perturbative) physics is. There, the gamma and the W/Z have similar strengths once you reach virtualities of the order of the boson masses.”

You seem to be arguing that “electroweak symmetry” is defined by similarity of the strengths of the weak and electromagnetic forces at energies equivalent to the weak boson masses (80 and 91 GeV). There is some confusion in QFT textbooks on exactly what the difference is between “electroweak symmetry” and “electroweak unification”.

At energies of 80 and 91 GeV (weak W and Z boson masses), the electromagnetic (gamma) and W/Z don’t seem to have very similar strengths: http://www.clab.edc.uoc.gr/materials/pc/proj/running_alphas.html

Yes, the electrically neutral Z weak boson has higher mass (91 GeV) than the electrically charged W weak bosons (80 GeV), but that's just because the weak isospin coupling (g_W) has a value of only half the weak hypercharge coupling (g_B). The weak hypercharge for left-handed leptons (i.e. those which actually participate in weak interactions) is always Y = -1, while they have a weak isospin charge of +/-1/2. (Forget the right-handed lepton hypercharge, because right-handed leptons don't participate in weak interactions.) So the weak isospin charge has just half the magnitude of the weak hypercharge! The Weinberg mixing angle Theta_W is defined by:

tan (Theta_W) = (g_W)/(g_B)

The masses of the weak bosons Z and W then have the ratio:

cos (Theta_W) = (M_W)/(M_Z)

Therefore, the theory actually predicts the difference in masses of the Z and W weak bosons from the fact that the isospin charge is half the hypercharge. This is all obfuscated in the usual QFT textbook treatment, and takes some digging to find. You would get exactly the same conclusion for the left-handed weak interaction if you replace weak hypercharge by electric charge for leptons (not quarks, obviously) above. Because isospin charge takes a value of +/-1/2 while electric charge for leptons takes the value +/-1, the ratio of isospin to electric charge magnitude is a half. Obviously for quarks you need an adjustment for the fractional electric charges, hence the invention of weak hypercharge. Physically, this "(electric charge) = (isospin charge) + (half of hypercharge)" formula models the compensation for the physical fact that quarks appear to have fractional electric charges. (Actually, the physics may go deeper than this neat but simplistic formula, if quarks and leptons are unified in a preon model.) I'm well aware of the need for some kind of mixing, and am well aware that the difference in W and Z boson masses was predicted ahead of discovery at CERN in 1983.

I'm writing a paper clarifying all this, and it is good to be able to discuss and defend a criticism of electroweak symmetry here, to see what kind of arguments are used to defend it. It will help me to write the paper in a more concise, focussed way. Thank you Alejandro, thanks to Tommaso for tolerating a discussion, and other commentators.

For the record: the essential "tan (Theta_W) = (g_W)/(g_B)" is equation 10.21 in David McMahon's 2008 "QFT Demystified" textbook.
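Putting numbers to the two relations above (tan θ_W = g_W/g_B with the text's tree-level ratio of one half, and cos θ_W = M_W/M_Z with the commonly quoted boson masses of about 80.4 and 91.2 GeV), a short sketch:

```python
import math

# Numerical reading of the two relations quoted above, using the text's
# tree-level assumption tan(theta_W) = 1/2 and the commonly quoted W and Z
# masses (~80.4 and ~91.2 GeV) for comparison.

theta_w = math.atan(0.5)
print(f"theta_W = atan(1/2) = {math.degrees(theta_w):.2f} degrees")  # 26.57
print(f"cos(theta_W)        = {math.cos(theta_w):.3f}")              # 0.894

M_W, M_Z = 80.4, 91.2    # GeV
print(f"Measured M_W / M_Z  = {M_W / M_Z:.3f}")                      # 0.882
# The tree-level ratio and the measured mass ratio agree to a couple of per
# cent; the text attributes the residual difference to the running of the
# couplings with energy.
```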

The problem with the usual interpretation of the top quark mass for Higgs boson studies is that to counter this argument, I would have to discuss an alternative theory in detail, instead of just pointing out inconsistencies in the mainstream theory. Then critics will dismiss me as a crackpot and stop listening. But the top quark coupling seems to me to be evidence pointing exactly the other way, towards a quantum gravity gauge theory. The top quark mass fits in perfectly to a simple model for particle masses. The foundation of this model for masses was a relationship between the Z boson mass and the electron mass (or similar) in a paper you wrote with Hans de Vries, so thank you for that. To summarize the essentials, we put a quantum gravity gauge group into the standard model in a very neat way (treating it like hypercharge), and remove the Higgs mass model. Mixing gives masses to the massive particles in a very novel way (not). A charged fundamental particle, e.g. a lepton, has a vacuum field around it with pair production producing pairs of fermions which are briefly polarized by the electric field of the fermion, and this shields the core charge (thus renormalization). The energy absorbed from the field by the act of polarization (reducing the electric field strength observed at long distances) moves the virtual fermions apart, and thus gives them a longer life on average before they annihilate. I.e., it causes a statistical violation of the uncertainty principle: the energy the off-shell (virtual) fermions absorb moves them closer towards being on-shell. For the brief extra period of time (due to polarization) during which they exist before annihilation, they therefore start to feel the Pauli exclusion principle and to behave more like on-shell fermions with a structured arrangement in space. One additional feature of this vacuum polarization effect in giving energy to virtual particles is that they briefly acquire a real mass. So the vacuum polarization has the effect of turning off-shell virtual fermions briefly into nearly on-shell fermions, simply by the energy they absorb from the electric field as they polarize! This vacuum mass and the Pauli exclusion principle have the effect of turning leptons into effectively the nuclei of little atoms, surrounded by virtual fermions which, when being polarized, add a Pauli-exclusion-principle-structured real mass. It is this vacuum mass effect which is all-important for the tauon and also the top quark. The neutral Z acquires its mass by mixing of SU(2) with a quantum gravity gauge group. http://nige.wordpress.com/2010/05/07/category-morphisms-for-quantum-gravity-masses-and-draft-material-for-new-paper/

The Weinberg angle θ_W is empirically determined to be 29.3 degrees at 160 MeV energy using the 2005 data from parity violation in Møller scattering (sin^2 θ_W = 0.2397 ± 0.0013 was obtained at 160 MeV), and it was determined to be 28.7 degrees at 91.2 GeV energy in 2004 data using the minimal subtraction renormalization scheme (sin^2 θ_W = 0.23120 ± 0.00015). This difference is usually cited as evidence of the running of the Weinberg angle with energy, due to the running coupling which is caused by vacuum polarization (shielding the core charges, which is a bigger effect at low energy than at high energy). See http://en.wikipedia.org/wiki/Weinberg_angle

What I stated was that, ignoring the running coupling effect (which is smaller for the weak isospin field than in QED, because of the weakness of the weak force field relative to QED), the Weinberg angle is indeed

tan θ_W = 1/2.

This gives θ_W = 26.57 degrees. Remember, empirically it is 29.3 degrees at 160 MeV and 28.7 degrees at 91.2 GeV. The higher the energy, the less vacuum polarization we see (we penetrate closer to the core of the particle, and there is therefore less intervening polarized vacuum to shield the field). Therefore, the figure for higher energy, 28.7 degrees, is predicted to be closer to the theoretical bare core value (26.57 degrees) than the figure observed at low energy (29.3 degrees). The value of θ_W falls from 29.3 degrees at 160 MeV to 28.7 degrees at 91.2 GeV, and to an asymptotic value for the bare core of 26.57 degrees at much higher energy.
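A minimal numerical check of these figures (using only the sin^2 θ_W values quoted above; nothing else is assumed):

```python
import math

# Bare-core claim made above: tan(theta_W) = 1/2
theta_bare = math.degrees(math.atan(0.5))

# Empirical values quoted in the text (sin^2 theta_W at two energies)
sin2_low  = 0.2397   # 160 MeV (Moller scattering, 2005)
sin2_high = 0.23120  # 91.2 GeV (minimal subtraction scheme, 2004)

theta_low  = math.degrees(math.asin(math.sqrt(sin2_low)))
theta_high = math.degrees(math.asin(math.sqrt(sin2_high)))

print(f"arctan(1/2)         = {theta_bare:.2f} degrees")
print(f"theta_W at 160 MeV  = {theta_low:.2f} degrees")
print(f"theta_W at 91.2 GeV = {theta_high:.2f} degrees")
```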

Yes, there must be a mixing of SU(2) and U(1). But no, I've never been against such a mixing. My incomplete draft paper from last October explains what I mean: http://nige.files.wordpress.com/2010/10/paper-draft-pages-1-5-2-oct-2010.pdf (ignore underlined Psi symbols; they should have an overbar). My argument is that the mathematics of the Standard Model is being misapplied physically. The electroweak unification is achieved by mixing SU(2) with U(1), but not anywhere near the way it is done in the Standard Model. SU(2) is electroweak symmetry: the three gauge bosons exist in massless and massive forms. Massless charged bosons can't propagate unless the magnetic self-inductance is cancelled, which can only happen in certain circumstances (e.g. a perfect equilibrium of exchange between two similar charges, so that the charged bosons going in each opposite direction have magnetic vectors that cancel one another, preventing infinite self-inductance, just as electromagnetic energy propagates in a light-velocity logic step along a two-conductor power transmission line). This effectively makes electric charge the extra polarizations that virtual photons need to account for attraction and repulsion in electromagnetism. The massive versions of those SU(2) bosons are the weak bosons, and arise not from a Higgs field but from a U(1) hypercharge/spin-1 quantum gravity theory.

There is a massive error in the Standard Model's CKM parameter matrix in the "electroweak" theory, which has the contradiction that when a lepton like a muon or tauon decays, it decays via the intermediary step of a weak gauge boson to give a lepton, but when a quark decays it doesn't decay into a lepton via the weak gauge boson, but instead into another quark: http://nige.files.wordpress.com/2010/08/diagram1.jpg. See
http://nige.wordpress.com/2010/05/07/category-morphisms-for-quantum-gravity-masses-and-draft-material-for-new-paper/ and
http://nige.wordpress.com/2010/06/29/professor-jacques-distler-disproves-the-alleged-anomaly-in-beta-decay-analysis/. When you correct this theoretical beta decay analysis error, all of the problems of the Standard Model evaporate and you get a deep understanding (this draft PDF paper is incomplete and underlined Psi symbols should have overbars, but most of the rest of the theory is on other blog posts).

www.quantumfieldtheory.org

“... it comes about that, step by step, and not realizing the full meaning of the process, mankind has been led to search for a mathematical description ... mathematical ideas, because they are abstract, supply just what is wanted for a scientific description of the course of events. This point has usually been misunderstood, from being thought of in too narrow a way. Pythagoras had a glimpse of it when he proclaimed that number was the source of all things. In modern times the belief that the ultimate explanation of all things was to be found in Newtonian mechanics was an adumbration of the truth that all science as it grows towards perfection becomes mathematical in its ideas. ... In the sixteenth and seventeenth centuries of our era great Italians, in particular Leonardo da Vinci, the artist (born 1452, died 1519), and Galileo (born 1564, died 1642), rediscovered the secret, known to Archimedes, of relating abstract mathematical ideas with the experimental investigation of natural phenomena. Meanwhile the slow advance of mathematics and the accumulation of accurate astronomical knowledge had placed natural philosophers in a much more advantageous position for research. Also the very egoistic self-assertion of that age, its greediness for personal experience, led its thinkers to want to see for themselves what happened; and the secret of the relation of mathematical theory and experiment in inductive reasoning was practically discovered. ... It was an act eminently characteristic of the age that Galileo, a philosopher, should have dropped the weights from the leaning tower of Pisa. There are always men of thought and men of action; mathematical physics is the product of an age which combined in the same men impulses to thought with impulses to action.”

- Dr Alfred North Whitehead, An Introduction to Mathematics, Williams and Norgate, London, revised edition, undated, pp. 13-14, 42-43.


Einstein's tensors (second order differential equations) presuppose a classical distribution of matter and a classical, continuously acting acceleration. Einstein and others had problems, in setting up the stress-energy tensor (the gravity charge for causing spacetime curvature) in general relativity, with the fact that all mass and energy is particulate. How do we use a tensor formulation, which can only model a continuous distribution of matter, to represent discrete particles of mass and energy?

Simple: we don't. They average out the density of discrete particle mass-energy in a volume of space by replacing it with the helpful approximation of an imaginary "perfect fluid" which is a continuum, not composed of particles. So all the successes of general relativity are based on lying, averaging out the discrete locations of quanta in a volume, to feed into the stress-energy tensor. If you don't do this lie, general relativity fails completely: for discrete point-like particles in the stress-energy tensor, the curvature takes just two possible values, both of them unreal (zero and infinity!). So general relativity is just a classical approximation, based on lying about the nature of quantum fields and discrete particles!

"In many interesting situations… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighbouring fluid elements is pressure."

- B. Schutz, A First Course in General Relativity, Cambridge University Press, 1986, pp. 89-90.




However, there is one thing that Einstein did do that was a step beyond Newton in general relativity, which is explained well at http://www.mathpages.com/home/kmath103/kmath103.htm:



It is this "trace" term that Einstein had to introduce to make the stress-energy tensor's divergence zero (satisfying the conservation of mass-energy) that makes light deflect twice as much more due to gravity than Newton's law predicts. But as Feynman showed in the final chapter to the second edition (not included in the first edition!) of the second volume of his "Lectures on Physics", this special feature of curved spacetime is simple to understand as being a gravitational field version of the Lorentz-FitzGerald contraction. Earth's radius is contracted by (1/3)MG/c2 = 1.5 millimetres to preserve mass-energy conservation in general relativity. Just as Maxwell predicted displacement current by looking physically at how capacitors with a vacuum for a dielectric allow current to flow through a circuit while they charge up, you don't need a physically false tensor system to predict this. The fact that Maxwell used physical intuition and not mathematics to predict displacement current is contrary to the lying revisionist history at http://www.mathpages.com/home/kmath103/kmath103.htm, the author of which is apparently ignorant of the fact that Maxwell never used vector calculus (which was an innovation due to self-educated Oliver Heaviside, a quarter century later), messed up his theory of light, never unified electricity and magnetism consistently despite repeated efforts, and came up with an electrodynamics which (contrary to Einstein's ignorant claims in 1905 and for fifty years thereafter) is only relativistic for a (non-existent) "zero action" approximation, and by definition fails to be relativistic for all real-world situations (that comprise of small not non-zero actions which vary as a function of the coordinate system and thus motion, and so are not generally invariant). You don't need tensors to predict the modifications to Newtonian gravity that arise when conservation of mass-energy in fields is included; you don't need general relativity to predict the excess radius that causes the apparent spacetime curvature, because a LeSage type quantum gravity predicts that spin-1 gravitons bombarding masses will compress them, explaining the contraction. And a light photon deflects twice as much due to a perpendicular gravity field than slow-moving bullets deflect, because of the Lorentz-FitzGerald contraction of the energy in the light photon: 100% of the energy is in the plane of the gravitational field, instead of just 50% for a bullet. So light photons interact twice as strongly with the gravity field. There is no magic!

Prediction of gravitational time-dilation

When light travels through a block of glass it slows down because the electromagnetic field of the light interacts with the electromagnetic fields in the glass. This is why light is refracted by glass. Light couples to gravitational fields as well as electromagnetic. The gravitational time dilation from the Einstein field equation is proved in an earlier blog post to be simply the same effect. The gravitons are exchanged between gravitational charges (mass/energy). Therefore, the concentration of gravitons per cubic metre is higher near mass/energy than far away. When a photon enters a stronger gravitational field, it interacts at a faster rate with that field, and is consequently slowed down. This is the mechanism for gravitational time dilation. It applies to electrons and nuclei, indeed anything with mass that is moving, just as it applies to light in a glass block. If you run through a clear path, you go faster than if you try to run through a dense crowd of people. There's no advanced subtle mathematical “magic” at work. It’s not rocket science. It’s very simple and easy to understand physically. You can’t define time without motion, and motion gets slowed down by dense fields just like someone trying to move through a crowd.
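For scale, here is the standard weak-field time-dilation factor evaluated at the Earth's surface. This is just the textbook formula, quoted only to show the size of the effect that the graviton-interaction mechanism described above is intended to explain; it is not derived from that mechanism:

```python
import math

# Standard gravitational time-dilation factor sqrt(1 - 2GM/(r c^2)) at Earth's surface,
# shown only to indicate the size of the effect discussed above.
G = 6.674e-11      # m^3 kg^-1 s^-2
M = 5.972e24       # kg, Earth mass
r = 6.371e6        # m, Earth mean radius
c = 2.998e8        # m/s

factor = math.sqrt(1 - 2 * G * M / (r * c**2))
print(f"Clock rate at Earth's surface relative to far away: {factor:.12f}")
print(f"Fractional slowing: {1 - factor:.3e}")   # ~7e-10
```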

Length contraction with velocity, and mass increase by the reciprocal of the same factor, are simply physical effects, as FitzGerald and Lorentz explained. A moving ship has more inertial mass than its own mass, because of the flow of water set up around it (like "Aristotle's arrow": fluid moving out at the bows flows around the sides and pushes in at the stern). As explained in previous posts, the "ideal fluid" approximation for the effect of velocity on the drag coefficient of an aircraft in the 1920s was predicted theoretically to be the factor (1 - v^2/c^2)^(-1/2), where c is the velocity of sound: this is the "sound barrier" theory. It breaks down because although the shock wave formation at sound velocity carries energy off rapidly in the sonic boom, it isn't 100% efficient at stopping objects from going faster. The effect is that you get an effective increase in inertial mass from the layer of compressed, dense air in the shock wave region at the front of the aircraft, and the nose-on force has a slight compressive effect on the aircraft (implying length contraction). Therefore, from an idealized understanding of the basic physics of moving through a fluid, you can grasp how quantum field theory causes "relativity effects"!
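Evaluating that (1 - v^2/c^2)^(-1/2) factor for a few Mach numbers, with c taken as the speed of sound, shows how the "effective inertia" grows near Mach 1 in the sound-barrier analogy above (a toy illustration only):

```python
# The (1 - v^2/c^2)^(-1/2) factor cited above, with c taken as the speed of sound,
# evaluated at a few Mach numbers (v/c ratios).
mach_numbers = [0.5, 0.8, 0.9, 0.99]
for m in mach_numbers:
    gamma = (1 - m**2) ** -0.5
    print(f"Mach {m:4.2f}: factor = {gamma:.2f}")
```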

"… light ... “smells” the neighboring paths around it, and uses a small core of nearby space."

- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.




Above: classical light "wave" illustration from Wikipedia. Most people viewing such diagrams confuse the waving lines, whose axes are labelled with field strengths in a single physical dimension, for field lines waving in three-dimensional space! Don't confuse field strength varying along one axis for a field line waving in two dimensions. It's interesting that field lines are just a mathematical convenience or abstract model invented by Faraday, and are no more real in the physical sense than isobars on weather maps or contour lines on maps. If you scatter iron filings on a piece of paper held over a magnet several times, the absolute positions of the apparent "lines" that the filings clump along occur in randomly distributed locations, although they are generally spaced apart by similar distances. The random "hotspot" locations in which high random concentrations of the first-deposited filings land form "seeds", which - under the presence of the magnetic field - have induced magnetism (called paramagnetism), which attracts further filings in a pole-to-pole arrangement that creates the illusion of magnetic field lines.

This classical theory of light (the diagram is a colour version of the one in Maxwell's original Treatise on Electricity and Magnetism, final 3rd ed., 1873) is wrong: it shows fields along a single, non-transverse dimension: a longitudinal "pencil of light" which violates the experimental findings of the double slit experiment! (If you look, you will see only one spatial direction shown, the z axis! The other two apparent axes are not actually spatial dimensions but just the electric E and magnetic B field strengths, respectively! You can draw a rather similar 3-dimensional diagram of the speed and acceleration of a car as a function of distance, with speed and acceleration plotted as if they are dimensions at right angles to the distance the car has gone. Obfuscating tomfoolery doesn't make the graph spatially real in three dimensions.)

The real electromagnetic photon, needed to explain the double slit experiment using single photons (as Feynman shows clearly in his 1985 book QED), is entirely different from Maxwell's classical photon guesswork of 1873: it is spatially extended in a transverse direction, due to the reinforcement of multiple paths (in the simultaneous sum of histories) whose action is small by comparison to about 15.9% of Planck's constant (i.e., to h-bar, which is h divided by twice Pi). However, this path-integral theory of the light photon is today still being totally ignored in preference to Maxwell's rubbish in the ignorant teaching of electromagnetism. The classical equations of electromagnetism are just approximations valid in an imaginary, unreal world, where there is simply one path with zero action! We don't live in such a classical universe. In the real world, there are multiple paths, and we have to sum all paths. The classical laws are only "valid" for the physically false case of zero action, by which I mean an action which is not a function of the coordinates for motion of the light, and which therefore remains invariant of the motion (i.e. a "pencil" of light, following one path: this classical model of a photon fails to agree with the results of the double slit diffraction experiment using photons fired one at a time).

To put that another way, classical Maxwellian physics is only relativistic because its (false) classical action is invariant of the coordinates for motion. As soon as you make the action a variable of the path, so that light is not a least-action phenomenon but instead takes a spread of actions each with different motions (paths), special relativity ceases to apply to Maxwell's equations! Nature isn't relativistic as soon as you correct the false classical Maxwell equations for the real world multipath interference mechanism of quantum field theory on small scales, precisely because action is a function of the path coordinates taken. If it wasn't a function of the motion, there would simply be no difference between classical and quantum mechanics. The invariance of path action as a false classical principle and its variance in quantum field theory is a fundamental fact of nature. Just learn to live with it and give up worshipping Dr Einstein's special relativity fraud!

Thus, in quantum field theory we recover the classical laws by specifying no change in the action when the coordinates are varied, or as Dirac put it in his 1964 Lectures on Quantum Mechanics (Dover, New York, 2001, pp. 4-5):

"... when one varies the motion, and puts down the conditions for the action integral to be stationary, one gets the [classical, approximately correct on large-scales but generally incorrect on small scales] equations of motion. ... In terms of the action integral, it is very easy to formulate the conditions for the theory to be relativistic [in the real contraction, FitzGerald-Lorentz-Poincare spacetime fabric, emergent relativity mechanism, not Einstein's damnable lies against a quantum field existing in the vacuum; remember Dirac's public exposure of Einstein's damned lies in his famous Nature v168, 1951, pp. 906-7 letter, "Is there an aether?": ‘Physical knowledge has advanced much since 1905, notably by the arrival of quantum mechanics, and the situation has again changed. If one examines the question in the light of present-day knowledge, one finds that the aether is no longer ruled out by relativity, and good reasons can now be advanced for postulating an aether. . . . Thus, with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’!]: one simply has to require that the action integral shall be invariant. ... [this] will automatically lead to equations of motion agreeing with [Dirac's aether-based] relativity, and any developments from this action integral will therefore also be in agreement with [Dirac's aether-based] relativity."


Classical physics corresponds falsely to just the path of least action, or least time, whereas real ("sum over multiple path interference") physics shows us that even in simple situations, light does not just follow the path of least action, but the energy delivered by a photon is actually spread over a range of paths with actions that are small compared to h-bar, but are not zero! There is a big difference between a path having zero action and a spread of paths having actions which are not zero but merely small compared to h-bar! This "subtle" difference (which most mathematical physicists fail to clearly grasp even today) is, as Feynman explained in his 1985 book QED, the basis of the entirely different behaviour of quantum mechanics from the behaviour of classical physics!
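A toy numerical illustration of this point: a sum of phasors exp(iS/h-bar) over "paths" whose extra action grows as the square of the deviation from the least-action path. The quadratic model and the numbers are assumptions for illustration only, not a real path integral; the point is that as the action scale grows relative to h-bar, only the small-action core adds coherently and the rest cancels:

```python
import cmath

# Toy illustration of path-integral phase cancellation (illustration only):
# each "path" is labelled by a deviation d from the least-action path, with extra
# action k*d^2 (in units of hbar). Phasors exp(i*S/hbar) are summed.
hbar = 1.0

def phasor_sum(k, deviations):
    """Sum exp(i * k * d^2 / hbar) over the listed path deviations."""
    return sum(cmath.exp(1j * k * d**2 / hbar) for d in deviations)

deviations = [i * 0.01 for i in range(-500, 501)]   # paths from d = -5 to d = +5

for k in [0.1, 1.0, 10.0, 100.0]:
    total = phasor_sum(k, deviations)
    print(f"k = {k:6.1f}: |sum of phasors| = {abs(total):8.1f} out of {len(deviations)} paths")
```

As k increases (actions large compared to h-bar), the magnitude of the resultant drops sharply, because only the narrow core of near-least-action paths still adds coherently.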

We have experimental evidence (backed up with a theory which correctly predicts observed force couplings) that the force-causing off-shell radiation of the vacuum isn't a one-way inflow, but is falling into the event horizon radius of a fundamental particle, then being re-emitted in the form of charged (off-shell) Hawking exchange radiation. The reflection process is in some sense analogous to the so-called normal reflection of on-shell light photons by mirrors, as Feynman explained in QED in 1985. Light isn't literally reflected by a mirror: as Feynman showed by graphical illustration of path integral phase amplitude summation in QED (1985), light bounces off a mirror randomly in all directions, and all paths of large action have random phase amplitudes which cancel one another out, leaving just paths with small path actions to add together coherently. The path integral for off-shell virtual photons (gauge bosons) is exactly the same. They go everywhere, but the net force occurs in the direction of least action, where their phases add together coherently, rather than cancelling out at random! The effective reflection of similarly charge-polarized gauge bosons between similar charges is just the regular exchange process as depicted in basic (non-loopy) Feynman diagrams for fundamental interactions.






Above: electric fields carry electric charge (nobody has ever seen the core charge of an electric field), although this is contrary to the mainstream reasoning based on historical accident, which assumes that the virtual photons of electromagnetism are electrically neutral and are distinguished for positive and negative fields by some magical, unobservable extra polarizations! It's obvious that charged massless exchange radiation can propagate simultaneously along paths in opposite directions (although it can't go along a one-way path only, due to infinite magnetic self-inductance at light velocity!), because of the cancellation of the superimposed magnetic field vectors as shown in the diagram above (for a theoretical introduction, see the linked paper here, although note that underlining of Psi symbols should be overbars).

Wow. You'd think this would be immediately taken up in education and the media and explained clearly to the world, wouldn't you? No chance! What's wrong is that Feynman's 1985 book QED is simply ignored. When Feynman first tried to publish his simple "Feynman diagrams" and his multipath interference theory of quantum mechanics at the Pocono conference in 1948, he was opposed bitterly by the old 1st quantization propagandists like Niels Bohr, Oppenheimer, Pauli, and many others. They thought he didn't understand Heisenberg's uncertainty principle! They hated the idea of simple Feynman diagrams to guide physical understanding of nature by allowing the successive terms in the path integral's perturbative expansion to be given a simple physical meaning and mechanism, in place of obscure, obfuscating guesswork pseudo-mathematical physics. They hated progress then. People still do!

In 2002 and 2003 I wrote two papers in the Electronics World journal (thanks to the kind interest or patience of two successive editors) about a sketchy quantum field theory that replaces, and makes predictions way beyond, the Standard Model. Now in 2011, we can try an alternative presentation to clarify all of the technical details, not by simply presenting the new idea, but by going through errors in the Standard Model and general relativity. This is because, after my articles had been published and attacked with purely sneering ad hominem "academic" non-scientific abuse, Leslie Green then wrote a paper in the August 2004 issue of the same journal, called "Engineering versus pseudo-science", making the point that any advance that is worth a cent by definition must conflict with existing well-established ideas. The whole idea that new ideas are supplementary additions to old ideas is disproved time and again. The problem is that the old false idea will be held up as some kind of crackpot evidence that the new idea must be wrong. Green stated in his paper:

"The history of science and technology is littered with examples of those explorers of the natural world who merely reported their findings or theories, and were vehemently attacked for it. ... just declaring a theory foolish because it violates known scientific principles [e.g. Aristotle's laws of motion, Ptolemy's idea that the sun orbits the earth, Kelvin's stable vortex atoms of aether, Einstein's well-hyped media bigotry - contrary to experimental evidence - that quantum field theory is wrong, Witten's M-theory of a 10 dimensional superstring brane on an 11 dimensional supergravity theory, giving a landscape 10500 parallel universes, etc.] is not necessarily good science. If one is only allowed to check for actions that agree with known scientific principles, then how can any new scientific principles be discovered? In this respect, Einstein's popularisation of the Gedankenexperiment (thought-experiment) is potentially a backward step."


So the only way to get people to listen to facts is to kill the rubbish holding them back from being free to think about a vital innovation that breaks past the artificial barriers imposed by mainstream groupthink ideology and its suppressive and corrosive treason to the cause of genuine scientific advance and human progress in civilization.



Fig. 1a: the primary Feynman diagram describing a quantum field interaction with a charge is similar for mathematical modelling purposes for all of the different interactions in the Standard Model of particle physics. The biggest error in the Standard Model is the assumption that the physically simplest or correct model for electromagnetism is an Abelian gauge theory in which the field is mediated by uncharged photons, rather than a Yang-Mills theory in which the field carries charge. This blog post will explain in detail the very important advantages to physics to be obtained by abandoning the Abelian theory of electromagnetism, and replacing it by a physically (but not mathematically) simpler Yang-Mills SU(2) theory of electromagnetism, in which the massless field quanta can be not merely neutral, but can carry either positive or negative electric charge. (Source: Exchange Particles internet page. For clarity I've highlighted an error in the direction of an arrow in the weak interaction diagram; this is of course nothing to do with the error in electromagnetism which I'm describing in this post.)

Note also the very important point for high-energy physics: where particles approach very closely, into field strengths exceeding Schwinger's 1.3 x 10^18 volts/metre cutoff for vacuum fermion pair production, spacetime annihilation and creation "loops" appear, as shown on a Feynman diagram; these have been excluded from these simplified diagrams. In understanding the long range forces pertinent to the kind of low energy physics we see everyday, we can usually ignore spacetime loops in Feynman diagrams, because in QED the biggest effect for low energy physics is from the simplest Feynman diagram, which doesn't contain any loops. The general effect of such spacetime loops due to pair-production at high energies is called "vacuum polarization": the virtual fermions suck in energy from the field, depleting its strength, and as a result the average distance between the positive and negative virtual fermions is increased slightly owing to the energy they gain from their polarization by the field. This makes them less virtual, so they last slightly longer than predicted by Heisenberg's uncertainty principle, before they approach and annihilate back into bosonic field quanta. While gaining extra energy from the field, they modify the apparent strength of the charge as seen at lower energies or longer distances, hence the need to renormalize the effective value of the charge for QFT calculations, by allowing it to run as a function of energy. In QCD there is gluon antiscreening, which we explained in previous posts is due to the creation of gluons to accompany virtual hadrons created by pair production in very strong electric fields, so the QCD running coupling at the highest energies runs the opposite way to the QED running coupling. Field energy must be conserved, so the QED field loses energy, the QCD field gains energy, hence asymptotic freedom for quarks over a certain range of distances. This total field energy conservation mechanism is completely ignored by QFT textbooks! As the virtual fermions gain some real energy from the field via the vacuum polarization, they not only modify the apparent charge of the particle's core, but they also get modified themselves. On-shell fermions obey the Pauli exclusion principle. Thus, the virtual fermions in strong fields can actually start to become structured like electron shells around the particle core. This mechanism for vacuum structuring, as shown in earlier blog posts, gives rise to the specific discrete spectrum of fundamental particle masses, a fact that has apparently led to the repeated immediate deletion of arXiv-submitted papers, due to ignorance, apathy, and hostility of mainstream physicists towards checkable, empirically based mechanisms in QFT. Elitist superstring theorists preach (off the record, on Dr Lubos Motl's superstring theory blog, or in anonymous sneering comments) that all of this progress is merely "heuristically based" physics, that such experimentally guided theory makes them sick, is mathematically naive, inelegant or repulsive, and that it would "just" reduce physics to a simple mechanical understanding of nature that the person in the street could grasp and understand. (Amen to that last claim!)
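The Schwinger cutoff quoted above is easy to reproduce from the standard formula E = m_e^2 c^3 / (e h-bar); a minimal check:

```python
# Schwinger critical field E = m_e^2 c^3 / (e * hbar), the ~1.3e18 V/m figure cited above.
m_e  = 9.109e-31    # kg, electron mass
c    = 2.998e8      # m/s
e    = 1.602e-19    # C, elementary charge
hbar = 1.055e-34    # J s

E_crit = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field ~ {E_crit:.2e} V/m")   # ~1.3e18 V/m
```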



Fig. 1b: Maxwell's equations (Maxwell wrote them as long-hand first-order differential terms summarizing the laws of Gauss, Ampere and Faraday, with the addition of his own, now textbook-obfuscated, law of "displacement current" through the aether for the vital case of open circuits, e.g. the net energy transfer through space from accelerating and decelerating currents in the plates of a charging or discharging capacitor which has a vacuum as its "dielectric"; the advanced curl and div operator notation was introduced by self-taught mathematical physicist Oliver Heaviside) contradict reality experimentally in what is called the Aharonov–Bohm effect (or Ehrenberg–Siday–Aharonov–Bohm effect). The failure of Maxwell's equations is their neglect of energy in fields in general, and their neglect of the conservation of energy in supposedly "cancelled" fields in particular! E.g., inside a block of glass through which light travels, there is positive electric field energy density from atomic nuclei and negative electric field energy density from orbital electrons. The two fields superimpose and neatly "cancel", leaving no effect according to Maxwell's equations (which don't predict the variation of relative permittivity as a function of "cancelled" fields!). So why does light slow down and thus refract in glass? Answer: the energy density of the "cancelled" electric fields is still there, and "loads" the vacuum. The photon's electromagnetic field interacts with the electromagnetic energy in the glass, and this slows it and can deflect its direction. All you can do with Maxwell's equations to allow for this is to make an ad hoc modification to the permittivity of the vacuum, fiddling with the "constants" in the equation to make it agree with experiments! The same effect applies to magnetic fields, as the experimental confirmation of the Aharonov–Bohm effect proves. To correct Maxwell's equations, we replace them with a similarly first-order but more comprehensive "field potential" vector which includes a term that allows for the energy of cancelled fields in the vacuum. Note, however, that this modification to Maxwell's equations under some conditions leads to conflicts with "special relativity". E.g., if the zero point vacuum itself is viewed as consisting of "cancelled" field energy, by analogy to a block of glass, then the modified Maxwell equations no longer necessarily necessitate the principle of special relativity, but under some circumstances necessitate absolute motion instead. This fact is usually obfuscated either to defend mathematical mysticism in theoretical physics, or to "protect Einstein's authority", much as people used to reject Newton's laws in deference to the more-ancient "authority" of Aristotle's laws of motion.

The problem that the zero-point electromagnetic energy in the vacuum might constitute an absolute frame of reference due to gravitational effects is clearly stated by Richard P. Feynman and Albert R. Hibbs, Quantum Mechanics and Path Integrals, Dover, New York, corrected edition, 2010, page 245:

"... if we were to sum this ground-state energy over all of the infinite number of possible modes of ever-increasing frequency which exist even for a finite box, the answer would be infinity. This is the first symptom of the difficulties which beset quantum electrodynamics. ... Suppose we choose to measure energy from a different zero point. ... Unfortunately, it is really not true that the zero point of energy can be assigned completely arbitrarily. Energy is equivalent to mass, and mass has a gravitational effect. Even light has a gravitational effect, for light is deflected by the sun. So, if the law that action equals reaction has qualitative validity, then the sun must be attracted by the light. This means that a photon of energy {h-bar}*{omega} has a gravity-producing effect, and the question is: Does the ground-state energy term {h-bar}*{omega}/2 [this assumes two modes per k] also have an effect? The question stated physically is: Does a vacuum act like a uniform density of mass in producing a gravitational field?"


On page 254, they point out that if the charged and neutral Pi mesons differ only in charge, then their observed differences in mass (the charged Pi meson has a greater mass than the neutral Pi meson) implies that this extra mass in the case of a charged particle comes from "the different way they couple to the electromagnetic field. So presumably the mass difference ... represents energy in the electromagnetic field." Using the same cutoff that works here for the electromagnetic field of an electron, on page 255 they find that the corresponding correction to the mass of the electron for electromagnetic field interactions "is only about 3 percent, but there is no way to test this, for we do not recognize a neutral counterpart to the electron." As we pointed out since 1996, there are two separate long-range zero-point fields in the vacuum: gravitational (gravitons) and electromagnetic (off-shell photons), with very different energy densities due to the factor of 10^40 difference in their long-distance couplings (the coupling at the low-energy IR cutoff limit, i.e. asymptotic limit of the running coupling that is valid for the low-energy physics domain, below ~1 MeV kinetic energy). The confusion in the value of the pseudo "cosmological constant" from the zero point vacuum comes from confusing the immense electromagnetic field energy density of the vacuum for the relatively tiny gravitational field energy density of the vacuum. It is the latter, manifested (as we proved effectively in 1996) by spin-1 gravitons, which causes the small observed cosmological acceleration of the universe, a ~ Hc. This is so because electric charge comes in two forms which balance, preventing long-range electromagnetic forces in the universe, whereas all observed gravitational charge has the same single sign and cannot cancel out. Gravitation thus pushes the matter apart (over long distances), causing cosmological acceleration. (On relatively small distance scales, the shielding of an observer by the presence of a relatively nearby mass, from the immense convergence of exchange gravitons with the surrounding isotropic universe, pushes the observer towards the nearby mass. The details of this have been carefully checked and confirmed to experimental accuracy!)
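For scale, the cosmological acceleration a ~ Hc referred to above evaluates as follows, taking H0 ~ 70 km/s/Mpc as a ballpark assumption for illustration only:

```python
# The cosmological acceleration a ~ H*c claimed above, for an assumed H0 ~ 70 km/s/Mpc.
H0_km_s_Mpc = 70.0
Mpc_in_m = 3.086e22
H0 = H0_km_s_Mpc * 1000 / Mpc_in_m    # Hubble parameter in s^-1
c  = 2.998e8                          # m/s

a = H0 * c
print(f"H0 = {H0:.2e} s^-1")
print(f"a ~ H0 * c = {a:.1e} m/s^2")   # of order 1e-9 m/s^2
```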



Fig. 1c: the SU(2) Yang-Mills field strength equation for electromagnetism utilizing massless charged field quanta reduces to the Maxwellian U(1) equation (equivalent to uncharged gauge bosons) under all necessary conditions, because of the motion-denying magnetic self-inductance of charged massless field quanta of SU(2). Note that the transfer of electric charge by Yang-Mills gauge bosons is not unaccompanied by a force. The charged gauge bosons carry both force-causing energy and charge. SU(2) includes one neutral boson as well as two charged bosons, so the neutral boson can deliver forces without carrying charge. SU(2) is thus a rich mathematical theory that can do a lot, and it is tempting with massless exchange radiation to attribute the neutral boson to the graviton and the charged ones to electromagnetism (with left-handed interacting massive versions also existing to produce weak interactions). An additive "drunkard's walk" of charged massless gauge bosons between the ~10^80 real fermion pairs in the universe produces a path integral resultant that "conveniently" predicts the low-energy electromagnetism coupling IR limit to be (~10^80)^(1/2) = ~10^40 times stronger than gravitation, because the neutral bosons (gravitons) don't undergo such an additive path integral! However, the theory is stronger than such superficial conveniences suggest, because it also predicted, two years ahead of observation, the correct observed cosmological acceleration of the universe, and vice-versa, it predicts the observed gravitational coupling (not using the 10^40 factor just mentioned). It turns out that the simplest fully-consistent theory of nature has the graviton emerge from U(1) hypercharge which mixes with the neutral massless gauge boson of SU(2). Ignorant critics may claim that this correct limit proves that the SU(2) model is unnecessary under Occam's Razor, since for most cases it reduces to U(1) for practical calculations in electromagnetism, but this is a false criticism. The SU(2) electromagnetic theory is necessary to properly understand the relationship between electromagnetism and weak interactions (only left-handed interacting spin field quanta effectively acquire mass and partake in weak interactions)! The Abelian U(1) theory is a hypercharge which - when mixed with SU(2) - gives rise to the masses of the weak field quanta and also gives rise to a neutral field quantum, a spin-1 graviton. This is necessary. The spin-1 graviton pushes masses together, and this was falsely rejected by Pauli and Fierz in 1939 on the basis of a hidden implicit assumption which has been proved false. The currently fashionable claim that, because Maxwell's equations are rank-1 tensors and general relativity's Ricci tensor curvature is rank-2, electromagnetic field quanta are spin-1 and gravitons are spin-2, is a complete fraud; it is an expression of the most puerile physical and mathematical confusion between physical reality and the different mathematical models that can be used to represent that physical reality. We can, for instance, express electromagnetic forces in terms of rank-2 curvature equations. We don't, not because this is the "wrong" thing to do, but because it is unnecessary, and it is far more convenient to use rank-1 equations (divs and curls of Faraday's "field lines").
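The "drunkard's walk" scaling invoked above (the resultant of N randomly phased unit contributions grows roughly as the square root of N, so ~10^80 charges give a factor of ~10^40) is easy to demonstrate with a toy Monte Carlo. This only illustrates the random-walk statistics, not the field theory itself:

```python
import random, math

# Monte Carlo illustration of random-walk scaling: the resultant of N randomly
# oriented unit steps grows roughly as sqrt(N). (Toy demonstration only.)
def random_walk_resultant(n_steps):
    x = y = 0.0
    for _ in range(n_steps):
        angle = random.uniform(0, 2 * math.pi)
        x += math.cos(angle)
        y += math.sin(angle)
    return math.hypot(x, y)

random.seed(1)
for n in [100, 10_000, 1_000_000]:
    r = random_walk_resultant(n)
    print(f"N = {n:9d}: resultant ~ {r:10.1f}, sqrt(N) = {math.sqrt(n):10.1f}")
```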

Regarding mathematics being confused for reality, the great Eugene Wigner in 1960 published a paper called "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", Communications in Pure and Applied Mathematics, vol. 13, No. 1. It's mainly hand-waving groupthink fashion: a "not even wrong" confusion between reality and continuously-evolving mathematical models that are just approximations, e.g. differential equations for wavefunctions in quantum mechanics implicitly assume that wavefunctions are continuously variable - not discretely variable - functions, which disagrees with the physical premise of quantum field theory, namely that every change in the state of a particle is a discrete event! (This definition of a change of "state" as being a discrete change doesn't include purely rotational phase amplitudes due to the spin of a particle which has a transverse polarization; the wavefunction for phase amplitude will be a classical-type continuous variable, but other properties such as accelerations involve forces which in quantum field theory are mediated by discrete particle interactions, not continuous variables in a spacetime continuum.)

The first false specific claim that Wigner makes in his paper is his allusion, very vaguely (the vagueness is key to his confusion), to the fact that the integral of the Gaussian distribution, exp(-x^2), over all values of x between minus infinity and plus infinity, is equal to the square root of Pi. He feels uneasy that the square root of Pi (Pi being the ratio of the circumference to the diameter of a circle) is the result of a probability distribution. However, he ignores the fact that there is -x^2 in the natural exponent, so this is a natural geometric factor. If x is a scaled distance, then x^2 is an area, and you're talking geometry. It's no longer simply a probability that is unconnected to geometry. For example, the great RAND Corporation physicist Kahn, in Appendix I to his 1960 thesis on deterrence, On Thermonuclear War, shows that the normal or Gaussian distribution applies to the effect of a missile aimed at a target; the variable x is the ratio of the distance from intended ground zero to the CEP standard error distance for the accuracy of the missile. We see in this beautiful natural example of falling objects hitting targets that the Gaussian distribution is implicitly geometric, and it is therefore no surprise that its integral should contain the geometric factor of Pi.

The circular area that objects fall into is the product Pi*r^2 where r is radius, which is directly proportional to the scaled radius, x. This is mathematically why the square root of Pi comes out of the integral of exp(-x^2) over x from minus to plus infinity (i.e., over an infinitely extensive flat plane, that the objects fall upon). Quite simply, the Gaussian distribution law fails to include the factor Pi in its exponent, so you get the square root of Pi coming out of the integral (thus the square root of Pi is the normalization factor for the Gaussian distribution in statistics). If only the great Gauss in 1809 had half a brain and knew what he was doing, he'd have included the Pi factor in the exponent, giving an integral output of 1, so we wouldn't get the fictitious square root of Pi! The Gaussian or normal distribution is just the plain old negative exponential distribution of coin-tossing, with relative area as its variable! It's therefore simply the insertion of area as the variable that introduces Pi (either directly in the exponent, or else as the square root of Pi in the integral result and related normalization factor). The error of Wigner was in not recognising that the square of dimensionless relative radius, x^2, needs to be accompanied by the equally dimensionless geometric factor Pi, in the negative exponent. It is a classic error of theoretical physicists to believe, on the basis of a mistaken understanding of dimensional analysis, that dimensionless geometric conversion factors like Pi only apply to dimensionful, absolute distances or areas, not to relative distances or areas. In fact, factors like Pi obviously also apply to dimensionless relative measures of distance or area, because it is self-evident that if the radius of a circle is one dimensionless unit, then its area is obviously Pi dimensionless units, and not one dimensionless unit, as confused people like Gauss and Wigner believed with their obfuscating formula for the normal distribution!

Statistician Stephen M. Stigler (best known for Stigler's law of eponymy) first suggested replacing the Gaussian distribution exp(-x^2) with exp(-Pi*x^2) in his 1982 paper, "A modest proposal: a new standard for the normal", The American Statistician v36 (2). However, Stigler was too modest and therefore failed to make the point with sufficient physical force to get the world's mathematics teachers and users to dump Laplace's and Gauss's obfuscating, fumbling nonsense and make statistics physically understandable to clear-thinking students. So even today, Wigner's lie continues to be believed by the fashionable groupthink ideology of pseudo-mathematical physics prevailing in the world, as the following illustration indicates (note that the hoax began with Laplace, who infamously claimed that God was an unnecessary hypothesis in his crackpot mathematics!!!):
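Both normalisations are trivial to check numerically; a minimal sketch:

```python
import math

# Numerical check of the two normalisations discussed above:
#   integral of exp(-x^2)      over all x  = sqrt(Pi)
#   integral of exp(-Pi * x^2) over all x  = 1   (Stigler's suggested form)
def integrate(f, a=-10.0, b=10.0, n=200_000):
    """Simple trapezoidal rule; the tails beyond |x| = 10 are negligible here."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

print(integrate(lambda x: math.exp(-x**2)), "vs sqrt(Pi) =", math.sqrt(math.pi))
print(integrate(lambda x: math.exp(-math.pi * x**2)), "vs 1")
```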



Wigner also ignores the fact that the mathematical concept of Pi is ambiguous in physics because of excess radius of mass in general relativity; general relativity and quantum gravity predict that around a spherical mass M, its radius shrinks by excess radius (1/3)GM/c^2 metres, but the transverse direction (circumference) is unaffected, thus varying Pi unless there is curved spacetime. Since curved spacetime seems to be a classical large-scale approximation incompatible on the deeper level with quantum fields, where all actions consist not of continuously variable differential equations but rather of a series of discrete impulsive particle interactions, it appears that the "excess radius" effect proves that the mathematical textbook value of Pi is wrong, and the real value of Pi is a variable quantity, which is the effect of the gravitational field warping spatial dimensions. Wigner simply ignores this mathematical failure of Pi, implicitly assuming that the textbook formula is correct. Actually, nobody verified the textbook formula precisely to more than a few significant figures, and since gravity is so small, the variation in Pi is small. So the point remains: mathematics has nothing to do with physics, beyond constituting a puerile tool or model for imperfect but often helpful calculations, and is a danger in leading to arcane worship as an alternative to religion, a problem that goes back to the very roots of mathematics in the Egyptian priesthood and in the Greek Pythagorean cult.

The failure of mathematics to make deterministic predictions precisely even for classical systems like the collision of three balls in the "three body problem" which is beyond Newton's laws, shows this mathematical failure so very clearly. Newton only came up with laws of motion that are deterministic when applied to an artificially simplistic situation which never really occurs precisely in our universe! Even if you tried to collide two balls in the vacuum of space, particles of radiation would affect the outcome! Nature isn't mathematical! It's physical. So Pi isn't really the ratio of the circumference of a circle to its diameter; that's only an approximation!

Wigner's "mathematical reality" ideology nearly cost America the vital Nagasaki plutonium bomb that finally convinced Japan to agree to a conditional surrender without a horrific million plus casualties in an invasion of Japan, after Hiroshima and the Russian declaration of war against Japan failed. Wigner designed the plutonium production reactors but arrogantly tried to prevent the engineers from enlarging the core size to allow for unknowns. He raged that the engineers were ignorant of the accuracy of the cross-sections for fission and the accuracy of the mathematical physics of nuclear chain reactions, and were delaying plutonium production by insisting on bigger reactor cores than were needed. After the first reactor started up, it shut itself down a few hours later. Wigner's data on the 200 fission products had been incomplete, and it turned out that some fission products like Xe-135 had large cross-sections to absorb neutrons, so after a few hours enough had been produced to "poison" the chain reaction. It was only because the engineers had made the cores bigger than Wigner specified, knowing that mathematical physics predictions are often wrong, that they were able to overcome the poisoning by adding extra uranium to the core to keep it critical!



Fig. 1d: he was unable to understand the immoral perils of relativism in blocking progress in physics, and was unable to understand the simplicity of physical mechanisms for fundamental forces, but at least Einstein was able to make the equations look pretty and attractive to the children who have only learned to count up to the number three, and who like patterns and very simple equations (a PDF version of the above table is linked here, since I can't easily put Greek symbols into html blog posts that will display correctly in all browsers; notice that the top-left to bottom-right diagonal of zero terms is the trace of the tensor, which is zero in this case). Actually, using the field tensor formulation to represent the various components of electric and magnetic fields is quite a useful - albeit usually obfuscated - reformulation of Maxwell's equations. However, mathematical models should never be used to replace physical understanding of physical processes, e.g. by deliberate attempts to obfuscate the simplicity of nature. If you're not blinded by pro-tensor hype, you can see an "anthropic landscape" issue very clearly with Einstein's tensor version of Maxwell's equations in this figure: the field strength tensor and its partial derivative are indeed capable of modelling Maxwell's equations. But only in certain ways, which are "picked out" specifically because they agree with nature. In other words, it's just ad hoc mathematical modelling; it's not a predictive theory. If you chisel a beautiful woman out of marble, all well and good; but you are a liar if you claim she was already in the marble waiting to be chiselled out. Your chisel work created the statue: it's not natural. Similar arguments apply to mathematical modelling in Maxwell's theory!

(On the subject of Einstein's relativism worship as an alternative to religion, see the earlier post linked here. While many liars still try to "defend" relativism by claiming falsely that proponents of quantum field theory are racists out to gas Jews, the sad fact is precisely the opposite: Einstein tried to get a handful of Jews out of Germany, including Leopold Infeld, but his popular relativism helped Professor Cyril Joad attack Winston Churchill's call for an arms race with the Nazis in the early 1930s, making it politically unacceptable to the nation, and thus weakening the hand of the already weak-brained Prime Minister at the Munich watershed in September 1938. E.g., Joad was standing at the back of one of Churchill's popular lectures. Churchill made the point that we could deter Hitler by having an arms race. Joad then stood up and "innocently" asked Churchill "whether this advice was what he would tell the enemy", triggering cheers and applause and media criticism of Churchill. It is certainly true that if everything were relative with no absolute truth and no absolute distinction between good and evil, Churchill's advice is rubbish. This relativism, however, is not the case in morality, any more than in light velocity under a real FitzGerald-Lorentz contraction. Joad's popular deceit led to millions of unnecessary deaths, as Kahn proved in 1960. Joad's successors simply attacked Kahn while ignoring the facts, and then tried the same error of relativism during the Cold War with the Soviet Union. "The people suffering in the Soviet Union had a right to be free to be forced by the KGB to live under Soviet communism, just as we are free to have "a different system of government", you see! Relatively speaking, neither side was right, and it was just "playground politics" to have a Cold War instead of sensibly disarming to ensure peace and safety from the horrible risk of deterring invasions, you see!" After President Nixon's Watergate scandal and failure in Vietnam, to deflect media attacks from Nixon, America began to press ahead with negotiations with the Soviet Union for SALT treaties just when the Soviet threat was reaching parity with the Western arms stockpile, and when Soviet civil defense was being transferred from civilian control to military control with vastly increased spending. If the arms race had been stopped, the Soviet Union might have survived instead of going effectively bankrupt when Reagan manipulated oil prices in the 1980s. In 1975, America signed the Helsinki Act, for the first time agreeing to the borders of the Soviet Union and its Warsaw Pact in Europe. This officially handed over those countries and people to Soviet control. After it was signed, the Chairman of the Soviet KGB (secret police), Yuri Andropov, stated in a letter to the Soviet Central Committee on 29 December 1975: "It is impossible at present to cease criminal prosecutions of those individuals who speak out against the Soviet system, since this would lead to an increase in especially dangerous state crimes and anti-social phenomena." Einstein's "peaceful co-existence" propaganda was a falsehood. How on earth can anyone surrender to such lying relativist evil?)



Fig. 1e: clever field strength tensor in SO(3,3): Lunsford using 3+3d obtains the Pauli-Lubanski vector for particle spin, hence obtaining a quantum phenomenon from classical electrodynamics! The quantum number of particle spin is crucial to classical physics because, as we shall see, it determines how the phase amplitudes of paths with different actions vary. The quantum path with least action in the path integral has the classical equations of motion. The other paths are excluded due to spin-related phase amplitude cancellation. It's really that simple! Bosons with spin-1 are transformed by Dirac-Anderson pair-production into pairs of spin-1/2 fermions (the charged radiations in pair-production are trapped in loops by gravitation, thus giving the black hole event horizon cross-sectional area for quantum gravity interactions, which is confirmed by empirically-checked quantum gravity calculations, and which allows the magnetic field of any portion of the loop to be cancelled by the magnetic field from the opposite side of the loop which has the opposite direction, allowing stable spin without self-inductance issues; this is shown in my 2003 Electronics World paper), so just as fermions combine at low temperatures into a Bose-Einstein condensate composed of Cooper pairs of electrons (or other fermions) that together behave like a frictionless, superconducting, low-viscosity boson, so too a spin-1 boson of radiation at any temperature is physically equivalent to a superposition of two spin-1/2 fermion-like components. (Higher temperatures cause random Brownian motion with enough energy to break up the delicate Cooper pair spin-coupling, thus preventing superconductivity, etc.)

Fermion amplitudes during scatter subtract, while boson amplitudes add together with a positive sign, because of the superposition of the magnetic field self-induction vectors that are the consequence of spinning charges! (This rule applies to the scatter of similar particles in similar spin states with one another, not to unpolarized beams.) It is related to the Pauli exclusion principle, because Pauli stipulated that no two fermions with the same set of quantum numbers can exist in the same location; in a sense, therefore, the Pauli exclusion principle (only an empirically confirmed principle, not a mechanism or really deep explanation) causes fermions with originally similar sets of quantum numbers to change their states when they approach closely enough to interact. Bosons don't obey Pauli's exclusion principle, so they don't need to change their states when they scatter! This problem is discussed - but its simple solution is ignored - by Feynman in the Lectures on Physics, v3, p.4-3:

"We apologise for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two [boson and fermion interaction amplitude sign rules] must necessarily go together, but we have not been able to find a simple way of reproducing his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply [for particles with identical spin states: fermion scattering amplitudes subtract in scatter, but boson scattering amplitudes add with a positive sign], but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics [QFT]. This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world."

(But don't be fooled. Just because Feynman said that, doesn't prove that peer-reviewers and journal editors are interested in the nurture and publication of deep-explanations to long-existing problems. Instead, the situation is the exact opposite. The longer an anomaly or "issue" has existed, the better the textbook authors learn to live with it, to camouflage it behind a wallpaper of obfuscating symbolism, and to reinterpret it as a badge of pride: "nobody understands quantum mechanics". This is spoken with the "nobody" snarled as a threat accompanied by a motion of the hand towards the bulging holster, after you have just explained the answer! Progress comes from change, which is violently opposed by bigots. Niccolò Machiavelli, The Prince (1513), Chapter 6: "And let it be noted that there is no more delicate matter to take in hand, nor more dangerous to conduct, nor more doubtful in its success, than to set up as the leader in the introduction of changes. For he who innovates will have for his enemies all those who are well off under the existing order of things, and only lukewarm supporters in those who might be better off under the new." The struggle for progress against the vested interests of the status quo is called politics, and the extension of politics against unreasonable opponents who won't really listen or actually try to block progress is, as Clausewitz defined it, war: "War is not merely a political act, but also a real political instrument, a continuation of political commerce, a carrying out of the same by other means.")

Danny Ross Lunsford's magnificent paper Gravitation and Electrodynamics over SO(3,3) overcame the hurdles required to unify gravitation and electrodynamics dynamically, making confirmed predictions (unlike the reducible gravitation-electrodynamics unification ideas of 4+1d Kaluza-Klein, Pauli, Einstein-Mayer, and Weyl; Pauli showed that "any generally covariant theory may be cast in Kaluza's form", hence the mindless and fruitless addition of 6/7 extra spatial dimensions in "not even wrong" string theory), but despite acceptance and publication in a peer-reviewed journal (International Journal of Theoretical Physics, Volume 43, Number 1, 161-177), and despite supplying the required arXiv endorsement, his brilliant paper was mindlessly removed from the string-theory-dominated arXiv (U.S. Government part-funded) pre-print server, thus denying its circulation via the accepted mainstream electronic route to physicists around the world. Lunsford's great idea is very easy to summarize: there is not one time dimension but three, making a total of three spatial and three time dimensions. In other words, spacetime is symmetric, with one timelike dimension per spatial dimension.

One way to grasp this is to note that the age of the universe can be deduced (since the universe has been found to have a flat overall geometry, i.e. dark energy offsets the gravitational curvature on large scales) from the redshift of the universe, which gives the Hubble parameter: the age of the universe is the reciprocal of that parameter. Since we build geometry on the basis of 90 degree angles between spatial dimensions, we have three orthogonal dimensions of space, SO(3). Measuring the Hubble constant in these 3 orthogonal dimensions, by pointing a telescope in the three 90-degree different directions and measuring the redshift-distance Hubble parameter in each of them, would give 3 separate ages for the universe, i.e. 3 time dimensions! Obviously, if we happen to see isotropic redshift, all 3 age measurements for the universe will be similar, and we will live under the delusion that there is only one time dimension, not three. But in reality, there may be a simple reason why the universe has an isotropic expansion rate in all directions, and thus why time appears to have only one discernible dimension: nature may be covering up two time dimensions by making all time dimensions appear similar to us observers. If this sounds esoteric, remember that unlike string theorists who compactify 6/7 unobservable extra spatial dimensions, creating a landscape of 10^500 metastable vacua, Lunsford's SO(3,3) is the simplest possible and thus the best dynamical electromagnetic-gravitational unification according to Occam's razor. Lunsford proves that the SO(3,3) unification of electrodynamics and gravitation eliminates the spurious "cosmological constant" from general relativity, so that the "dark energy" causing the acceleration must be spin-1 repulsive quantum gravity, just as we predicted in 1996 when predicting the small but later measured acceleration of the universe, a ~ Hc. (That prediction was published via Electronics World, October 1996, p. 896, and also Science World, ISSN 1367-6172, February 1997, after the paper had been rejected by the so-called "peer-reviewers" who censor predictive theories from publication in CQG, Nature, et al., for "being inconsistent with superstring theory", an as yet "unconfirmed speculation"; after the confirmation, they simply gave no reason for rejection when repeated submissions were made! Unfortunately, just like those mainstream bigots, IC - despite claiming to champion progress, and despite my efforts to write about his work which culminated in publications - has never in fifteen years agreed to host a single discussion of QFT on his website, nor in his numerous scientific publications, but instead, like the crank string theorists, resorted to shouting the idea down and wasting time!)
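As a quick arithmetic check of the "age = reciprocal of the Hubble parameter" statement above (using an assumed round-number H0 = 70 km/s/Mpc, which is my choice and not a figure from the text), the sketch below converts H0 to an age in years:

```python
# Quick arithmetic sketch: for a flat universe, take the age as 1/H0.
# H0 = 70 (km/s)/Mpc is an assumed round-number value, not from the text.
H0_KM_S_PER_MPC = 70.0
KM_PER_MPC = 3.0857e19            # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_second = H0_KM_S_PER_MPC / KM_PER_MPC      # H0 in 1/s
age_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
print(f"1/H0 = {age_years:.2e} years")            # ~1.4e10 years
# Measuring H0 separately along three orthogonal directions would give three
# such ages -- identical only if the expansion is isotropic, which is the point
# being made about the apparent single time dimension.
```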

Lunsford finishes his paper: "It thus appears that the indeterminate aspect of the Einstein equations represented by the ordinary cosmological constant, is an artifact [in general relativity, not in nature!] of the decoupling of gravity and electromagnetism. ... the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system. One striking feature of these equations that distinguishes them from Einstein's equations is the absent gravitational constant - in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour (see Weinberg, 1972 [S. Weinberg, Gravitation and Cosmology, Wiley, p. 7.1, p. 10.8, 1972])."


Fig. 1f: Oleg D. Jefimenko and Richard P. Feynman (equation 28.3 in the Feynman Lectures on Physics, vol. 1) independently solved Maxwell's equations in the early 1960s in a form which makes quantum field theory effects easy to see: it extends Coulomb's force law for steady charges into an equation which allows for charge motion. The Jefimenko-Feynman equation for electric field strength is a three-component equation in which the first component is from Coulomb's law (Gauss's field divergence equation in the Maxwell equations), where force F = qE so that electric field E = q/(4*Pi*Permittivity*R^2). The Feynman-Jefimenko solution to Maxwell's equations for field directions along the line of the motion and acceleration of a charge yields the simple summation of terms: E (in V/m) = [q/(4*Pi*Permittivity)] { 1/R^2 + [v(cos z)/(cR^2)] + [a(sin z)/(Rc^2)] }

The sine and cosine factors in the two motion-related terms are due to the fact that they depend on whether the motion of a charge is towards you or away from you (they come from vectors in the Feynman-Jefimenko solution; z is the angle between the direction of the motion of the charge and the direction of the observer). The first term in the curly brackets is the Coulomb law for static charges. The second term in the curly brackets, with a linear dependence on v/c, is simply the effect of the redshift (observer receding from the charge) or blue shift (observer approaching the charge) of the force field quanta, which depends on whether you are moving towards or away from the charge q; as the Casimir effect shows, field quanta or virtual photons do have physically-significant wavelengths. The third term in the curly brackets is the effect of accelerations of charge, i.e. the real (on-shell) photon radio wave emission: this radio emission field strength drops off inversely with distance rather than as the inverse square of distance. (The time-dependence of E at distance R in the equation is the retarded time t - R/c, which allows for the light speed delay due to the field being composed of electromagnetic field quanta and waves which must traverse that distance from charge to observer before the field can be observed.)

This solution to Maxwell's equations is important for the analysis of quantum field theory effects due to gauge bosons.
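A minimal numerical sketch of the three terms, assuming illustrative values for the speed, acceleration and angle of the source charge (none of these numbers come from the text), is given below; it shows the Coulomb and velocity terms falling as 1/R^2 while the radiation term falls only as 1/R, so the acceleration term dominates at large distances:

```python
# Illustrative sketch only: evaluating the three terms of the equation quoted
# above, E = [q/(4*pi*eps0)] * { 1/R^2 + v*cos(z)/(c*R^2) + a*sin(z)/(R*c^2) },
# for an electron-sized charge.  The values of v, a and z below are arbitrary
# assumptions chosen just to make all three terms visible.
import math

EPS0 = 8.854187817e-12   # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s
Q = 1.602176634e-19      # source charge, C (one electron charge)

def field_terms(R, v, a, z):
    """Return (Coulomb, velocity, radiation) contributions to E in V/m."""
    k = Q / (4.0 * math.pi * EPS0)
    coulomb   = k / R**2                          # static 1/R^2 term
    velocity  = k * v * math.cos(z) / (C * R**2)  # redshift/blueshift term
    radiation = k * a * math.sin(z) / (R * C**2)  # on-shell radiation term, 1/R
    return coulomb, velocity, radiation

for R in (1.0, 10.0, 100.0):
    c_t, v_t, r_t = field_terms(R, v=3.0e6, a=1.0e18, z=math.pi / 4)
    print(f"R = {R:6.1f} m   Coulomb = {c_t:.2e}   velocity = {v_t:.2e}   radiation = {r_t:.2e}")
```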

Physical mechanism of electric forces

Fig. 1a shows the Feynman diagrams used for the main force-causing interactions (there are many others too; for example the pions aren't the only mesons involved in the strong nuclear force that operates between nucleons).



Fig. 2: mathematical concepts like plots of electric and magnetic field strengths, or even "field lines" inside photons, are not physically real, but they do constitute a useful tool, when mathematically shown on a graph, for establishing the physical distinctions and mechanisms for on-shell (real) and off-shell (virtual) radiations in quantum field theory. It should also be remembered that Maxwell's equations are an incomplete description of electromagnetism: the field potential A_{mu} is needed to account for effects of the superimposed energy density in so-called "cancelled fields", e.g. the Aharonov-Bohm effect, where the superimposed field energy loads the vacuum and thus affects quantum phenomena, just as the "cancelled" negative and positive fields from electrons and nuclei in a block of glass load the vacuum with energy density and thus slow down light.

This diagram is a revision of one from my 2003 Electronics World article, the main updates being due to a continuing study of IC's experimental work on electromagnetic energy currents (which he sadly interpreted using an obsolete theory), and a forceful argument in an email from Guy Grantham, which stated that the only realistic way to make a simple exchange-radiation mechanism for both attraction and repulsion is to have electrically charged field quanta (although he didn't help in actually working out the details shown above!). The whole point is that off-shell, electrically charged, massless field quanta can't propagate one-way in the vacuum, due to magnetic self-inductance! Therefore, they will only propagate if there is an ongoing exchange in both directions, such that the magnetic fields are cancelled out. This physical mechanism for transmission and cancellation is obviously at the root of the phase amplitude in quantum field theory, whereby spinning quanta can be supposed to take all possible routes through the vacuum, although the wildly varying phases at large actions cause the paths with large actions to cancel one another out, e.g. to be stopped by field effects like non-cancellation of magnetic self-inductance.



Fig. 3: physical basis of path integrals for the simple case of light reflection by a mirror. Classically the reflection law is that the angle of incidence equals the angle of reflection, which is of course the path that light travels in the least time or least "action" (action is defined as the integral of the lagrangian over time; for classical systems the lagrangian is the difference between the kinetic and potential energy of a particle at any given time). Light follows all paths, but most of them have randomly orientated "phases" and thus cancel out in the vector summation. Only for small actions do the phases add together coherently. Thus, light effectively occupies not a one-dimensional line as it propagates, but is spread out spatially, due to the reinforcement of all those paths with actions small compared to Planck's constant, h = E/f (which has units of action, and when divided by twice Pi is equal to the proper unit of quantum action in quantum field theory). Hence Feynman's great statement: "Light ... uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)" – R. P. Feynman, QED (Penguin, 1990, page 54).
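Here is a small numerical sketch of that statement, under my own assumed geometry (a source and detector 1 m above a flat mirror, 500 nm light): summing unit phase "arrows" for reflection at each point of the mirror shows that a narrow strip around the classical equal-angle reflection point contributes a large resultant, while an equally wide strip far from it contributes almost nothing, because its phases spin around and cancel.

```python
# Illustrative sketch (assumed geometry, not from the text): sum the phase
# "arrows" for light reflecting off each point of a mirror, with phase
# 2*pi*L/lambda where L is the path length source -> mirror point -> detector.
# Paths near the classical (equal-angle) reflection point have nearly stationary
# L and add up; paths far from it have rapidly varying phase and largely cancel.
import cmath, math

LAMBDA = 500e-9           # wavelength, m
SRC = (-0.5, 1.0)         # source position (x, y), m
DET = ( 0.5, 1.0)         # detector position (x, y), m

def path_length(x):
    """Source -> mirror point (x, 0) -> detector."""
    sx, sy = SRC
    dx, dy = DET
    return math.hypot(x - sx, sy) + math.hypot(dx - x, dy)

def summed_amplitude(x_lo, x_hi, n=20000):
    """Normalised magnitude of the vector sum of unit phase arrows over a strip."""
    total = 0j
    for i in range(n):
        x = x_lo + (x_hi - x_lo) * i / (n - 1)
        total += cmath.exp(2j * math.pi * path_length(x) / LAMBDA)
    return abs(total) / n

# Central strip (around the classical reflection point x = 0) versus an
# off-centre strip of the same width: the centre dominates the sum.
print("central strip :", summed_amplitude(-0.001, 0.001))
print("edge strip    :", summed_amplitude(0.300, 0.302))
```

This is only an optical-path-length toy model (phase = 2*pi*L/lambda), not a full path integral, but it captures the "small core of nearby space" that Feynman describes.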



Fig. 4: a minor mathematical modification of Feynman's path integral theory: replacing the imaginary (complex) phase amplitude with the real term in its expansion by Euler's equation. This is needed to overcome Dr Chris Oakley's mathematical problem with today's sloppy (mathematically non-rigorous) textbook quantum field theory, namely Haag's theorem, which proves that essential renormalization is impossible in a complex space like Fock space (an infinite-dimensional vector space) or Hilbert space (a complex inner product space, in which a complex number is associated to each pair of coordinate elements), because the isomorphism that maps the free-field Hilbert space on to the renormalized-field Hilbert space is ambiguous! (This theorem was proved by Hall and Wightman. The reason why the mainstream ignores Haag's theorem is that Haag postulated that the whole interaction picture doesn't exist, an interesting possibility which was investigated without great success by Dr Chris Oakley.) Nobody seems to have grasped the obvious solution, namely that Hilbert space doesn't exist and the phase factor is mathematically fictitious and in the real world must lose its complexity (this lack of sense is probably due to groupthink, or mathematical respect for Euler, Hilbert, Schroedinger, Dirac, et al.; by analogy, should Newton have resisted suggesting his laws of motion purely out of respect for the dead genius Aristotle?). We must express the phase vectors as arrows in real space if we want quantum field theory to be renormalizable in a self-consistent, non-ambiguous manner. The path integral as shown above works just as well this way; it just eliminates the problem of Haag's theorem. (Haag's theorem is the argument behind Dr Oakley's quotations from both Feynman and Dirac, who point out that, because of renormalization, quantum field theory can't be proved to be self-consistent. As Feynman wrote in his 1985 classic, QED, the lack of proof of self-consistency due to Haag's theorem is embarrassing to any self-respecting mathematical physicist working in quantum field theory.) The diagram proves the equivalence of the resultant amplitudes when using e^{iS} and cos S for the phase factor in the path integral (sum over path histories). Basically, what we are suggesting is that we take Euler's e^{iS} = cos S + i sin S and then drop the complex term i sin S, which cuts out the use of the imaginary axis from the Argand diagram, giving only real space!
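Below is a small numerical sketch of this suggestion, using a toy one-parameter family of paths with action S(x) = S0 + x^2 in units of h-bar (my own assumed form, chosen only because it has a stationary point at x = 0). It checks one part of the claim: whether the phase factor is the complex e^{iS} or the real cos S, the paths far from the stationary action cancel among themselves and the region around the classical path dominates the sum.

```python
# A toy numerical sketch (my own assumptions): a one-dimensional family of paths
# labelled by x, with action S(x) = S0 + x^2 in units of h-bar, stationary at
# x = 0.  Summing the usual complex phasor exp(iS) and the proposed real phasor
# cos(S) over a strip of paths shows that, either way, paths far from the
# stationary action cancel and the region around the classical path dominates.
import cmath, math

def action(x, S0=0.7):
    return S0 + x * x          # toy action (units of h-bar), stationary at x = 0

def strip_sums(x_lo, x_hi, n=20000):
    """Return (|sum of exp(iS)|, |sum of cos S|), normalised per path."""
    complex_sum, real_sum = 0j, 0.0
    for i in range(n):
        x = x_lo + (x_hi - x_lo) * i / (n - 1)
        S = action(x)
        complex_sum += cmath.exp(1j * S)
        real_sum += math.cos(S)
    return abs(complex_sum) / n, abs(real_sum) / n

print("near classical path :", strip_sums(-1.0, 1.0))
print("far from it         :", strip_sums(30.0, 32.0))
```

This verifies only the stationary-phase behaviour of the real phasor, not every detail of the equivalence argued in Fig. 4.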



Fig. 5: how simply replacing the complex e^{iS} phasor with its real component, cos S, replaces complex space with real space, averting the inability to prove self-consistency in quantum field theory due to Haag's theorem. This allows the spatially distributed (truly transverse) on-shell and off-shell photons (unlike Maxwell's idea of the photon) shown in Fig. 2 to be modelled with a physically real phase factor, the phase denoting a real physical property of the photons taking different paths, e.g. the phase factor can denote differing angles of spin polarization or differing charge combinations, unlike the imaginary, unphysical phase factor. The reason why this isn't done in textbooks is the fashionable groupthink argument that, historically, the origins of the textbook complex exponential phase factor are rooted in the solution to the time-dependent form of Schroedinger's equation, and the time-dependent form of Schroedinger's equation survives as Dirac's equation, because Dirac's equation differs from Schroedinger's only in its Hamiltonian (i.e., the spacetime-compatible Dirac "spinor"). However, as Feynman explained in his Lectures on Physics, Schroedinger's equation was just a guess that "came out of the mind of Schroedinger"! It's not a physical fact, and it's actually contrary to physical facts, because in quantum field theory it should take a discrete quantum interaction to cause a discrete wavefunction change, but Schroedinger's equation intrinsically assumes a classical, continuously varying wavefunction! The error here is obvious. Why defend a guesswork derivation error which prevents renormalized quantum field theory from being rigorously, unambiguously formulated mathematically and proved self-consistent? Dr Thomas Love has explained that all of the problems of wavefunction collapse in quantum mechanics originate from this guess by Schroedinger: "The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics."

Just as Bohr's atom is taught in school physics, most mainstream general physicists with training in quantum mechanics are still trapped in the use of the "anything goes" false (non-relativistic) 1927-originating "first quantization" for quantum mechanics (where anything is possible because motion is described by an uncertainty principle instead of a quantized field mechanism for chaos on small scales). The physically correct replacement is called "second quantization" or "quantum field theory", which was developed from 1929-48 by Dirac, Feynman and others.

The discoverer of the path integrals approach to quantum field theory, Nobel laureate Richard P. Feynman, has debunked the mainstream first-quantization uncertainty principle of quantum mechanics. Instead of anything being possible, the indeterminate electron motion in the atom is caused by second-quantization: the field quanta randomly interacting and deflecting the electron.

“... Bohr ... said: ‘... one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ ... Bohr thought that I didn't know the uncertainty principle ... it didn't make me angry, it just made me realize that ... [ they ] ... didn't know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up ..."

- Richard P. Feynman, quoted in Jagdish Mehra's biography of Feynman, The Beat of a Different Drum, Oxford University Press, 1994, pp. 245-248. (Fortunately, Dyson didn't give up!)

‘I would like to put the uncertainty principle in its historical place: When the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [path amplitudes] for all the ways an event can happen – there is no need for an uncertainty principle!’

- Richard P. Feynman, QED, Penguin Books, London, 1990, pp. 55-56.

‘When we look at photons on a large scale – much larger than the distance required for one stopwatch turn [i.e., wavelength] – the phenomena that we see are very well approximated by rules such as “light travels in straight lines [without overlapping two nearby slits in a screen]“, because there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small (such as the tiny holes in the [double slit] screen), these rules fail – we discover that light doesn’t have to go in straight [narrow] lines, there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that [individual random field quanta exchanges become important because there isn't enough space involved for them to average out completely, so] there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference becomes very important, and we have to sum the arrows [in the path integral for individual field quanta interactions, instead of using the average which is the classical Coulomb field] to predict where an electron is likely to be.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 3, pp. 84-5.

His path integrals rebuild and reformulate quantum mechanics itself, getting rid of the Bohring ‘uncertainty principle’ and all the pseudoscientific baggage like ‘entanglement hype’ it brings with it:

‘This paper will describe what is essentially a third formulation of nonrelativistic quantum theory [Schroedinger's wave equation and Heisenberg's matrix mechanics being the first two attempts, which both generate nonsense 'interpretations']. This formulation was suggested by some of Dirac’s remarks concerning the relation of classical action to quantum mechanics. A probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of the particle at a particular time.

‘The formulation is mathematically equivalent to the more usual formulations. … there are problems for which the new point of view offers a distinct advantage. …’

- Richard P. Feynman, ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, vol. 20 (1948), p. 367.

‘… I believe that path integrals would be a very worthwhile contribution to our understanding of quantum mechanics. Firstly, they provide a physically extremely appealing and intuitive way of viewing quantum mechanics: anyone who can understand Young’s double slit experiment in optics should be able to understand the underlying ideas behind path integrals. Secondly, the classical limit of quantum mechanics can be understood in a particularly clean way via path integrals. … for fixed h-bar, paths near the classical path will on average interfere constructively (small phase difference) whereas for random paths the interference will be on average destructive. … we conclude that if the problem is classical (action >> h-bar), the most important contribution to the path integral comes from the region around the path which extremizes the path integral. In other words, the particle’s motion is governed by the principle that the action is stationary. This, of course, is none other than the Principle of Least Action from which the Euler-Lagrange equations of classical mechanics are derived.’

- Richard MacKenzie, Path Integral Methods and Applications, pp. 2-13.

‘… light doesn’t really travel only in a straight line; it “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of neighboring paths, the light scatters in many directions, no matter where you put the mirror.)’

- Richard P. Feynman, QED, Penguin Books, London, 1990, Chapter 2, p. 54.

There are other serious and well-known failures of first quantization aside from the nonrelativistic Hamiltonian time dependence:

“The quantum collapse [in the mainstream interpretation of first quantization quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” – Thomas Love, California State University.

“In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment.” – http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

First quantization for QM (e.g. Schroedinger) quantizes the product of position and momentum of an electron, rather than the Coulomb field, which is treated classically. This leads to a mathematically useful approximation for bound states like atoms, which is physically false and inaccurate in detail (a bit like Ptolemy's epicycles, where all planets were assumed to orbit Earth in circles within circles). Feynman explains this in his 1985 book QED (he dismisses the uncertainty principle as a complete model, in favour of path integrals): indeterminacy is physically caused by virtual particle interactions from the quantized Coulomb field becoming important on small, subatomic scales! Second quantization (QFT), introduced by Dirac in 1929 and developed with Feynman’s path integrals in 1948, instead quantizes the field. Second quantization is physically the correct theory because all indeterminacy results from the random fluctuations in the interactions of discrete field quanta, and first quantization, in Heisenberg's and Schroedinger’s approaches, is just a semi-classical, non-relativistic mathematical approximation useful for obtaining simple mathematical solutions for bound states like atoms:

‘You might wonder how such simple actions could produce such a complex world. It’s because phenomena we see in the world are the result of an enormous intertwining of tremendous numbers of photon exchanges and interferences.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 114.

‘Underneath so many of the phenomena we see every day are only three basic actions: one is described by the simple coupling number, j; the other two by functions P(A to B) and E(A to B) – both of which are closely related. That’s all there is to it, and from it all the rest of the laws of physics come.’

- Richard P. Feynman, QED, Penguin Books, London, 1990, p. 120.

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Sound waves are composed of the group oscillations of large numbers of randomly colliding air molecules; despite the randomness of individual air molecule collisions, the average pressure variations from many molecules obey a simple wave equation and carry the wave energy. Likewise, although the actual motion of an atomic electron is random due to individual interactions with field quanta, the average location of the electron resulting from many random field quanta interactions is non-random and can be described by a simple wave equation such as Schroedinger’s.

This is fact, it isn’t my opinion or speculation: Professor David Bohm in 1952 proved that “Brownian motion” of an atomic electron will result in average positions described by a Schroedinger wave equation. Unfortunately, Bohm also introduced unnecessary “hidden variables” with an infinite field potential into his messy treatment, making it a needlessly complex, uncheckable representation, instead of simply accepting that the quantum field interactions produce the “Brownian motion” of the electron, as described by Feynman’s path integrals for simple random field quanta interactions with the electron.
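The "random individual kicks, smooth average" point can be illustrated with a toy Monte Carlo (my illustration, not Bohm's actual 1952 derivation): each simulated electron receives many random impulses, and although any single trajectory is chaotic, the distribution of outcomes over many trials settles into a smooth bell-shaped curve.

```python
# Toy Monte Carlo sketch (my illustration only): each simulated "electron" takes
# many random kicks.  Any individual trajectory is chaotic, but the histogram of
# final positions over many trials is a smooth, predictable bell-shaped curve --
# randomness in the individual interactions, regular behaviour in the average.
import random
from collections import Counter

random.seed(1)
N_TRIALS, N_KICKS = 20000, 400

def final_position():
    x = 0.0
    for _ in range(N_KICKS):
        x += random.choice((-1.0, 1.0))   # one random field-quantum "kick"
    return x

counts = Counter(round(final_position() / 10) for _ in range(N_TRIALS))
for bin_centre in sorted(counts):
    print(f"{10 * bin_centre:5d}  {'#' * (counts[bin_centre] // 100)}")
```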

Quantum tunnelling is possible because electromagnetic fields are not classical, but are mediated by field quanta randomly exchanged between charges. For large charges and/or long times, the number of field quanta exchanged is so large that the result is similar to a steady classical field. But for small charges and small times, such as the scattering of charges in high energy physics, there is some small probability that no or few field quanta will happen to be exchanged in the time available, so the charge will be able to penetrate through the classical "Coulomb barrier". If you quantize the Coulomb field, the electron's motion in the atom is indeterministic because it's randomly exchanging Coulomb field quanta which cause chaotic motion. This is second quantization as explained by Feynman in QED. This is not what is done in quantum mechanics, which is based on first quantization, i.e. treating the Coulomb field V classically, and falsely representing the chaotic motion of the electron by a wave-type equation. This is a physically false mathematical model, since it omits the physical cause of the indeterminacy (although it gives convenient predictions, somewhat like Ptolemy's accurate epicycle-based predictions of planetary positions). A toy numerical illustration of the exchange-counting point follows, and Fig. 6 below then shows the first-quantization model:
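Treating the field-quantum exchanges in a brief encounter as independent random events with some mean number lam is my own toy assumption (the text specifies no statistics), but it shows the qualitative point: the chance that no quanta happen to be exchanged, exp(-lam), is appreciable for small charges and short times and utterly negligible for large charges and long times.

```python
# Toy numbers only (the text gives no rates): if exchanges are independent random
# events with mean number lam during the encounter, the chance that *no* quanta
# happen to be exchanged in the time available is exp(-lam).
import math

for lam in (0.5, 2.0, 10.0, 100.0):
    print(f"mean exchanges = {lam:6.1f}   P(no exchange) = {math.exp(-lam):.3e}")
```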

Schroedinger error

Fig. 6: Schroedinger's equation, based on quantizing the momentum p in the classical Hamiltonian (the sum of kinetic and potential energy for the particle), H. This is an example of 'first quantization', which is inaccurate and is also used in Heisenberg's matrix mechanics. Correct quantization will instead quantize the Coulomb field potential energy, V, because the whole indeterminacy of the electron in the atom is physically caused by the chaos of the randomly timed individual interactions of the electron with the discrete Coulomb field quanta which bind the electron in orbit around the nucleus, as Feynman proved (see quotations below). The triangular symbol (nabla) is the gradient operator (simply the set of first derivatives along all applicable spatial dimensions of whatever it operates on), which when squared becomes the laplacian operator (simply the sum of second-order derivatives in all applicable spatial dimensions of whatever it operates on). We illustrate the Schroedinger equation in just one spatial dimension, x, above, since the terms for other spatial dimensions are identical.

Dirac's quantum field theory is needed because textbook quantum mechanics is simply wrong: the Schroedinger equation has a second-order dependence on spatial distance but only a first-order dependence on time. In the real world, time and space are found to be on an equal footing, hence spacetime. There are deeper errors in textbook quantum mechanics: it ignores the quantization of the electromagnetic field and instead treats it classically, when the field quanta are the whole distinction between classical and quantum mechanics (the random motion of the electron orbiting the nucleus in the atom is caused by discrete field quanta interactions, as proved by Feynman).

Dirac was the first to achieve a relativistic field equation to replace the non-relativistic quantum mechanics approximations (the Schroedinger wave equation and the Heisenberg momentum-distance matrix mechanics). Dirac also laid the groundwork for Feynman's path integrals in his 1933 paper "The Lagrangian in Quantum Mechanics" published in Physikalische Zeitschrift der Sowjetunion where he states:

"Quantum mechanics was built up on a foundation of analogy with the Hamiltonian theory of classical mechanics. This is because the classical notion of canonical coordinates and momenta was found to be one with a very simple quantum analogue ...

"Now there is an alternative formulation for classical dynamics, provided by the Lagrangian. ... The two formulations are, of course, closely related, but there are reasons for believing that the Lagrangian one is the more fundamental. ... the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; while the Hamiltonian method is essentially nonrelativistic in form ..."

Schroedinger’s time-dependent equation is Hψ = iħ dψ/dt, which has the exponential solution:

ψ_t = ψ_0 exp[-iH(t – t_0)/ħ].

This equation is accurate, because the error in Schroedinger's equation comes only from the expression used for the Hamiltonian, H. This exponential law represents the time-dependent value of the wavefunction for any Hamiltonian and time. Taking the squared modulus of this wavefunction gives the relative probability for a given Hamiltonian and time. Dirac took this amplitude, e^{-iHT/ħ}, and derived the more fundamental lagrangian amplitude for action S, i.e. e^{iS/ħ}. Feynman showed that summing this amplitude factor over all possible paths or interaction histories gave a result proportional to the total probability for a given interaction. This is the path integral.

Schroedinger's incorrect, non-relativistic hamiltonian before quantization (ignoring the inclusion of the Coulomb field potential energy, V, which is an added term) is H = p^2/(2m). Quantization is done using the substitution for momentum, p -> -iħ{gradient operator}, as in Fig. 6 above. The Coulomb field potential energy, V, remains classical in Schroedinger's equation, instead of being quantized as it should be.
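As a minimal sketch of what this 'first quantization' recipe actually does in practice (with my own choices of natural units, ħ = m = 1, and a simple harmonic potential V(x) = x^2/2 rather than the Coulomb potential, so that the exact answer E_n = n + 1/2 is known), the substitution p -> -iħ d/dx can be discretised on a grid and the bound-state energies read off as matrix eigenvalues:

```python
# Minimal sketch of "first quantization" as described above: quantize
# H = p^2/(2m) + V via p -> -i*hbar d/dx, discretise on a grid, and the
# bound-state energies appear as matrix eigenvalues.  Natural units
# (hbar = m = 1) and V(x) = x^2/2 are my own illustrative choices.
import numpy as np

N, L = 1000, 20.0                       # grid points, box size
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic term -(1/2) d^2/dx^2 by central finite differences.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
H += np.diag(0.5 * x**2)                # classical potential term V(x) = x^2/2

energies = np.linalg.eigvalsh(H)[:4]
print("lowest bound-state energies:", np.round(energies, 4))  # ~0.5, 1.5, 2.5, 3.5
```

The point is not that this is correct physics - the field V is still classical here, which is the very criticism being made - but that the recipe does give convenient bound-state solutions, which is why it survives in the textbooks.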

The bogus ‘special relativity’ prediction to correct the expectation H = p^2/(2m) is simply H = [(mc^2)^2 + p^2c^2]^{1/2}, but that was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

-ħ^2 d^2ψ/dt^2 = [(mc^2)^2 + p^2c^2]ψ.

While this is physically correct, it deals only with second-order variations of the wavefunction. Dirac’s equation simply makes the time-dependent Schroedinger equation (Hψ = iħ dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = αpc + βmc^2,

where p is the momentum operator. The constants α and β are represented by 4 x 4 matrices (the Dirac matrices), which act on a four-component wavefunction called the Dirac ‘spinor’. This is not to be confused with the Weyl spinors used in the gauge theories of the Standard Model; whereas the Dirac spinor represents massive spin-1/2 particles, the Dirac equation yields two Weyl equations for massless particles, each with a 2-component Weyl spinor (representing left- and right-handed spin or helicity eigenstates). The justification for Dirac’s equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc^2)^2 + p^2c^2]^{1/2}.

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions! The electron is spin-1/2, so it has only half the spin of a spin-1 particle, which means that the electron must rotate 720 degrees (not 360 degrees!) to undergo one complete revolution, like a Möbius strip (a strip of paper with a twist before the ends are glued together, so that there is only one surface, and you can draw a continuous line around that surface which is twice the length of the strip, i.e. you need 720 degrees of turning to return to the beginning!). Since the spin rate of the electron generates its intrinsic magnetic moment, it affects the magnetic moment of the electron. Zee gives a concise derivation of the fact that the Dirac equation implies that ‘a unit of spin angular momentum interacts with a magnetic field twice as much as a unit of orbital angular momentum’, a fact discovered by Dirac the day after he found his equation (see: A. Zee, Quantum Field Theory in a Nutshell, Princeton University Press, 2003, pp. 177-8). The other two solutions are evident when considering the case of p = 0, for then E = ±mc^2. This equation proves the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc^2. The fact that in fact E = ±mc^2 proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity. E = ±mc^2 allowed Dirac to predict antimatter, such as the anti-electron called the positron, which was later discovered by Anderson in 1932 (anti-matter is naturally produced all the time when suitably high-energy gamma radiation hits heavy nuclei, causing pair production, i.e., the creation of a particle and an anti-particle such as an electron and a positron).
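The four energy solutions can be checked numerically. Assuming the standard Dirac-Pauli representation of the α and β matrices (the text does not fix a representation), squaring H = c(α·p) + βmc^2 gives (p^2c^2 + m^2c^4) times the unit matrix, which is exactly why the eigenvalues come out as the doubly degenerate pair E = ±[(mc^2)^2 + p^2c^2]^{1/2}:

```python
# Numerical check, assuming the standard Dirac-Pauli representation of the alpha
# and beta matrices (the text does not fix a representation): squaring
# H = c*(alpha . p) + beta*m*c^2 gives (p^2 c^2 + m^2 c^4) times the unit matrix,
# so the eigenvalues are E = +/- sqrt((m c^2)^2 + p^2 c^2), each occurring twice.
import numpy as np

# Pauli matrices and 2x2 blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])

m, c = 1.0, 1.0                       # illustrative units
p = np.array([0.3, -0.7, 1.2])        # an arbitrary test momentum

H = c * sum(pi * ai for pi, ai in zip(p, alphas)) + beta * m * c**2
expected = (np.dot(p, p) * c**2 + (m * c**2)**2) * np.eye(4)
print("H^2 == (p^2 c^2 + m^2 c^4) I :", np.allclose(H @ H, expected))
print("eigenvalues of H:", np.round(np.linalg.eigvalsh(H), 4))
```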



(To be continued when time allows. In the meanwhile, as linked on an earlier post, the introductory pages from my draft PDF paper can be found at http://nige.files.wordpress.com/2010/10/paper-draft-pages-1-5-2-oct-2010.pdf, although please note that there are some trivial mathematical symbol typos that are outside my control, e.g. the QuarkXpress software I used doesn't contain any apparent way of writing Psi with an overbar, so I've had to underline Psi instead. I also gave some comments about errors in "electroweak symmetry" on Tommaso's blog which are of relevance; other posts on this blog discuss particle masses and the quantum gravity mechanism.)





Above: a quantitative prediction of the cosmological acceleration of the universe in 1996, two years ahead of the discovery, was ignored! Pseudo-physicists at the so-called Classical and Quantum Gravity and also Physical Review Letters think anything fundamental that doesn't agree with superstring liars must be wrong! Maybe the gravitons heat up or slow down planets? If so, this should apply also to the well established off-shell Casimir radiation in the vacuum, which would have dragged on the planets, making them glow, slow down, and spiral into the sun millions of years ago. They didn't. Contrary to string theorists who are ignorant of the basics of quantum field theory, field quanta are off-shell particles, which impart kinetic energy to accelerate charges and thus cause forces, without causing direct heating or drag, merely the Lorentz mass increase and the real FitzGerald-Lorentz contraction effect. Maybe rank-2 tensors prove spin-2 gravitons? Nope: rank-1 tensors are first-order field line gradients, and rank-2 tensors are second-order equations of motion. You can use either rank-1 or rank-2 equations for electromagnetism or gravity; it depends not on spin but purely on whether the theory is formulated as field lines (rank-1 equations) or accelerations in spacetime (rank-2).

Update (20 January 2011):

Sadly, superstring theorist Dr Lubos Motl, a Facebook friend who is 100% right about global warming hype, left-wing dangers and political correctness, has called for the famous superstring theorist Professor Greene at Columbia University to fire superstring critic Dr Peter Woit. Dr Woit, whose blog and paper on representation theory and quantum field theory have, since 2002, led me to my current approach to the problem of fundamental interactions and unification, has replied robustly: "It seems that some unemployed guy in Pilsen who reads this blog thinks Brian Greene is my employer and is upset that Brian is not having me fired. For the record, my position as “Senior Lecturer” in the math department is not tenured, but I have a long-term contract and whether it gets renewed at some point in the distant future will have nothing to do with what Brian thinks about this blog, or with what I think about his books. Actually, my impression is that if most string theorists could choose one well-known blog dealing with string theory to shut down, it wouldn’t be this one ..."

Elsewhere, Dr Woit writes: “The controversy over the multiverse is … the idea that string theory implies a multitude of completely separate universes with different physical laws. This is quite different than many-worlds, which is an interpretation of standard quantum mechanics, with one fixed set of physical laws.”

Dr Peter Woit, “Is the Multiverse Immoral?”: “One of the lessons of superstring theory unification is that if a wrong idea is promoted for enough years, it gets into the textbooks and becomes part of the conventional wisdom about how the world works. This process is now well underway with multiverse pseudo-science, as some theorists who should know better choose to heavily promote it, and others abdicate their responsibility to fight pseudo-science as it gains traction in their field.”

Friend says (January 29, 2011 at 8:00 pm): “I think that multiverses are a misinterpretation of the Path Integral used in QFT, etc. Instead of it predicting the actual existence of alternative paths/universes, it really predicts that it takes ALL possibilities to make just one universe. Thus it is impossible for multiverses to exist.”

{NC note: the “Friend” who wrote this comment, which goes on to another paragraph of abject speculation, is not me, although I have contributed comments under anonymity where I can’t otherwise contribute comments. The probability that “Friend” is Dr Woit writing an anonymous comment on his own blog, or a friend of his doing so, is not 0. However I don’t really know what Dr Woit thinks about Feynman’s 1985 book QED. My wild guess from reading Dr Woit’s 2002 arXiv paper on “Quantum Field Theory and Representation Theory” is that he hasn’t really spent time on Feynman’s 1985 book, doesn’t physically put too much stress on the “heuristic” picture of 2nd quantization/QFT as virtual particles following every path and interfering to cause “wavefunction” chaos; he works in a mathematics department and is fixed into a belief that sophisticated maths is good, only objecting to misrepresentations of mathematics for hype and funding by superstring theorists and others. “Friend”, in a later comment time-stamped 9:29pm, writes about another pet interest of Dr Woit’s: “how about the financial market;-) Unsatisfied with economic progress, they’ve invented extravagant financial theories of prime-lending rates and complicated security instruments. Funny, I’ve heard that some physicists have found work in the financial industry. Perhaps their theories work in some other universe.” Dr Baez replied to Friend: ‘That’s a nice analogy because it seems to have been caused by a desperate search for “high rates of return”.’}

John Baez says (January 29, 2011 at 8:45 pm): “Maybe a branch of science is ripe for infection by pseudoscience whenever it stops making enough progress to satisfy the people in that field: as a substitute for real progress, they’ll be tempted to turn to fake progress. One could expect this tendency to be proportional to the loftiness of the goals the field has set for itself… and to the difficulty its practitioners have in switching to nearby fields that are making more progress. But is this really true? …”

Thomas Larsson says (January 31, 2011 at 12:12 pm): "Medieval astronomers knew that the universe is a mechanical clockwork with at least 13 epicycles. The point is that Nature’s answers depend on how the question is posed. If you ask her about epicycles, she will answer with epicycles, even if that has little to do with the correct dynamics. And if you ask her about dark matter and dark energy, she will answer in terms of dark matter and energy. Perhaps this is the right framework. But perhaps it is not."


All of this data should have been published to inform public debate on the basis for credible nuclear deterrence of war and civil defense, PREVENTING MILLIONS OF DEATHS SINCE WWII, instead of DELIBERATELY allowing enemy anti-nuclear and anti-civil defence lying propaganda from Russian-supporting evil fascists to fill the public data vacuum, killing millions by allowing civil defence and war deterrence to be dismissed by ignorant "politicians" in the West, so that wars triggered by invasions with mass civilian casualties continue today for no purpose other than to promote terrorist agendas of hate and evil arrogance and lying for war, falsely labelled "arms control and disarmament for peace": "Controlling escalation is really an exercise in deterrence, which means providing effective disincentives to unwanted enemy actions. Contrary to widely endorsed opinion, the use or threat of nuclear weapons in tactical operations seems at least as likely to check [as Hiroshima and Nagasaki] as to promote the expansion of hostilities [providing we're not in a situation of Russian biased arms control and disarmament whereby we've no tactical weapons while the enemy has over 2000 neutron bombs thanks to "peace" propaganda from Russian thugs]." - Bernard Brodie, p. vi of Escalation and the nuclear option, RAND Corp memo RM-5444-PR, June 1965.

Update (19 January 2024): Jane Corbin of BBC TV is continuing to publish ill-informed nuclear weapons capabilities nonsense debunked here since 2006 (a summary of some key evidence is linked here), e.g. her 9pm 18 Jan 2024 CND-biased propaganda showpiece Nuclear Armageddon: How Close Are We? https://www.bbc.co.uk/iplayer/episode/m001vgq5/nuclear-armageddon-how-close-are-we which claims - from the standpoint of 1980s Greenham Common anti-American CND propaganda - that the world would be safer without nuclear weapons, despite the 1914-18 and 1939-45 trifles that she doesn't even bother to mention, which were only ended with nuclear deterrence. Moreover, she doesn't mention the BBC's Feb 1927 WMD-exaggerating broadcast by Noel-Baker, which used the false claim that there is no defence against mass destruction by gas bombs to argue for UK disarmament, something that later won him a Nobel Peace Prize and helped ensure the UK had no deterrent against the Nazis until it was too late, helping to set off WWII (Nobel Peace Prizes were awarded to others for lying, too, for instance Norman Angell, whose pre-WWI book The Great Illusion helped ensure that Britain's 1914 Liberal Party Cabinet procrastinated on deciding what to do if Belgium was invaded, and thus failed to deter the Kaiser from triggering the First World War!). The whole basis of her show was to edit out any realism whatsoever regarding the topic which is the title of her programme! No surprise there, then. Los Alamos, Livermore and Sandia are currently designing the W93 nuclear warhead for SLBMs to replace the older W76 and W88, and what she should do next time is address the key issue of what that design should be to deter dictators without risking escalation via collateral damage: "To enhance the flexibility and responsiveness of our nuclear forces as directed in the 2018 NPR, we will pursue two supplemental capabilities to existing U.S. nuclear forces: a low-yield SLBM warhead (W76-2) capability and a modern nuclear sea launched cruise missile (SLCM-N) to address regional deterrence challenges that have resulted from increasing Russian and Chinese nuclear capabilities. These supplemental capabilities are necessary to correct any misperception an adversary can escalate their way to victory, and ensure our ability to provide a strategic deterrent. Russia’s increased reliance on non-treaty accountable strategic and theater nuclear weapons and evolving doctrine of limited first-use in a regional conflict, give evidence of the increased possibility of Russia’s employment of nuclear weapons. ... The NNSA took efforts in 2019 to address a gap identified in the 2018 NPR by converting a small number of W76-1s into the W76-2 low-yield variant. ... In 2019, our weapon modernization programs saw a setback when reliability issues emerged with commercial off-the-shelf non-nuclear components intended for the W88 Alteration 370 program and the B61-12 LEP. ... Finally, another just-in-time program is the W80-4 LEP, which remains in synchronized development with the LRSO delivery system. ... The Nuclear Weapons Council has established a requirement for the W93 ... If deterrence fails, our combat-ready force is prepared now to deliver a decisive response anywhere on the globe ..." - Testimony of Admiral Charles Richard, Commander, US Strategic Command, to the Senate Committee on Armed Services, 13 Feb 2020.
This issue of how to use nuclear weapons safely to deter major provocations that escalate into horrific wars is surely the key issue humanity should be concerned with, not the CND time-machine of returning to a non-nuclear 1914 or 1939! Corbin doesn't address it; she uses debunked old propaganda tactics to avoid the real issues and the key facts.

For example, Corbin quotes only half a sentence by Kennedy in his TV speech of 22 October 1962: "it shall be the policy of this nation to regard any nuclear missile launched from Cuba against any nation in the Western hemisphere as an attack by the Soviet Union on the United States", and omits the second half of the sentence, which concludes: "requiring a full retaliatory response upon the Soviet Union." Kennedy was clearly using US nuclear superiority in 1962 to deter Khrushchev from allowing the Castro regime to start any nuclear war with America! By chopping up Kennedy's sentence, Corbin juggles the true facts of history to meet the CND agenda of "disarm or be annihilated." Another trick is her decision to uncritically interview CND-biased anti-civil defense fanatics like the man (Professor Freedman) who got Bill Massey of the Sunday Express to water down my article debunking pro-war, CND-type "anti-nuclear" propaganda lies on civil defense in 1995! Massey reported to me that Freedman claimed civil defense is no use against an H-bomb, which he claims is cheaper than even dirt-cheap shelters, exactly what Freedman wrote in his deceptive letter published in the 26 March 1980 Times newspaper: "for far less expenditure the enemy could make a mockery of all this by increasing the number of attacking weapons", which completely ignores the Russian dual-use concept of simply adding blast doors to metro tubes and underground car parks, etc. In any case, civil defense makes deterrence credible, as even hard-left wingers like Duncan Campbell acknowledged on page 5 of War Plan UK (Paladin Books, London, 1983): "Civil defence ... is a means, if need be, of putting that deterrence policy, for those who believe in it, into practical effect."