A Ship of Fools


The extent to which human beings are prepared to sacrifice the good of the environment, of the planet, of other human beings across the globe, of all that future generations might hold precious, to their own short-term economic or political ends has never ceased to stagger and depress me. Some might say indeed that it shows us up for what we are: scarcely evolved ground apes no more developed than the South Indian monkeys that, in Pirsig’s Zen and the Art of Motorcycle Maintenance, famously allow themselves to be trapped by their own greed and rigidity when they could secure their long-term future merely by giving up their immediate gratification. Far from being images of a divine creator endowed with a perfect immortal soul, we are in fact primates who have only recently descended from the trees and who, thanks to certain evolutionary advantages, now get to manage an entire planet, the long-term prospects for which are looking less attractive every day.

One can of course see this everywhere, and you don’t have to be a tree-hugging hippy to be aware of and deplore what is happening. We know for a fact, the deniers aside, that economic growth across the planet is releasing huge amounts of carbon into the atmosphere, with profound consequences for global temperatures. We know for a fact that humanity’s dysfunctional relationship with virtually all the ecosystems that surround it is pushing the rate of species extinction far above what it would otherwise be, that it is in fact propelling us even as I write into a major extinction event, the sixth to which the Earth has been subject since the emergence of life here. We know that nuclear tests and nuclear weapon production have resulted in the release of thousands of tons of radioactive waste that has turned whole areas of the planet effectively into no-go areas: silent cities or lakes foul with invisible poison, where to linger for even an hour is to invite an extended and excruciating death. And for what? A rise in GNP here, a temporary political or military advantage there. As the writer E. O. Wilson said, “Destroying rainforest for economic gain is like burning a Renaissance painting to cook a meal.”

It was the latter issue, that of our nuclear past, that struck me today when I learned that it was the 59th anniversary of the Kyshtym Disaster, an event seemingly little known in the west and about which I have to confess I knew next to nothing.

The activities of the Soviet Union were in general of course catastrophic for the environment; the Workers’ Paradise showed precious little concern for the health and safety of the workers and their families in its struggle to promote economic growth and military superiority. Because of the culture of secrecy surrounding all aspects of government, it was impossible to get a detailed picture of the damage done to the environment until after the Soviet Union came to an end. A correspondent for the Washington Post who visited the East German town of Bitterfeld (described by Der Spiegel as the dirtiest town in Europe) in 1990 wrote:

Here, rivers flow red from steel mill waste, drinking water contains many times the European Community standards for heavy metals and other pollutants, and the air has killed so many trees — 75 percent in the Bitterfeld area — that even the most ambitious clean-up efforts now being planned would not reverse the damage. East Germany fills the air with sulphur dioxide at almost five times the West German rate and more than twice the Polish rate, according to a recent study. One chemical plant near here dumps 44 pounds of mercury into the Saale river each day — 10 times as much as the West German chemical company BASF pumps into the Rhine each year.

Across East Germany as a whole an estimated 42 percent of rivers and 24 percent of lakes were so polluted that they could not be used to process drinking water, almost half of the country’s lakes were considered dead or dying and unable to sustain fish or other forms of life, and some 44 percent of East German forests were damaged by acid rain. In some areas of East Germany the level of air pollution was between eight and twelve times greater than that found in West Germany, and 40 percent of East Germany’s population lived in conditions that would have justified a smog warning across the border. Only one power station in the whole country had been fitted with the necessary equipment to clean sulphur from emissions.

Conditions in Russia were if anything worse still. A study published by the US Library of Congress’s Federal Research Division in 1996 described the country’s air as “among the most polluted in the world” (“According to one estimate, only 15 percent of the urban population breathes air that is not harmful”), and found that “75 percent of Russia’s surface water is now polluted, 50 percent of all water is not potable according to quality standards established in 1992, and an estimated 30 percent of groundwater available for use is highly polluted”. In summary,

In the 1990s, after decades of such practices, the government categorized about 40 percent of Russia’s territory (an area about three-quarters as large as the United States) as under high or moderately high ecological stress. Excluding areas of radiation contamination [my italics], fifty-six areas have been identified as environmentally degraded regions, ranging from full-fledged ecological disaster areas to moderately polluted areas.

Given the Soviet government’s cavalier attitude towards environmental protection, standards of safety in the nuclear industry were never likely to be very encouraging, and in fact the pressure of historical circumstances meant that they were in effect non-existent.

The Soviet nuclear industry got underway in the late 1940’s, at a time when the country suddenly found itself in a frantic race against the USA. The Americans had of course wrong-footed their erstwhile Russian allies by detonating nuclear devices at Hiroshima and Nagasaki, starkly demonstrating that, at least in the short term, the USA was the world’s primary military power with the ability to wreak terrible destruction upon any enemy. Given the ambitions of the Stalin regime, as well as its instinctive paranoia, this state of affairs could not be allowed to continue, and the government resolved to catch up with its rival in the nuclear arms race as quickly as possible. For this reason, a number of plants were hastily constructed in order to produce the required amounts of weapons-grade uranium and plutonium. The largest was at a place called Mayak, not far from Chelyabinsk in the Southern Urals.

The haste with which it was built, and the considerable gaps in the Soviet physicists’ knowledge of what was still a nascent technology whose recent wartime development was shrouded in secrecy, boded ill for the safety of the workers and the local environment. From the start, no consideration was paid to the responsible disposal of the tons of contaminated waste that would be produced. The plant’s six reactors all stood on Lake Kyzyltash and used what was apparently quite a primitive open-cycle cooling system, sucking in thousands of gallons of water from the lake daily to cool the reactors, then discharging the contaminated water straight back into the lake, which itself rapidly became heavily contaminated. Another much smaller lake, Lake Karachay, became a dumping ground for huge quantities of waste whose lethal levels of radioactivity made it too dangerous to store in the plant’s underground storage vats.


Dumping waste at Lake Karachay, the most contaminated place on Earth

Lake Karachay became the most polluted spot on Earth, surely the most baleful claim to fame that any place can have. Over the years the lake accumulated some 4.44 exabecquerels (EBq) of radioactivity across less than 1 square mile of water – for the sake of comparison, the Chernobyl disaster of 1986 released some 5-12 EBq of radioactivity over many thousands of square miles. The sediment of the lake bed is estimated to be composed almost entirely of high-level radioactive waste deposits to a depth of some 11 feet. Radioactive waste is still dumped in the vicinity, carried there in trucks whose drivers keep a Geiger counter in the cab and make sure they deposit their load and turn around as quickly as possible to minimise their exposure – as little as half an hour’s exposure here can deliver a lethal dose of radiation. In the 1960’s the lake began to dry up, the radioactive dust being picked up by the wind and spread across a wide area, irradiating possibly hundreds of thousands of people. To prevent the waste in the sediment being released, the authorities began to fill the lake in with concrete, and it would seem that this process is now almost complete. A sealed, lethal, concrete hole that used to be a lake. It would be hard to think of a more grotesque abuse of our natural environment.

The disaster of 1957 occurred because the cooling system for the underground storage tanks that had been installed in 1953 was ineffective and not well maintained. With no cooling to counteract the intense heat generated by radioactive decay, the temperature of the waste in one of the tanks rose to dangerous levels. The result was a massive chemical explosion that was estimated to have a force of 70-100 tons of TNT and was powerful enough to throw the 160-ton concrete lid of the storage tank into the air. The waste from the tank was dispersed across an area of several hundred square kilometres and continued to spread north eastwards, blown by the wind, during the next few days.

The precise number of fatalities resulting from the disaster is unknown and unknowable. Because of the secrecy surrounding the plant, none of the local population were made aware of it, and the evacuation of local towns did not even begin until a week after the explosion. Some communities were not evacuated for more than a year, if at all. Any figures originating from Soviet sources would in any case have been worthless. Because the existence of the plant was such a secret, instances of radiation sickness could not even be reported as such – doctors had to use the expression ‘special disease’. But by the time the Mayak plant’s existence was officially acknowledged, in the 1990’s, it was possible to point to extremely high levels of cancer, leukemia, and birth defects across the whole of the region.

What we do know is that the explosion dumped some 76 million cubic metres of radioactive water into the Techa River system, which provided 24 towns and villages with their major source of water. Up to 65% of the population that lived along the river may have become irradiated as a result. We know also that ultimately the plume of radiation produced a large area of permanently heavy contamination known as the East Ural Radioactive Trace (EURT), which the Soviets euphemistically relabelled the ‘East Ural Nature Reserve’ in 1968 and where many still have to live with the consequences of 1957 even now; and we know that, although western authorities were not fully aware of the event until the 1970’s, the International Atomic Energy Agency subsequently rated it a Level 6 disaster on the International Nuclear Event Scale – only the disasters at Chernobyl in 1986 and Fukushima in 2011 have been rated higher (both 7).

It is scarcely to be wondered at that the Soviet authorities were prepared to let their own people die hideous deaths rather than let considerations of humanity complicate their pursuit of military power. The humanitarian credentials of the regime had been made abundantly clear during the 1930’s and the Second World War. But lest it be thought that their western rivals were much more scrupulous, one should note that the CIA was almost certainly aware of the Kyshtym Disaster from 1959 at the latest, but was happy to keep it secret in order to avoid embarrassing the US nuclear industry, which might have found itself unduly hampered by public concerns about similar disasters at US plants.

It almost reads like The Lorax, a needless, careless, ugly poisoning of earth, air and water not in this case in order to manufacture the ever-profitable thneed but to maintain great power status, for the ends of an elite political class unable to see anything beyond their own immediate desires. Sometimes one cannot help but think of the words attributed to the Cree Indians, “only when the last tree has died and the last river been poisoned and the last fish been caught will we realise we cannot eat money”. Given our form as a species, it seems unlikely we will realise our errors before we have run out of Renaissance paintings to burn.

When Revolutions Are Glorious

It’s odd that we Britons on the whole know so little about an event that has gone down in our history as The Glorious Revolution. While even the least historically aware of us is likely to know that we had a civil war once, that King Charles got his head cut off and Oliver Cromwell pretty much ruled over us for a while before the monarchy was restored, I suspect not one in ten would be able to tell you what were the effects of the subsequent Glorious Revolution of 1688-9, or even who were the outgoing and incoming kings.


James II and William III, the substitution made by the Glorious Revolution of 1688-9

This is curious because it is arguable that the effects of the Glorious Revolution were more marked and more lasting than those of the Civil War that it followed, and in some ways shaped the formation of the modern British state. The fact that it has been dubbed ‘Glorious’ also, to me, says a great deal about our attitude towards political revolutions, and how easily the more conservative among us (with a small ‘c’ of course), who are in most circumstances the last to approve of revolutionary movements, are able to rationalise their support for a revolution as long as it is a socially conservative one that leaves basic economic and social relationships untouched.

It is odd to think that, dramatic as the Civil Wars of the 1640’s and their consequences were, they settled surprisingly little. The Stuart monarchy was restored in 1660 in the person of Charles II, son of the executed Charles I; the king continued, as his father had done, to regard the person of the monarch as the basis of sovereignty, and when Parliament did sit for an extended time it was the famously pliable ‘Cavalier Parliament’ which normally tried to avoid undue conflict with the King. The new king was keen to settle old scores, and the regicides of Charles I who were still alive in 1660 were mostly executed or forced to flee for their lives. Puritanism, which had been such a driving force behind the opposition to Charles I, saw its influence disappear overnight, and Puritans became ‘dissenters’, Protestants who had excluded themselves from the Church of England and as a result became subject to penal laws as stringent as those applying to Catholics.


The Restoration; Charles II lands at Dover, 1660

What made further regime change desirable for some after the Restoration was the fact that Charles II was never able to produce a legitimate heir, so after his death the throne was likely to – and in fact did – go to his younger brother James; and James was, what no English monarch had been since Mary more than a century earlier, a Catholic. For this reason there was always a relatively small but vocal group within the country who wanted James excluded from the succession on the grounds that a Catholic monarch was likely to threaten the Anglican establishment as it had been settled in the time of Elizabeth. This group, made up largely but by no means entirely of dissenters, came to be known as Whigs, whereas their opponents, who favoured the hereditary succession even in the person of a Catholic, became known as Tories.

Not that the Tories were any more enthusiastic than their Whig opponents about Catholicism and the monarchical absolutism it was held to represent; although the famous description of the Church of England as ‘the Tory Party at prayer’ belongs to a later time, we can see the inception of the pairing as early as the seventeenth century. It was just that the Tories were more sanguine about the magnitude of the threat; the Church of England and the institution of Parliament were, they thought, now so firmly entrenched by law and custom that even a Catholic monarch could pose no great threat, especially since he would have to swear a Coronation Oath agreeing to respect those institutions.

There was a further key fact also: James, like his brother, had no male heir; he had two daughters, Mary, who was married to the Calvinist Dutch stadtholder William of Orange, and Anne, both of whom had been raised as Protestants. A temporary Catholic monarchy under James, therefore, was seen as endurable, as it would immediately be followed by a return to Protestantism under whichever of his daughters succeeded him.

So, when Charles II died in 1685, his brother James II succeeded with a remarkable absence of opposition. He took the required oaths and promised to respect and support the Protestant establishment, and that was enough for most. There were declarations of support from all over the country, and the Anglican clergy preached from their pulpits that it was sinful to oppose a King ordained by God. When Charles II’s illegitimate Protestant son James, Duke of Monmouth, rebelled in the summer of 1685 there was very little support beyond the most committed Whigs, and he suffered ignominious defeat at the Battle of Sedgemoor (one of the contenders for the last pitched battle on English soil) and subsequent execution for treason. King James II and his Tory ministers were quick to use the rebellion as an excuse to crack down on the Whigs and dissenters, weakening opposition to the crown still further.


The execution of the Duke of Monmouth, 1685

It was not long, however, before James’ measures began to arouse concern and opposition among his subjects. Monmouth’s rebellion was used as an excuse to build a large standing army that many, both Whig and Tory, feared would be used to foist Catholic absolutism on the kingdom. James very quickly began replacing office holders with Catholic favourites, and using his dispensing power to allow Catholics to command regiments in his army without taking the oaths required by the Test Act; when the previously supportive (and overwhelmingly Tory) Parliament of 1685 expressed its concerns, he dissolved it, and never called another during his reign.

In the space of three years James succeeded in completely eroding the huge support he had inherited at his succession, and by a number of his acts seemed to be behaving in a wilfully provocative manner towards his Protestant subjects. The High Anglican traditions of the University of Oxford were outraged when James forced the fellows of Christ Church and University College to accept Catholic colleagues; the fellows of Magdalen College were ultimately bullied into electing a Catholic as President of the college, in direct violation of their collegiate oaths and of their rights, set out in the college charter, to elect a President of their own choosing. It was felt that the Anglican monopoly of education was now coming under threat of Popery.

He suffered a public relations disaster in the spring of 1688 after issuing the Declaration of Indulgence, in which he used his dispensing power to negate the effect of the Test Act, and then demanding that all of the Anglican clergy read out the Declaration from their pulpits. The vast majority of them refused to do so. When seven Anglican bishops (among them the Archbishop of Canterbury) submitted a petition most humbly requesting that the king reconsider, he had them arrested and tried for seditious libel. It was a huge error; despite a judiciary purged of perceived opponents, the bishops were very publicly acquitted, to massive popular acclaim. Lawyers working for the prosecution were forced to flee the court in disguise to escape the wrath of the mob, and crowds all over the country lit bonfires and burned the pope in effigy.


The Trial of the Seven Bishops

Ultimately James’ object was the repeal of the Test Act that prevented Catholics from holding high political office, and when he prepared to call a Parliament in 1688 packed with pliant Tories who would do his bidding the stage was set for a confrontation. Then came the single event that made a revolution all but inevitable: at the end of 1687 the queen fell pregnant, and in June 1688 gave birth to James’ Catholic son and heir, James Francis Edward (known to history as the Old Pretender). No longer could the reign of a Catholic monarch be stoically endured in the expectation of Protestant successors; now there was the prospect of a Catholic dynasty, and horrified Anglicans began to see Bourbon France, Catholic and despotic, as a road map for where their own monarch was taking them.

Hence what we now call the Glorious Revolution. A group of English nobles invited the intervention of William of Orange to save the Protestant religion. William duly crossed the Channel (he had originally meant to sail up the eastern coast of England and land in the North-East, but a ‘Protestant wind’ instead blew his fleet south-west, so that he was able to evade the English fleet and land his army successfully in the West Country); James panicked and, instead of giving battle with his larger but largely disaffected army, fled the country to take refuge in France with Louis XIV. William’s march on London became a parade as James’ remaining support evaporated.

And yet William’s succession was by no means guaranteed. Many still saw James as the lawful king. The English Parliament – itself of dubious legal status, since it required a King to call a Parliament – quickly came to the very convenient conclusion that James had abdicated the throne and ‘unkinged’ himself by his flight to France; but even so, he had a lawful heir in the little Prince James, whose rights to the throne could not be abrogated by any action of his father. It was therefore necessary also to cast doubt upon his parentage, and the story quickly became accepted that he was an impostor, a baby who had been smuggled into the bed of the barren queen in a warming pan. The legal and moral hoops through which the defenders of the Anglican establishment were prepared to jump in order to put themselves on the side of right were impressive; thus, those who had stated fairly unequivocally in 1685 that James’ authority was absolute were able to argue three years later that that absolutism transgressed natural law once certain ‘contracts’ with the subjects had been broken. A distinction began to be made between things that were malum in se, i.e. wrong or evil in themselves, as opposed to malum prohibitum, wrong only because they were prohibited, with the crucial difference being that a monarch’s dispensing power could be said to apply to the latter but never to the former.

It would be unfair to say that the events of 1688-9 were not revolutionary; as stated above, in many ways they created the British polity in which we now live, to a much greater extent than the Civil Wars and the Commonwealth of the 1640’s and 1650’s. Tim Harris, in his engaging book ‘Revolution: The Great Crisis of the British Monarchy 1685-1720’, poses a fascinating thought experiment: suppose we imagine a man, perhaps of mild Whiggish tendencies, who died in the 1630’s and was somehow resurrected in 1686-7. The situation at his death and at his resurrection would be little different: still a Stuart monarch whose sovereignty was regarded as essentially personal, still a monarch determined to rule without Parliament as far as possible. Any differences in fact serve to emphasize the increased power of the crown, which was now provocatively using its dispensing power with a confidence unthinkable in the 1630’s, and enjoying the active backing both of a large standing army and of a friendly Catholic neighbour across the Channel. It would have been hard for our resurrected man to believe that Parliament had fought a victorious and regicidal war against the king in the interim.

Now imagine a man dying in 1686 and re-awakening in the 1720’s. Gone is the personal sovereignty of the monarch; now the sovereignty is that of the king-in-parliament, and Parliament, by means of the Bill of Rights of 1689 and the Act of Settlement of 1701, had both enshrined the limits to monarchical power in law and established its power to determine where the succession lay. Gone is the standing army; gone is the prolonged absence of Parliament. By the 1720’s the balance of political power had shifted decisively in favour of Parliament, and the United Kingdom was a constitutional monarchy in a way that would have been unrecognisable to James II. Here, not in the military triumph of the New Model Army, lay the real victory of Parliament over the monarchy.

And of course, it was now the United Kingdom, and this is another hugely important development that was largely a result of the Glorious Revolution. The Scottish Parliament fell into line with its English counterpart in 1689, passing acts deposing James II and recognising William of Orange and Mary II as monarchs, but the union of the two kingdoms was still a personal and not a political one. When William and Mary died without issue and Mary’s sister Anne seemed set to do the same, bringing about the end of the Stuart dynasty, this posed questions about the future relationship between the kingdoms of England and Scotland. In 1701 the English Parliament passed the Act of Settlement which stated that if the main Stuart line came to an end the English crown should go to a granddaughter of James I, the Electress Sophia of Hanover, and her issue. This was not a popular measure with the Scots, who responded with the Act of Security of 1704 claiming their right to dispose of the Scottish crown as they saw fit. In the meantime, on the death of King William in 1702, the new Queen Anne had been prevailed upon to deliberately delay calling a Scottish Parliament so that war could be declared upon France as quickly as possible without possible objection from the Scots.

A union of the two kingdoms was deemed by an increasing number of influential people to be the best way to stabilise relations between England and Scotland. As early as the 1690’s there had been suggestions of union from both sides of the border; the likelihood of Queen Anne’s death without issue made a settlement imperative, and the Act of Union was duly passed in 1707. The United Kingdom was born in what was arguably an act of immense self-denial by the Scots. With just 45 seats in the new Parliament, which would still sit at Westminster, at a time when their population should have given them more than twice as many, it seemed unlikely that the Scots would find it easy to have their national issues discussed with much urgency; at a stroke the followers of the Old Pretender – the Jacobites, as they had come to be called – became the guardians of Scottish national identity, and so they remained for generations to come.

So, the events of 1688-9 were definitely revolutionary by any meaningful definition of the term. And yet, glorious. The British on the whole don’t much care for revolutions. Abrupt regime change has always been seen as something rather vulgar and unpleasant that dodgy foreigners in less blessed lands are more likely to indulge in. The French Revolution, even in its initial stages when characterised by a National Assembly seeking to establish a constitutional monarchy very like that of the UK, was looked upon askance by the British establishment. Revolution meant riot and mob rule, the lower orders taking it upon themselves to question an existing order that was ordained by God and entirely satisfactory to the small but powerful and articulate elite at the top. And yet in 1688 we persuaded ourselves that a revolution was glorious. And this was entirely down to the fact that, though dramatic in its political changes, the fundamental economic and social relationships between the classes of England were untouched. Popery and absolute monarchy had been defeated, and there was nothing to rock the boat under the nobility and squirearchy of England. In Ireland, indeed, the effect of the Glorious Revolution was largely to underline and extend the dominance of the Protestant minority over the much larger but now all but disenfranchised and dispossessed Catholics. It was easy to formulate reasons to approve of a revolution that benefitted all the right people. The moral and legal slipperiness of the lawyers and theorists of 1688-9 is no more than the rhetoric that is used by vested interests in any era when seeking a way to justify the status quo.

“These are not men! They are demons!”

As someone who has always been somewhat obsessed with the martial side of history (just ask anyone who knows me), one anniversary that has caught my eye today is the 185th birthday of the French Foreign Legion, founded on March 9th 1831 by King Louis Philippe of France to serve in the growing French colonial possessions in North Africa. Made up from the start of foreigners, the Legion was restricted by the royal ordinance that established it to service outside Metropolitan France, and it was a handy way to usefully dispose of all the foreign regiments of an army that was becoming less Royal and more National. So, in late 1831 the Legion was shipped off to Algiers and a legend was born.


Beau Geste. Those propped-up corpses.

Think of the French Foreign Legion now and you think of the old movies; tough, desperate soldiers in their blue coats, white kepi covers and baggy white or red pantalons, marching endlessly under the fierce glare of a desert sun or braving the fury of savage desert nomads. And this is indeed how I always regarded them.

I don’t think I ever read Wren’s classic Beau Geste, and I still couldn’t tell you much about the plot, but I never failed to watch the movies on TV. The part that always resonated was where Fort Zinderneuf and its small garrison were under attack by Tuareg nomads; each attack would leave the garrison more depleted, and the brutal sergeant responded by propping up the dead on the battlements, rifles perched on the parapet to make the Tuaregs think they were still alive.

Even as a boy this struck me as a questionable strategy; the Tuaregs would soon have twigged that these guys were not actually firing their rifles and, given their somewhat relaxed posture, it wouldn’t be long before one of them worked out that this was because they were in fact dead. Still, it struck us as pretty cool anyway; so, as soon as the movie was over we’d get out the Airfix Foreign Legion Fort, Fort Sahara – which I always regarded as one of the more handsome not to mention more functional of the Airfix HO/OO forts – and replay the Tuareg attacks, complete with propped-up corpses on the ramparts. One problem we always had was that as the soldiers all had large square bases it was difficult to make the corpses lean realistically against the battlements; sometimes they would rock back on their bases and end up upright again, and as we could never really remember which soldiers were dead and which still alive, these soldiers would effectively come back to life and continue to defend the fort as gruesome zombie legionnaires.

While French North Africa remained the home of the Legion right up until Algerian independence in the 1960’s, it fought in most if not all of the wars France waged both in Europe and in its far-flung colonies, including the Crimean War and the Second Italian War of Independence of 1859 (where France was a less than disinterested patron of the nascent Italian state in its struggle with the Habsburgs).

A campaign in which the Legion fought with particular distinction was Napoleon III’s ill-advised intervention in Mexico in the 1860’s, when the French, with some initial support from their European neighbours, sought initially to compel the bankrupt Mexican government to pay its debts to its European creditors, and latterly to create a Mexican Empire under the ill-fated Habsburg prince Maximilian. Maximilian, despite some support from conservative Mexicans, was kept in power against the resistance of the Mexican Republicans largely by French bayonets, which included those of several redoubtable battalions of the Legion.

The most legendary battle in the annals of the Legion occurred in Mexico, at the Hacienda Camaron, on April 30th 1863. Captain Jean Danjou, having been informed of a French supply convoy headed towards the besieged town of Puebla, decided to accompany it with a company of legionnaires. While still on the way to join the convoy, this tiny force, 3 officers and 62 men, was intercepted by no less than 3,000 Mexican Republican troops and forced to take up defensive positions in an adobe hacienda by the side of the road. When summoned to surrender, Danjou refused, and then, sharing some wine with his men, urged them to take an oath to fight to the death rather than surrender. This was no mere bravado. Danjou hoped by his action to keep the Republican force from locating and capturing the supply convoy.

In the ensuing battle Danjou’s men held off the vastly superior Republican force for several hours. When all their ammunition was exhausted, the five men who were left made a bayonet charge on the besieging army worthy of the last scene of Butch Cassidy. Impressed by their courage, the Mexicans held their fire, and three of the men survived, along with 16 who had been captured earlier in the retreat to the hacienda. Forty-three of the 65 died, but they inflicted some 500 casualties on the Mexicans. “These are not men! They are demons!” the Mexican commander is said to have exclaimed. Camaron is a passage of arms famous in French military lore, an episode to rank alongside the last stand of the Old Guard at Waterloo. Danjou himself died early in the action, but his prosthetic left hand was returned to the Legion and became something of a sacred artefact, being prominently housed in the Legion’s Museum of Memory. Today the Legion celebrates April 30th as Camaron Day, and Danjou’s hand is paraded in its protective box; the legionnaire deemed worthy of bearing this precious relic on the day is the recipient of an honour without parallel.


The final bayonet charge at Camaron

And yet as always the romance of the legend is at odds with the brutal reality of life in the Legion. Less stirring than the tale of the fight at Camaron, if equally remarkable in its own way, is the Legion’s involvement in the Second French expedition to Madagascar in the 1890’s, where the French expeditionary force suffered just 25 battle casualties but lost some 5,000 men to malaria, dysentery, typhoid and other tropical diseases – a ratio of 200 men lost to disease for every 1 killed in battle. And when not on campaign the legionnaire, confined to the austere loneliness of the barracks and a life of unremitting tedium, was subject to a suicide rate that was always a matter of concern.

The culture of the Legion, the emphasis upon esprit de corps and a sense of otherness, of loyalty only to itself, combined with the harshness of the training, has often produced soldiers capable not only of astonishing toughness and courage but of appalling ruthlessness and cruelty. European powers in their colonial wars were rarely guilty of restraint, but in the Foreign Legion the French had a weapon that was powerful but indiscriminate. Its members were (and are) expected to be obedient to their mission unto death, unthinking and uncomplaining, and this unflinching obedience to orders has involved them in atrocities, such as those carried out in Algeria during the War of Independence there, which have tarnished the name of the French military.

I am minded to quote the historian Max Hastings:

“The world contains more misfits, sadists, masochists, and people who enjoy fighting than we sometimes like to suppose. How else can one explain the fact that the French Foreign Legion is heavily overrecruited?” (The Hard Truth About The Foreign Legion, NY Review of Books, October 14 2010)

Hastings goes on, rather unexpectedly, to make a point that is seldom if ever heard these days, when there is an almost religious reverence for the military, when service of any kind seems to be enough in the public mind to transform a soldier overnight into a hero whose actions are beyond criticism:

“In some respects, the Legion has less claim to uniqueness than is sometimes supposed. Most men who enlist in their own national armies are no more and no less mercenaries than legionnaires. Few join to serve the flag or their nation’s honour. For the most part, they do so because they cannot find any better way to make a living, and find the rigors of service life less onerous than coping with the daily choices and decisions demanded of a civilian. The wars of all nations with volunteer armies are fought mainly by their underclass. This helps to explain why, on shedding their uniforms, so many veterans lapse back into poverty, psychological problems, or even criminality.”

That there is a certain type that thrives in an organisation like the Legion should not surprise us, but it is not likely to be a type we should admire, let alone aspire to emulate.

Perhaps it is best to leave the last word to another incurable romantic:


On the end of curing.

Standing, on a damp, grey winter morning, aboard a crowded commuter train surrounded by passengers who are coughing, sneezing or otherwise succeeding in spreading their microbe-rich sputum across as wide an area as possible, I am struck by a somewhat unnerving article I read on the BBC web site a couple of weeks ago. The article reported that we may shortly be living in a world where antibiotics have lost their power; where, due to a resistance gene called MCR-1, bacteria have become able to shrug off even the most powerful of antibiotic drugs, where there are infections that are quite simply untreatable. The academic quoted in the article, Professor Timothy Walsh, expressed the issue in as matter-of-fact a way as academics are wont to do:

“All the key players are now in place to make the post-antibiotic world a reality. If MCR-1 becomes global, which is a case of when not if, and the gene aligns itself with other antibiotic resistance genes, which is inevitable, then we will have very likely reached the start of the post-antibiotic era. At that point if a patient is seriously ill, say with E. coli, then there is virtually nothing you can do.”

This rather undramatic appraisal was in stark contrast to a less restrained comment made earlier in the article by the writer: “Bacteria becoming completely resistant to treatment – also known as the antibiotic apocalypse – could plunge medicine back into the dark ages.”



Plague in sixth century Constantinople

The reference to the Dark Ages resonates. We can hardly help but shudder when we read such words and reflect on the suffering that disease has wrought upon our species in the past. Or at least, as a lifelong hypochondriac, I cannot help but do so.

As a culture we have something of a fascination with apocalypse, and the microbial apocalypse holds an honoured status within that genre; from the publication of the novel I Am Legend in 1954 via a host of lurid paperbacks in the 1970’s with graphic depictions of suffering and death (as well as the ubiquitous gratuitous sex scenes) and bearing titles like Rabid and Day Of The Mad Dogs, to more modern works that rework the familiar zombie trope with an epidemiological underpinning, the potentialities of disease horrify and hence enthrall us. It is worth noting that this came about at a time when the existential threat we faced from disease was lower than it had ever been before; with the discovery of antibiotics a whole host of previously life-threatening conditions had been tamed, the polio vaccine had been discovered, smallpox was well on the way to being eradicated and average human life expectancy was lengthening by the year. Apocalyptic disease became an exciting diversion to those who had increasingly less to fear from it. I would imagine, though I cannot prove, that the prospect held less fascination for those outside the privileged confines of the developed world, where premature death from a treatable infectious disease remained, and remains, a much more likely event.

Europe has of course on more than one occasion seen pandemics that easily justify the adjective ‘apocalyptic’. The most dramatic, probably the most iconic and possibly the most far-reaching in terms of its consequences was the plague, which spread outwards from Asia in three cataclysmic spasms during the millennium and a half between CE 540 and the 1950’s, each outbreak trailing behind it a number of aftershocks, subsequent smaller and more localised eruptions that sufficed to keep the population who had survived the initial outbreak in a permanent state of miserable apprehension; and in some cases to hold local population growth in abeyance for generations.

The first plague pandemic reached Europe via the East Roman Empire in the 540’s and has become known as the Plague of Justinian after the ruler of the time. There used to be a great deal of debate about what pathogen caused the Plague of Justinian, but the accounts that the contemporary writer Procopius has left are very suggestive of a strain of Yersinia pestis, a virulent bacterium, a form of which was also behind the better known Black Death pandemic eight centuries later and is still causing trouble in many parts of the world to this day. Yersinia pestis was finally identified beyond all doubt as (at least one) culprit early in the twenty-first century after DNA tests on the bodies of sixth-century plague victims in Aschheim in Bavaria.


Yersinia pestis up close

Procopius says of the symptoms:

“a bubonic swelling developed there in the groin of the body, which is below the abdomen, but also in the armpit, and also behind the ear and at different places on the thighs… Up to this point, then, everything occurred in the same way all who had taken the disease. But from then on very distinct differences developed for there ensued for some a deep coma, with others violent delirium…For those who were under the spell of the coma forgot all who were familiar to them, and seemed to lie, sleeping constantly…And in those cases where neither coma nor delirium came on, the bubonic swelling became worse and the sufferer, no longer able to endure the pain, died…In some cases death came immediately, in others, after many days; and with some the body broke out with black pustules about as large as a lentil and these did not survive even one day, but all succumbed immediately. Vomiting of blood ensued in many, without visible cause, and immediately brought death…”

The disease took a lethal grip on the imperial capital Constantinople, and, again according to Procopius, at its peak was killing 10,000 people a day there. While this figure seems rather high, there are accounts of huge plague pits being dug that proved inadequate to contain the dead, and John of Ephesus tells us that corpses were placed in the towers of the city walls and left inside houses to rot.

The plague ravaged the East Roman Empire, killing an estimated third to a half of its total population, before spreading westwards into the successor states of the old Western Roman Empire. Early Medieval Western Europe, less urbanised and less commercially developed at this time than the Eastern Empire, probably offered fewer vectors of transmission and hence suffered rather less; in fact there is little evidence that the plague penetrated much beyond the trading cities of the Mediterranean coasts. That said, the ‘Yellow Plague of Rhos’ pops up in the remotenesses of Wales, where it is said that it killed off King Maelgwn of Gwynedd.

For the Eastern Empire the effects of the plague are often thought to have been disastrous and to have signalled the start of a long period of decline that within a century or two reduced it to the status of just one more among the successor states squatting on the ruins of the empire of Augustus and Constantine. Before it arrived Justinian ruled an empire that stretched from the Atlantic to the Euphrates and was close to victory in his campaign to recover Italy from the Ostrogoths. The plague, it is said, derailed the Gothic War, which ran on for another decade and produced an Italian peninsula ravaged and exhausted and ill-prepared to resist the next invasion, that of the Lombards, who came over the Alps just three years after Justinian’s death. Worse still, the loss of such a high proportion of the population, and the attendant economic crisis, weakened the empire to the extent that it could not resist the near-fatal simultaneous encroachments of Persians and Avars early in the seventh century, followed almost immediately of course by that of the Muslim Arabs, which permanently stripped the empire of all of its Levantine and African provinces.

One notable fact about contemporary accounts of Justinian’s Plague is that we find recorded therein the first division of the disease into the three distinct forms so commonly observed in the more numerous accounts of the later Black Death visitation. The form the disease takes depends upon which of the body’s systems Y. pestis infects.

The first and most common form is an infection of the lymphatic system, and this produces the condition known as Bubonic Plague, with its iconic and terrifying symptoms of painful buboes (swollen and infected lymph glands) appearing in groin and armpits and on the neck. Yet despite the horror of its symptoms, Bubonic Plague was and is the least dangerous of the three forms. Medieval writers may have depicted the appearance of the buboes as a sure premonition of death, but the fact is that, even untreated, a healthy adult has a fighting chance of surviving the condition. It is, alas, a fact of medieval life that relatively few of the victims would have been healthy adults with wholly uncompromised immune systems, especially when the Black Death visitation arrived in Europe after a generation of near-constant famine. Death from Bubonic Plague was strikingly horrible, and the victim would often linger for days after the buboes appeared.

The second form is an infection of the respiratory system and became known as Pneumonic Plague. Whereas Bubonic Plague is carried from host to host by the bites of fleas, Pneumonic Plague is transmitted by inhaling the sputum of an infected person via their coughs and sneezes. Even in modern conditions Pneumonic Plague carries a 90-95% mortality rate, and in medieval times recovery must have been virtually unknown. It works much more quickly than its Bubonic cousin, causing respiratory failure long before any buboes have had the chance to form (as Boccaccio commented, “How many valiant men, how many fair ladies, broke fast with their kinfolk and the same night supped with their ancestors in the next world!”).

The third and final form is an infection of the blood and is called Septicaemic Plague. This is the rarest form of the plague and as deadly as the Pneumonic variety, with a mortality rate if untreated of virtually 100%. Y. pestis in the blood prevents it from clotting, leading to bleeding into the skin and internal organs and giving rise to the most visible symptom, spreading red or black rashes across the skin. Other alarming symptoms include vomiting of blood and necrosis – tissue death – leading to gangrene at the body’s extremities.


Necrosis in the hands of a plague patient

Justinian’s Plague hung around like a bad smell for a couple of centuries, Y. pestis becoming endemic across much of Europe and causing occasional human epidemics until fading away sometime in the eighth century. It then seems to have disappeared from the collective consciousness until it returned with a vengeance in the mid-fourteenth century as the Great Plague, or the Black Death as it later became known – the phrase was never used at the time. Contemporary descriptions of symptoms and the three recognisable forms of the disease leave us in no doubt as to its identity, although there are just enough differences from Procopius’ accounts to establish that Y. pestis had evolved and changed since its last visit, morphing into something if anything even more virulent and deadly.

The fourteenth century outbreak was on a cataclysmic scale, killing, at conservative estimates, a third or more of Europe’s entire population over the course of 3-4 years and reducing whole areas to wilderness. Anything that so quickly removes such a large proportion of the population cannot fail to have huge consequences, and the social, demographic, economic, political and cultural reverberations of the Black Death were far too sweeping and varied to even be touched upon here. But it is worth mentioning that, just like the Plague of Justinian, the Black Death didn’t just do its dreadful work and then disappear. Once more Y. pestis became endemic; the unfortunate survivors of that first outbreak of 1347-1350 had scarcely congratulated themselves upon their survival and started to rebuild their lives when a second epidemic, almost as severe as the first, descended upon them in 1360-61. Thereafter there were regular outbreaks, the effect of which was to put a long-term damper on population increase and add another hideous element to the ceaseless trials of the hapless medieval peasant. The plague again lingered on in Europe for centuries, one of its parting shots being the Great Plague of London in 1665 that killed some 100,000 people, or about a fifth of the population of the city at the time.

While most are familiar with the Black Death due to its prominence in cultural depictions of the medieval period, fewer are aware that there was a third pandemic of plague that emerged in China in 1894 and spread from there to neighbouring countries. The victims of the third pandemic, which claimed some 15,000,000 lives before it was declared over in 1959, were mainly in South-East Asia, but not exclusively so. In 1900 plague came to Australia where the first major outbreak occurred in Sydney, its epicentre at the Darling Harbour wharves, spreading to the rest of the city and causing 100 deaths. There were in fact no less than 12 major outbreaks of plague in Australia from 1900 to 1925 with 1,371 cases and 535 deaths. At about the same time trading ships brought the disease to Glasgow, where 16 people died, and to San Francisco, after which it became endemic among the wild animal population in the US. It was also during this third pandemic that the causative organism was discovered by the French bacteriologist Alexandre Yersin, who has consequently been awarded the dubious honour of giving his name to a bacterium that has killed hundreds of millions of humans at least.


Rat-catchers in plague-stricken Sydney

Given the levels of medical sophistication with which the twenty-first century westerner has grown up, it is hard to imagine the feeling of terrified helplessness with which our unfortunate ancestors viewed the plague. It was so capricious and merciless in its effects that it is small wonder that they could explain it in no other terms than the punishment of a God wrathful with the sins of His own creation. Historians deal with cause and effect, with figures and demographics, but it must be remembered that behind every number was a suffering human being, a traumatised family. I am reminded of the account of Agnolo di Tura of the Black Death in Siena, particularly the terse but horrifying last sentence:

“The mortality in Siena began in May. It was a cruel and horrible thing. . . . It seemed that almost everyone became stupefied seeing the pain. It is impossible for the human tongue to recount the awful truth. Indeed, one who did not see such horribleness can be called blessed. The victims died almost immediately. They would swell beneath the armpits and in the groin, and fall over while talking. Father abandoned child, wife husband, one brother another; for this illness seemed to strike through breath and sight. And so they died. None could be found to bury the dead for money or friendship. Members of a household brought their dead to a ditch as best they could, without priest, without divine offices. In many places in Siena great pits were dug and piled deep with the multitude of dead. And they died by the hundreds, both day and night, and all were thrown in those ditches and covered with earth. And as soon as those ditches were filled, more were dug. I, Agnolo di Tura . . . buried my five children with my own hands. . . . And so many died that all believed it was the end of the world.”

Y. pestis is still with us, with 1,000-2,000 cases of plague in humans reported to the WHO every year and an overall mortality rate of about 8-10%. Apart from a much higher standard of public health and hygiene, the main protection we have against plague that was not available during the three pandemics is of course the range of antibiotics, to which most cases of plague respond very well if administered in time. It’s this that kind of worries me when I read about the emergence of antibiotic-resistance genes in bacteria. To have the scenes described by Procopius and Boccaccio re-enacted on the streets of London would be a scenario worthy of a lurid 1970’s apocalypse novel. Although it would presumably mean plenty of vacant seats on the trains, so I guess to everything there is an upside.

Useful Saints.

For the whole of the Middle Ages, and later still in countries that did not embrace the Reformation, the annual calendar was arranged around a combination of movable feasts (such as Easter), immovable feasts (such as Christmas) and a host of feast days associated with one or more of a veritable army of major and minor saints.


Saints Perpetua and Felicity; demoted in favour of Thomas Aquinas

These feast days were from the first a way of commemorating the martyrdom of an individual and were generally observed on the date of their death where known, or on a chosen date where not. Eventually sainthood was extended to those who had died natural deaths but lived lives that indicated a dedication to Christ – these were called confessors and by tradition the first confessor saint was the fourth century Bishop Martin of Tours.

But the Early Church was nothing if not generous with its canonisations, and as the number of saints, be they martyrs or merely confessors, proliferated during the Late Roman and Early Medieval periods, it wasn’t long before every day of the year was the feast day for at least one saint.

This led to some rearrangements that seem amusing now, as saints’ days were moved around to accommodate others, and some saints were even demoted, their feast days disappearing from the liturgical calendar altogether; how that impacted on their status in Heaven can only be guessed at. As an example, the third century African martyrs Saints Perpetua and Felicity originally had their feast day on March 7th, but this was later appropriated by the much bigger hitter St. Thomas Aquinas. Perpetua and Felicity had to make do with a much less prestigious commemoration until 1908 when Pope Pius X moved their feast day to March 6th. In 1969 Aquinas was moved to January and the two African martyrs were returned to March 7th; the differing traditions that had developed in the meantime meant that both the 6th and 7th of March could be seen as their feast days depending on where you found yourself worshipping.

The idea of sainthood has always bemused me: that the act of a group of men can somehow elevate an individual to an exalted relationship with the supposedly unknowable God. But then there is much to bemuse about the various traditions that have developed around religions.

The bemusement only increases when one examines the sort of people who did become saints. What put me in mind of this was the discovery that November 14th is the feast day of Saint Justinian, who is none other than the sixth century Roman Emperor Justinian I (CE 527-565).

Now Justinian is certainly a major historical figure of Late Antiquity. Eastern Emperor at a time when the Empire was in a transitional state between Classical Rome and Medieval Byzantium (he is believed by some scholars to be the last emperor whose native tongue was Latin as opposed to Greek), he was driven by the desire to achieve the renovatio imperii or Restoration of the Empire by recovering the western territories that had been lost to the various ‘barbarian’ peoples during the previous century. To this end his general Belisarius swiftly conquered the Vandal Kingdom of North Africa, and a much longer and harder, if ultimately successful, struggle was fought against the Ostrogothic rulers of Italy (a struggle that devastated the hitherto largely prosperous Italian peninsula and reduced the City of Rome to a pathetic shadow of its former self, a city of starving refugees sustained mainly by Papal largesse and with a population a fraction of what it had been a few centuries previously). Ultimately the southern part of Visigothic Spain was also brought back under the Imperial remit, and Justinian’s Empire was only missing the old western provinces of Northern Spain, Gaul and Britain.

Yet, though Justinian conquered large swathes of territory and restored the prestige of the name of Rome, a saintly man he wasn’t. This should not surprise us. To be a successful ruler in the Middle Ages involved a degree of ruthlessness and callousness wholly incompatible with any popular idea of sainthood. We may be sceptical about the claims of the contemporary historian Procopius, whose Secret History portrays an emperor both vindictive and incompetent, and in fact suggests that he might be a demon: “And some of those who have been with Justinian at the palace late at night, men who were pure of spirit, have thought they saw a strange demoniac form taking his place. One man said that the Emperor suddenly rose from his throne and walked about, and indeed he was never wont to remain sitting for long, and immediately Justinian’s head vanished, while the rest of his body seemed to ebb and flow; whereat the beholder stood aghast and fearful, wondering if his eyes were deceiving him. But presently he perceived the vanished head filling out and joining the body again as strangely as it had left it.” But we can judge the man by his actions, and Justinian was not a man to worry about scruples when threatened. An early threat to his reign came in CE 532 when the Blue and Green factions of the Circus (the fiercely partisan supporters of the Blue and Green chariot racing teams respectively) combined in the Nika Riots. Justinian was initially minded to flee, but encouraged by his wife, the Empress Theodora (also a saint, and portrayed as a fiendish whore by Procopius), he instead unleashed his troops in a reaction that eventually saw the massacre of some 30,000 rioters and the destruction of the original Church of the Holy Wisdom, Hagia Sophia.

A later tradition draws upon the emperor’s reputation for vindictiveness and tells us that Belisarius, the conqueror of the Vandals, after his fall from favour was blinded by Justinian and forced to live out his days as a beggar on the streets of Rome. This story is generally believed to be apocryphal but has spawned a number of artistic works; as I write I have a print of Jacques-Louis David’s Belisarius Begging For Alms hanging in the downstairs hall.

Despite being, to put it mildly, a bloodthirsty despot, Justinian had the advantage of being a religiously orthodox one who defended the Nicene Creed against the numerous heretical groups who held opposing views of the nature of Christ. And that made him a saint.

There is also the case of a later Byzantine ruler, the Empress Irene (CE 797-802), who, if perhaps not responsible for deaths on the same scale as Justinian, was similarly a deeply unattractive individual. She ultimately had her son, the Emperor Constantine VI, blinded so brutally that he soon died of his wounds, so that she could rule alone. Historians tell us that there was an eclipse of the sun and a darkness lasting 17 days as God expressed his horror at this act of filicide. And yet this same Irene resolved (for a time) the long-lasting Iconoclast controversy that had been raging in the Empire regarding the worship of images. A succession of emperors, perhaps influenced by their Muslim neighbours to the east, had condemned the worship of religious icons as idolatry and sought to remove them from display or destroy them, a course of action that brought them into conflict with the Western Church and the Papacy. Irene was an enemy of the Iconoclasts and restored the icons to their place of reverence, and hence was regarded with more sympathy by Rome. Unsurprisingly she became a saint; presumably God swiftly came to regret his earlier expression of horror at Constantine’s brutal murder, no hard feelings.


The Empress Irene; murderous mother, and saint.

The point in both cases is that sainthood was bestowed not for any personal qualities the individual possessed, but for their usefulness to the Church, and this is why we find the canonisation of so many monarchs otherwise unlikely candidates for sainthood. The establishment of Christianity as the official (and eventually the only permitted) religion of the Roman Empire led to an early association of the Church with the secular ruler that only grew stronger as the Middle Ages progressed; as the guarantor of orthodoxy and the protector of the church, the emperors came to hold a position of sacred as well as secular importance, a position that was readily transferred to the various kings that replaced them. This sacred dimension to kingship can be seen in, for example, the anointing of successive French Kings with holy oil during the Middle Ages and beyond.

We see canonisation as a reward for service to the Church most clearly in the case of the Popes themselves. From the time of St Peter, traditionally regarded as the first Pope, for the first half-millennium of the Papacy all pontiffs except one are regarded as saints by the Catholic Church (the unfortunate exception is Liberius, Pope from 352 to 366, who at least has the consolation of being regarded as a saint by the Eastern Orthodox Church). A succession of 48 men, by no means all of whom would be regarded as saints today. They include, for example, Pope Damasus I (successor to Liberius; 366-384), who overcame a rival Pope elected on the same day by leading a gang of hired thugs in a lengthy massacre of his opponent’s supporters on the sacred ground of the Julian Basilica. The violence was so extreme that the Western Roman Emperor Gratian had to call on the Prefects of the City to restore order with their troops. Edward Gibbon tells us of his subsequent reign: “The enemies of Damasus styled him Auriscalpius Matronarum, the ladies’ ear-scratcher.” Yet Damasus presided over the period when pagan worship was finally banned in Rome and the Altar of Victory removed from the Senate. Again, this sort of record trumps any personal vices the recipient of sainthood might have had.

There are a number of reasons why a particular person may have been canonised and perhaps we will touch upon some of them in future posts. But secular rulers whose acts have in some way benefitted the church have from very early times been accorded saintly status, and for those who believe in such things the well-documented moral lapses of these individuals must bring the process of canonisation itself into question.

Lions and Donkeys? What to make of the Great War.

Most years my engagement with the rituals surrounding Poppy Day is an unconscious one. I’ll buy my poppy in late October; normally I’ll either lose it or discover it mangled in my pocket and have to buy another. I will try to remember to wear it at all times, then when Remembrance Sunday arrives I will often post a small tribute on Facebook, usually a poem, and watch the parade at the Cenotaph in a suitably reflective mood.


Remembrance Day Parade at the Cenotaph, 1920

Yet while this ritual is an unconscious and more or less mechanical one, it is nonetheless important to me. I have had a lifelong interest in all things military, and my voluminous reading of military history has left me under no illusion about what an appalling thing war is; it seems only right that we take time out to acknowledge the sufferings of those unfortunate enough to have had to experience it. There is a personal connection also. My own father was in the Merchant Navy during the Second World War and, among other things, served on the Arctic Convoys, where his duties included launching aircraft from the decks of escort carriers that had been converted from merchant ships. As a boy I would often chat with my father about the war, normally while watching on the TV one of those old war films that they never make any more; he was always happy to relate the many amusing things he had encountered while travelling the world, though like so many of his generation he would never, at least when sober, discuss the awful things he had witnessed on those convoys.

I became slightly more engaged this year when I was made aware of the amount of ‘poppy-shaming’ that was going on. It became apparent that shameless politicians and jingoistic tabloids were ‘outing’ public figures who for whatever reason failed to wear a poppy in the run-up to Remembrance Sunday, many weeks ahead in some cases, and making the wearing of the poppy some kind of test of patriotism. This descended into black farce when somebody at Conservative Party HQ allegedly photoshopped a poppy onto a photograph of David Cameron lest he be seen as disrespectful to our war dead. This self-righteous bullying, this misuse of the idea of remembrance, this desire to make political capital from the dead or to stake out a place on the moral high ground over their bodies, I find deeply distasteful if depressingly predictable, and for this reason, having bought my poppy, I will this year leave it at home unworn. Proud of what my father and others like him did in service to their fellow men, I feel under no obligation to display a symbol in order to demonstrate some childish patriotism or seek the comfort of conformity to some tribal demand.

In a wider context, it seems these days that an annual culture war is being fought over the memory of the Great War. For several generations after 1918 there was essentially one accepted view of the war, the ‘Lions led by Donkeys’ version whereby huge numbers of brave soldiers were sacrificed needlessly and senselessly by bungling ‘chateau generals’ ensconced far behind the front line. The generals were portrayed as not only incompetent but also, as the products of the British aristocracy, indifferent to the suffering of the mainly working class Tommies in the trenches. This was a view already becoming popular among the soldier poets who reflected, sometimes humorously and sometimes seriously, upon those who gave the orders. To cite two of Siegfried Sassoon’s poems, firstly ‘The General’:

“Good-morning, good-morning!” the General said
When we met him last week on our way to the line.
Now the soldiers he smiled at are most of ’em dead,
And we’re cursing his staff for incompetent swine.
“He’s a cheery old card,” grunted Harry to Jack
As they slogged up to Arras with rifle and pack.
But he did for them both by his plan of attack.

and secondly, ‘Base Details’:

If I were fierce, and bald, and short of breath
I’d live with scarlet Majors at the Base,
And speed glum heroes up the line to death.
You’d see me with my puffy petulant face,
Guzzling and gulping in the best hotel,
Reading the Roll of Honour. “Poor young chap,”
I’d say — “I used to know his father well;
Yes, we’ve lost heavily in this last scrap.”
And when the war is done and youth stone dead,
I’d toddle safely home and die — in bed.

Oh! What a Lovely War

Scene from Oh! What a Lovely War

The ‘someone has blundered’ viewpoint became predominant in the popular culture of the generations that followed, exercising to an extent a moderating influence over those who led the British armies of the Second World War and winning further currency later, in the 1960’s through the musical ‘Oh! What a Lovely War’ and in the 1980’s through the ‘Blackadder Goes Forth’ TV series. Growing up in the 1960’s and 1970’s, this was the view of the Great War that I was taught at school, with the result that it never occurred to me that any other view was even possible.


Unburied soldier on the Western Front. The picture used on the cover of the 1967 printing of AJP Taylor’s Illustrated History Of The First World War

But there was always an opposing view that had its adherents among professional historians even if it was largely unrepresented in popular culture. In this version it was not so much the case that the generals were callous and incompetent, rather that they were faced with an unprecedented situation in which heavy casualties were all but inevitable; and that in fact lessons (albeit hard ones) were learned during the course of the war which enabled the Western Allies finally to defeat Germany in 1918. The portrayal of individual campaigns was challenged. The popular view of the Somme, for example, is of a wasteful failure which, despite the much-quoted 60,000 British casualties on the first day alone, the generals insisted on continuing over several nightmarish months of massive losses for negligible territorial gain. The revisionists regard it rather differently: as a battle that inflicted serious damage upon the German Army from which it never recovered, and as a grim necessity to prevent the collapse of the French positions under heavy German pressure at Verdun. Since the French positions did not collapse, the Somme can from this perspective be seen as a success, if a hideously costly one. This view seems to be ever more common, with a number of renowned historians now falling into what is being termed the ‘revisionist’ camp.

In history one must be careful to avoid lazy assumptions, and it is inevitable and proper that the accepted versions of history of one generation be challenged by the next. The debate is to be welcomed, although it is undeniable that in both camps there are those motivated by politics as much as historical veracity – the comments of Michael Gove last year on the occasion of the centenary were particularly crass (the idea for example that the war was a justifiable one fought to preserve democracy against tyranny seeming frankly bizarre when one considers that one of Britain’s allies was Czarist Russia).

The revisionist camp make a number of good points. First of all, they are correct that whatever happened after the major powers blundered into war in 1914, heavy losses were extremely likely if not inevitable. The popular view that it would be over before the leaves fell was never likely in the face of the millions being mobilised and the advanced state of contemporary military technology. As Max Hastings makes clear in his book ‘Catastrophe: Europe Goes to War 1914’ (one of many more or less ‘revisionist’ works that came out at the time of the centenary) the continental powers were resigned to a war involving heavy losses; they factored this into their strategic decisions, and the huge conscript armies that they fielded could absorb the expected casualty rate. In this scenario Britain’s tiny professional army was always likely also to lose heavily and to need similarly large conscript forces to make good the losses. That said, Germany arguably came within a whisker of a decisive victory in the west in 1914, and if some strategic decisions had been made differently, and the Battle of the Marne had as a result gone the other way, who can say what might have ensued? But certainly after the front solidified across Belgium and North-Eastern France that autumn, and millions of men were left dug into trenches less than a mile from one another, with huge concentrations of artillery at their backs, it is hard to envisage how any continuation of the fighting could not have resulted in very heavy loss of life.

It is also true that at some level lessons were learned during the course of the fighting, and the Allied ‘Hundred Days’ offensives of 1918 that brought the war to a conclusion have a different feel to those of 1916.

While I acknowledge all this, to my mind it does not quite let the British generals off the hook.

I think it’s fair to say that at few moments in recent history have senior British commanders shown much in the way of imagination or creativity. Though our nation did produce Marlborough, Wolfe and Wellington, it is difficult to think of a British commander since 1815 whose strategic or tactical decisions display any touch of genius. As Blackadder sagely observed, the commanders of Queen Victoria tended to win their battles mainly because they were equipped with rifles and artillery while their opponents for the most part had spears, swords or assegais, and even then there were notable debacles like Isandlwana in 1879. When they came up against wily opponents armed with firearms, the Boers, the uninspiring British generals who commanded in South Africa were given an embarrassing bloody nose. The situation that developed in 1914, the trench warfare, was indeed unprecedented, but there were still general principles that held true. In particular, you really didn’t have to be a genius to realise that massed frontal attacks against carefully prepared defensive positions were most likely to result in heavy losses for little gain. This was all too evident from American Civil War battles like Cold Harbor and Franklin, or Boer War battles like Modder River and Magersfontein, all situations where the defenders did not even have machine guns and were defending positions that they had only had days or weeks to fortify, not years.

The armies of the Great War, of course, had the advantage of fearsome arrays of very large artillery pieces, and the British commanders’ hope was always that a massive preliminary bombardment would break up the defences and make it possible for the attacking infantry to advance unopposed. But here again elementary mistakes were made; the reconnaissance after the bombardment was necessarily a brief affair, as the infantry had to go in quickly before the enemy could recover and resume their positions, and in many cases it seems to have been perfunctory or non-existent. The assumption always seemed to be that the bombardment would do the trick, and it was left to the attacking infantry to make the grim realisation that well prepared positions can take a surprising amount of shelling without collapsing.

Finally, and most unforgivably, there was the repetition of the same grim mistake. Einstein famously defined insanity as doing the same thing over and over again and expecting different results, and by this measure the Allied high commands were certainly insane. What had failed so monstrously to produce gains on the Somme was unlikely to garner greater rewards the following year at Passchendaele, which makes the latter offensive indeed seem like an extraordinarily profligate waste of human life. The only possible context in which there is any prospect of justification is that of an attritional strategy; only if you think the enemy is suffering at least as heavily as you are and is less easily able to replace his losses is there any point to this. A good friend and fellow historian/gamer was at the Imperial War Museum earlier this week and saw a memo written by General Sir Douglas Haig shortly before the Somme Offensive, in which he apparently downplayed the prospect of decisive victory and suggested instead that the attack should be seen as a way of seriously weakening the Germans in preparation for a decisive stroke the following year. This indeed is now seen as one of the claims to ‘victory’ at the Somme – while few gains were made, the damage inflicted on the German Army was grievous and made it less able to stand the strain later in the war. There may well be truth in this, but I really can’t regard the willingness to accept heavy casualties in the hope that your enemy is hurting even worse as a mark of a skilful general.

That leaves the claim that the generals learned from their mistakes and fought with more wisdom as the war neared its end. It is clear that infantry tactics evolved considerably between 1916 and 1918, with more flexible techniques involving infiltration by small groups, creeping barrages and suppressing fire instead of the indiscriminate pummelling of the Somme, and the deployment of new supporting weapons such as tanks and aircraft. It is questionable how much of this evolution was the product of decisions at GHQ and how much was the spontaneous response at lower levels of ordinary soldiers seeing their comrades killed in such terrifying numbers around them. As the military writer Wilhelm Balck observed, ‘Bullets quickly write new tactics’. While improved tactics made the Allied offensives of 1918 less horrendously costly than earlier ones, it seems to me that the real reason for the German defeat in 1918 was the final exhaustion of the German Army, having shot its bolt in its final great offensive, the Kaiserschlacht, compounded by the growing material superiority of the Allies as American troops and materiel began to appear on the battlefield. That, and the economic exhaustion of Germany and the subsequent political turmoil that ultimately produced the revolution of 1918-19.

That the high command lacked the imagination to adapt to the new conditions of industrialised warfare does not necessarily mean that they were indifferent to the plight of the ordinary soldiers; or at least, no more indifferent than senior commanders have been at any time in history. We have seen that Douglas Haig was perhaps inclined to think in terms of the brutal calculation of attrition, something that has earned him a good deal of criticism both at the time and since, but that may charitably be seen as realism given the situation. There are times when a commander has to make the call as to whether the losses he is likely to incur are worth taking for the potential gains, and this is why being a decent human being is not at all the same as being a good military commander. The most acclaimed captains of history have all had to show this callousness on occasion, and it was also a factor in the calculations of the commanders who in the Second World War were consciously trying to avoid the carnage they had seen a generation before in Flanders and Picardy. Add to that of course the fact that while most ordinary Tommies were working class, most of their officers were Public School boys of the same class as the hapless generals at GHQ. Officer casualties were horrendous on the Western Front; middle and upper class families were at least as likely as working class ones to suffer the loss of a son, and for this reason it is hard to charge the high command with indifference on the basis of class.

It is of course easier to criticise than to suggest, and as stated previously the situation after 1914 was unprecedented and almost certain to result in large loss of life whatever strategy was adopted. So what should the generals have done? Any answer is going to benefit from hindsight, but I always thought that given the unlikelihood of a breakthrough on the Western Front the Allies would have been wise to pursue a more Churchillian strategy of concentrating on Germany’s weaker allies, the ‘soft underbelly’ strategy that found expression in the Gallipoli Campaign. While that was of course another bloody failure, my understanding is that it might not have been, had local commanders acted with more daring and imagination. A victory in the Balkans might have had little immediate effect on the poor bloody infantry in Flanders, but knocking Turkey out of the war would at the very least have enabled Russia to concentrate more of its forces against Austria-Hungary and perhaps precipitated a domino effect. But then again, once the Western Front solidified the only sane way forward would have been for all the powers involved to seek an end to the fighting at any cost.

It is odd that discussions about a century-old conflict should be so politically charged, but I find that patriotism exerts a weird kind of gravity that slightly distorts anything that approaches it. To those who see the honouring of old soldiers less as an expression of thanks and regret towards afflicted humans and more as an expression of tribal identity, criticism of how a war was waged – along with any suggestion that a conflict might have been unjustified or avoidable – is often seen as a betrayal of those who fought in it. History is ill-served by politicians, and as someone on the left of the political spectrum I would confess that I have had to overcome some knee-jerk responses provoked by comments like the ones that Michael Gove made in order to examine the ‘revisionist’ arguments concerning the Great War with an open mind and to acknowledge that, while I do not entirely agree with them, they do have value; and that perhaps the simple old Lions and Donkeys model that I grew up with is in need of some nuance.

Edward Gibbon: the First Historian

“It was at Rome, on the fifteenth of October 1764, as I sat musing amidst the ruins of the Capitol, while the barefooted friars were singing Vespers in the temple of Jupiter, that the idea of writing the decline and fall of the City first started to my mind.”


This was the “Capitoline vision” of Edward Gibbon, the celebrated – one might in many ways say unsurpassed – historian of Rome and the author of the definitive Decline and Fall of the Roman Empire; the germ of an idea that would lead to the creation of a magnum opus published in six volumes between 1776 and 1788. As he makes clear, his first intention was to write a history of the city which had so caught his imagination, but – fortunately for posterity – this project was later extended to cover the history of the whole Roman Empire, right down to the conquest of Constantinople by the Turks in 1453.

In fact biographers of the great historian are in some doubt that this moment of inspiration really happened, at least at the time that Gibbon claims it did, as there is nothing in his journal to corroborate his account. It remains a lovely image though; the enlightened historian sitting ruminating amidst the ruins of a great civilisation while the benighted monkish usurpers of that civilisation whisper their mummery around him.

The book itself is quite simply a masterpiece. I have always thought that anyone with an interest in human history, or in fact anyone with a love for the English language, should read it at least once in their lives; I have managed it twice, and hope to live long enough to get in a third pass. The language of the book is Enlightenment English at its most florid, which can awe and amuse even as it so beautifully paints its pictures of the world of late antiquity; a random selection from the Everyman edition gives us this description of the emperor Septimius Severus:

“The uncommon abilities and fortune of Severus have induced an elegant historian to compare him with the first and greatest of the Cæsars. The parallel is, at least, imperfect. Where shall we find, in the character of Severus, the commanding superiority of the soul, the generous clemency, and the various genius, which could reconcile and unite the love of pleasure, the thirst of knowledge, and the fire of ambition?”

When I was a sixth former and had but recently discovered the pleasures of the Decline and Fall, I had a very fine Latin teacher who, knowing of my love for the book, used to set extracts from it for me to translate into Latin. The translation hence became a two-stage process, the first stage being to translate from Gibbon’s exuberant prose to workaday modern English, before rendering it into correct Latin.

The tone Gibbon takes throughout the work is that of an educated English gentleman of the Age of Reason. Ironic, detached and yet judgemental, he makes no attempt, as a modern historian would, to take an objective view of the attitudes and motivations of the characters under study. His frequent aphorisms reflect the prejudices of the age and class to which he belonged and mark him very clearly as a prototypical ‘Whig’ historian to whom the political and social values of eighteenth century England represented the highest pinnacle of human development:

“The army is the only order of men sufficiently united to concur in the same sentiments, and powerful enough to impose them on the rest of their fellow-citizens; but the temper of soldiers, habituated at once to violence and to slavery, renders them very unfit guardians of a legal, or even a civil constitution.”


“The influence of the clergy, in an age of superstition, might be usefully employed to assert the rights of mankind; but so intimate is the connection between the throne and the altar, that the banner of the church has very seldom been seen on the side of the people.”

It is this very contemporary view of the late Roman world and what succeeded it that produced the profound anti-clericalism of the work that landed Gibbon in such hot water at the time. Anti-clericalism was very fashionable among the gentlemen of the Enlightenment, and there was a corresponding tendency to rather despise the superstitious, priest-ridden world of Medieval Europe. While this rejection of an entire millennium of European history strikes us now as an unwarranted and unscholarly value judgement (and in particular Gibbon’s rejection of the Byzantine Empire as an unworthy shadow of the ‘true’ Roman Empire had some baleful consequences for historiography), it can hardly be denied that the most enjoyable passages from the book are those that excoriate the early Christian Church. Gibbon took huge pleasure in lampooning the figures so revered in Catholic hagiography, and it is not for nothing that the last pagan emperor, Julian the Apostate, is such a central figure of the early volumes. He contrasts the easy-going religious inclusiveness of the pre-Christian Romans, where gods from a number of different cultures jostle amicably in the pantheon (and where “The various modes of worship…were all considered by the people as equally true; by the philosopher as equally false; and by the magistrate as equally useful.”) with the narrow and intolerant zeal of the Christian martyrs and the triumph of a religion whose votaries “have inflicted far greater severities on each other than they have experienced from the zeal of infidels”.

Gibbon’s slaughtering of the sacred cows of Catholicism, his disdain for despotic Popes and fanatical priests, his scepticism regarding early miracles and his forthright opinion that the numbers of Christian martyrs had been exaggerated by the early Church for propaganda purposes, all caused huge controversy when the offending chapters were first published. The book was banned in several countries and was included in the Catholic Church’s Index Librorum Prohibitorum right up until that anachronistic list was allowed to expire in 1966. Even in England Gibbon was attacked as a pagan, and felt compelled to publish his Vindication some years after the chapters were first published.

Gibbon is for several reasons regarded as the first modern historian. Most importantly, he was the first historian of modern times to go back to primary sources rather than rely on secondary ones. As he himself put it, “I have always endeavoured to draw from the fountain-head; that my curiosity, as well as a sense of duty, has always urged me to study the originals; and that, if they have sometimes eluded my search, I have carefully marked the secondary evidence, on whose faith a passage or a fact were reduced to depend.” The book also included voluminous footnotes, in some cases taking up more space on the page than the main text. These often provided a thoughtful and sometimes witty reflection on the text, frequently commenting also on contemporary English society and values.

Gibbon’s work is also notable as the first substantial attempt to answer a question so baffling to moderns: why did the Roman Empire collapse? What caused the sudden downfall of a state that was arguably more powerful in the fourth century, just a century or so before the Germanic tribes divided up Western Europe, than it had been in the days of Augustus and his dynasty? Gibbon himself attributed the fall to the loss of civic virtue among the Romans over several centuries. They had, he thought, lost the severe virtues of the Republic, had become soft, effeminate, unmanly; they lost the will and the skill to fight for themselves and became increasingly dependent upon barbarian mercenaries who eventually seized the Empire for themselves. At the same time the rise of Christianity undermined the population’s commitment to things of this world, promising a better one to come; and also drained a good portion of the Empire’s manpower as crowds of young men became hermits and monks and ceased to contribute to the Empire’s wellbeing in any meaningful way.

Few historians would now agree with Gibbon’s assessment, and in the two centuries since his death many other theories have been expounded: political, economic, demographic and epidemiological. Yet they all build ultimately upon the work of one man. Gibbon set the bar high early on for historians of late antiquity. He was not always right; but his painstaking care for detail and accuracy, his attempts at objectivity and the astounding breadth of his research – I can’t imagine how many ancient documents he must have read, mostly in the original Latin or Greek in the absence of translations – place him firmly at the beginning of the modern historiographical tradition, while the elegance of his prose and his sharp wit make the book a delight to read in a way that the more academic works by his successors can never quite match.

Gibbon wrote of the conclusion of his work:

“It was on the day, or rather the night, of 27 June 1787, between the hours of eleven and twelve, that I wrote the last lines of the last page in a summer-house in my garden…I will not dissemble the first emotions of joy on the recovery of my freedom, and perhaps the establishment of my fame. But my pride was soon humbled, and a sober melancholy was spread over my mind by the idea that I had taken my everlasting leave of an old and agreeable companion, and that, whatsoever might be the future date of my history, the life of the historian must be short and precarious.”

By that time, though he was only 50, his health was failing. His health had been poor since childhood, but by the 1780’s he was suffering from chronic gout and cirrhosis of the liver – the result of an overindulgence typical of his class – as well as a painful hernia that was ultimately disfiguring. He remained in good spirits; a typical smutty bon mot towards the end was “Why is a fat man like a Cornish Borough? Because he never sees his member.” Just a few days before his death he is said to have eaten a chicken wing and drunk three glasses of madeira while proclaiming that he was good for another ten, twelve or even twenty years. He died quite suddenly on January 16th 1794 at the age of 56.

His book remains a milestone both of historiography and English Literature.