Review of “The Last Full Measure: How Soldiers Die in Battle” by Michael Stephenson

In The Face of Battle, John Keegan examined three famous battles — Agincourt (1415), Waterloo (1815), and the Somme (1916). His focus was not on the historical setting or significance of the battles, but rather on the overall experience of the individual soldiers who fought in them. That seminal book inspired other authors, notably Victor Davis Hanson and Paul Fussell, to write about the messiness of battle at other times in other wars. Michael Stephenson takes up similar themes in The Last Full Measure, in which he examines how soldiers are killed in battle and how specific tactics contribute to or reduce the likelihood of those deaths. He spends less time than Keegan on the general discomfort of infantry life and the confusion of battle, focusing instead on the specific effects of various kinds of weapons on their targets. This makes for riveting reading while providing a strong disincentive to engaging personally in battle.

The historical sweep of Stephenson’s narrative is vast, covering warfare from the Bronze Age (the siege of Troy) up to our wars in Iraq and Afghanistan. Most of the book, however, is devoted to the American Civil War and the two World Wars.

The old adage that generals begin any war by using the tactics of the last war has been true more often than not. The result is that combat soldiers tend to suffer most from the failure of their leaders to adapt to battle technology that developed between wars.

Stephenson uses statistics to illustrate general trends, but avoids being dull. For example, he shows that although army tacticians and training from the American Revolution to World War I emphasized bayonet fighting, medical records show that very few casualties were caused by sharp-edged weapons. The vast majority of casualties in the Revolution and the Civil War were caused by muskets. As tactics changed and soldiers learned to use cover (such as trenches and foxholes) more effectively, the role of artillery became dominant.

His final chapter is devoted to improvements in battlefield medical care. Ancient warriors had virtually no access to medical care, and nearly everyone with a significant wound died from it. Roman armies were organized to care for the injured, and even though they had no theory of germs and infection, they had some sense of the need to clean wounds. Armies in the Middle Ages seem to have forgotten the lessons learned by the Romans, and were seldom able to provide care on the battlefield. Up until the American Civil War, armies were nearly always woefully under-prepared for the inevitable carnage. Medical care improved substantially over the course of that war, at least in the North. (One of the medical breakthroughs was the use of chloroform as anesthesia.) However, a lack of understanding of the causes of sepsis still left wounded soldiers in a sorry state even when they were delivered to a field hospital. The modern American army has made enormous progress in medical treatment. By the time of the Vietnam War, the average time between injury and field hospital treatment had been reduced to two hours, and the array of treatments available was vastly better than even 30 years earlier.

Evaluation: Stephenson’s writing is crisp and effective. He tempers the intensity of his accounts with poignant quotes from letters or other writings of battle participants. A mild criticism I have of the book is that it gives rather short shrift to America’s two wars in Iraq and Afghanistan. Otherwise, however, this is a fine addition to the literature of combat in the tradition of John Keegan.

Rating: 4/5

Published by Crown Publishers, an imprint of the Crown Publishing Group, a division of Random House, Inc., 2012


April 19, 1943 – Anniversary of the Warsaw Ghetto Uprising

Under the cover of World War II, Nazi Germany began a genocidal program to deal with “the Jewish problem.” As a first step, the Nazis herded Jews into small ghettos where starvation and disease could take their toll, thus lessening the workload for the extermination camps. On Yom Kippur, October 12, 1940, the Nazis announced the building of Jewish residential quarters in Warsaw. Close to 400,000 Jews (nearly all of the city’s pre-war Jewish population of 393,950, and 30% of Warsaw’s total population) were forced to occupy an area consisting of some ten streets (2.4% of the city’s area). Jews were also deported into the ghetto from other places, and the ghetto’s population eventually exceeded half a million people.

In the summer of 1942, the first mass deportations of Jews from the ghetto to the extermination camps began. The number of deportees averaged about 5,000-7,000 people daily, and reached a high of 13,000. At first, ghetto factory workers, Jewish police, Judenrat members, hospital workers, and their families were spared, but they too were periodically subject to deportation. Only 35,000 Jews were allowed to remain in the ghetto at any one time.

Children in the Warsaw Ghetto

[Regarding the Judenrat, as the Jewish Virtual Library explains:

“As far back as 1933, Nazi policy makers had discussed establishing Jewish-led institutions to carry out anti-Jewish policies. . . . These councils of Jewish elders (Judenrat; plural: Judenräte) were responsible for organizing the orderly deportation to the death camps, for detailing the number and occupations of the Jews in the ghettos, for distributing food and medical supplies, and for communicating the orders of the ghetto Nazi masters. . . . As ghetto life settled into a ‘routine,’ the Judenrat took on the functions of local government, providing police and fire protection, postal services, sanitation, transportation, food and fuel distribution, and housing, for example.”]

Jewish residents of the ghetto shopping in a vegetable street market.

A second wave of deportations to the Treblinka extermination camp began on January 18, 1943, during which many factory workers and hospital personnel were taken. Unexpected Jewish armed resistance, however, forced the Nazis to retreat from the ghetto after four days of deportations.

Jews who were concentrated in the Warsaw Ghetto knew that their last remnants were slated for deportation and death on Hitler’s birthday, April 20, 1943. Thus, on April 19, 1943, some 750 Jews – ragged, starving and barely armed – attacked Hitler’s soldiers with smuggled guns, Molotov cocktails, and hand grenades. On the fifth day of battle, they issued a proclamation to the Polish population outside the ghetto walls:

“Let it be known that every threshold in the ghetto has been and will continue to be a fortress, that we may all perish in this struggle, but we will not surrender.”

They did not inflict more than a few hundred German casualties, but diverted over 2,000 German troops for some six weeks, and inspired many other Jews to acts of resistance.

On May 8, 1943, the Germans discovered the fighters’ main command post, located at Miła 18 Street. (Hence the title of Leon Uris’s novel about the uprising, Mila 18.) Most of the leadership and dozens of remaining fighters were killed, while others committed mass suicide by ingesting cyanide. The suppression of the uprising officially ended on May 16, 1943. Approximately 13,000 Jews were killed in the ghetto during the uprising. Of the remaining 50,000 residents, most were captured and shipped to concentration and extermination camps, in particular to Treblinka.

Hirsh Glick (1920-1944), a poet and partisan in the Vilna Ghetto, wrote the Partisan Hymn when he heard about the Warsaw Ghetto Uprising. It became the battle hymn of the underground Jewish resistance movement. It was written in Yiddish, and is widely known by its Yiddish title, “Zog Nit Keyn Mol!” An English translation is shown below.

Never say that you are going your last way,
Though lead-filled skies above blot out the blue of day.
The hour for which we long will certainly appear.
The earth shall thunder ‘neath our tread that we are here!

From lands of green palm trees to lands all white with snow,
We are coming with our pain and with our woe,
And where’er a spurt of our blood did drop,
Our courage will again sprout from that spot.

For us the morning sun will radiate the day,
And the enemy and past will fade away,
But should the dawn delay or sunrise wait too long,
Then let all future generations sing this song.

This song was written with our blood and not with lead,
This is no song of free birds flying overhead,
But a people amid crumbling walls did stand,
They stood and sang this song with rifles held in hand.

(Translated by Elliot Palevsky)

You can see additional rare photos of Warsaw Ghetto life here.

April 16, 1779 – Casimir Pulaski arrived in Williamsburg, Virginia

Casimir Pulaski, born in Warsaw in 1745, was a Polish nobleman, soldier and military commander who has been called, together with his Hungarian friend Michael Kovats de Fabriczy, “the father of the American cavalry.”

Pulaski was one of the leading military commanders fighting against Russian domination of Poland; when the uprising failed, he was driven into exile. As the Polish American Center recounts:

“. . . he traveled to Paris where he met Benjamin Franklin, who induced him to support the colonies against England in the American Revolution. Pulaski, impressed with the ideals of a new nation struggling to be free, volunteered his services. Franklin wrote to George Washington describing the young Pole as ‘an officer renowned throughout Europe for the courage and bravery he displayed in defense of his country’s freedom.’”

Casimir Pulaski

He proceeded to distinguish himself throughout the American Revolution, and became a general in the Continental Army. In 1778, through George Washington’s intervention, Congress approved the establishment of the Cavalry and put Pulaski at its head. Pulaski trained his men in the cavalry tactics he had learned fighting in Poland, often using his own finances for equipment.

The Colonial Williamsburg Foundation reported that on this day in history, “Count Pulaski, with his retinue, etc.” arrived on their way south.

In December 1778, the British captured the City of Savannah. Washington sent Pulaski and his cavalry unit, known as the Pulaski Legion, to help liberate Savannah from British occupation. Joining the Continental Southern Army in 1779, the Pulaski Legion traveled with General Benjamin Lincoln to retake Savannah. During a cavalry charge on October 9, 1779, Pulaski, only 34, was mortally wounded by grapeshot.

(The grapeshot that killed Casimir Pulaski, mounted on a silver candlestick, can be seen at the Georgia Historical Society. It is engraved “Grapeshot which mortally wounded Count Casimir Pulaski, Oct. 9, 1779, extracted from his body by Dr. James Lynah, ancestor of the present owner, James Lynah, Esq.”)

The United States has long commemorated Pulaski’s contributions to the American Revolutionary War. In 1929, Congress passed a resolution recognizing October 11 of each year as “General Pulaski Memorial Day,” with a large parade held annually on Fifth Avenue in New York City. Separately, a Casimir Pulaski Day is celebrated in Chicago and some other cities with large Polish populations on the first Monday of each March.

Congress passed a joint resolution conferring honorary U.S. citizenship on Pulaski in 2009, and President Obama signed it on November 6, 2009, making Pulaski the seventh person so honored. (One additional person was honored in 2014, bringing the total to eight. You can see who they are here.)

Note: New evidence from studies of Pulaski’s remains suggests the “father” of the American cavalry may in fact have been biologically female. You can read more about it here.

April 14, 1865 – Assassination of Abraham Lincoln

On Tuesday, April 11, 1865, Lincoln related a recent dream to Mary and a few friends. In his dream, he heard a number of people weeping, and he wandered through the White House to find out what was going on. When he got to the East Room, he met with a sobering surprise: before him was a catafalque, on which rested a corpse wrapped in funeral vestments. He asked a nearby soldier who had died in the White House. “The president,” was the answer. “He was killed by an assassin!” At that point Lincoln awoke, and could not get back to sleep.

Three days later, Mary Lincoln arranged a theater outing. Fourteen persons turned down the Lincolns’ invitation to join them on the fateful night of April 14, 1865. Excuses ranged from prior engagements to sudden illness. General Grant and his wife Julia had been invited, but Julia reportedly said she refused to sit in a theater box with “that crazy woman,” meaning Mary Lincoln. Even the president’s son Robert declined; he had just returned from Appomattox Court House, where he was present when Lee surrendered to Grant, and he wanted to sleep. The only two persons who accepted the Lincolns’ offer were Maj. Henry R. Rathbone and his fiancée, Clara Harris, the daughter of New York Sen. Ira Harris.

Ford's Theater

Shortly after 10:00 p.m. on April 14, 1865, actor John Wilkes Booth entered the presidential box at Ford’s Theatre in Washington, D.C., and fatally shot President Abraham Lincoln. After Booth shot Lincoln, Rathbone struggled with Booth and sustained serious wounds to his head and neck. (He recovered, but eventually went insane.) As Lincoln slumped forward in his seat, Booth leapt onto the stage and escaped out the back door. The mortally wounded president was immediately examined by a doctor in the audience and then carried across the street to Petersen’s Boarding House, where he died early the next morning.

Lincoln’s assassination was the first presidential assassination in U.S. history.

April 12, 1937 – The Supreme Court Decides NLRB v. Jones & Laughlin Steel

In NLRB v. Jones & Laughlin Steel Corp., 301 U.S. 1 (1937), the U.S. Supreme Court upheld the National Labor Relations Act of 1935, commonly referred to as the Wagner Act. Jones & Laughlin Steel Corp. was at that time the country’s fourth largest steel producer. The dispute involved ten steelworkers who had been fired from one of the company’s mills for trying to organize a union.

The question before the Court was whether labor-management disputes were directly related to the flow of interstate commerce and so could be regulated by the national government.

Congress claimed authority to pass the Wagner Act under its power to regulate interstate commerce, enumerated in Article I of the Constitution. Jones & Laughlin challenged the law, arguing that the Act was an attempt to regulate all industry, “thus invading the reserved powers of the States over their local concerns.” This went beyond the commerce power of Congress, they asserted. As Chief Justice Charles Evans Hughes wrote about the position of Jones & Laughlin, the company argued “the Act is not a true regulation of such commerce or of matters which directly affect it, but, on the contrary, has the fundamental object of placing under the compulsory supervision of the federal government all industrial labor relations within the nation.”

Charles Evans Hughes, Chief Justice of the U.S. Supreme Court

In his opinion, Chief Justice Hughes observed first that “[t]he distinction between what is national and what is local in the activities of commerce is vital to the maintenance of our federal form of government.” The Court held that “[a]lthough activities may be intrastate in character when separately considered, if they have such a close and substantial relation to interstate commerce that their control is essential, or appropriate, to protect that commerce from burdens and obstructions, Congress has the power to exercise that control.”

In the instant case, the Court noted that “[t]he relation to interstate commerce of the manufacturing enterprise . . . was such that a stoppage of its operations by industrial strife would have an immediate, direct and paralyzing effect upon interstate commerce. Therefore, Congress had constitutional authority, for the protection of interstate commerce, to safeguard the right of the employees in the manufacturing plant to self-organization and free choice of their representatives for collective bargaining.”

Specifically, the National Labor Relations Act of July 5, 1935 empowered the National Labor Relations Board to prevent any person from engaging in unfair labor practices “affecting commerce.” According to Sec. 7 [§ 157]:

“Employees shall have the right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in other concerted activities for the purpose of collective bargaining or other mutual aid or protection, and shall also have the right to refrain from any or all of such activities except to the extent that such right may be affected by an agreement requiring membership in a labor organization as a condition of employment as authorized in section 8(a)(3) [section 158(a)(3) of this title].”

Thus the Court held in part that “The Act imposes upon the employer the duty of conferring and negotiating with the authorized representatives of the employees for the purpose of settling a labor dispute. . . . .”

Moreover, it found that “The provision of the National Labor Relations Act, § 10(c), authorizing the Board to require the reinstatement of employees found to have been discharged because of their union activity or for the purpose of discouraging membership in the union, is valid.”

The Oyez website points out that Justice Hughes carefully limited the opinion to exclude situations in which an activity had such an inconsequential or remote impact on interstate commerce that it exclusively impacted local matters. 

In his dissent, however, Justice James C. McReynolds cited the lack of actual demonstrated effect on interstate commerce and questioned Congress’s enhanced power under the Commerce Clause. 

Chris Schmidt, writing for the Chicago-Kent College of Law SCOTUS blog, maintains:

“The decision was a landmark ruling on the meaning of the Commerce Clause. Its reasoning granted far more authority to Congress to regulate economic relations than the Court had previously allowed. It was also a major victory for industrial and factory workers across the country. The Wagner Act helped usher in a new era of labor relations, one in which union power, backed by the authority of the federal government, entered into negotiations with industry on far more equal footing than before.”

But unions have been taking blows from other directions, most recently with the Supreme Court decision on June 27, 2018 in the case Janus v. AFSCME (No. 16-1466). By a 5-to-4 vote, with the more conservative justices in the majority, the court ruled that government workers who choose not to join unions may not be required to help pay for collective bargaining. The court overruled 41 years of precedent in deciding that requiring employees to pay fees violates their First Amendment rights.

April 8, 1913 – Seventeenth Amendment Wins Approval of Required Three-Fourths of State Legislatures

Before the passage of this amendment, U.S. senators were selected by state legislatures as directed by Article I, Section 3 of the Constitution, which holds that “The Senate of the United States shall be composed of two Senators from each state, chosen by the legislature thereof, for six years; and each Senator shall have one vote.”

A number of problems arose from this method, including political wrangles that led to seats going empty for long periods. Support gradually increased for the direct election of senators by the voters.

The first change came in 1866, when Congress passed a law regulating how and when senators would be chosen in each state. But it did not solve the problem entirely. As the Senate Historical Office observes:

“Intimidation and bribery marked some of the states’ selection of senators. Nine bribery cases were brought before the Senate between 1866 and 1906. In addition, forty-five deadlocks occurred in twenty states between 1891 and 1905, resulting in numerous delays in seating senators. In 1899, problems in electing a senator in Delaware were so acute that the state legislature did not send a senator to Washington for four years.”

Each year from 1893 to 1902, they report, a constitutional amendment to elect senators by popular vote was proposed in Congress, but the Senate fiercely resisted change.

In the early 1900s, states started to initiate changes on their own. Momentum for reform on the national level increased. William Randolph Hearst got into the game, championing the cause of direct election with muckraking articles and strong advocacy of reform. The Senate Historical Office relates what happened next:

“Hearst hired a veteran reporter, David Graham Phillips, who wrote scathing pieces on senators, portraying them as pawns of industrialists and financiers. The pieces became a series titled ‘The Treason of the Senate,’ which appeared in several monthly issues of the magazine in 1906. These articles galvanized the public into maintaining pressure on the Senate for reform.”

By 1912, as many as twenty-nine states elected senators either as nominees of their party’s primary or in a general election. But a constitutional amendment was still required for a nationwide direct election process.

In 1911, Senator Joseph Bristow from Kansas offered a resolution proposing a constitutional amendment, and gained support from others who had come to the Senate via direct election. After the Senate passed the amendment, it went to the House, where it was finally approved in the summer of 1912 and sent to the states for ratification.

Connecticut’s approval on this date in 1913 gave the Seventeenth Amendment the required three-fourths majority of the states. It was formally declared part of the Constitution on May 31 by Secretary of State William Jennings Bryan.

The following year marked the first time all senatorial elections were held by popular vote.

The Seventeenth Amendment restates the first paragraph of Article I, section 3 of the Constitution and provides for the election of senators by replacing the phrase “chosen by the Legislature thereof” with “elected by the people thereof.” In addition, it allows the governor or executive authority of each state, if authorized by that state’s legislature, to appoint a senator in the event of a vacancy, until a general election occurs.

April 7, 1994 – Beginning of Rwandan Genocide

The Rwandan genocide was a mass slaughter of Tutsi in Rwanda during the Rwandan Civil War, directed by members of the Hutu majority government during the 100-day period from April 7 to mid-July 1994. An estimated 500,000 to 1,000,000 Rwandans were killed, representing roughly 70% of the country’s Tutsi population.

PBS explains that Hutus and Tutsis settled in the same area of Central Africa centuries ago and eventually shared a language, beliefs and customs. Economic differences between the groups began to form, however, with the cattle-herding Tutsis often in a position of economic dominance over the soil-tilling Hutus.

Belgian colonial rulers in the late 19th Century exacerbated the differences by forcing Hutus and Tutsis to carry ethnic identity cards. Furthermore, they only allowed Tutsis to attain higher education and hold positions of power. Many of the Hutus were made into forced laborers. The inequality and injustice helped create hatred between the tribes.

Following independence in 1962, Ruanda-Urundi split into two countries: Rwanda and Burundi. In Rwanda, the Hutus were in the majority, while in Burundi, the minority Tutsis maintained their control of the military and government through a campaign of violence against the Hutus.

But Rwanda was not peaceful either. The Hutu-dominated government led by Hutu President Juvénal Habyarimana since 1973 faced increasing dissatisfaction by the Tutsis, who were discriminated against by the Hutu majority.

Habyarimana’s Rwanda had become a single-party dictatorship. His party, the Mouvement Révolutionnaire National pour le Développement (MRND), was enshrined in the constitution. He relegated the Tutsi to the private sector. Regulations prohibited army members from marrying Tutsi. Habyarimana also maintained the “ethnic” identity card and “ethnic” quota systems of the previous regime.

In addition, as the website E-International Relations (E-IR) reports:

Rwanda was faced with a critical food-people-land imbalance. In the years leading up to the genocide, there had been a marked decline of kilocalories per person per day and overall farm production. Famines occurred in the late 1980s and early 1990s in several parts of the country. Emergency sources of food in neighboring countries also were limited. . . . Rwandan youths faced a situation where many (perhaps most) had no land, no jobs, little education, and no hope for a future. Without a house and a source of livelihood, males could not marry. . . . The political elites [contended] that Hutu farmers could have sufficient land if the Tutsi were eliminated.”

Tension between the Hutu and Tutsi flared in 1990, when the Tutsi-led Rwandan Patriotic Front (RPF) rebels invaded from Uganda. A cease-fire was negotiated in early 1991, and negotiations between the RPF and the government began in 1992. An agreement between the government and the RPF was signed in August 1993 that called for the creation of a broad-based transition government that would include the RPF; Hutu extremists were strongly opposed to this plan.

On April 6, 1994, a plane carrying President Habyarimana was shot down over Kigali; the ensuing crash killed everyone on board. The identity of the person or group who fired upon the plane has never been conclusively determined; both Hutu extremists and RPF leaders were suspected. Whoever was responsible, the attack marked the beginning of the violence.

The next day Prime Minister Agathe Uwilingiyimana, a moderate Hutu, was assassinated. Her murder was, as the Britannica Encyclopedia contends, part of a campaign to eliminate moderate Hutu or Tutsi politicians, with the goal of creating a political vacuum and thus allowing for the formation of the interim government of Hutu extremists that was inaugurated on April 9.

Over the next several months the wave of anarchy and mass killings continued, in which the army and Hutu militia groups known as the Interahamwe (“Those Who Attack Together”) and Impuzamugambi (“Those Who Have the Same Goal”) played a central role.

Gangs of Hutus, sponsored by the government, murdered approximately 800,000 Tutsis (along with pro-peace Hutus labeled as traitors).

The country’s media was critical in inciting ethnic hatred and the desire for revenge. The government itself organized neighborhood militias to carry out the killings, even importing half a million machetes for the use of the Hutu. In addition, the youth of many of the killers and the widespread use of alcohol were contributing factors.

Tutsi Pastor Anastase Sabamungu (left) and Hutu teacher Joseph Nyamutera visit a Rwandan cemetery where 6,000 genocide victims are buried. (©2008 World Vision/photo by Jon Warren)

Rape was also used as a weapon in the attempt not only to punish and humiliate the Tutsis but to impregnate the women with Hutu children. Hutu women who were considered “moderates” were also subject to rape. To some extent the effort backfired, since some 70% of the assault victims were infected with HIV, causing a spike in HIV infections. (Estimates of the number of women raped range from 250,000 to half a million.)

The West did very little to respond to pleas for help, except to evacuate its own white citizens to safety. Unfortunately, Rwanda had no oil, no diamonds or gold, and no strategic interest.

In the U.S., the Secretary of State under President Clinton refused even to acknowledge that the systematic murder of the Tutsis constituted “genocide.” [In March 1998, on a visit to Rwanda, President Bill Clinton said: “We come here today partly in recognition of the fact that we in the United States and the world community did not do as much as we could have and should have done to try to limit what occurred” in Rwanda. He later stated that the “biggest regret” of his presidency was not acting decisively to stop the Rwandan Genocide.]

The RPF ultimately prevailed, and in late July of the same year established a transitional government with Pasteur Bizimungu, a Hutu, as president and Paul Kagame, a Tutsi, as vice president. Kagame had commanded the rebel force that ended the 1994 Rwandan genocide. He now serves as President of Rwanda, having taken office in 2000 when his predecessor, Bizimungu, resigned.

Paul Kagame in August 2016

The genocide had a profound impact on Rwanda and its neighboring countries. The destruction of infrastructure and the severe depopulation of the country crippled the economy, as did the decimation of the population and of family structures by HIV. The RPF military victory prompted many Hutus to flee to neighboring countries, particularly to the eastern portion of Zaire (now the Democratic Republic of the Congo), where they began to regroup in refugee camps along the border with Rwanda. Declaring a need to avert further genocide, the RPF-led government launched military incursions into Zaire, including the First (1996–97) and Second (1998–2003) Congo Wars. Large Rwandan Hutu and Tutsi populations continue to live as refugees throughout the region.

Rwanda Refugee Crisis Map via U.S. Holocaust Memorial Museum Exhibit

Today, Rwanda commemorates the genocide with a national mourning period beginning on April 7 with Remembrance Day, and ending on July 4, Liberation Day.

The United Nations has also named April 7 as the Day of Remembrance of the Victims of the Rwanda Genocide to commemorate the people who were murdered during the 1994 genocide.

Those wanting a cinematic account of events are generally directed to “The Ghosts of Rwanda,” which is very educational but not easy to watch, rather than “Hotel Rwanda,” which paints a rosier picture than is warranted. In addition, the 2005 historical drama television film “Sometimes in April” depicts the attitudes and circumstances leading up to the genocide, and the struggle to adjust in the aftermath.