Currency


Currency—money—provides a common unit of value that allows commerce to move beyond barter and enables financial markets to develop. The British colonies in North America inherited their currency from Europe, which had conducted transactions with gold and silver coins (specie) for thousands of years. Since the late medieval period, financial instruments (bills of exchange and banknotes) had supplemented specie. Issuers promised to convert their notes into specie on demand, but they never had enough gold and silver on hand to redeem all of their paper and counted on their financial assets—debts others owed them—to back their notes. From the start, the value of paper money depended chiefly on the creditworthiness of its issuer.

Conditions in the British colonies in North America forced major changes in this system. The colonies suffered chronic trade deficits that they covered, in part, by exporting specie. Accordingly, the supply of gold and silver was generally insufficient to finance even current business, much less the rapid expansion of the colonial economy. And the colonies did not have banks to provide notes or bills of exchange.

Colonists responded to the shortage of currency in three ways. They constantly extended credit to each other, so domestic trade more often involved the exchange of promissory notes than cash. Some colonies used commodities as money. From the seventeenth century, Virginia levied taxes and paid public officials’ salaries in tobacco, for which a ready market existed in Europe. Most notably, some colonial governments issued paper money, either to finance government deficits or through loan offices. Such issues contradicted the conventional wisdom, which held that paper money had value only if it was convertible into specie. Nevertheless, the colonies’ paper money worked well in most cases. Governments usually issued only limited quantities of paper and provided for its redemption, accepting notes for taxes or the repayment of loans. This guaranteed a steady demand for paper money, which traded at only a modest discount to specie.

The American Revolution overwhelmed these expedients. The war with Britain severely reduced American exports and foreign trade, exacerbating both the payments deficit and the shortage of gold and silver. The Continental Congress could not levy taxes to defray military expenses and instead issued large quantities of paper money, whose value fell rapidly. Several states followed this example. By the 1783 Peace of Paris, which secured American independence, the nation was awash in worthless paper money.

The new federal Constitution, which went into effect in 1789, addressed this problem. It lodged authority over the currency with the central government and specifically banned state governments from issuing paper money. George Washington’s Treasury secretary, Alexander Hamilton, quickly asserted the federal government’s power. In 1791 he persuaded Congress to charter the Bank of the United States (BUS), which would have $10 million in capital, consisting of specie and federal bonds. The BUS would issue banknotes equal to its total capital that it would redeem on demand in specie. As would always be the case, the quantity of notes exceeded the specie in reserve. Hamilton also organized a mint to coin gold and silver, but the shortage of specie limited its output. Most of the coins in circulation were from abroad, and American coins would not become common for several decades.

Hamilton’s program was controversial. Thomas Jefferson and James Madison argued that the Constitution did not authorize a bank and that the operations of the BUS infringed on the legitimate rights of states. A strong popular prejudice existed against banking, which critics believed profited by manipulating credit rather than from honest labor. Finally, many Americans considered corporations, with their limited liability and special powers, synonymous with monopoly and privilege, which the Revolution had supposedly banished. Only the support of President Washington and the Federalist Party allowed Hamilton to secure congressional approval of the BUS’s charter.

Meanwhile, states were chartering their own banks. Like the BUS, these institutions issued banknotes to borrowers, notes they were supposed to redeem on demand in specie. At first, states generally chartered only one institution to provide a uniform local currency. Banks proved very profitable, however, and soon others demanded similar privileges for themselves. Although each bank charter still required a special legislative act, these institutions multiplied rapidly, and by the early 1800s, the country had dozens of banks, each of which issued its own notes. In theory, all were supposed to redeem their notes on demand in specie, but in practice, merchants were reluctant to accept the notes of distant banks about which they knew little. The BUS provided uniformity by purchasing state banknotes at close to par (face value) and redeeming them for either specie or its own notes. The practice was unpopular with state bankers, who at any time might find the BUS demanding a large portion of their specie. But it kept the value of the wide variety of notes in circulation fairly equal and forced state banks to maintain a conservative ratio between notes issued and specie in reserve.

The growth of banks contributed in another way to the development of currency. Most of these institutions took deposits and extended credit to borrowers on their books as well as in banknotes. Those with bank credit could transfer funds by check. In cities such as Boston, New York, Philadelphia, and Baltimore, many transactions occurred without any cash changing hands—banks simply moved money from one account to another. Although little remarked at the time, bank accounts were money just as much as banknotes were. As early as 1800, the value of accounts may have equaled the notes in circulation, and the importance of accounts would increase throughout U.S. history. By 2000, cash made up a relatively small portion of the total supply of money in the country.

Congress refused to recharter the BUS when its initial authorization expired in 1811. Hamilton was dead by that time, and Thomas Jefferson’s Republicans were in power. Although twenty years of wise management had won over some opponents, among them James Madison, many of the bank’s critics remained unreconciled to it, and they could count on the support of certain state banks that were irritated by the limits the BUS imposed on their operations.

The War of 1812 led at least some opponents of the BUS to reevaluate their stance. The war thoroughly disrupted foreign trade, which was, among other things, the chief source of tax revenue. Heavy military outlays further strained the government’s credit, and throughout the war, Washington paid its bills slowly if at all. The dislocation of international trade and government finances badly hurt banks, and by 1814 most of them had ceased redeeming their notes in specie. In effect, the country now had as many currencies as it had banks, with the notes of each institution valued according to the institution’s reputation.

In 1816 the federal government created the Second Bank of the United States to remedy these problems. This bank was essentially a larger version of the First BUS, with $35 million in capital. Unfortunately, during the 1817-1818 boom, the new institution lent recklessly, and it suffered heavy losses in the 1819 depression. The Second BUS survived only by aggressively pressing its debtors for payment, driving many into bankruptcy and intensifying the economic hardship.

Nevertheless, by the mid-1820s, under the able leadership of Langdon Cheves and Nicholas Biddle, the BUS had managed to create a uniform currency. Supported by the U.S. Treasury, it gradually forced state banks to resume redeeming their notes in specie, and it followed the example of the First BUS in purchasing state notes at close to par and systematically cashing them in for gold or silver. The BUS also issued its own notes, which traded throughout the country at par. The bank provided another critical service by moving money around the country in response to seasonal changes in the demand for it. The United States was an overwhelmingly agricultural country, and many farmers and planters paid their bills once a year, when they sold their harvest. This created a regular jump in the demand for currency that, unless neutralized, could disrupt financial markets. The BUS systematically expanded its credits in the West and South during the fall, financing the movement of crops to market, and then reduced credits as the harvest was sold and borrowers repaid their debts. Inevitably, some state bankers resented the BUS’s competition and the limits it placed on their ability to issue notes, but the business community as a whole seemed to appreciate the benefits of a stable, uniform currency.

In the 1830s President Andrew Jackson struck a blow at federal control over the currency, causing damage that would not be fully repaired for a century. In early 1832 he vetoed the bill renewing the charter of the BUS, and in 1833 he withdrew the government’s deposits from the institution, robbing it of its largest source of funds. Opposition to the bank became the central issue around which the new Democratic Party coalesced. In 1836 the BUS ceased to exist when its charter expired. A variety of motives drove the attack on the bank. Some ambitious businesspeople opposed the limits the BUS imposed on their operations, as suggested earlier. This was particularly true of many New York bankers, who resented the power of the Philadelphia-based BUS. Further, many farmers and planters were suspicious of banking in general, seeing it as an essentially dishonest calling. Most telling, however, was the charge that the BUS was a corrupt aggregation of political and economic power resting on an exclusive government charter that was incompatible with political democracy. The bank’s incompetent attempts to defeat Jackson in the 1832 presidential election reinforced this concern.

The demise of the BUS forced the nation to find other ways to regulate its currency. A few individuals, including Jackson himself at times, hoped to limit all transactions to specie, but the country did not have enough gold and silver for this. It needed banknotes. After a period of financial confusion, including two crises in 1837 and 1839 during which most banks stopped converting their notes into specie, a workable—if somewhat ramshackle—system emerged.

After the mid-1830s, states regulated banks and their notes. Policy varied considerably from state to state. Several states to the west and south (Indiana, Missouri, Mississippi) banned banking corporations altogether or chartered only one state-owned institution. Others, such as Louisiana and Massachusetts, strictly oversaw banks to guarantee that they redeemed their notes in specie and, in general, conducted business in a sound fashion. New York devised the most important innovation: free banking. The Empire State would automatically grant a banking charter to anyone who had enough capital in bonds, allowing the individual to issue notes equal to the value of these bonds. This move legitimized banking by democratizing it, allowing anyone who met objective criteria to organize a bank and issue currency. Free banking also ended the need for the state legislature to authorize every banking charter, a process that was always contentious and often corrupt. By 1860 several other states had adopted free banking, though it was hardly universal.

The federal government’s Independent Treasury provided a practical brake on the issuance of notes by state banks. Authorized in 1840 and reauthorized in 1846, the Independent Treasury operated as Washington’s financial agent, accepting tax receipts and making payments. It did business solely in specie. Consequently, taxpayers and buyers of public lands needed gold or silver, which they usually obtained by redeeming banknotes for specie. Such redemptions were not as systematic as those of the old BUS, but they did encourage banks to maintain a conservative ratio between notes issued and specie held in reserve.

Although the new system worked, it was not as efficient as the BUS. It had no mechanism to accommodate seasonal shifts in the demand for money and no device to keep banknotes at par. Indeed, discounting the hundreds of types of notes that circulated in the United States became a significant part of most banks’ business. The new system might not have worked at all had not the discovery of gold in California in the late 1840s injected a great deal of specie into the economy, partially compensating for the system’s inflexibility.

California gold had another important implication for the currency. Although gold and silver had served as money throughout most of history, in practice people used whichever was more plentiful for transactions and hoarded the other. During the early Republic, specie was largely silver. But the role of gold had been growing for several decades, and the influx from California largely drove silver from circulation. In the 1850s the United States had a de facto gold standard, with the value of the dollar fixed at $20.67 to an ounce of gold. (The gold standard uses gold as the standard value for a nation’s currency. Since 1971, when the United States left the gold standard, no country in the world has operated under this system. Instead, currencies are based on a floating rate set by market forces.)

The Civil War affected the currency as dramatically as it did most other aspects of American life. The military effort entailed unprecedented spending (several billion dollars), and to pay its bills, the federal government had to abandon specie and issue $450 million worth of paper money known as “greenbacks.” Greenbacks were a “fiat” currency that the government made legal tender for payment of debts. (A fiat currency has no intrinsic value; its worth rests on confidence in the government’s ability to meet its obligations.) The greenbacks were not convertible into specie, and many people feared that they would become worthless, as had paper money issued during the Revolution. But Washington also imposed heavy taxes and devised an extensive system of borrowing to pay most of its military expenses. The quantity of greenbacks was limited, and Washington created a demand for them by accepting them for federal bonds and most taxes. Accordingly, although greenbacks did depreciate against gold, bottoming out in 1864 at two and a half greenbacks to one gold dollar, they remained a viable currency. Gold still played a role, however. Importers had to pay tariffs in the precious metal, and the holders of federal bonds received their interest in gold. Moreover, merchants conducted foreign trade in gold or sterling (Britain was on the gold standard, so its money was “as good as gold”). During and immediately after the Civil War, the United States actually had two currencies: gold and greenbacks.
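The depreciation figures above imply a straightforward conversion between the two currencies. A small sketch in Python, using the figures from the text (the variable names are mine):

```python
# At its 1864 low, 2.5 greenback dollars exchanged for 1 gold dollar.
STATUTORY_GOLD_PRICE = 20.67       # dollars per ounce of gold (legal parity)
GREENBACKS_PER_GOLD_DOLLAR = 2.5   # the 1864 low cited in the text

# A greenback dollar was then worth 40 cents in gold:
gold_value = 1 / GREENBACKS_PER_GOLD_DOLLAR
print(f"greenback worth {gold_value:.2f} gold dollars")  # 0.40

# Quoted the other way, an ounce of gold cost roughly $51.68 in greenbacks:
greenback_price_of_gold = STATUTORY_GOLD_PRICE * GREENBACKS_PER_GOLD_DOLLAR
print(f"gold at about {greenback_price_of_gold:.2f} greenbacks per ounce")
```

The same two numbers, read in either direction, describe the premium on gold or the discount on greenbacks.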

Other reforms more than compensated for the confusion wrought by this two-tiered system. In 1863 Congress enacted the National Bank Act, which created a universal system of free banking. Anyone with enough capital, in the form of federal bonds, could receive a banking charter and the right to issue notes equal to the face value of these bonds. Banks deposited their bonds with the Treasury and promised to redeem notes on demand with greenbacks. Washington would regularly audit national banks to guarantee that they were sound. When state banks proved reluctant to convert to federal charters, the government imposed a prohibitive tax on their notes, forcing these institutions either to become federal banks or to stop issuing notes and become banks of deposit. However, the new system had weaknesses. The supply of money depended on the supply of federal bonds, not economic conditions. The financial system could not adjust to seasonal shifts in the demand for money. And there was no mechanism to regulate deposits, which by 1867 were twice as great as the supply of paper money. Nevertheless, Civil War-era banking reforms asserted federal control over the currency and, because greenbacks and national banknotes circulated interchangeably, gave the country its first genuinely uniform money.

With the end of the Civil War in 1865, most people expected the country to return swiftly to the gold standard. In fact, the process took fourteen years and generated immense controversy. During the last third of the nineteenth century, prices fell steadily, in the United States and across the world. The decline did not impair American economic growth, but it did impose punishing burdens on debtors, who had to repay loans in ever-more-valuable dollars. Debtors were naturally skeptical of returning to the gold standard, which would entail increasing the value of greenbacks to that of gold dollars—that is, more deflation (a general fall in prices, which raises the purchasing power of money). The pressures for resumption were also strong, however. Many considered precious metals the only honest basis of currency. More important, during the 1870s, most Western European countries adopted the gold standard, which, by linking all currencies to gold, fixed their value in terms of each other, greatly facilitating international trade and investment. The United States conducted most of its foreign trade with these countries and relied on them for critical investment, and making the dollar “as good as gold” would strengthen these important relationships. After a long political debate, the United States returned to the gold standard in 1879, making greenbacks freely convertible into gold at the rate of $20.67 an ounce.
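How gold parities fixed exchange rates can be illustrated with one cross rate. The dollar figure comes from the text; the pound parity below is the classical mint figure and is my illustration, not from the text:

```python
# Each currency defined as a fixed weight of gold implies fixed cross rates.
DOLLARS_PER_OUNCE = 20.67   # U.S. parity, from the text
POUNDS_PER_OUNCE = 4.2477   # approximate British parity (illustrative figure)

dollars_per_pound = DOLLARS_PER_OUNCE / POUNDS_PER_OUNCE
print(f"${dollars_per_pound:.4f} per pound")  # close to the historical $4.8665
```

Because both parities were legally fixed, the cross rate could move only within the narrow band set by the cost of shipping gold.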

The return to the gold standard changed the currency in several important ways. Under that standard, the supply of money ultimately depended not on the quantity of federal bonds or greenbacks but on the country’s gold reserve. This reserve, in turn, depended chiefly on the international balance of payments because countries paid their deficits in gold. If the United States ran a surplus, gold flowed in and the money supply expanded. A deficit drained gold and contracted the supply of money. The U.S. Treasury, which was responsible for redeeming greenbacks in the precious metal, held most of the country’s gold reserve—a sharp contrast with the situation before 1861, when each bank held specie to cover its own notes.

Advocates of inflation did not give up after 1879 but instead turned their attention to silver. In 1873, Congress had demonetized silver, which, because of plentiful gold supplies, had not actually circulated for decades. Although presented at the time as a rationalization measure to eliminate a type of money that no one used, the initiative was intended to serve more significant objectives. The other industrial countries were also abandoning silver for gold, and the United States sought to align its currency with those of its chief trading partners. Moreover, new discoveries of silver promised to vastly increase its supply; thus, if silver remained legal money, it would eventually replace less-plentiful gold. This outcome would greatly expand the money supply and might well unleash inflation.

For these reasons, those who were hurt by falling prices began to call for “free silver”—the unlimited coinage of silver at the rate of 16 ounces of silver to 1 ounce of gold. Because the market price of silver was roughly one-thirtieth that of gold, this would effectively put the country on a silver standard and devalue the dollar, expand the money supply, and push prices upward. In the 1880s Congress sought to appease silver interests by issuing fixed amounts of silver coins and silver certificates (notes backed by silver). Their limited quantity allowed the United States to maintain their value against gold. But the severe depression from 1893 to 1897 increased the pressure for more currency and higher prices even as it created federal budget and national trade deficits that drained the country’s gold reserve. To limit the quantity of notes eligible for redemption, protect the reserve, and maintain the gold standard, Congress ended all silver coinage, a move that infuriated silverites (individuals who wanted to use silver as legal tender). In 1896 the Democrats nominated William Jennings Bryan for the presidency on a platform of free silver. The Republican candidate, William McKinley, took up the challenge, warning that an unlimited coinage of silver would drive gold from circulation, devalue the dollar against European currencies, and create financial chaos. The Republicans won a crushing victory, guaranteeing gold’s central role in the currency for the next generation.
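The arithmetic behind McKinley's warning is worth making explicit. With a mint ratio of 16 to 1 but a market ratio near 30 to 1, silver would have been overvalued at the mint, so gold would have been hoarded or exported (Gresham's law). A sketch using the two ratios from the text:

```python
# Free coinage at 16:1 when the market ratio is ~30:1 invites arbitrage.
MINT_RATIO = 16    # ounces of silver the mint treats as equal to 1 oz of gold
MARKET_RATIO = 30  # ounces of silver 1 oz of gold buys in the open market

# Sell 1 oz of gold for silver, then present the silver for coinage:
silver_bought = 1 * MARKET_RATIO                   # 30 oz of silver
coined_value_in_gold = silver_bought / MINT_RATIO  # coins worth 1.875 oz of gold
profit_pct = (coined_value_in_gold - 1) * 100
print(f"{profit_pct:.1f}% gain per round trip")  # 87.5% gain per round trip
```

With a guaranteed gain on every cycle, silver would flood into the mint while gold left circulation, effectively putting the country on a silver standard.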

After 1900 debate on the currency shifted from its metallic basis to the structure of the banking system. The discovery of gold in Alaska and South Africa and the development of new techniques for refining it greatly increased the supply of the precious metal and inaugurated a period of mild but steady inflation worldwide, defusing pressures for silver currency and greenbacks. Moreover, the public increasingly recognized that most of the nation’s money was in bank accounts, not coins or notes, and that the banking system had serious weaknesses. No mechanism existed to accommodate seasonal shifts in the demand for money, which were often severe during harvest time. In addition, reserves were scattered, so it was hard to mobilize money during a financial crisis. The inability to mobilize money meant that if depositors lost confidence in a bank and demanded cash for their deposits—that is, if they started a run—the bank might well fail even if its assets exceeded its liabilities. A severe financial panic in 1907 highlighted the need for reform.

The Federal Reserve Act, passed by Congress in 1913, altered the currency almost as drastically as Civil War-era reforms had. It established a dozen regional reserve banks in which all national banks and most leading state banks would hold stock. These Federal Reserve banks would give banks within their regions currency or credit in exchange for “real bills” (short-term commercial loans secured by goods), federal obligations (bonds), or gold. Commercial banks would keep their reserves on deposit with the reserve banks, which, in a crisis, could advance funds to any institution in trouble. The Federal Reserve banks would issue their own notes, gradually replacing the motley collection of greenbacks, notes from national banks, and silver certificates in circulation. In the long run, the supply of money would still depend on the supply of gold, but reserve banks could cope with seasonal shifts in the demand for currency by purchasing (rediscounting) real bills from member banks to finance the movement of goods. The repayment of these loans would withdraw money from circulation once it was no longer needed. A central board, appointed by the president and headquartered in Washington, would oversee the new Federal Reserve system (commonly referred to as “the Fed”). Bankers themselves largely authored these reforms, which were designed to reinforce the financial system, not remake it. But progressive reformers such as Bryan and the lawyer Louis Brandeis were able to insist that the politically appointed board in Washington have ultimate responsibility over the system.

World War I further changed the American and, indeed, the world monetary systems. The combatants abandoned the gold standard, and precious metal gravitated to the United States as the Allies used gold to pay for military supplies, greatly increasing both the supply of money and prices in the United States. After the country itself entered the conflict in 1917, Washington temporarily banned the export of gold, effectively suspending the gold standard. (Gold continued to circulate domestically.) To finance the country’s military effort, the Federal Reserve purchased large quantities of federal bonds with its notes, further expanding the money supply and pushing prices upward. Overall, prices in the United States more than doubled between 1914 and 1920. The architects of the Federal Reserve had assumed that the gold standard would continue to govern international monetary relations and that real bills would constitute the majority of the Fed’s assets. The war undermined both assumptions, forcing Fed officials to rethink monetary policy.

In the 1920s the United States and leading European powers sought to re-create the monetary stability of the prewar era. The United States ended the embargo on gold exports in 1919, and a sharp recession in 1920 and 1921—a result, in part, of Fed efforts to halt inflation by raising interest rates—reversed some of the wartime rise in prices. But the other industrial nations only gradually followed the American example. They had suffered more inflation than the United States and had lost much of their gold reserves. Britain, the most important of these nations, returned to the gold standard only in 1925. Even after that year, the dollar had a special place in the international system. The United States had the world’s strongest economy, and it consistently ran a surplus on its balance of payments (the summary of a nation’s economic and financial transactions with the rest of the world). Dollars were at a premium, and some countries covered balance-of-payment deficits by transferring dollars rather than gold. The dollar had partially replaced the precious metal in international finance. This freed the United States from the day-to-day limits the gold standard imposed on monetary policy and forced the Federal Reserve to devise new criteria for action. The central bank, working through the embryonic Open Market Committee (OMC), managed policy by trading federal securities in the open market. Purchases injected money into the financial system; sales sucked it out. But open market operations represented a tool, not a plan. In practice, Fed policy followed no hard-and-fast rule but the judgment of its leaders, who manipulated interest rates and the money supply in ways that they hoped would promote economic growth and financial stability.

Their judgment proved unequal to the Great Depression. The stock market crash in the United States and comparable disasters in Europe deranged financial markets and set off a cascade of bankruptcies. Unsure how to respond and internally divided, the Fed vacillated between paralysis and adherence to the verities of the gold standard. In 1931 it raised interest rates to curtail gold exports, a move that may well have choked off a recovery. The supply of money contracted by a third between 1929 and 1933, hurting every type of business and forcing prices and production down sharply.

The disaster forced further changes in the currency. After taking office in 1933, President Franklin D. Roosevelt gradually devalued the dollar from $20.67 to an ounce of gold to $35, and his administration banned domestic ownership of gold entirely. Gold coins disappeared from circulation, replaced by paper. Though the precious metal continued, in theory, to back the currency, the link was tenuous. Gold mattered only for international transactions, and the Roosevelt administration had overvalued the precious metal, so foreigners were eager to sell it to the United States at $35 an ounce. In practice, the dollar was a fiat currency, worth what it could buy in the marketplace. The federal government also insured deposits with commercial banks, largely eliminating the danger that bank runs could seriously damage financial markets. Finally, in 1935, Congress reformed the Federal Reserve system, centralizing authority in the Federal Reserve Board in Washington and giving the Open Market Committee formal authority over monetary policy.
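The size of Roosevelt's devaluation follows directly from the two gold prices. A quick check, using the prices from the text:

```python
# Raising the official gold price from $20.67 to $35 an ounce cut the
# dollar's gold content by roughly 41 percent.
OLD_PRICE = 20.67   # dollars per ounce before 1933
NEW_PRICE = 35.00   # dollars per ounce after devaluation

old_content = 1 / OLD_PRICE   # ounces of gold in one dollar, before
new_content = 1 / NEW_PRICE   # ounces of gold in one dollar, after
cut_pct = (1 - new_content / old_content) * 100
print(f"gold content reduced {cut_pct:.1f}%")  # gold content reduced 40.9%
```

That reduction is why foreigners found $35 an ounce an attractive price at which to sell gold to the United States.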

During World War II the Federal Reserve financed the American military effort by purchasing large quantities of federal bonds. This policy increased the money supply and drove prices up 50 percent between 1939 and 1948, but the increase was less than that during World War I because the federal government levied stiff taxes to pay for the war. The main wartime innovations in economics involved international finance. Most economists and government officials believed that in the 1930s, the dislocation of international finance—devaluation, payments crises, and currency controls—had contributed substantially to the Great Depression. Accordingly, the Allies devised a plan to rebuild the international monetary system once the war was over. They sought stable exchange rates and readily convertible currencies but did not want to tie their money to the supply of gold—that is, they wanted the advantages of the gold standard without its disadvantages. To this end, the Allies adopted a system of “pegs,” fixing the value of their currencies in terms of dollars, which were “as good as gold,” and settling deficits and surpluses with the American currency. International agencies, most notably the International Monetary Fund (IMF), would finance countries with deficits, and governments in dire circumstances could regulate the flow of money across their borders. Other governments that accumulated dollars could convert them into gold at $35 an ounce.

This system worked fairly well for 20 years. The United States ran trade surpluses that kept the dollar strong, and American foreign aid and investment allowed other countries to pay for imports and amass dollar reserves large enough to expand their own currencies in line with production. As a practical matter, dollars served the role that gold once had.

Domestic policy was less consistent. After 1945, the Fed kept the interest rates on government bonds low, purchasing them itself if private buyers would not. Although popular with the Treasury, this policy forced the Federal Reserve to expand the money supply rapidly if either the demand for credit or the government deficit rose sharply, fueling inflation. That is exactly what happened after the outbreak of the Korean War in 1950. After long negotiations with the Treasury, the Fed changed its policy emphasis in 1951: Henceforth, it would set interest rates and supply currency, first and foremost, to secure high employment and stable prices. The international balance of payments and government finances remained a significant but secondary consideration.

Between 1968 and 1973, a series of crises destroyed the international system. Rising prices in the United States (a side effect of heavy military and social spending, financed in part by currency expansion) as well as the growing efficiency of foreign competitors (chiefly Japan and Germany) created large payments deficits that Americans paid with dollars. Other countries accumulated stocks of the U.S. currency vastly greater than America’s gold reserves. The United States could have raised interest rates and cut government spending to force prices down and eliminate the payments deficit, but no political support existed for this course, which would have entailed lower growth and employment rates, at least for a while. Further, the United States could not simply devalue its currency because the dollar was the centerpiece of the entire financial system. In 1973, after a series of increasingly severe crises, the industrial democracies ended all pegs and allowed their currencies to float, or find their value in trading in financial markets. Washington formally severed the last link between the dollar and gold, ceasing to value its currency against the precious metal. After 1973 the United States had a fiat currency, worth only what it could buy in the marketplace. In 1975 Americans gained the right to own gold, whose price would fluctuate like that of other commodities.

The dollar fared badly in the decade after 1973, during which consumer prices increased 130 percent—the most rapid rise in the country’s peacetime history. Many factors conspired to push prices up, but ultimately, the problem reflected a lack of political will. The Federal Reserve could contain prices by raising interest rates and slowing the growth of the money supply, but in the short run, this approach would create a recession, which political leaders refused to tolerate.

Eventually, the pain of inflation eroded the resistance to strong measures. Starting in 1979, the Federal Reserve, under Chair Paul Volcker, embarked on a decisive campaign to tame inflation, raising interest rates to historical highs and strictly limiting expansion of the currency. These moves triggered a severe recession, but after 1982 inflation slowed dramatically and growth resumed. The experience vindicated Volcker and the Fed, which subsequently enjoyed much greater leeway in pursuing decisive measures to defend the currency’s buying power. Although in its mechanisms quite different from the gold standard, this policy had the same objective: establishing a stable currency.

Alan Greenspan, Volcker’s successor, became chair of the Federal Reserve Board in 1987 and continued to focus on monetary policies designed to fight inflation. Between the terrorist attacks of September 11, 2001, and June 2003, the Federal Reserve cut interest rates thirteen times in an effort to stimulate an economy that had been in recession since March 2001. By late June 2003, Greenspan reported positive indications that the economy was improving but warned that some weaknesses persisted.
