The Evolution of Computer Technology: A Look Back at IBM in 1956
The year is 1956. You’re a researcher working at International Business Machines (IBM), the world’s leading tabulating machine company, which has recently diversified into the brand-new field of electronic computers. Your latest task? Determine the precise purposes for which your customers are using these huge mainframes.
At first glance, the answer seems straightforward: computers are primarily for the military. In 1955, the largest revenue source for IBM’s computer division was the SAGE Project, a Defense Department initiative to build a computer system providing early warning across the United States against potential nuclear attacks from Soviet bombers. This project alone raked in an astounding $47 million, with additional military projects contributing another $35 million. In stark contrast, programmable computers sold to businesses generated a modest $12 million.
With these figures in mind, you draft a memo for your boss, confidently asserting that the impact of computers on society will predominantly manifest in giving the United States a critical edge over the Soviets in the Cold War. The influence on the private sector appears negligible. As you lean back in your chair, light a cigarette, and reflect on the promising future of the defense-industrial complex, you couldn’t be more wrong.
Just two years after your memo, the landscape will shift dramatically. By 1958, revenue from programmable computers sold to private companies will match that of the SAGE Project. The following year, private-sector sales will equal the entirety of IBM’s military business. By 1963, less than a decade after your initial figures, military revenue will amount to a rounding error next to IBM’s rapidly growing commercial computer sales, which by then will account for the majority of the company’s overall revenues in the United States.
Understanding Contemporary AI Usage Through Historical Lessons
This week, accomplished teams of economists from both OpenAI and Anthropic released meticulously designed reports outlining how their AI models are being utilized. One can’t help but wonder what an IBM report from the 1950s, detailing customer use of their first computers, would look like.
To be clear, the rigor and attention to detail that the AI firms have demonstrated far exceed what our fictional IBM analyst might have achieved. Revenue figures are insufficient to grasp actual customer interest and use; even back in 1955, everyone understood that computers were evolving rapidly and their applications would expand. The AI firms today are buoyed by access to extensive real-time data that could have made IBM’s Watson family envious.
However, the IBM case serves as a valuable framework to clarify the type of insights we aim to derive from current AI usage data.
The AI companies’ reports offer a vital snapshot and recent history of the types of interactions users have with platforms like ChatGPT and Claude. Notable findings include:
- Uptake is skyrocketing: ChatGPT reached 1 million registered users in December 2022 and exploded to 100 million weekly users by November 2023. If the current growth rate continues, the number of ChatGPT queries may surpass Google searches by the end of next year (see the back-of-the-envelope sketch after this list).
- There is a noticeable disparity in AI usage between richer and poorer countries; however, intriguingly, middle-income nations like Brazil utilize ChatGPT nearly as much as their wealthier counterparts.
- The most common use cases for ChatGPT include practical advice (28.3% of queries), text generation such as editing or translating (28.1%), and information queries typical of search engines (21.3%). Meanwhile, Claude.ai users most often engage with the platform for computing and math queries (36.9%) and educational tasks (12.7%).
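As a rough, back-of-the-envelope illustration of the first point above, the Python sketch below compounds an assumed ChatGPT query volume at an assumed growth rate and solves for the crossover point with Google search volume. Every number in it is a placeholder chosen for illustration, not a figure taken from the OpenAI or Anthropic reports.

```python
import math

# Back-of-the-envelope extrapolation. All numbers are illustrative
# placeholders, not figures from the OpenAI or Anthropic reports.
chatgpt_daily = 2.5e9   # assumed ChatGPT queries per day today
google_daily = 14e9     # assumed Google searches per day, held constant
annual_growth = 2.0     # assumed ChatGPT growth of +200% per year (i.e., 3x)

# Solve chatgpt_daily * (1 + annual_growth) ** t = google_daily for t.
t = math.log(google_daily / chatgpt_daily) / math.log(1 + annual_growth)
print(f"Crossover in roughly {t:.1f} years under these assumptions.")
```

Changing any of the assumed inputs moves the crossover date substantially, which is exactly why extrapolations like this should be read as directional rather than predictive.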
Yet, while these findings offer essential first-order insights, they may not address more profound questions regarding the future of AI and its economic ramifications:
- Will human and AI labor complement or substitute each other in the next 5, 10, or 20 years?
- Will wages increase because the economy remains constrained by tasks uniquely suited for humans? Or will they collapse as these constraints vanish?
- Will AI give rise to “geniuses in data centers” — agents conducting their own scientific research, potentially accelerating the growth of scientific knowledge and the economy?
While numerous experts are asking these questions, much of the existing theoretical work lacks empirical research to validate its concepts. My apprehension is that, similar to the IBM scenario, firsthand details on current AI usage could lead to misconceptions about future implications and impacts on our lives. If we resurrected our IBM analyst from 1956 to examine today’s reports from OpenAI and Anthropic, they might draw misleading conclusions regarding the future of labor.
Why AI Diffusion is Unique
One of the classical insights from the economics of innovation is that new technologies often require extended time frames to “diffuse” through the economy. Zvi Griliches’ pivotal 1957 paper on hybrid corn illustrates this principle: while farmers within specific states rapidly adopted this technology, diffusion from one state to another lagged significantly.
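Diffusion studies in this tradition typically summarize adoption as an S-shaped (logistic) curve: slow uptake among early adopters, rapid spread, then saturation. As a minimal sketch of what fitting such a curve looks like, the Python snippet below fits a logistic function to invented adoption shares; the data points and fitted parameters are hypothetical and are not drawn from Griliches’ paper or the AI usage reports.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ceiling, steepness, midpoint):
    """S-shaped diffusion curve: adoption share as a function of time t."""
    return ceiling / (1 + np.exp(-steepness * (t - midpoint)))

# Hypothetical adoption shares by year since introduction (illustrative only).
years = np.arange(9)
share = np.array([0.02, 0.05, 0.10, 0.22, 0.40, 0.58, 0.72, 0.80, 0.84])

# Estimate the ceiling, steepness, and midpoint of the toy diffusion curve.
(ceiling, steepness, midpoint), _ = curve_fit(logistic, years, share,
                                              p0=[0.9, 1.0, 4.0])
print(f"ceiling={ceiling:.2f}, steepness={steepness:.2f}, midpoint={midpoint:.1f}")
```

A steeper curve and an earlier midpoint correspond to faster diffusion, and speed is the dimension on which AI appears to stand out.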
For AI, the data indicate that diffusion across sectors is proceeding faster than historical precedent. Adoption rates surpass those for online platforms like Facebook or TikTok, and far exceed earlier technologies such as hybrid corn or electricity.
This rapid integration may eliminate the adaptive time afforded by prior technological revolutions, raising concerns about how societies will cope with this accelerated change.