
The Journey toward Artificial Intelligence and the Promise for Human Intelligence

Updated: Mar 29

By: Gina Sanchez, Chief Market Strategist, Lido Advisors & CEO, Chantico Global

In 1947, three scientists at Bell Labs replaced the vacuum tube with the first transistor. This scientific advance went largely unnoticed at the time, but today we recognize it as the foundation of the information age.


The next step on the path to the new dawn of machine learning and artificial intelligence, or AI, came in 1957 with the first optical amplifier, which allowed for faster communication through boosted light signals. Together, the transistor and the optical amplifier allowed for exponential growth in the transmission, digital storage, and computational capacity of information. The volume of data created, captured, copied, and consumed worldwide has grown from the 2.6 exabytes (EB) stored in 1986[2] to a current estimate of 120 zettabytes (ZB)[1]. For those not versed in byte calculations, one zettabyte is equal to one trillion gigabytes and one exabyte is equal to one billion gigabytes. While this massive growth in data availability is impressive, on its own it is not sufficient to get us to AI. We also needed interconnectivity that allows computers to access that vast store of data, and interconnectivity has grown in both the amount of data that can be accessed and the speed at which it can be accessed.
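For readers who want to see the unit relationships and the scale of that growth worked out, here is a quick back-of-the-envelope calculation (assuming decimal SI units, where each step up is a factor of 1,000):

```python
# Decimal (SI) byte units: each prefix is a factor of 1,000
GB = 10**9   # gigabyte
EB = 10**18  # exabyte   = one billion gigabytes
ZB = 10**21  # zettabyte = one trillion gigabytes

stored_1986 = 2.6 * EB   # 1986 figure cited from Hilbert & Lopez (2011)
stored_now = 120 * ZB    # current estimate cited from Statista

growth_factor = stored_now / stored_1986

print(f"1 EB = {EB // GB:,} GB")   # 1,000,000,000 GB
print(f"1 ZB = {ZB // GB:,} GB")   # 1,000,000,000,000 GB
print(f"Growth since 1986: ~{growth_factor:,.0f}x")
```

The jump from 2.6 EB to 120 ZB works out to roughly a 46,000-fold increase in stored data over four decades.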


The final ingredient has been the evolution from the simple transistor to the microprocessor and the race toward thinner, cooler, and faster semiconductors. With this, the computational capacity of computers has soared. We can instruct computers to take vast amounts of stored information, access it over fast optical cable, and execute algorithms designed to mimic human thinking faster than ever. By 2007, we reached the point where a general-purpose computer could surpass the estimated number of synaptic operations of a single human brain[3].


And yet, computers could not “think.” They could be given a set of rules and strategies and respond to stimuli, as when IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997[4]. But Deep Blue had no memory and no ability to learn from its mistakes. Today, most AI falls into broad categories such as learning algorithms, reasoning algorithms, and self-correction algorithms. These algorithms can mimic language construction, forecast biological responses based on genes, and recognize complex face and voice patterns.


So, what does this mean for humanity? For the moment, we see a potential turning point for productivity. After almost a quarter century of massive investment in the technology arena, we may finally start to reap the productivity benefits, as large language models can handle basic research summarization and predictive models can help reduce data noise and keep us focused on the factors that matter.


But the ultimate promise of technology generally, and of AI in particular, is that not only will the algorithms get smarter, but so will the people who use them.


[1] Statista. "Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2020, with forecasts from 2021 to 2025 (in zettabytes)."

[2] Hilbert, M.; Lopez, P. (2011). "The World's Technological Capacity to Store, Communicate, and Compute Information." Science 332 (6025): 60–65.

[3] Gillings, Michael R.; Hilbert, Martin; Kemp, Darrell J. (2016). "Information in the Biosphere: Biological and Digital Worlds." Trends in Ecology & Evolution 31 (3): 180–189.

[4] Yao, Deborah (2022). "25 Years Ago Today: How Deep Blue vs. Kasparov Changed AI Forever." AI Business.
