Why do we blog?
By Paolo Peverini
The X.ITE Research Center was founded at LUISS with two main purposes: to investigate in depth, from a theoretical and methodological point of view, the increasingly complex relationship between technology and behavior, both individual and social; and to foster debate on topics at this intersection of technology and behavior that are extremely important for companies and institutions, encouraging encounters between business communities, researchers, students, and others.
In a few words, we believe that staying in an ivory tower is not only harmful but also boring.
Thus, the blog we launch today won't be a mere shop window or a one-way propaganda channel, but rather a place where everyone can share and discuss relevant themes in academic research, consumption, policy, and beyond.
We therefore like to think of this blog as a space that is as open as possible to contributions and comments from anyone interested in and motivated to reflect on a difficult scenario, one whose understanding escapes easy labels and ready-made managerial or institutional policy prescriptions.
The X.ITE blog is polyphonic. In keeping with the core philosophy of our Research Center, it is a truly interdisciplinary discussion tool. For this reason, we will be happy to host and share contributions from scholars, members of the business community, and representatives of the heterogeneous institutional world, diverse in background and experience.
The inclusive vocation of the blog also consists in directly involving our graduate students in Marketing and our Ph.D. students in Management in an innovative educational project in which we strongly believe: transformative and generative research.
Let's start with a post from the president of X.ITE, Paolo Legrenzi, inspired by the work of the Bletchley Park team. It focuses on the ethical dilemmas that accompany the (overwhelming?) rise of Artificial Intelligence, dilemmas that extend to the ascent of technology in general.
Enjoy it!
Technology Penetrates Everyday Life
by Paolo Legrenzi
Turing laid the foundations of the modern computer during the Second World War, aiming to create something that could eclipse the natural mental capacity of man. His team at Bletchley Park, England (near Milton Keynes), went on to build the machines that broke the German Enigma code, a feat that had eluded all of the Allies' human-powered efforts until then.
However, this technological breakthrough created its own issues. Understanding the German code now presented an ethical dilemma, because it was essential that the English not act in a way that would tip off the Germans that their code was broken. If they did, the Germans would simply have changed the code, rendering all that hard work moot.
In this manner, the birth of cold, raw automated processing was accompanied from the start by an equally human challenge of judgment. These twins are with us to this day.
1 – The Computer Brought More than Cognitive Power: Ethics in Computing
To hide their valuable secret, the English decided to triage and rank what information to act on based on its importance to the war effort. In one stark instance, the ruthless logic of this cost-benefit analysis forced decision-makers to stay silent and watch as the industrial city of Coventry was bombed, its citizens never warned. Their calculations suggested that more lives would be saved in the long run if they let Coventry go.
Today, this same problem has reared its head once again in the algorithms for self-driving cars. Whom should we program the cars to save in a crash: the driver or the pedestrian? Here, we see the same ethical issues that accompanied the birth of the man-computer relationship.
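To make the dilemma concrete, here is a minimal, purely hypothetical sketch of how such a trade-off could be hard-coded into a crash-avoidance policy. The names, weights, and numbers are illustrative assumptions of ours, not any real manufacturer's logic.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is expected to cause."""
    maneuver: str
    harm_to_occupants: float    # expected harm, 0.0 (none) to 1.0 (fatal)
    harm_to_pedestrians: float

# The car's "ethics" reduce to one constant chosen in advance by programmers:
# values above 1.0 prioritize pedestrians, values below 1.0 the occupants.
PEDESTRIAN_WEIGHT = 1.0

def total_harm(o: Outcome) -> float:
    return o.harm_to_occupants + PEDESTRIAN_WEIGHT * o.harm_to_pedestrians

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # The Coventry cost-benefit calculus, compressed into one line.
    return min(options, key=total_harm)

options = [
    Outcome("brake straight", harm_to_occupants=0.1, harm_to_pedestrians=0.7),
    Outcome("swerve into barrier", harm_to_occupants=0.6, harm_to_pedestrians=0.0),
]
print(choose_maneuver(options).maneuver)  # prints: swerve into barrier
```

The whole debate collapses into the choice of a single constant, and that is precisely the point: someone has to pick it in advance, in cold blood, just as at Bletchley Park.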
However, there is now a notable difference in how the problem manifests itself. Back in Bletchley Park, the first forms of computing and Artificial Intelligence were essentially extensions of the human mind. They moved repetitive and mundane computational tasks away from man to machine, allowing users to operate them as external augmentations to their existing abilities.
This was the first use of computer technology: Machines that extended the mind.
But, little by little, this equilibrium changed alongside technological growth. The digital revolution made new types of complex mathematical calculation possible, beyond the mundane. At the same time, new forms of text-, music-, and video-based communication emerged, reaching far beyond radio and TV.
This was the second use of computer technology: Machines that eclipsed the mind of man to enter other facets of the physical world.
2 – Automatic Platforms
Independence from Man – Over time, digital technology began to escape its augmentation purpose and invade everyday aspects of life, evolving into an automatic role where man did not control every single aspect. With these advancements, computers (and computing) were no longer just augmentation devices; they became fully independent of man. The birth of robots – machines that controlled other machines – completed the separation.
Platforms – The next big change was the move toward platform technologies. Perhaps the best example here is Uber. Famously, it is the world's biggest taxi company, yet it does not own any taxis. What it does own is the world's largest network that simply connects drivers (supply) with riders (demand), all the while collecting data from both sides. It is the ultimate platform. Collecting supply and connecting it with demand is what drives its business.
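The mechanics behind this are almost embarrassingly simple. Below is a minimal sketch of the two-sided matching at the heart of such a platform; the class and method names are illustrative assumptions of ours, not Uber's actual architecture.

```python
from collections import deque

class Platform:
    """A toy two-sided marketplace: it owns no vehicles, only the match."""

    def __init__(self):
        self.drivers = deque()  # supply, waiting to be matched
        self.riders = deque()   # demand, waiting to be matched
        self.match_log = []     # the data exhaust both sides generate

    def add_driver(self, driver: str):
        self.drivers.append(driver)
        self._match()

    def add_rider(self, rider: str):
        self.riders.append(rider)
        self._match()

    def _match(self):
        # The entire "product": pair the longest-waiting driver and rider.
        while self.drivers and self.riders:
            pair = (self.drivers.popleft(), self.riders.popleft())
            self.match_log.append(pair)
            print(f"matched driver {pair[0]} with rider {pair[1]}")

p = Platform()
p.add_rider("Ada")
p.add_driver("Bob")  # prints: matched driver Bob with rider Ada
```

Everything of value lives in the two queues and the match log; the platform never needs to own a single car.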
These platform-based online businesses, lacking tangible storefronts, also tend to rely on the strength of their network over what they actually sell. As such, tech giants like Facebook, Google, and Apple rarely invest the time, money, and energy to create the content circulating through their “stores”. Despite operating one of the largest stores in the world, Apple rarely makes apps for its own App Store.
PLATFORMS AND TRANSACTION COSTS
These new platforms, organized through the free collective power of the Internet and without the high costs associated with tangible command centers, have greatly lowered transaction costs and barriers to market entry. This has provided a wonderful natural experiment for testing Ronald Coase's theory of production processes under high or low transaction costs.
Coase's famous theory states that when firms face high transaction costs, production activity becomes internalized; the firms cannot afford to reach outside their own walls. But when transaction costs are low, production is outsourced, and firms focus their resources on their core competitive advantages.
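Coase's make-or-buy rule fits in a single inequality. Here is a toy rendering with made-up numbers, purely to illustrate the logic:

```python
def make_or_buy(internal_cost: float, market_price: float,
                transaction_cost: float) -> str:
    """Internalize ('make') when using the market is the dearer option."""
    return "make" if market_price + transaction_cost > internal_cost else "buy"

# Pre-platform world: finding and contracting a supplier is expensive.
print(make_or_buy(internal_cost=100, market_price=80, transaction_cost=30))  # make

# Platform world: the same supplier is one search and one click away.
print(make_or_buy(internal_cost=100, market_price=80, transaction_cost=2))   # buy
```

Platforms attack the third parameter. When finding, vetting, and contracting a counterparty costs almost nothing, the balance tips from making to buying, and production atomizes.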
Markets have now seen this separation (the atomization of production into its component parts) with the birth of enormous digital platforms and of self-governing, informal open-source communities.
THE INCREASE OF AVAILABLE INFORMATION
This impressive reduction in transaction costs has consequently lowered communication costs, leading to an integration of channels and a great expansion in the sheer amount of information available on them.
For example, when I was younger, one would encounter many independent channels to learn about a new product or service. You might have heard about it on the radio, seen it on TV, read about it on billboards or in newspapers, or even talked about it with a salesperson in a physical store. Now, all these channels have been concentrated into just one: the smartphone in everyone's pocket.
In an interesting twist, while markets are separating out and atomizing, communication has become completely integrated. As of 2016, online advertising had not only outstripped every other form but was worth more than all the others combined. More or less, there is now just one channel.
3 – The Super-Stimuli Arms Race
A negative side effect of technology's rapid march forward has been humanity's much slower ability to adapt physically. Our bodies are built for the ordinary activities of life 10,000 years ago, yet those activities are a minimal part of modern life. When was the last time you hunted a woolly mammoth?
But the human attention developed during those times remains wired within us. It is carefully attuned to give weight to only a few important things at a time. Prioritizing our focus is how we survived the harshness of prehistory.
Today, this ancient attention is attacked and overwhelmed by modern stimuli. Moreover, as we build ever more powerful artificial super-stimuli to catch our attention, we risk becoming like the mother birds of certain species, which are easily deceived into abandoning their real eggs for larger and more colorful fakes. There is a risk that technology will steer our basic instincts toward dangerous ends.
Furthermore, behind the screens of the increasingly distracted masses, a struggle is under way to see who can build the best super-stimuli and capture the greatest share of attention. Sadly, our brains are not built to withstand this war. No ancient adaptation has prepared us to choose whether to linger on a Snap or an Instagram story.
So, what we are now witnessing are radically new scenarios where base human instincts interact with new artificial environments. The result is clear: Human attention’s adaptive ancient purpose – to work quickly, automatically, and unconsciously to keep us alive – is now being exploited against us for profit.
How our physical evolution interacts with the evolution of technology has obvious implications for our future mental health and everyday existence. We are compelled to face the same ethical choices present at the birth of computing at Bletchley Park.
This is the reality that X.ITE seeks to study.
REFERENCES
Legrenzi, P., & Umiltà, C. (2016). Una cosa alla volta: Le regole dell'attenzione. Bologna: Il Mulino.
Legrenzi, P. (2017, August 5). "I danni morali dell'empatia". Il Sole 24 ORE.
Ramachandran, V. S., & Hirstein, W. (1999). "The science of art: A neurological theory of aesthetic experience". Journal of Consciousness Studies, 6(6-7), 15-51.