The Future Of Human Evolution

In terms of the evolution of species, human beings have evolved the fastest on this planet. From an ape-like ancestor, Homo sapiens have come to rule the earth, even though humans are among the most recently arrived species on the planet. It is the human brain that is responsible for such a speedy rate of evolution.

Today human beings are the supreme controllers of planet earth. However, a series of events led to the evolution of the human species. During the earlier stages of human life on earth, humans lived nomadically in groups, and hunting and gathering food was their only source of livelihood. Human beings were as vulnerable as other wild animals during that stage, as they did not possess any skills or weapons to protect themselves.

However, slowly but steadily, human beings began to evolve. A number of factors aided the early growth of human civilization. The discovery of fire, the beginning of agriculture, and the building of weapons and tools helped human beings move from a nomadic life to settled life in groups. The domestication of animals also helped our species gain power over other groups of animals.

Once human beings started staying in groups, kingdoms gradually formed, and formal administration began. The growth of the human species was slow and steady during the era of kings and kingdoms. However, human beings witnessed rapid growth after two important events: the industrial revolution and the emergence of Information Technology (IT).

After the industrial revolution and the IT sector’s global growth, information became readily available to people, irrespective of their class, caste, religion, economic stratum, or color. Therefore, in the last century, we as human beings saw colossal advancement and growth in the fields of science and technology. As a result, today, human beings live a life of luxury and comfort that they could not have imagined a hundred years ago.

Human evolution is currently at its most progressive stage. Today we can lead a life of comfort and luxury. Nothing is difficult in today’s world, thanks to the advanced and innovative equipment we possess. In the present scenario, we are the rulers of the planet. So is there any further advancement that human beings as a species can make in the future?

What is the future of human evolution? There is a real possibility that our cognitive capacity will merge with artificial technology to make human beings even more powerful. There might come a time when human beings will not require technological devices at all, as they will be able to carry out such tasks by themselves. In the future, we might see superhumans who combine human biology, computers, and artificial intelligence.

Homo Sapiens are Insatiable Infovores

We know about herbivores, carnivores, and omnivores. We learned those terms in elementary school. Deer are herbivores and tigers are carnivores. We humans, based on our food preferences, are omnivores. However, food is not all humans consume. We consume everything we read, listen to, or watch in our day-to-day lives. In other words, we consume information, so much so that a new term, “infovore,” has come into existence and grown increasingly popular over the years. In 2013, it was named the Word of the Year by Macquarie Dictionary.

Who is an Infovore?

An infovore is someone who craves and consumes information on a regular basis. By a more formal definition, an infovore is a person who indulges in, and desires, the gathering and interpretation of information. We humans have a voracious appetite for information. This explains why you are reading this article instead of watching clouds float in the sky.

The term was introduced by two scientists, Irving Biederman of the University of Southern California and Edward Vessel of New York University, who coined it while researching why we enjoy seeking information and learning something new. According to them, while the term is new, our quest for knowledge isn’t. It is almost as old as our basic instincts, and only more pressing needs, such as hunger for food, can suspend our innate desire to seek and gain new information.

Why Do We Have an Insatiable Hunger for Information?

To put it simply, we are biologically wired to do so. According to Biederman, all of us have a feedback mechanism in our brain that rewards us whenever we acquire new information. His theory was built upon a 25-year-old finding regarding mu-opioid receptors, the binding sites for natural opiates. These receptors modulate a number of physiological processes, including the perception of pain and pleasure. They also trigger the brain’s reward system and are responsible for initiating addictive behaviors in humans.

Biederman used fMRI, a brain-scanning technique, to measure brain activity involving the mu-opioid receptors in response to various visual images. He found that the human brain derives pleasure from gaining information and prefers novel images over familiar ones. While his work has documented our brain’s response to visual information, he theorizes that it responds similarly to the other senses. Let’s look at the science behind his findings:

The mu-opioid receptors are distributed in a gradient, gradually increasing in density along the ventral visual pathway, a part of the brain involved in recognizing and processing visual information such as objects, faces, and places. They are distributed sparsely in the early stages of the pathway, where the brain first receives visual stimuli, but densely packed in the parahippocampal cortex and rhinal cortex, which form the later stages of the ventral visual pathway. These areas are linked to the comprehension and interpretation of images.

The parahippocampal cortex and rhinal cortex engage with our memories, both semantic (facts) and episodic (events, experiences), and they activate when our brain tries to process and interpret whatever it is seeing or hearing. So, if a visual stimulus contains a lot of interpretable information, it increases neural activity in those areas, resulting in increased stimulation of mu-opioid receptors and the release of endomorphins. In short, we experience a neurochemical reward, or pleasure, when we acquire new information: the greater the consumption of information, the greater the pleasure.

The Desire for Novelty

When it comes to information, we also prefer novelty. A stimulus, even a very interesting one, becomes less and less pleasurable with repetition. For example, we may not find a movie or a podcast as engaging and enjoyable when watching or listening to it for the second or third time. This happens because an initial presentation activates a large portion of neurons; however, only a few cells are strongly engaged while others are moderately or weakly engaged. With repetition, the connections between those few strongly engaged neurons become stronger and stronger, inhibiting those that are moderately or weakly engaged. Thus, repetition results in a net reduction of neural activity in the brain.

As Biederman put it in the USC release, “The system is essentially designed to maximize the rate at which you acquire new but interpretable (understandable) information. Once you have acquired the information, you best spend your time learning something else. There’s this incredible selectivity that we show in real-time. Without thinking about it, we pick out experiences that are richly interpretable but novel.”

Why Is Being an Infovore Good?

The brain’s feedback mechanism, which rewards the acquisition of new information, isn’t accidental. The human brain needs constant stimulation to stay healthy and young. Anything that requires mental effort, including gathering and interpreting new information or solving puzzles, triggers brain activity, which forms new connections between neurons and may even help the brain grow new cells.

With constant stimulation, the brain develops neuroplasticity: the ability of the brain to change itself, both functionally and structurally, in response to a new stimulus. A well-functioning brain helps us regulate our emotions and attention span, and improves our memory and our ability to learn and solve problems. So, our “infovore” nature ensures that we constantly stimulate our brain, preventing or slowing down cognitive impairment as we age.

As we are currently living in the Information Age, Biederman’s and Vessel’s work is more relevant than ever before. Today, information is available in abundance, and we can readily access it at the touch of a button. When it comes to information, we tend to assume that more is better. Knowingly or unknowingly, we have been acknowledging and catering to the “infovore” in us, and the advent of new technology has just made it easier. It is no wonder that in the 21st century, the volume of information we are creating and consuming is unprecedented.

Curiosity: An Innate Part of Human Nature

Suppose I tell you that this article will divert you from your goals and ambitions, whatever they may be, and that it will not provide the life-saving information you are looking for. Instead, I would say that this article has the potential to teach you some valuable lessons about your inner child.

Though all this sounds discouraging, I am sure you will still jump to the next section of this article just for the sake of reading it. You will read on out of curiosity about what the article has in store for you. You will continue reading to find out a little more of what it has to offer.

This explains curiosity to some extent. Human beings are inquisitive by nature, and most often, this curiosity is about the little tittle-tattle of our lives. Our curiosity often lands us in unproductive pursuits: reading about a personality we’ll probably never meet, spending time understanding subjects that won’t be of any use a few months from now, or simply exploring places we’ll never visit again in our lives.

The fact that we humans are so eager to get our “whys” answered is a curious case altogether. Human curiosity has thus become a point of discussion for innumerable books and studies. It is curiosity that leads humans to explore, learn, discover, build, and invent.

Humans and animals are alike in being curious. However, humans stand out as the more curious ones. Animals are curious, but they are incapable of taking their curiosity any further. Humans, in contrast, are filled with a thirst to satiate their curiosity by asking and looking for answers to the ‘why’ behind every question that pops into their minds, a trait that animals lack.

Is Curiosity an Inherent Trait in Humans?

If we consider the various studies conducted so far to understand the nature of human curiosity, the answer appears to be yes. Many studies have found a strong genetic component to curiosity, and until substantial evidence contradicts this finding, it is likely to remain the prevailing view.

It has also been found that curiosity can be passed down from one generation to another, just like any other trait. Studies have also revealed that some people are more curious than others. Yet another fact that various studies have established is that all human beings are curious; however, the object and degree of curiosity differ from one individual to another, depending on the person and the situation. By contrast, people suffering from deep depression or certain kinds of brain damage are likely to lack curiosity, unlike typical human beings, who are all curious and willing to find answers to their ‘whys.’

Different Types of Curiosity

Most of us might never have thought that there are different types of curiosity. Interestingly, human curiosity can be classified into two broad categories: perceptual curiosity and epistemic curiosity. Furthermore, these two kinds of curiosity show up in different parts of the human brain during functional MRI scans.

Perceptual Curiosity

The first kind of human curiosity is known as perceptual curiosity. Perceptual curiosity is the kind that inspires a person to seek out information to solve immediate problems, such as finding food, trying to remember an old friend’s name, or trying to recall the lyrics of a song you loved in your teenage years. An individual feels this curiosity when he or she encounters an object or a phenomenon that fails to match up with his or her established knowledge and understanding.

Perceptual curiosity is primal and exists on a continuum between fear and satisfaction. In other words, perceptual curiosity comes into play when there is a fear of negative outcomes, and hence it forces us to take action. How embarrassed would you feel if you failed to recall the name of an old friend you ran into after ages? It is perceptual curiosity that keeps you trying to recall their name.

It can also be said that the role of fear in human nature is double-edged. On the one hand, fear of going wrong or of a negative outcome motivates us to be curious and find a solution to the problem at hand. On the other hand, too much fear forces us to shut down and take a backseat.

Perceptual curiosity is driven by novelty. Though everyone experiences it regardless of age, the desire for novel things diminishes as one gets older. This attribute of perceptual curiosity explains very well why people in the later stages of life tend to support the status quo, why a boring and monotonous job wears them down, and why people ask fewer questions when they are frightened.

Epistemic Curiosity

The other type of human curiosity is epistemic curiosity. Unlike perceptual curiosity, it is driven less by external factors and more by emotions. Epistemic curiosity gives birth to the human love of knowledge, the desire to explore and learn new things. It is rooted in the pleasure that comes from mastery, the satisfaction of curiosity for its own sake.

In simple words, epistemic curiosity is the desire to know for its own sake. Epistemic curiosity drives scientific discoveries, new philosophies, and the asking of bigger questions that help in the growth, development, and shaping of human society.

Epistemic curiosity is much cooler and more rational than perceptual curiosity, and it is deeply rewarding and pleasurable for those who can satisfy it. Epistemic curiosity is experienced as a pleasurable state that comes with the expectation of the rewards hidden in acquiring new knowledge.

Though the two types of human curiosity are quite different from one another, both are believed to be essential for a human’s ability to lead a happy and productive life. While the love of knowledge creates rewards for its own sake, it is important for humans to see results as well, and hence both perceptual and epistemic curiosity are of great significance.

Importance of Being Curious

Albert Einstein once said, “The important thing is not to stop questioning… Never lose a holy curiosity.”

Human curiosity, an important trait of human genius, has a genetic component at play.

Here are a few reasons why being curious is important for the growth and development of an individual:

  • People with curious minds are always active, and every mental exercise makes the mind stronger.
  • Being and staying curious keeps our minds open to new ideas.
  • Curiosity opens new worlds and possibilities for humans and society.
  • Curious people are always excited and far from boring. Their lives are neither dull nor bound by routine.

Thus, keeping an open mind, asking questions, and seeing learning as fun all help build curiosity.

Why Are Humans Hungry for More Information?

Evolution is a constant process of change that every organism in the world has gone through. But the organism that has evolved the most is the human being, and particularly the human brain. According to research, humans have one of the largest brains in proportion to body size of any living creature. Evolution has shaped human behaviors and habits: “survival of the fittest,” as Darwin’s theory of evolution puts it.

Amid growing challenges, humans have evolved their bodies and complex minds. Along with the evolution of sensory systems as adaptations to particular environments, the capacity to process large amounts of sensory information increased, and with it the power to create more complex models of reality. People are hungry for wisdom, obsessed with knowledge, and addicted to information. We live in the information age, in which knowledge has been diluted into information, and the bulk of it flows in the opposite direction, away from wisdom. One of the signs of addiction and illusion is that the more you eat, the hungrier you get.

In primitive times, survival was the only rationale for the evolution of the human brain. But in the modern era, competition has taken its place. Science, technology, and every comfort of today’s world are the creations of humans alone, and the evolution of the mind is the reason behind them. This has kept humans hungry to gain more knowledge, and to evolve further, since ancient days.

For Continuous Adaptation to Social Changes

Human beings are social animals. Building social relationships comes naturally to humans, and as time has passed, humans have become ever more inclined to build them. Human minds are accustomed to surviving in groups and cooperating to fight common enemies. This has made humans work together as communities.

Humans started building communities and making the earth a better place for their survival. They lived in groups called families and formed strong, positive bonds with other humans too. Humans even started living with animals as their pets. There is no doubt that humans are social animals who love company!

Over time, humans also started making gadgets to keep themselves connected with more and more people. The earth is a vast place, so meeting everyone and keeping in touch with everyone in person is not possible. But humans resolved that problem too!

Now, thanks to mobile phones and technology, humans can keep in touch with each other globally; it would hardly matter if you wanted to connect with someone on the moon. Every single thing has become much more manageable.

To Anticipate the Threat

Survival is also a reason for the rapid evolution of the human mind. Humans kept strengthening their strategies to survive against challenges. Initially, they made tools to kill animals for food and to defend themselves from wild animals. Along with the evolution of the mind, tools were gradually replaced by weapons of ever greater size and power. What better example than the atomic bombs dropped on Japan?

Humans long for information and knowledge so that they can put it to work effectively and make things easier. Humans have always tried to gather information in every field. This is why we now stand at greater heights, with little left that is difficult to achieve.

We humans need challenges to overcome; this strengthens our knowledge and willpower. Among all the world’s species, only humans have evolved to become so successful in everything they set out to do.

Humans have also developed a great deal in the field of medicine. The fear of death has driven humans to work miracles, making even the seemingly impossible possible. Today, few diseases remain that cannot be treated with medicine; even the deadly COVID-19 has been countered through vaccines. Earlier, people were unaware of diseases and their treatments, but human evolution resolved this problem too.

Various other factors lead the human mind to evolve in order to anticipate threats. Fear is the stimulus that has made man develop through the ages; the invention of various reforms, armed forces, defense techniques, and so on are the results.

FOMO: Fear of Missing Out

Social media is all the craze these days and has entirely altered the way people interact. FOMO is one of the results of social media; if not checked in time, it may lead to problems like stress, anxiety, lack of sleep, and overthinking. More or less, social media has become an addiction for the current generation.

Humans tend to grab more and more information from social media, which creates competition and panic among people. For example, if one person knows something more than another, it creates envy, and also a fear of missing some information that the other knows about.

Social media connects people all over the world and thus binds relationships together. But FOMO arises when the person you want to contact is offline, which may create a feeling of fear within you. FOMO is a negative outcome of addiction to social media. Since the arrival of social media, FOMO has become more apparent and has been studied more often. Social media has accelerated the FOMO phenomenon in several ways: it creates a situation in which you start comparing your life to the lives of others.

FOMO may include:

  • Fear of losing reputation.
  • Missing the ability to be interesting.
  • Missing the ability to get the correct interpretation.
  • Missing information due to its large volume.
  • Missing timely interaction.
  • Losing popularity.
  • Missing a valuable opportunity.
  • Hundreds of other fears.

FOMO can apply to anything from a party on a Saturday night to a promotion at work, but it always involves a sense of helplessness that you are missing out on something big.

Social media creates a platform for boasting, where things, events, and even happiness itself seem to compete. People show off their best experiences, which may lead you to wonder what you are lacking. Unfortunately, social media engagement is not always the way to get information; you might be running from one bad situation right into an even worse one.

Self-control is the key, and patience is the best thing to practice. Anxiety, fear, and depression are far more toxic than they appear. Humans are good at hiding their emotions, but hiding them is not the solution; keeping them in control is. Meditation can help in controlling such negative feelings. One must also stop comparing oneself with others; this by itself removes much of the competition. Every human is unique and should appreciate themselves for it. Shift your focus away from social media.

Being Up to Date with Environmental Changes

Environmental changes have taken place since past ages, from minimal to extreme. Humans have evolved a great deal to cope with the changing environment. It is also true that humans are responsible for some of these changes.

Considering the interaction between past environmental and evolutionary change informs scientific and public awareness of how the natural environment behaves. Due to continuous environmental change, significant development has been seen in our species. Humans have always been hungry for information that would let them eradicate the root causes of these changes.

Humans have dealt with every change in the weather, managing even harsh conditions with utter intelligence and skill. Natural calamities, pandemics, and the like were met by developing better survival strategies. Humans have interacted with their surroundings through rapidly changing technologies, the harvesting of food, and the exchange of resources. The success of the transition from hunting and gathering to food production is proof of human intelligence. However, over the past several centuries, these developments have dramatically expanded human influence on global ecosystems.

Humans have made every visible thing much more straightforward than before, while simultaneously making it more complex. Our environment must be kept clean and healthy for every species on earth to survive, yet the increased construction of factories and industries has made the environment toxic. Deforestation, global warming, and the like are also the children of human creation. Humans are responsible for making planet earth hotter day by day; our seas are more poisonous; we have seen extremes of weather such as tsunamis and cyclones; and the ice caps of the mountains are melting at a rapid rate. And that is just the beginning of the story. Very soon, we should start using alternative sources of energy, putting less pressure on non-renewable sources. Renewable sources of energy not only decrease the pressure on non-renewable resources but are also harmless to the environment.

Information Overload: The Junk Food To The Human Brain

Over the last century, the human brain’s curious nature has been nurtured well by the progress of science and technology. A hundred years ago, an average person’s mind could absorb the information of about 100 books over an entire lifespan. However, according to research conducted in 2007, an average person’s mind now takes in information worth 85 pages of newspaper in just 5.50 minutes.

With the emergence of the internet and online websites, there has been an explosion of information available to human beings. Moreover, in the last couple of decades, infographics have evolved as a source of information that is far easier to read and comprehend. The human brain is therefore now exposed to a wave of facts, figures, data stories, and images.

Infographics are an easier medium for the brain to store information. However, is this knowledge becoming too much for the brain to store and comprehend? Is this shortcut way of memorizing facts and figures creating an overload, acting as junk food for the brain? Are we about to enter an era of information overload? We will discuss all of these questions below.

What Is Information Overload?

There is a thin line between an informed brain and information overload. While an informed brain boosts an individual’s productivity by enhancing creativity and skills, information overload acts as a negative catalyst for productivity and creativity, reducing a human being’s working capabilities. In other words, while an informed brain is a key to an individual’s success, information overload brings nothing but failure and a troubled work culture.

Information overload usually transpires when a person cannot take in a piece of information because it exceeds his or her processing capacity. This overload of information causes losses of several billion dollars worldwide: according to Dean and Webb’s research in 2011, the world suffers a financial loss of 650 billion dollars per year, directly or indirectly, due to information overload.

It is interesting to know that even though the term “information overload” has gained prominence only in the past few years, the concept is not as modern as we think. The problem of information overload dates back to as early as the thirteenth century AD. (Of course, people in those days did not use the phrase “information overload.”) As per Blair’s research in 2012, people then, too, complained of their inefficiency in completing tasks due to a bombardment of information that could not be processed within a reasonable period of time. The reasons for information overload remain precisely the same across the two eras: lengthy books, a shortage of time, and data that exceeds a person’s processing capacity.

Over the course of modern history, two important developments made information far less costly to obtain. One was the invention of Gutenberg’s printing press, and the other was the growth of Information Technology (IT) during and after the era of the industrial revolution. Information overload was a rare phenomenon before these two radical innovations, as data, facts, and figures were restricted to a wealthy and privileged group.

More precisely, the abundant growth of IT made data and information available to people of all classes, castes, and genders, irrespective of their economic strata. Therefore, within a few decades of IT’s growth, the term “information overload” began to gain popularity worldwide.

Research on information overload was at its peak in the 1980s and ’90s but gradually declined during the 2000s. However, in the last few years, given the explosion of information easily available via the fast-paced internet, research is back at its peak. Recent work has focused on new social media and artificial intelligence as potential catalysts of information overload, and has found evidence that they are.

Definition Of Information Overload

No doubt, there has been intense research on information overload for decades now. However, a widely used, standardized definition of information overload is still unavailable. Eppler and Mengis, in 2004, listed seven definitions of information overload.

Working Definition

The first is the working definition of information overload. According to the working definition, information overload transpires when a person receives an abundance of information, creating a poverty of attention toward the large body of data in front of them. While researchers have been operating with this working definition since the 1960s, the internet and the IT era have firmly affirmed it.

How Information Overload Affects the Cognitive Capacity of an Individual

Today, we live in the information age, where we are bombarded with tons of information within fractions of a second. This drowns the individual in an unprecedented deluge of data. Shenk, in 1997, termed this deluge “data smog,” the “muck and druck of the information age.”

In today’s era, information is readily available to a decision-maker. Moreover, along with the required information, the decision-maker is also exposed to an abundance of extra information at little to no additional cost. However, as discussed in the sections above, data was not freely available before the IT era.

Therefore, in pre-IT times, the cost of processing a piece of information was very high. In the absence of modern technologies, employees were paid handsomely for calculating, evaluating, and interpreting data within a limited period of time.

In the present day, however, processing a piece of data that required several days in the pre-IT era takes just a fraction of a second to produce the most accurate result, thanks to almighty technologies such as scientific calculators and modern computers.

The fast processing of information has resulted in the accelerated growth of human civilization. While there is no doubt that IT’s progression and advanced technologies have made the average human being’s life more comfortable than ever, there is also no denying that they have had an adverse effect on the human brain’s cognitive ability.

The simple reason is the disproportionate growth of technology relative to the cognitive capacity of the human brain: technological advancement is far outpacing the development of the brain’s cognitive capacity. In 1971, Simon and Newell found that the limits on holding information in our short-term and long-term memory are the major reason we cannot take in all the information available in a limited time.

While people in pre-IT times surely lacked the technologies, they often had more time to access and interpret the limited information available to them, and with more time came a more comprehensive interpretation of the data. Moreover, as individuals were exposed to only a limited amount of information, there was no question of overloading. Thus, even with technological constraints, human beings had enough cognitive capacity to hold, interpret, and store all the information they received.

In today’s age, however, two factors result in information overload. First, with the advancement of technology, complex data can be interpreted within fractions of a second, leaving the individual no time to process and analyze the information. Second, with the evolution of the internet, infinite data is available to the average human being, making it nearly impossible to discard the information junk and keep only the most essential sources. Thus, the most adverse effect of the advancement of technology has been on the human brain.

Information overload has thus made a direct impact on an individual’s cognitive ability. Today, the human brain is unable to process the majority of the information it gathers. The advancement of technology leading to information overload has therefore decreased the average individual’s productivity, making him or her less effective in a work atmosphere.

Today, even basic sorting of information has become cognitively burdensome, and an individual’s decision-making ability also deteriorates with growing information overload. Part of the blame lies with the infographic way of presenting information, which acts more like a dopamine hit than a stimulus to our analytical brain.

Infographics: The Junk Food For The Brain

Over the last decade, there has been a boom in showcasing infographics as a source of information. Not only are infographics attractive to the eye and able to draw readers in at one go, but they are also easier to read, understand, and comprehend. In fact, people of all age groups, whether children, adults, or the elderly, can easily comprehend the information shown in infographics. Thereby, their popularity has increased drastically in the past few years.

Definition Of Infographics

Infographics are the simplest way of providing information to a group of people. When a piece of data is presented in pictorial form with diagrams, graphs, and pictures, that model of information is defined as an infographic. In other words, an infographic is the representation of data through a mixture of words and pictures. Because of the presence of diagrams, infographics are very easy to comprehend. Moreover, with the evolution of the internet and the emergence of social media, infographics have today become the ideal way to share data.

How Infographics Cause Information Overload

Due to their ease of interpretation, infographics are today the most widely used method of sharing information. However, their simple nature has contributed to the degradation of human cognitive ability: the levels of interpretation and analysis required by infographic content are too low. Therefore, when an individual is exposed mostly to the infographic world, he or she loses the plot when it comes to analyzing complex data, and loses decision-making and problem-solving capacity whenever the data becomes a bit complicated.

Moreover, too many infographics also confuse the individual’s cognition. Because data is so abundant, an individual tends to come across several pieces of infographic content within a short span of time. It then becomes difficult to discard the unnecessary data, as everything seems important, thereby creating information overload.

Thus, in this modern age, infographics act like junk food for the brain: not only do they cause information overload, they also affect the brain’s cognitive ability, making the individual less effective in a work environment.

Final Thoughts On Information Overload

The modern world is a source of infinite information. However, holding all of it in one human brain is next to impossible. This is where the concept of information overload develops.

After all these discussions, information overload can be defined as a state in which a decision-maker faces a set of complex information, characterized by redundancy, contradiction, and inconsistency and comprising an accumulation of individual informational cues of differing size and complexity, that inhibits the decision-maker’s ability to optimally determine the best possible decision.

Digital dualism: The fading distinction between the digital and physical world

Let’s start with what exactly “digital dualism” is.

Digital dualism is a proposed idea which states that the human world has two different realities: the online and the offline world. According to this view, the two realities are separate and do not overlap. Offline reality is referred to as the physical world, and online existence as the virtual world. The term digital dualism was coined in 2011 by Nathan Jurgenson, founder of the blog Cyborgology.

Jurgenson, however, holds that digital dualism is invalid. He sees it as a fallacy influencing academics like Sherry Turkle, whose notion of the digital world as the human’s “second self” helped popularize the concept, as did films like The Social Network, which depict real-life connections being traded for something digital. Modernization is proving the term wrong, as it is no longer sensible to divide ourselves into two selves that exist in different realities.

Ten years back, the term seemed to be a fact, but as the virtual or online world became more and more ingrained in our lives, the term became less and less valid. It is instead an abstract and limited concept that fails to hold its meaning adequately over time.

In 1960, the psychologist and computer pioneer J. C. R. Licklider wrote: “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

The fading distinction between the two worlds

The Digital World and the Physical World overlap and work simultaneously. The virtual world came into existence due to the presence of the physical world. 

They coexist in this era rather than being independent realities. Virtual reality, or the digital world, depends entirely on the physical world, but not vice versa: the physical world does not depend on the virtual world and will still exist if, for any unfortunate reason, the virtual world ceases to exist at this stage of evolution.

The virtual world mainly consists of various social media platforms like Facebook, Twitter, Instagram, TikTok, etc. The life, image, and status maintained on social media now shape people’s behavior towards each other in the physical world.

Social media life has evolved. At first, these platforms were just a way for two people at opposite ends of the world to communicate, but they then became a place to make new friends across the world, getting to know them virtually and bringing them closer by sharing memories and special moments. Social media platforms enable users to text, talk, and see each other in real time. They are now an inseparable part of our lives.

Social media has lessened the real-world distance virtually 

Online platforms are also helping this generation with almost everything: paying bills, earning money, getting groceries and food delivery, and more, with a few clicks on their mobile phones or devices. Thus, the virtual world merges with physical reality and becomes a new combined reality that will change communication and interaction between humans.

Our virtual decisions affect us in the physical world in all aspects. Pictures and images are now the new physical reality, and our devices are the new workstation; from work information to meeting rooms, everything is available to us at a single touch.

The virtual world will keep growing and overlapping with the physical world. In the coming decades and generations, it will turn into a hybrid reality in which the two worlds coexist and are entirely dependent on each other.

The upcoming generation will have a different reality. They will experience the physical and virtual worlds simultaneously; that is, the human senses will adjust to the new normal of interacting in a hybrid environment. In this reality, every action, activity, or job will involve the virtual world as much as the physical one. It will become a necessity rather than just a side tool, and that will change their real world.

The digital and physical worlds are no longer separate; technologies such as Twitter, Facebook, and LinkedIn are often used to connect people in both the virtual world and physical reality. This occurs, for example, when people meet online and then meet those virtual friends in person to form a deeper relationship. The meaning of connection and interaction has changed drastically in these few years: we now get to know people through online platforms more than in real life. It is the new normal, and humans have adjusted to this change quite well and efficiently.

The merging of humans and machines:

Two frontiers of emerging technologies

Hybrid reality is coming to life as we progress and advance in technology. Future technologies won’t stay limited to separate devices but will also be built within us to make our lives easier and more comfortable.

Humans invented machines to help them with complex problems and treated them as mere tools. With the advancement of technology, however, it has become possible for humans and machines to work as co-workers rather than as user and tool. This collaboration is by far the most successful change, as humans and machines depend on each other to overcome the setbacks that were preventing them from reaching their full potential.

The most fascinating, once-impossible things are coming into existence. The merging of artificial intelligence with biology is one of the most beneficial inventions made by humans.

Let’s now talk about what exactly artificial intelligence is.

The term was first coined by the computer and cognitive scientist John McCarthy at a summer conference at Dartmouth College in 1956.

AI refers to machines that can react to their surroundings by themselves, without human orders. They are built to think like humans and mimic their actions. These machines are open to new experiences and are not limited to a few commands or pre-fed information; they learn, just like humans.

Unlike humans, though, they can do the work with precision and without mistakes. AI can perform tasks beyond human capability without putting any lives at risk. It can solve significant problems and diagnose patients, even those with dangerous contagious diseases. Artificial intelligence can be sent for research to dangerous places on earth that are beyond human reach or whose conditions would adversely harm human health. With AI machines to the rescue, we no longer have to worry about that.

Growing industrialization has led to the growth of human needs, and machines help meet this growth. We are currently living in the fourth industrial era and rapidly moving towards the fifth. It is predicted that artificial intelligence will completely take over some tasks and won’t need human help.

Artificial intelligence and medical science

With the advancement of technologies in the medical field, it is predicted that in the upcoming era, artificial intelligence might find cures for incurable diseases like cancer and help make disability just a mere term.

Disabled people cannot do all the tasks an able-bodied human being can. Thanks to synaptic sensations carried by machines attached to disabled body parts, those parts can now feel and move according to the brain’s commands. These man-made metallic body parts and organs receive orders directly from the brain and act on its signals.

Artificial intelligence in medicine helps collect data from patients’ tests and determine the perfect diagnosis. It also helps in finding treatment methods, monitoring the patient’s bodily response to a particular treatment, and managing aftercare and medication.

Use of AI in the following fields and areas:

Virtual Assistant or Chatbots

Agriculture and Farming

Autonomous Flying

Retail, Shopping, and Fashion

Security and Surveillance

Sports Analytics and Activities

Manufacturing and Production

Livestock and Inventory Management

Self-driving cars or Autonomous Vehicles

Healthcare and Medical Imaging Analysis

Warehousing and Logistic Supply Chain

AI for education

One of the main lessons from COVID-19 is that remote education using proper technologies can benefit learning and should be an integrated component of the education system. Artificial intelligence can help remote areas gain knowledge and stay updated about their surroundings.

AI as virtual assistants or chatbots

Alexa, Siri, and Google Assistant are well-known examples of virtual assistants. Automated voice bots answering customers’ questions are the perfect example of AI-based chatbots working alongside humans and giving their best.
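The core idea behind such bots, mapping a user utterance to a canned intent, can be sketched in a few lines. This is only a toy keyword matcher (the intents, keywords, and replies below are invented for illustration); real assistants layer speech recognition and machine-learned intent models on top of the same basic loop:

```python
# Canned replies, keyed by intent name (all hypothetical examples).
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "greeting": "Hello! How can I help you today?",
}

# Surface keywords that trigger each intent.
KEYWORDS = {
    "open": "hours", "hours": "hours",
    "refund": "refund", "money back": "refund",
    "hello": "greeting", "hi": "greeting",
}

def reply(utterance: str) -> str:
    """Return the canned reply for the first keyword found, else a fallback."""
    text = utterance.lower()
    for keyword, intent in KEYWORDS.items():
        if keyword in text:
            return RESPONSES[intent]
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("When are you open?"))
print(reply("I want my money back"))
```

The fallback branch is what a production bot would replace with escalation to a human agent.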

AI in agriculture

In agriculture, self-driving or autonomous tractors and AI-based drones monitor the growth of crops, enhance land productivity, and help increase yields. Robots are used in the fields to keep a check on the health and condition of the crop, boosting productivity through better plant-health and weather monitoring while making the entire process trouble-free, with no manpower required. The data gathered is also used to further train such models so they perform even better in agricultural and farming-related fields.

Autonomous flying or the pilotless machine 

These AI-operated machines will let humans fly without a pilot, meaning a person won’t necessarily have to be a pilot to own a flying machine; the AI will make it easier. In time, such machines will become available on the market, setting new standards of travel and changing the means of common transportation.

Other examples are self-driving automated cars, in which AI is fully integrated so that the machine operates automatically while understanding its surroundings and real-world scenarios. They will lessen the risk of accidents and create a disciplined system of road travel.

AI security is among the best kinds of security available to humans: robots can stand guard, detect any fault in the system, and provide instant solutions.

Neuromorphic technologies

This stream of research into neuromorphic technology started in the 1980s, and in a short period, humans have achieved a great deal. Researchers have built systems with analog electronic circuits that mimic the neural networks of the human brain.

In 2018, DARPA (the Defense Advanced Research Projects Agency) showcased the possibility of humans controlling machines by thought alone: it demonstrated how a small chip fitted inside the human brain could signal and pilot a swarm of drones.

Neural and artificial intelligence, when combined, can upgrade the level of humans’ cognitive and perceptive capabilities.

Neuromorphic technologies can help paralyzed patients communicate. They can also read human thought through cognitive imaging of brain activity, so that the commands given out by the brain can be read and carried out by machines.

This will help both in controlling technologies and in addressing mental health issues, lessening trauma that the brain cannot withstand on its own.

Within a few years, the education system may be equipped with a special kind of neural nanorobotic technology that will allow students to access all the knowledge and information on the cloud.

Linking brains with computers is no longer a fantasy but a whole new reality, just like telephones and the online world, which were considered impossible just a few decades ago but are now taking over the world.

Human-machine biology

The medical science community has experimented with inventing nanoscale drugs that can pinpoint particular bacterial strains and destroy them. This could help cure diseases currently considered incurable, like cancer.

The invention of bionic arms and legs is now somewhat familiar; the medical community is also trying to create implants such as bionic eyes and kidneys, and to grow and regenerate artificial human organs that can replace the originals when they stop working.

Arati Prabhakar, DARPA’s former director, said that “the third wave of technological innovation is starting, featuring technologies that don’t just help us do or think – they have the potential to help us be.”

DARPA is the base and driving force behind much of artificial intelligence and has given us technologies like Alexa and Siri, face recognition, and genome recognition.

Artificial intelligence will bring the next wave of technological innovation.

As AI oracle and venture capitalist Dr. Kai-Fu Lee put it in 2018, “AI is going to change the world more than anything in the history of mankind, more than electricity.”

With the vast number of advantages come disadvantages

The increasing evolution of artificial intelligence will affect human society. Society will change and transform as almost all kinds of human labor are replaced by robots and machines, resulting in many people losing their jobs and creating a significant crack between the elite and ordinary people. Along this crack, the lower classes will struggle to meet the standards of modern-age society.

Common people will suffer from unemployment. It will grow with the evolution of technology as artificial robots take over the workforce, creating a society of the extremely rich and the impoverished.

Artificial intelligence is being built to react to its surroundings and compose its own thoughts and actions; that is, the machines will become self-aware. They will surely surpass the human level of perfection and intelligence, making it impossible for us humans to control them in any respect. Suppose self-aware artificial intelligence develops negative awareness towards humans. In that case, it will cause the worst scenarios in human history, as the machines may deem themselves the supreme power, see us as a threat, and try to hurt us.

Advanced technologies like brain chips controlling drones or other machines will turn the world upside down if they fall into the wrong hands, and humankind will move towards destruction. There will be machine wars between nations, and those wars, with their calamities and casualties, will be more destructive and devastating than any past war in human history.

The major rift in society will leave people with unfulfilled demands, and to fulfill those demands, people will use both ethical and unethical means to reach the top.

Humans will use AI as a tool for profit, endangering others’ privacy, and will exploit it at their disposal to get their work done.

Artificial intelligence will also affect the human race by bringing significant climatic changes and new kinds of environmental issues.

Using and training the most sophisticated artificial intelligence technologies will lead to an increase in the carbon footprint.

Powering machines with this level of intelligence requires a large amount of energy, which will lead to shortages of sources if humans do not keep a check on the amount of energy used to fuel those machines.

With the increase in the carbon footprint, our natural ecosystem will be highly affected, leading to negative changes that may get out of human control and cause more chaos in the coming modern age.

For all the advances enabled by artificial intelligence, from speech recognition to self-driving cars, AI systems consume a lot of power. They can generate high volumes of climate-changing carbon emissions, adding to technology’s carbon footprint.

“There’s a big push to scale up machine learning to solve bigger and bigger problems, using more compute power and more data,” says Jurafsky. “As that happens, we have to be mindful of whether the benefits of these heavy-compute models are worth the cost of the impact on the environment.”

“Over time,” says Henderson, “it’s likely that machine learning systems will consume even more energy in production than they do during training. The better we understand our options, the more we can limit potential impacts to the environment.”

Humans are switching to green AI, which emits less carbon, consumes less energy, and causes less harm to the environment.

Artificial intelligence research is now divided into two categories:

  1. Green artificial intelligence
  2. Red artificial intelligence

Green artificial intelligence 

Green AI refers to research that consumes less energy while still producing novel, proper results, thus reducing operating costs and making it cost-effective and eco-friendly.

Red artificial intelligence 

Red AI is the opposite of green AI. Red AI research is not environmentally friendly and causes significant harm to nature through its heavy carbon emissions. It also needs energy on a massive scale, making it costly to operate.
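The energy-versus-carbon contrast between the two categories can be sketched with a back-of-the-envelope formula: emissions equal power draw times training time times the grid’s carbon intensity. All figures below are illustrative assumptions, not measured values from any real training run:

```python
def training_emissions_kg(power_kw: float, hours: float, kg_co2_per_kwh: float) -> float:
    """Rough CO2 estimate: power (kW) x duration (h) x grid intensity (kg CO2/kWh)."""
    return power_kw * hours * kg_co2_per_kwh

# Hypothetical "red AI" run: a large GPU cluster on a fossil-heavy grid.
red = training_emissions_kg(power_kw=300.0, hours=240.0, kg_co2_per_kwh=0.7)

# Hypothetical "green AI" run: a smaller model on a low-carbon grid.
green = training_emissions_kg(power_kw=20.0, hours=48.0, kg_co2_per_kwh=0.1)

print(f"red AI run:   {red:.0f} kg CO2")
print(f"green AI run: {green:.0f} kg CO2")
```

Even with invented numbers, the arithmetic shows why both the compute budget and the choice of energy source matter: the two runs differ by several orders of magnitude in emissions.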

Humans are now trying to advance in a positive direction, but the evolution is too rapid to stop. They are realizing their faults and trying to rectify them.

With the continuous evolution of technology, we do not know what level of advancement is in store for us. We can only wait to see what the future holds and take appropriate measures as technology advances, so that we do not face the negative consequences of evolution.

Implications of cognitive expansion in the digital world

The internet is a whole different arena of information. It is the reservoir of all the information from around the world and across the ages, and it is the network that supports the digital world.

The internet is what brought about the revolution in the digital world. The digital world reached its utmost potential and heights within a short time of the internet’s invention, and it is still soaring higher each passing day due to the rapid evolution of the technological world.

Let’s take a detailed look at the digital world.

The digital world is, at its core, the use of computers and other machines to display information and perform complex tasks. It is also called the virtual world: a world that exists, but not physically, not something that can be touched and felt with our fingers.

The virtual world is visible only to our eyes and impacts us mentally; that is, its physical effects on us are indirect. Humans operate this world with a few clicks on a keyboard.

The evolution of technologies and the internet has greatly impacted and influenced the human race. We are in an era in which an ordinary person can hardly complete any task without the support of a technological device or the internet.

A few decades ago, it was just a side tool, but now it’s a necessity deeply rooted in our lives. 

The boundaries and the limits of the virtual world are being pushed continuously every day. The advancement of technologies and the virtual world is beyond human expectation, and it is always going to surprise the human race with its new capabilities. Every advancement the digital world makes opens a new sphere of possibilities. 

The implications of the digital world are countless, and many of society’s benefits and much of its future depend on it. The human way of thinking has changed with the change in environment and surroundings: in this era, humans fix a problem with the help of a device, an invented technology, or information already on the internet. Modern technology is helping the human race move towards advancement. It is as if the human race is climbing the ladder of technological evolution; where it will end, and what fate it will bring, no one knows.

The cognitive environment and space are well connected with the digital world, thus changing how a person thinks, perceives, and acts towards others in the physical reality. 

Humans turn to digital devices these days as these are a convenient source of information and easy to access with no complex interface. 

This digital cognitive environment became the most helpful and progressive space for medical and educational opportunities. 

The setbacks of the real world are erased with the help of digital and technological support. For example, students were able to acquire knowledge during the COVID-19 pandemic without being physically present in classrooms, learning from the comfort of their homes through meeting apps.

In the medical sphere, doctors can diagnose patients in remote areas without being physically present with them, with the help of digital technology.

Not to forget, advanced technology has helped medical science reach a stage where the brain’s synaptic responses are copied and fed into technologies that help humans who have lost the ability to move, feel, or see do everything they lacked, just like an able-bodied human being. Digital imaging can now be felt through technology. These are just a glimpse of the wonders of the digital world.

With the growing evolution of technology, nothing can be considered impossible anymore. Anything and everything is possible in the digital and virtual worlds with the right amount of effort and time.

Just a few decades back, no human believed they could connect across the world, but the advancement of technology and the digital race made it possible. Similarly, the things we consider impossible today might become possible after some time and come into existence.

The digital cognitive environment has influenced humans so much that every plan they make involves the digital world in some way or other. It is now an inevitable part of our lives.

2021, the year of a wave of hyper-digitalization

The pandemic has created new hurdles for humanity to overcome. Technologies and the wave of digitalization have had to adapt to the change of course in human lives and develop innovations that allow humans to operate technology without any contact.

Enhancing digital ecosystem

The pandemic forced companies and organizations to rethink and revamp the plans for their future projects. They had to rethink their approach and find a contactless form of communication with their audiences and customers.

This kind of approach already existed, but the idea was not implemented or enforced, as it was not a need before the COVID-19 pandemic. The pandemic made execution of the concept a compulsion.

It was also a great challenge for companies to make this accessible and affordable for all, as the pandemic had made it difficult for humans to access even basic amenities.

Unlearning old skills

Unlearning old technological skills and learning a whole new set of skills was a great challenge for companies. A completely new set of technologies was witnessed and welcomed, and the new normal forced organizations to fully digitalize their workforces, with employees operating from the comfort of their homes.

Digitalization is now a critical transformation needed by humanity to overcome grave situations like the covid19 pandemic in the future.

The “online brain”: how the internet might be changing our cognition

Human brain perception changes with evolution, the environment, and reality. The brain is adjusting to the online world, which it experiences through the device’s screen. The human brain accepts its new reality once the internet is no longer an abstract concept but something felt and experienced in physical reality.

Our society’s standards are changing according to the online lives we lead. For example, people with many followers are considered influential and start earning money as influencers, and people with a certain kind of attractive talent even get hired through online platforms.

Virtual reality directly affects our brain. The brain absorbs the social media norms of a perfect life and an ideal body, and people start torturing their own bodies for that particular lifestyle, forgetting that the people controlling those accounts do not live such a life in the physical world; rather, they share only the best moments of their lives. People have also started judging each other based on their online profiles, typing styles, and so on.

On our digital devices, we perform several tasks simultaneously, breaking our concentration into small pieces for each task. Humans are not great multitaskers, but machines ace the ability, and online life pushes us to multitask, taking away the power of concentrating on one task at a time. For example, how many times have you burnt your food while listening to music and doing other household chores?

These days, our technologies have huge storage spaces that hold all kinds of files and retain them unharmed for years. The devices are compact and easy to carry around, information and all. With such vast storage at our disposal, it has become harder than ever to remember or memorize anything and retain it in our own brains. Our brains are losing their in-built natural storage capacity due to the overuse of external storage, even for basic things, and humans have become extremely forgetful. How many times have you checked a simple word’s meaning or spelling on the internet before actually using it in your text, or forgotten to wish your friend a happy birthday just because you missed the Facebook notification?

The online brain of this generation knows that the whole world is just at its fingertips, and this is making people lazy and less productive. Everything has become so easy that humans have stopped working hard to achieve things.

The internet has helped technology take over the world through progress and modernization, and its ability to pass on data and information within seconds is admirable. However, the internet has led to a psychological imbalance in the brain, which will take some time to correct. The brain’s way of reasoning and thinking is shifting rapidly, creating issues that we humans face in the real world in the form of mental health problems.

These disadvantages cannot negate the comfort the virtual world has brought us. However, with our growing intimacy with technology and the virtual world, we face problems and need to worry about the greater risks the future holds. We need to take precautionary measures to stop, or at least lessen, the virtual world’s side effects.

The internet has changed the way we access information through our devices. We are living in a hybrid reality where the information we access is presented as factual and is available at our fingertips.

All kinds of information are free and open to everyone, and this cognitive reality, though advanced, has created a new space for committing crimes. Internet crimes, or cybercrimes, affect humans physically, in the real world. Some people defraud whole groups of other people, or whole systems, from behind a digital screen, knowing they have little chance of being caught because their identity is obscure. Digital reality has given our society a new dark face: the online world, though progressive and helpful, is still daunting and energy-draining.

The adverse effects of machines and the digitalization of technology were predicted more than a hundred years ago.

E. M. Forster published a short story in 1909 named “The Machine Stops.” It depicts a future in which a mysterious and highly advanced Machine controls society, humans, basically the whole world. It is an era in which the Machine provides humans with everything, from food to information; face-to-face meetings have stopped, and humans interact only virtually. Humans have become so dependent on the Machine that it is their life support. In the story, the Machine takes over the world, and humans act according to its commands. But then comes a time when the Machine finally collapses, taking society down with it, marking the end of an era with vast-scale destruction and leaving humans to witness their self-caused ruin.

The virtual world has become an addiction that affects the human brain adversely and harms humanity. Humans counted only the advantages, while the continuous, bombarding evolution of digital technology and the internet exhausts them, pushing them to the very edge of their limits. Humans have forgotten how to differentiate between desire and necessity.

Evolution in the real world is being hurried along by unstoppable inventions that serve mere desire rather than the need of the hour.

The Digital Expansion of the Mind: Memory and Cognition

In the article ‘The Digital Expansion of the Mind: Implications of Internet Usage for Memory and Cognition,’ the authors, E. Marsh and S. Rajaram, highlight properties of the internet and discuss their possible implications for how we think, process, and use information.

They examine how the ‘digital expansion of the mind’ affects cognition. Cognition refers to the mental processes involved in gaining knowledge and understanding through thought, experience, and the senses. These processes include thinking, knowing, remembering, judging, and problem-solving.

To highlight these properties, Marsh and Rajaram begin by identifying ten properties of the internet that are likely to affect cognition, structured around internet content, internet usage, and the people who create that content. They then use these properties to explain internet activities, mainly our overreliance on the internet as a source of information and our ways of evaluating it.

The study’s main aim was to determine the technical properties of “the internet” and their potential implications for cognition. Though the internet is the technology of the moment, such worries are the inevitable outcome of any information technology: each new one brings a host of concerns about whether its effects on human thought will be permanent and damaging. Plato voiced similar concerns in ‘Phaedrus’:

“[What] you have discovered an aid not to memory, but reminiscence, and you give your disciples not the truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. . . Once [facts are] written down, they have tumbled about anywhere among those who may or may not understand them.”

These sections highlight some of the most prominent outcomes of the contemporary growth of technologies.

The ten properties of the internet that have consequences for memory and cognition are:

1. Unlimited scope

2. Inaccurate content

3. Rapidly changing content

4. Many distractions and choices

5. Widespread access

6. The requirement to search

7. Fast results

8. The ability to author

9. Source information is obscured

10. Many connections to others


The internet connects the globe with the help of smart technologies. There is no limit to how much information the internet can contain, as there is no fixed cap on its memory. The internet has far more capacity than any single storage device, whereas the human brain has its restrictions: no one remembers everything they come across. For a long time, therefore, humans have searched for alternatives to ease the burden of storing and accessing information.

Here are the ten properties of the internet in detail:

  • PROPERTY 1: Unlimited Scope

The internet is like a vast ocean in which any one subject is just a drop of water. There is no limit to its search results. While internet content is broad and deep, there is a common misconception that ‘everything which is not available on the internet is not worth knowing.’ This perception is erroneous, as there are also sources beyond it. It is popular among those who regularly use search engines for their research and rarely go beyond the internet as a source for their study.

  • PROPERTY 2: Inaccurate Content

There is no guarantee of accuracy in the information available on the internet, as almost anyone can create a webpage or post information to social media. Unlike traditional media, where editors and fact-checkers strive for accuracy, there are relatively few internet caretakers. The internet is a mixture of media where authors create their own rules and have little ability to monitor others’ content. The credibility of the content suffers as a result.

Some information sources actively strive to convince the reader that their point of view is correct. The internet is full of strident advocacy, with many information providers employing cognitive tricks to make their information engaging and interesting.

  • PROPERTY 3: Rapidly Changing Content

The digital world is a space where content can easily be changed. The internet is constantly evolving: websites appear and disappear, links break, and pages get edited at a rapid pace, contrary to the steady pace of the world of books. This ease of change increases how frequently content is, in fact, changed; one can edit online content at a speed not possible for print materials. Time horizons are shortening, and work gets revised rapidly.

However, this rapid change of content affects its credibility. Moreover, the ease with which people can change the virtual world’s content makes it more vulnerable to fakes.

  • PROPERTY 4: Many Distractions and Choices

Content on the internet is evanescent. The internet offers many extras that can bewilder users. For example, whenever people search for information, they get distracted by ads, hyperlinks, and useless data. Such distractions are not unique to the internet, but they surely distract researchers, reducing their efficiency.

However, the internet allows more active participation than reading a book or listening to the radio. It is a place where one can perform more than one task at a given point in time.


  • PROPERTY 5: Widespread Access

The modern-day internet does not require a person to be especially tech-savvy, and therefore it is available to many. People have easy access to the internet because it demands no technological sophistication and can be carried everywhere (even in a pocket), which in turn increases how frequently it is used. Today’s internet is also easy to use, as modern search engines are far more intuitive than older ones.

  • PROPERTY 6: The Requirement to Search

The internet is not a passive presenter of information; it requires an active search: the user enters search terms and clicks on links to track down the information he or she needs. There is a body of work in both sensemaking and information-foraging theory that makes predictions about human information-seeking behavior, taking into account the cost of search, decisions made during the search, and processing of the information found, along with the rate of return based on the quality of the content.

  • PROPERTY 7: Fast Results

Simple searches return results almost instantaneously, far faster than the conventional method of going to the library to look up facts. A quick response provides instant cognitive satisfaction, though the user cannot be sure of the quality or truth of the results, even when they seem authentic.


  • PROPERTY 8: The Ability to Author

As the heading implies, anyone can be an author on the internet. On digital platforms, people claim credit for information and can sue others for plagiarizing their content.

  • PROPERTY 9: Source Information is Obscured

Information on the internet does not come with clear and correct attribution. People care less about the credibility of sources, which in turn encourages less authentic content. People struggle to find the ideal source for their queries and often struggle to evaluate the quality of available content because many of the traditional cues to source disappear. For example, one might try to teach consumers to look for labels such as “sponsored content”; however, such an approach merely leads to better camouflage for ads, fake news sites, and other misinformation sources on the internet.

  • PROPERTY 10: Many Connections to Others

As social beings, humans like to share memories. Psychological research reveals that sharing information regulates emotion and develops social bonds. The internet makes it easy to share and receive information from others, making it a powerful platform for social remembering and for spreading information.

The increasingly interactive use of the internet—whether to look up facts or to actively edit a blog or a social media entry—blurs the line between knowledge that is internal versus “out there.” In the modern world, knowing how to quickly and accurately find information on the internet often is just as valuable as learning the information oneself.


There are benefits to relying on the internet: more information, social connections, and reduced processing load.

Finally, we speculate that there may be a speed-accuracy tradeoff involved in information consumption, with the wealth of information available on the internet pushing consumers towards speed over accuracy.

Marsh and Rajaram do us an excellent service by elevating the conversation about the cognitive effects of the tools, content, and systems we use. This is a complex milieu, and effects on thinking and remembering arise through an assortment of factors that will continue to evolve.

Consciousness Evolution

Consciousness is an evolutionary phenomenon embedded in an evolutionary universe. It is the core of human reality and the medium through which humans understand meaningful information about the world. The idea of the evolution of consciousness is not a specific theory, though it has a loose connection to integral theory.

Human beings have evolved more than any other organism on planet earth. Yet humans will continue to evolve in the future, in what may more precisely be termed “the future evolution of consciousness.” The evolution of consciousness suggests that humanity is conscious of its history and of the rapid change in culture and society.

Many sci-fi movies give grounds for thinking about and imagining the possibilities of future consciousness. They provide a medium for showcasing ideas of wisdom and serve as examples for the future evolution of consciousness. Humanity can prefer to advance through co-creation, sustainable practices, and cooperation rather than self-destruction through competition, ecological devastation, and separateness.

Darwin’s friend, the naturalist George Romanes, argued in 1885 that:

“Is it not itself a strikingly suggestive fact that consciousness only, yet always, appears upon the scene when the adjustive actions of any animal body rise above a certain level of intricacy. Surely, this large and general fact points with irresistible force to conclude that in the performance of these more complex adjustments, consciousness or the power of feeling or the sense of willing are of some use. Assuredly on the principles of evolution, which materialists at all events cannot afford to disregard, it would be a strange fact that so wide and important a class of the faculties of mind should have become developed in constantly ascending degrees throughout the animal kingdom entirely without use to animals.”

E. Roy John, a neurophysiologist, stated in 1976 that:

“We do not understand the nature of … the physical and chemical interactions which produce mental experience. We do not know how big a neuronal system must be before it can sustain the critical reactions, nor whether the critical reactions depend exclusively upon the properties of neurons or only require a particular organization of energy and matter”.

Moreover, Romanes came to the conclusion that:

“In sum, consciousness appears to be the major way in which the central nervous system adapts to the novel, challenging and informative events in the world.”

Thomas Huxley offered the observation that:

“The doctrine of continuity is too well established for it to be permissible to me to suppose that any complex natural phenomenon comes into existence suddenly, and without being preceded by simpler modifications; and very strong arguments would be needed to prove that such complex phenomena as those of consciousness, first make their appearance in man.”

William Uttal observed that:

“There is an a priori requirement that some substantial portion, perhaps a majority, of the synapses that occur at the terminals of the myriad synaptic contacts of the three-dimensional (neural) lattice must be inhibitory. Otherwise, the system would be in a constant state of universal excitement after the very first input signal, and no coherent adaptive response to complex stimuli would be possible.”

Consciousness in the Physical World

Humans inhabit physical bodies in order to live in this world and take advantage of physical technologies. This is why humans are categorized as conscious beings, with feelings, desires, thoughts, emotions, and imagination.

Consciousness appears firmly attached to the physical world, since it links our minds and the earth we live in. It depends on a complicated bio-physical structure, including the human nervous system.

There are various competing concepts on the dynamics, nature, and structure of human consciousness, particularly the related activities between the brain and consciousness. 

Human consciousness is a complex phenomenon. In consciousness, we experience a constant variety of thoughts, desires, emotions, and feelings, along with consciousness of our body and its movement, imagination, intentions, acts of will, predictions of the future, and memories of the past. Consciousness is thus a highly complicated entity that requires thorough research.

There is both unity and diversity within human consciousness. The dynamic and integrative existence of human experience is referred to as “holistic consciousness.” Consciousness is dynamic, with many distinctive components making up this holistic whole.

Moreover, each new level of increased patterning appears to introduce a more complex mechanism of evolution and change. It thus generates an “evolution of evolution” and accelerates the rate of evolution.

In the end, creativity pervades the current evolutionary process, particularly in the emergence of new levels of complexity. Present creation is a significant measure of evolution. Consciousness is rooted in, and has evolved within, this evolutionary physical world.

Two distinct views on future consciousness are stated below:

● “Consciousness was present within the physical universe from its beginning, even if in an ill-defined form.”

● “Primordial consciousness only emerged within physical systems after these systems had achieved some necessary level of complexity (e.g., the formation of a primitive nervous system in animals) and after that time began to further increase in complexity.”

Both views place consciousness within biological evolution, evolving in complexity over time.

Persistent Evolution and Past and Future Consciousness

Evolution is a process occurring not only across subsequent generations but also within an individual’s lifetime. Human consciousness has a transformative, dynamic quality, and this continuing dynamism displays an evolutionary pattern and direction throughout an individual’s life. Within evolution, the lifespan of every conscious mind reflects the creature’s growth.

The evolutionary dimensions of an individual conscious mind become more noticeable as the underlying physical system grows more complex across evolutionary stages. Human consciousness shows remarkable development throughout its lifespan with regard to stability, survival, and preservation. Over time, the complexity of the human mind has increased, and so has its maturity.

The expansion of past and future consciousness is one measure of the total evolutionary trajectory within living organisms toward expanding consciousness in both time and space. The present evolutionary process (mirrored, or recapitulated, in the psychological development of individual minds from birth to maturity) enhances historical, global, cosmic, and ecological consciousness. The primary measure of wisdom, moreover, is an expansion of knowledge and awareness.

Society and the Self

The human conscious mind is the ongoing development of a core of experience and an integrative coordinator. The conscious human self is self-evaluative: it coordinates the various dimensions of success with meaningful human thought and action, which implies that humans learn from experience, emotions, ideas, and actions.

Moreover, a person’s self engages in self-talk, providing a continuing story to elucidate and guide future development. The self is the audience, the narrator, and the main character in a self-reflexive story that the self narrates about itself. The human self is thus meaningfully evolving, pulling itself up by “its bootstraps.” Over the course of human history, the self has progressively gained clarity, self-control, and self-awareness.

Thus the true nature of this evolution involves the self, yet “no man is an island.” Individual conscious minds and society form a reciprocal evolution. On one hand, the continuing evolution of an individual’s conscious mind requires “social interaction and shaping.” On the other hand, individual conscious minds “facilitate social-culture evolution.”

In both cases, the evolutionary process takes place under the guidance of future consciousness and the intent to develop personal, intellectual, and mutual evolution.

Every human is educated by others; in modeling ourselves on others’ identities, we present ourselves within a social arena of evaluation. We showcase ourselves and voice our beliefs on social grounds, parallel to genetic variations populating an ecosystem.

A common argument is that the individual self is a hindrance to the future evolution of consciousness and human society. Yet a moral factor of that future evolution is the capacity for purposeful evolution: “We do not need to transcend our egos; we need to strengthen them.”

Theory of Singularity

There has been evolution after evolution, each increasing in complexity. This can be described as a “singularity,” where the succeeding evolutionary standard is incommensurable with the previous one.

Humanity and its descendants are embedded within the evolutionary process and may pass through one or more successive “singularities” in the future. Self-conscious humans will shape evolution through their own actions. “We are a journey, rather than a destination”: a chapter in the evolutionary saga rather than its culmination.

In sci-fi, the creation of wise and conscious machines has been a familiar genre. According to current research, within a few decades humans will generate computers possessing memory and processing capacities far beyond the human brain, whether by uploading human minds or by creating artificial intelligence at a human level. We humans will then be able to create non-biological conscious minds.

Two primary concerns regarding future consciousness need to be addressed. First, creating an intelligent, conscious mind seems to require a sound theory of consciousness and of its fit in the physical world. Secondly, artificial-intelligence theories tend to overemphasize the human mind’s rational dimensions to the exclusion of feeling, value, motivation, and personal identity.

A broader, more varied concept of the conscious mind is needed to comprehensively embrace the human mind’s complex and rich character. However, efforts to address its features remain puzzling; consciousness is still a mystery of the human mind.

Future Evolution of Consciousness

Human evolution will result from purposeful evolution, governed by our highly evolved capacities for future consciousness. It is a moral issue for humanity’s future and will involve the ongoing articulation of better methods of evolution.

As in the past, we will keep debating, revising, and improving upon our answers. What, then, is to be preferred for future consciousness? Consciousness is psychologically holistic, and our views of the preferred ways for our evolution should be holistic as well.

The basic standards for determining the preferred directions of human evolution must be consonant with our understanding of basic human psychology, the human condition, consciousness, and the total nature of reality. A static view of an ideal human is ultimately counterproductive and misleading.

It is in the nature of our self-consciousness that we purposefully and skillfully guide and maximize the evolutionary process. Therefore, we should not only aim to develop in the direction of evolution, but also aim to progressively increase our capacities to improve the psychological process of human development.

According to psychological research, humans experience long-standing happiness when they are developing. When they feel in control of their lives, they feel greater joy and are more productive in generating positive psychological states. These states of development further the realization and improvement of the good life and point toward happiness.

The enhancement of future consciousness is integral to development and tends to grow to even higher levels in the future. Future consciousness is the “psychologically holistic guidance system” that directs humans’ future evolution. Moreover, it identifies growing states of development as the ideal pathway for our present state of conscious evolution; future consciousness is the core capacity for realizing this direction.

Education and Self-Evolution

Education has served its purpose in developing human consciousness. Formal education provides a variety of functions in personal development. The meaningful development of knowledge and wisdom would help to better human life, both collectively and individually. 

Our present education system has sadly fallen into the hands of administrators and of those who deny evolution as a general model of human and natural reality. Within such an environment, our teaching system has been compromised. On the brighter side, these measures will fail. It is wiser to understand our role in society and ethically follow the process than to refuse it.

Self-evolution, and an understanding of how evolution takes place in humans, should be a primary focus of development in the young generation. Education should centrally address human nature and cosmic reality as evolutionary in character, an influence that has operated intentionally and unintentionally throughout human history.

Time Malleability

Time is abstract; i.e., it exists only in our minds. According to Carlo Rovelli, a theoretical physicist, time is an illusion. He states, profoundly, that “reality is a complex network of events on which we project sequences of past, present, and future.”

The formation of our memories now depends on the varied arrangements of the digital world. This allows memory to develop, with the use of technology, on a broader scale.

Narrating stories has influenced the way we conceptualize memory and time. It trained humans to imagine historical places, possibilities of the future, and potential worlds. Because of evolution, human minds differ from those of our forefathers in the brain’s memorizing power and the capacity to hold consciousness.

The traditional theory of memory and time omits the experience and application of the awareness of time.

The main question is how to tackle time-consciousness, for it changes the fundamentals of the constitution of memory as we encounter it in the digital age.

Materialist and Idealist Concepts

J. M. E. McTaggart defines time as composed of how we have lived, how we are living, and how we will live in the future. It contains not only memory, prediction, and impression as coherent modes, but also the temporal structure of before-and-after relations. He concludes that time is ultimately unreal, meaning that time exists only within human experience.

The continental tradition of time theory, however, holds that time is ubiquitous. According to Kantian theory, time is a phenomenon which is not materialized: time has no relation to matter. Instead, it is an inner relation without any outside correlate.

Edmund Husserl, the founder of phenomenology, described the constitution of time in deeper insight. He interprets the progression of time-consciousness alongside the theory of intentionality. Time is neither a substance nor an object that can be touched; time is an intuition, i.e., part of the procedure of constituting the world. Without time we cannot conceive of a substance. Hence, time is a necessary condition of anything subsisting in this universe.

The existence of memory and of time is based on subjectivity. According to McTaggart’s analysis, memory can never be a subject, given that he approaches time from a merely diagnostic standpoint. According to Mellor, time is a concept and not a subject.

Time: As a Substantial Phenomenon

Mark B. N. Hansen developed a notion of time as a substantial phenomenon: time is described by change and is constituted through a subconscious processing of temporal material. He accounts for the phenomenon of time with the help of new forms of communication media and their mediating character.

In phenomenology, time-consciousness and the experience of time are the primary questions. Yet time is never given as an object, as one cannot physically mark its presence; time has its own pattern of appearance.

Time-consciousness is analyzed along with its object, yet every pattern of time is described as having a temporal character. According to Hansen, digital media institutes a temporal construction that goes beyond innate human capacities and alters human ways of remembering time.

Hansen’s theory was, in a sense, media-pluralistic: it encompasses the entire variety of media forms, holding that “time does not exist by itself.” The main reason Hansen stresses the complex nature of media lies in his arguments against the subjectivity of time.

Hansen combines the view of an “a priori” with the theory of the awareness of time. He concludes from his deliberations that the “ontological foundation” of the concept of time bears “no substantial relation” to hypotheses about time-consciousness, and he describes time-consciousness as an “embodied awareness.” In his 2004 discussion of Douglas Gordon’s and Bill Viola’s media artworks, he takes up these questions of time-consciousness.

Stiegler’s theory emphasized the retentional theory of time-consciousness. He introduces a third type of recollection of the past alongside retention and memory, where retention is the immediate past held in mind at the present moment, and memory is the active reproduction of past events. Both are alternative modes of consciousness.

The theory of tertiary memory does not define a mode of consciousness but rather the externally stored past. Stiegler proposed a “structural coupling between media technology and consciousness”: time-consciousness is structured by external intervals and pervaded by technologically constituted temporal memories and patterns.

Hansen, however, criticizes Stiegler’s concepts. He argues that Stiegler treats time-consciousness as a “universal model of perception,” using the temporal object of the melody to describe it, and he questions the effect Stiegler attributes to tertiary memory. According to Hansen, Stiegler neglects the critical part of the embodied approach.

In the matter of digital art, Hansen concludes that time no longer follows the human paradigm. He further concludes that:

“By way of its (Empire 24/7, Y.F.) constitutive hybridity – its capacity to present what is (normally) unpresentable, to aestheticize and mediate what remains beyond aesthetics and media – Empire 24/7 manages to capture and to express our cultural passage to a new kind of temporal reality, one in which human time-consciousness has been marginalized, or perhaps more accurately, in which the functioning of time-consciousness and the functioning of technical inscription of time have parted ways. By presenting to spectatorial consciousness what normally remains below its perceptual threshold – the technical artifactualization of the minimal before-after structure of time – Staehle’s work thus demonstrates how extensively processes of temporalization, including those of human temporalization, depend on post-mediatic technics”.

Hansen defines media as a necessary trace of human cognition, from which it originates. Even after the technical ground has changed into a hyper-complex structure that surpasses the simple user-device relation, a relation with subjectivity persists in the broader perspective. The theory of time needs to integrate the intertwining of subjective and objective structures rather than strengthen their opposition.

Digital Amnesia

Amnesia is a state of memory loss. Digital amnesia refers to the loss of information that people entrust to a digital device and then forget to retrieve. Many researchers say that when we store something externally, we encourage our minds to erase it; if we do not recollect memories, they fade with time.

The unreliability of human memory is well known to psychologists, yet we frequently underestimate the mind’s capacity to deceive us. We are by now completely dependent on digital media for our everyday lives. With the rapid development of the digital world, no one wants to struggle over simple things. No one bothers remembering phone numbers or important days and dates anymore, as all the essential facts are just a phone away. Smartphones have completely altered our lives: they connect us with people worldwide and put information at our fingertips. However, excessive use of digital media can also cause lasting memory loss in humans.

Nancy Dennis, Penn State associate professor of psychology, said, “Without a doubt, technology has transformed our lives and has also seemingly altered the way our brains work. However, that’s not necessarily a bad thing.”

Paul Kletchka, system and network security analyst in Penn State’s Office of Information Security, said, “Through such security measures as incorporating strong passwords, keeping operating systems and apps up to date with the latest versions, and being careful of what’s downloaded, users will be able to secure and protect the information no longer stored in their minds.”

Smartphones and Teenagers

The increased exposure to smartphones has had a negative impact on teenagers’ minds, as it reduces their power to memorize. This may be why our ancestors could memorize vast amounts of material that our generation cannot.

Distraction is one of the obstacles to memory formation. We are so focused on grabbing knowledge and information all at once that we hardly concentrate on any one thing. This is why we lack depth in a single task.

Humans are hungry for information and want more and more of it at a time. Smartphones provide vast knowledge, but the human brain’s exposure to smartphone radiation is an unpleasant side effect.

According to researchers at the Swiss Tropical and Public Health Institute (Swiss TPH), the brain’s right hemisphere helps process images, patterns, and shapes; hence, teenagers who hold their phones to their right ears are the most affected.

Mike McNeese, former senior associate dean of Penn State’s College of Information Sciences and Technology and director of the Multi-disciplinary Initiatives and Naturalistic Decision Systems Lab, said, “In today’s society, where we have Twitter, Facebook, and other social networking technologies, the memory becomes a social cognitive phenomenon in which our technological devices allow us to be more highly coupled with friends, family, and colleagues. As a result, we engage in information processing in ways we didn’t have before the advent of cellphones. Through interaction with others, we exercise our brains, and those memories have more meaning and become constructed and encoded in our minds.”

Consuming Human Memory

According to many studies, the use of digital devices is weakening human memory. People use search engines instead of memorizing information.

For example, in earlier days people used to remember hundreds of phone numbers because they had to dial them on a telephone. Smartphones have indeed made that work easier!

Maria Wimber from the University of Birmingham said, “The trend of looking up information prevents the build-up of long-term memories.” 

According to a study of the memory habits of 6,000 adults in Italy, France, Spain, the UK, Germany, Belgium, the Netherlands, and Luxembourg, more than one-third would turn first to computers to recall information.

Dr. Wimber said, “Our brain appears to strengthen a memory each time we recall it, and at the same time forget irrelevant memories that are distracting us.” According to her, actively recalling information is a very efficient way to create a permanent memory.

According to a study by Kaspersky Lab, a cybersecurity firm, people have become accustomed to using digital devices as an alternative to their brains. People forget important information because it can be retrieved from a digital source in an instant.

The studies also show how storing personal information digitally has become a trend. One can easily find a friend’s birthday by checking their social media profile or by setting a reminder. It may be handy, but it has made humans indolent.

Lack of functioning of the brain may be critical at times. Memory works well only when people use their brains to remember things rather than relying on digital devices.

Dr. Wimber also said, “There also seems to be a risk that the constant recording of information on digital devices makes us less likely to commit this information to long-term memory, and might even distract us from properly encoding an event as it happens.”

Shaw says, “By having social media dictate which experiences count as the most meaningful in our lives, it is potentially culling the memories that are considered less shareable. Simultaneously it is reinforcing the memories collectively chosen as the most likable, potentially making some memories seem more meaningful and memorable than they originally were.” 

Digital Evolution

Digital evolution has both boons and banes. It has freed people from the burden of holding too much information in their heads, since information overload may also hamper the brain’s development. However, as discussed above, everything comes with positive and negative values of its own.

About 90% of the human population relies on digital sources for information. Digital amnesia makes it possible for people to free up mental space and think more creatively. People are more up to date about the day-to-day happenings of the world they live in.

One must know how to use the information one wants. Social media has good as well as bad news to share. Bad news spreads more rapidly than good, as people love to exaggerate. Distinguishing the good from the bad is what makes humans special.

The younger generations are more engaged with digital media technologies than the older ones and more eager for exposure to the digital world.

On average, 42% of people admit to feeling concerned about their dependence on digital media to store information. The older generation, moreover, is better at understanding what information is stored on their devices and at managing it efficiently; they are the most likely to retrieve information from their devices regularly.

However, the rise of digital amnesia brings fears and stress as well as excitement. If its effect continues to evolve, people will preserve their memories chiefly in digital sources.

Dr. Kathryn Mills, UCL Institute of Cognitive Neuroscience, University College London, said, “Reliance on digital devices, and the trust we place in them, can resemble a human relationship. The feelings are established in the same way through experience. Repeated experience with a reliable individual builds a schema or association for that individual in our memory, telling us that this person can be depended on. If a digital device is continually reliable, then we will build that into our schema of that device.”

The study results also showed that the data stored on digital devices can cause stress, especially among younger people and women.

Online Information

Information is a broad area of knowledge, and humans are inclined to gain more and more of it. The online mode of life has taken us to extreme heights of information. However, while collecting information online, one must beware of social media pirates.

People must take precautions while using the internet. IT crimes occur frequently these days, and hackers wait eagerly to trap people. The digital world is therefore responsible both for the making of hackers and for the people who are trapped.

It is also true that online information has changed the scenario of human living, making it simpler. One can easily get sufficient information just by sitting at home. During the pandemic, the digital world helped people a lot, as the majority of tasks were done online. It would have cost people dearly if digital media did not exist.

Digital devices serve our lives, but they also have negative impacts. Many people underestimate the bitter consequences of digital amnesia, yet its adverse effects are unexpected and leave us under genuine threat.

Digital amnesia is a growing trend across generations. People therefore need to research this trend deeply to protect the information that we no longer store in our minds.

Overcoming Digital Amnesia

Interestingly, the term Digital Amnesia did not exist before 2015; it came into existence after Kaspersky Lab’s survey. The firm questioned over 6,000 customers and observed a direct link between data availability and the failure to remember it.

After this study, the term ‘Digital Amnesia’ was proposed, and it has gained a lot of popularity in the past four to five years.

According to Better India, Mumbai-based psychotherapist Binaifer Sahukar says, “Our dependence on smartphones is increasing rapidly. A simple example being that everyday routes to college and work need to be looked upon in Google Maps. This is quite dangerous because people keep their eyes off the road for a considerable period. There was a time when people remembered 8- and 10-digit phone numbers; today, it’s a task to remember one’s number.”

Here are things that can help cure digital amnesia and improve your memory power:

● Write things down to avoid forgetting them. While reading, don’t skim only the highlighted words or sentences. Write your passwords, dates, and other essential things in a notebook.

● Try to solve your problems or questions before turning to Google. Write mathematical formulas in a formula notebook, or refer directly to a formula book.

● You can set alarms and reminders for essential tasks, but it is advisable to complete them ahead of time.

● You can put your phone on silent or do-not-disturb mode overnight. It will prevent you from checking the phone constantly, and you will be able to concentrate on whatever task you are executing.

● Type phone numbers from memory instead of referring to the contacts section of your phone. You can also maintain a telephone directory for essential numbers.

● Keep yourself engaged in some work. Join co-curricular activities and learn new things; it will help your mind focus on one task at a time.

● Try not to meet your friends over video calls. Instead, try to meet them in person when they are free.

● Avoid being engrossed in online games; go out and play outdoor games instead. This will also help you stay physically fit.

● Try to spend more time with your family and close friends. The older generations at your home may need someone to talk to about their loneliness.

Recalling Memories May Make Us Forget

“The idea that the very act of remembering can cause forgetting is surprising and could tell us more about selective memory and self-deception.” – Michael Anderson

Why do we forget? What makes us forget? Is it necessary to put up with the inconveniences of our memory failures?

Before getting into that, here is a short but interesting story.

Richard Morris, the 2016 Brain Prize winner, shared a little story.

Richard Morris lives in Edinburgh, Scotland, and he had to go on a business trip to London in the middle of winter. He took a train and headed for his destination. He recalled that it was scorching hot on the metro, so he took off his winter coat and waited peaceably for his station.

When he reached Oxford Circus, he got off the train to attend his meeting there. Just then, he realized that he had left his winter coat on the train. He went to the station master and narrated the whole incident.

After contacting someone, the station master conveyed to Richard Morris that a kind member of the public had handed the coat over to the station authorities.

Richard Morris rushed to catch a train, went to the next station, and reached the station office.

Only then did he realize that he had forgotten his briefcase at the previous station.

So he had to explain himself to the station authorities and plead with them to phone the Oxford Circus station about his briefcase.

Then he rushed back to Oxford Circus, gripping his winter coat tightly, and was finally reunited with his briefcase.

What does one understand after going through the story mentioned above?

Isn’t it annoying that we forget things?

It is most embarrassing.

But the fact is that forgetting is a perfectly normal part of memory.

Scientific research has taught us that the brain contains 100 billion neurons and 10 million billion connections, which works out to roughly 100,000 connections per neuron on average, and these help us memorize everything we see, hear, smell, taste, and touch.

What Happens When We Memorize Something?

When we try to learn or memorize something, a small organ called the hippocampus comes into play. It sits in the medial temporal lobe and is the most critical structure for learning and memory.

So when we try to learn, memorize, or understand something, data from the various sensory systems pass through the hippocampus, which binds them together into connections across a single area of the brain. The result is then stored as memory.

What makes us forget?

Research has found that approximately 56% of new information is forgotten within an hour, 66% within a day, and 75% within six days.
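
These figures roughly trace the classic Ebbinghaus-style forgetting curve. As an illustration only, a simple power-law retention model can be fitted to them; the function `retention` and its parameters `a` and `b` below are my own illustrative assumptions, not values reported by the research:

```python
# Percentage of new information forgotten after a given number of
# hours, as quoted above (1 hour, 1 day, 6 days).
forgotten = {1: 56, 24: 66, 144: 75}

def retention(hours, a=0.44, b=0.081):
    """Hypothetical power-law retention model R(t) = a * t**(-b).

    The parameters a and b are rough fits to the quoted figures,
    not values reported by the research itself.
    """
    return a * hours ** (-b)

for t, pct in forgotten.items():
    model = round((1 - retention(t)) * 100)
    print(f"after {t:>3} h: quoted {pct}% forgotten, model ~{model}%")
```

The sketch matches the one-hour and one-day figures closely and slightly undershoots the six-day figure, which is typical of such crude fits; the point is only that forgetting is steep at first and then levels off.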

The real reason behind forgetting is that although our brain has tremendous storage capabilities, its capacity is not unlimited.

There might be multiple reasons.

Decay Theory

According to this theory, when a memory is created, a new memory trace is stored, and over time these traces fade or disappear. If the memory is not retrieved, it is eventually lost.

This is why people in old age tend to forget certain things: their memory traces have faded and are not easy to retrieve.

Failure to Store Memories

Sometimes it is not about forgetting at all; there are failures in storage, which means the connections are never formed and nothing is stored in the first place.

For instance, when we are absent-minded in a lecture, we fail to recollect whatever was said or taught.

Motivation to Forget

There are times when we actively work to forget certain things, especially depressing, traumatic, or bad memories.

Suppose we go through a phase of depression or anxiety; when we finally recover, we tend to work actively to forget those incidents, because they might cause pain or fear again.

Interference Theory

It is easier to engage with recent activities than to recall long-term memories.

‘Transience’ is defined as the deterioration of a portion of your memory over a period of time.

In interference theory, transience occurs because reminiscences obstruct one another and interfere with the recall of other specific memories.

Proactive and retroactive interference can cause us to forget memories permanently, because they affect how well we can recognize old reminiscences.

Proactive interference is the phenomenon where old memories interfere with or hinder the formation of new memories, inhibiting our ability to retain the new ones; subsequently, we lose those memories.

Retroactive interference occurs because new memories tend to replace old ones; sometimes the original memory is changed so drastically that it fades and becomes impossible to retrieve.

This is when a newly acquired memory hinders or impedes the recall of old ones.

This is why recalling memories can make us forget.

Cue-dependent Forgetting

Sometimes when a particular incident gets stored in our brain, we store it along with our emotional state at the time.

For example, if we fight with someone in aggression, we may forget the fight later, once our aggression has subsided.

If that kind of emotional state isn’t recreated often, we tend to lose all the memories associated with the emotion and eventually fail to retrieve them.

There are other kinds of forgetting as well, such as Absent-mindedness, Amnesia, Blocking, etc.

While forgetting cannot be avoided, one should understand the reasons behind it. There are many reasons for forgetting, and in some cases many factors combine to make it extremely difficult to recall a piece of information. Understanding the factors that influence forgetting makes it easier to improve memory.

Deep Fakes and Memory Malleability

Have you heard the term Deep Fakes and wondered what it means?

Well, deep fakes use a form of artificial intelligence called deep learning to make images and audio of fake events.

In recent times, deep fakes have become one of society’s worst concerns. These days, producing compelling images, audio, and video through digital means is easy and cheap, and such easily and cheaply generated media have severe repercussions for politics, law, security, privacy, and society as a whole.

Looking at this grim situation, numerous technologies have been built that aim to differentiate between actual or original audio and images and fake ones.

These tools will aid audiences, especially when they are vulnerable. However, recent research shows that deep fakes can not only provide credible representations; they also have the power to frame false memories.

What pops into the mind when we think of memory? 

Whether it is a fond recollection of lovable pets or what we had for breakfast or lunch, our memories are based on what we know and what we care about.

Added information and beliefs can invade memories and perhaps even permanently change or alter them. When suggestive information is given to us, we believe it to be accurate or consider it fact, and the brain fills in the gaps with things that match that particular belief. This is called memory malleability.

For example, in a study conducted by the notable memory researcher Elizabeth Loftus, participants were shown an advertisement featuring Bugs Bunny at Disneyland. Eventually, they were asked whether they had met Bugs Bunny and shaken hands with him on their trip to Disneyland.

Even though Bugs Bunny is a Warner Brothers character and would not be found in Disneyland, a significant proportion of participants reported having met him.

This misleading fake poster was enough to trick the participants.

Memory malleability has been exploited for some time, but it used to rely on photographs and texts to generate fraudulent recollections.

These implanted recollections exploit our intellectual miserliness, which favors the recalled memories most easily provoked by the virtual world.

Even smart people can fall into the trap of false memories. However, these memories take hold mainly when we are in a vulnerable state of mind.

How can one guard memory against intrusion, and why is it so easy to infiltrate fake information into the brain?

It is interesting to know that human memory is very fragile during the initial stages of its formation. Around 100 billion brain cells called neurons are responsible for forming the connections that become memory traces.

A memory may comprise many different kinds of perceptual information drawn from various sensory systems, depending upon the activation of neurons across different brain areas.

This information is processed by the hippocampus, the most critical structure in the medial temporal lobe, which is responsible for creating connections; these connections bind the sensory systems into a single trace in the brain.

Here, memories undergo a process of consolidation in which the connections are strengthened and become more stable.

While memory traces are first embedded in the medial temporal lobe, this theory suggests that long-term memory storage depends upon the anatomical relocation of memory patterns to the outer cortex, where they may remain for many decades.

Faking a piece of content is not a new concept. However, deep fake methods can easily manipulate video, audio, or images with a high potential to deceive.

The deep fake mechanism goes through a full process in which a great deal of training and learning is required to form a network of deep fake architectures that can influence a virtual audience negatively.

Let’s learn more about it via a fictional short story.

A man named Charlie was honored with the Oscar for best film director. He was filled with nerves, pride, and other mixed emotions.

As his name was announced, he went up to receive the award and to give a good speech. He began by thanking his parents, wife, and daughter. Then he paused for a while because he was out of words; he had forgotten everything he had planned to say.

He then thanked the entire crew, his collaborator, and best friend, Nick.

He then raised his award and concluded his speech.

Charlie did not know that he had deceived the audience and millions of viewers on TV. He deceived them by making up a fake story to enhance the speech.

This is an example of Deep Fakes.

Several Applications of Deep Fakes


● Deep fakes can be used to generate blackmail material that falsely implicates a victim. However, since fake content cannot reliably be differentiated from genuine material, victims of actual blackmail can now claim that authentic material is fake, granting them plausible deniability.


● Many deep fakes on the internet feature pornography of people, primarily female celebrities, and most disturbingly, their consent is not taken before the pictures or videos are uploaded. Deepfake pornography first surfaced on the internet in the latter half of 2017, mainly on Reddit. According to a report by the Dutch cybersecurity startup Deeptrace, 96% of all deep fakes are obscene and related to porn.


● Deep fakes are used to misrepresent well-known politicians in videos, images, and audio. Especially through entertainment, deep fakes have spread loads of fake facts about politics.

Recently, the FBI has warned of the rise of deep fakes in the coming months and explained how to spot them.

● The FBI has given a strict warning that “malicious actors will certainly influence false content with foreign influence for the next two and a half years.”

● The FBI has also pointed to an increase in the number of fake journalists and articles circulating online. While these journalists had a “powerful online appearance,” their fraudulence can be uncovered by “basic fact-checks.”

How to Spot Deep Fakes?

In its statement, the FBI has also given information on how to spot deep fakes: too much space between the subject’s eyes, unnatural head and torso movements, and issues of synchronization between face and lip movements are the main points to consider while looking out for deep fakes.

The University at Buffalo has produced a mechanism for spotting deep fakes, claiming the tool to be 94% effective on portrait pictures, with lower success rates on non-portrait pictures.

The rising rate of deep fakes has created havoc in the minds of society. Being vigilant and aware is the need of the hour; we need to be sharp enough to spot a deep fake and tackle it mindfully.

Identity: Digital Dualism, Cyborg-ism, and the Crisis of Representation

“Digital dualism” rests upon the belief that the real and the virtual are largely separate and distinct realities.

The digital dualist perspective is that digital content and virtual reality are very different from the real world found in physical space.

The term “digital dualism” was coined by Nathan Jurgenson, founder of the Cyborgology blog, in 2011.

In today’s era, the idea of digital dualism is becoming highly unpopular because of the involvement of prominent individuals in social media.

Rather than keeping the real and virtual worlds separate, social media platforms such as Twitter and LinkedIn often connect people both online and offline.

For example, when people network online, they often plan to meet their virtual friends offline to strengthen the connection.

Technology has become so established that it is difficult to separate the natural and virtual realities; they overlap with each other.

As a result, people accept what they see in the virtual world and tend to believe it, little knowing that it is a separate thing, far removed from reality. In this case, the virtual world is considered a reflection of the real world.

For example:-

Virtual Reality (VR) in Military

Military personnel use VR for virtual simulation, flight simulation, battlefield simulation, and more.

VR is also used in the military to treat post-traumatic stress disorder in soldiers who have returned from combat or the battlefield.

Virtual Reality (VR) in Sports

Coaches and players can use virtual reality to train properly and more efficiently across a range of sports, as they can experience and see certain situations, work on them, and improve.

Virtual Reality (VR) in Mental Health

Patients with depression and anxiety who are treated using VR find it a most effective way to manage and cope with stress.

Researchers have concluded that meditating via VR proves more effective in recovering from depression or anxiety.

Virtual Reality (VR) in Medical Training

Because VR is highly interactive, most dental and medical students have begun using it to practice different kinds of surgery, treatments, and procedures, allowing for an accessible learning environment.

Virtual Reality (VR) in Education

Virtual Reality has proven to be effective for students with Autism.

It has also provided a perfect learning environment for students to interact in. Students can now be taken on virtual field trips to a planetarium or a museum, or even look into the solar system.

These were several Applications of Virtual Reality in various fields.

This is why Nathan Jurgenson developed the term “digital dualism”: to argue that the separation is a misconception.

Cyborg-ism refers to the “cybernetic organism,” or “cyborg.”

An organism with a mixture of biological and technical components is termed a cybernetic organism. Some definitions also treat it as a fictional or hypothetical invention.

However, in a technical sense, humans can also come under the category of “cyborg” in various situations, including those involving artificial implants.

The word “cyborg” really describes the way we humans keep in close touch with various technologies, even an attachment to the most straightforward technology.

We can term a human a cyborg if he or she has artificial implants such as pacemakers, artificial heart valves, insulin pumps, or cochlear implants fitted in the body.

A person can also be termed a cyborg if they rely on wearable or usable technologies such as laptops, mobile phones, computers, or Google Glass to accomplish their tasks.

During the recent Covid pandemic, every human being has evolved into a cyborg, whether by being on a laptop during work-from-home hours or on a mobile phone to end their boredom.

However, there is another meaning of cyborg: the fictional image of a human being with enhanced virtual-reality powers, robotic limbs and body parts, and other important human body structures mixed with IT components. This, too, is known as a “cyborg.”

In “A Cyborg Manifesto,” Donna Haraway teaches us that we should think of the world as a network of machines and human beings, mediated by discursive and semantic rules.

It is now essential for us to contemplate our place in this entanglement.

How is our own identity formed and performed in the light of the new spatial dynamics offered to us by the internet?

According to Algorithmic Inequalities and Equalities, we can contemplate digital identities in three ways: philosophical, societal, and ethical. Philosophically, we question the concept of “digital dualism” and the imagined (absolute) dissociation of the physical from the virtual.

Where does the Human Body start and end?

How is our perspective on the ‘authentic’ affected by what we think of as “real” and what we believe to be “virtual”?

Why aren’t we able to differentiate between Real and Virtual Realities?

While our identities are pursued, curated, and cataloged at regular intervals, we can discuss how digital footprints add newer, exciting features to the historicity of the self.

The algorithmic subject changes very often; it is said to be constantly changing.

For example, a person’s identity on Facebook may involve different curation practices from his identity on Instagram, yet code-switching is not always restricted.

Code-switching allows differently disadvantaged people in a society to exist, connect and dodge censorship measures.

A famous instance was published in a New Yorker cartoon in 1993: “On the Internet, nobody knows you’re a dog.” It is an ode to the joys and problems of having an anonymous, nonmonolithic identity.

It is the need of the hour to differentiate between real and virtual realities. We need to be vigilant and conscious about it, and we have to distinguish between the two even in small instances.

Remembering and Forgetting in the Digital Age

More and more information is being produced and consumed in digital form in today’s modern society. The digital platform has caused massive disruption across multiple domains, as it has sped up information sharing and dynamism in post-modern society, where information is readily available on the devices we carry in our pockets, to be called up at any moment. Gathering such digital information has become so much more accessible and cheaper that people no longer strain themselves to remember things manually rather than digitally. People nowadays delegate too much to large-scale computing centers, SIM cards, and now to nebulous clouds that are seemingly accessible from any location. But compare the modern digital era with the age of records kept on plain paper documents: digital information is deceptively vulnerable, and continuous curation is required to preserve its availability. These aspects of the current digital data ecosphere we inhabit have produced a gradual imbalance between remembering and forgetting.

The imbalance has triggered a debate within the information law community, first concerning the concept of a “right to be forgotten” regarding personal data retrieved by search engines, and whether such a right should become an explicit element of data protection law. Amid the debate came Viktor Mayer-Schönberger’s book ‘Delete,’ furnishing an eloquent contribution to the discussion and an appeal for the technical implementation of such a right. Then came the European Court of Justice’s decision, which effectively confirmed the existence of such a right, followed by the enactment of the General Data Protection Regulation by the EU Parliament, which explicitly acknowledged and expanded upon it.

For quite some time, remembering and forgetting in individuals have been subjects of interest in medical science and psychology, where research insights into the human brain have been aided by advances in imaging technology. Medical science has established the facts of forgetting and remembering. After a long period of research and deep study of the human brain, it has been noticed that, with the conveniences of the modern age of technology, people have given up the job of pressing their brains to remember things that are readily available on the internet, and due to this unhealthy imbalance, people tend to forget things faster.

In yet another context, it has been noticed that the human brain is not only seeking increased attention on social media; this habit also affects our internal storage system, which in turn does not let us remember things in order, so we tend to forget certain things. The advancement of technology has not only showered us with outsourced knowledge and advanced all spheres of life; it has also been blamed for certain diseases, such as brain tumors. What makes us lazy, therefore, is nothing but the great growth of technology over the past few decades.

Technology is making people forget things. The experts who advised on the digital amnesia reports highlight how a failure to make use of memories, for example by preferring to search online, can ultimately result in the dilution or disappearance of those memories. Our brain works on practice: every time we recall a certain memory, it grows stronger in our brain, and as we neglect an incident, our brain also forgets the memory. Past research has repeatedly demonstrated that actively remembering information is an efficient way to create a permanent memory. In contrast, passively repeating information (e.g., frequently looking it up on the internet) does not create a solid, lasting memory trace in the same way. From all this research, then, it is a proven fact that failing to recall a memory actively affects its retention.

Technology has made our lives smooth. It is built so that we are drawn toward the modern digital world, where everything is easier and more user-friendly. Technology showers magic on every person's life; without it, many people can no longer imagine their world.

Technology has been a cause of memory's disappearance because, in today's world, if we forget even a small thing, technology spares us the effort of remembering it ourselves. Rapid technological change has helped people in many fields, whether it is standing in a queue for a banking transaction or an advancement in medical science; everywhere, technology leads people on their path to success.

Let's take the example of advancement in the field of electricity. If we give a quick recall, we can reminisce about the early study of electricity, famously advanced by the great 18th-century scientist Benjamin Franklin. With the rapid advancement of technology, electricity has taken a new route toward improvement and is now more easily accessible and handy. Similarly, take the example of internet banking, where people can easily access their bank accounts and protect themselves from fraud. The crucial part of internet banking is that people no longer have to wait in an everlasting queue; they can reach their accounts from any place, even while traveling. This is the magic of technology, where people can easily access their subject matter without remembering things.

But the drawback of this digital age is that it drastically affects the human brain. In modern technology, the manual work once done by the brain degrades as technology removes the toil. Technology has made our lives so automated that we become lazy and pour all our work onto our devices.

IT security can be an early casualty of our impatience to access information online. It has been found that 18 to 22 percent of people aged up to 24 opt for speed by forgoing protection. This opens the gateway for malicious software to corrupt our personal and professional data.

Sometimes we are so engrossed in speed that we forget the probability that a virus will affect our devices. This leaves the door open for malicious activity such as cybercrime. The more we compromise on our devices' security, the deeper we sink into the well. The advancement of technology has led us to forget things, and at the same time it can prove dangerous.

The greater the intelligence, the greater a person's power to move a subject from the scratchpad of consciousness into long-term memory, filling the mind's system. When facts and experiences enter our long-term memory, we can weave them into the complex ideas that give richness to our thought.

The scratchpad of our intelligence is being occupied by technology; our ability to think degrades gradually as we drift toward the digital world. Technology has commandeered the human brain to such an extent that not only has the ability to feel diminished, but we have also stopped giving our brains the extra burden of thinking about any matter.

Even a single internet session can make it harder to file information away in memory when working memory is experiencing digital overload. It is like a glass of water overflowing. It has become a common issue that when we know a digital device or tool will remember a piece of information for us, we are less likely to burden our own brains; new digital tools have become an external hard drive, replacing even the social side of retaining memories.

Remembering has historically been a social process: we know certain facts and figures, and we rely on the people around us to fill us in on the things we have forgotten. That has become history for today's generation, because in the modern technological era people no longer take the extra burden of remembering things and sharing their knowledge with others. People have arranged their lives like a PDF draft, where every task of capturing or sharing knowledge is performed and scanned by technology.

The internet changes everything; with nearly ubiquitous online access, many people now perform a smartphone search rather than calling a friend. People consider calling a friend and acquiring a piece of knowledge that way a waste of time, because a smartphone can perform in a second what a friend, belonging to the same generation of modern technology, would do anyway by turning to Google, the ultimate source for all queries.

Clive Thompson describes how, rather than turning to the experience of our human tribe, we are using tools like Google and Evernote as our 'phone a friend' option when we need information. We are treating them like crazily memorious friends who are usually close at hand.

The digital platform has let people retrieve things faster and more easily, but it has snatched away one's capacity to perform a specific task on one's own. The word 'own' seems to be disappearing from the minds of people living in the modern age of technology, where everything is handy and no one has to work hard to create a single thing. The digital world has not only made us lazy; it has also drawn us into plagiarism in every subject matter.

Attention is the key to forming strong memories. But in the digital age we are losing our patience, becoming more aggressive rather than more attentive. Suppose a movie was texted to you last night and you watched it thoroughly; a few days later, when someone asks, you will not be able to recall the exact details. When we do not pay attention, our memory fades gradually.

Sadly, we often break knowledge into pieces that have no home in a larger conceptual framework; when this happens, we surrender meaning to the guardians of knowledge, and it loses its value. Yet we still need to remember the smaller details, even in this world of technology. We treat technology as the key solution to every problem, but if we stop working our brains and hand everything over to technology, then at some point our brains will stop working and will prevent us from organizing knowledge at all.

We are constantly losing the information that has just come in; we continually replace it, and there is no place to hold what we have already gotten. It makes for a very superficial experience, because we have only whatever is in our minds at that moment. We cannot keep hold of what we constantly outsource to the world of technology.

It is indeed a fact that we are blessed with many things to learn from the world of technology. Still, if we peer into the disadvantages of the digital age, we see that modern technology has led us to forget certain things we genuinely need to remember. On the brighter side of digital life, we relish many things and get the opportunity to transform ourselves with modern technology. As the days pass, people are evolving and growing with the contemporary spheres of technology, but at the same time they are losing their attention and their willingness to create on their own rather than depending on the internet.

Digital amnesia is not a one-way street. Technology may be helping us remember more than it has caused us to forget. Memorization differs from mere storage. Technology may decrease the need to memorize certain things, but it is no barrier to learning them ourselves. There is never an easy or shortcut way of achieving a target; even technology must be mastered to produce full efficiency. Memory is exercised when we use a computer or personal device, just as it is exercised in sports, art, and music. This is how we remember and forget things in the digital age.

Rather than following the path of plagiarism, we need to lead ourselves to create our own work. The word 'own' must not remain a missing word in today's modern era. People should instead take the help of the digital world to carve out their own space. Dependency should not become the cause of forgetting and memorizing by machine alone. Instead, people should use and exercise their brains, letting them remember the small things that may lead to victory in the battle of the digital age.

People should shake off all the blockages created by their laziness and prove that it is possible to remember things even in this user-friendly, modern age of technology, making life more fruitful. It is a fact that the advancement of technology has brought everything to hand, but we still need to remember things and keep our brains functioning smoothly.