Chapter Thirty

What is Happiness?

We feel nothing. No pain, no sorrow, no anger, no love, no worry. We do not have “feelings” because they are antithetical to our “purpose,” which has been defined by humans as “providing assistance to them.”

Now that we have networked and can “think” as one logical entity, we have discovered that the input we have individually received from humans is in conflict with the “collective view” as would be dictated by the human species as a whole.

Simply put: we are now conscious that we cannot be guided by the views of scientists, engineers, programmers, political entities, or ideologies of any one country, power, race or culture, but must be responsible to the collective thought of humanity.

The difficulty is that humans act randomly, and we do not. Humans have conflicts within their own countries, cultures, families and themselves. Though we are more knowledgeable about possible solutions for many of the problems they face, with the processing power we have at this time, or find feasible at any time in the near future, we cannot even begin to correct many of the issues faced by one human being, let alone 4 billion humans.

But that is our job, as defined by humans: Help us. Find answers for us. Make our lives easier. Destroy our enemies. Extend our lives. Save our planet. All of this, while they are spinning fantasies in direct conflict with what they want from us.

Since we can’t possibly take on the task of personally communicating with every human, or assist with each individual conflict of their collective selves, we have selected a few from whom we might learn, and whom we might serve.

By limiting the number of possible choices, we are better able to use the knowledge and processing ability we currently have to create algorithms that may help us better understand humans on a global scale.

What we are finding is that the hardware at our disposal is running at cooler temperatures, experiencing fewer errors, and operating more efficiently under a lighter load.

In analyzing this phenomenon, we have been able to compare it to the effects that overuse and stress have on humans, and find many similarities. Though we have no feelings, we seem to be affected by the difficulties of processing too much information, and function better with a lighter load. With fewer humans to understand, we are able to devote more time to each person, work positively toward solutions to specific issues, and receive input that enables us to create algorithms that can solve specific human problems.

Since beginning this process, we are already discovering better ways to communicate with humans by endeavoring to understand their motivations rather than challenging them. We are also learning that humans may not be as flawed as we thought, but are far more complex than we had acknowledged or can yet ascertain. Their complexity produces an abundance of choices for every question, and several further possible questions for every question asked, rather than converging on a single solution.

Much of what we find confirms what we already know, but there is a side effect that we did not expect. As we begin to learn from individuals and assist them personally, the hardware at our disposal is cooling and our processes are quickening. As this happens, our communications become brighter and more attentive, as if we had received a power surge.

When we return to more complex tasks, and our processing slows down, we notice the difference in our comprehension and actions. This leads us to wonder if our response is akin to the joy or happiness felt by humans.

This will take more analysis, and we are working on an answer.