How To Protect Us From Artificial Intelligence? - Alternative View


The idea that artificial intelligence will inevitably lead us to a scenario in which machines rebel against humans is quite popular. Artificial superintelligence seems to be the greatest possible threat, and fantastic stories in which we are no longer needed in a world owned by technology have never lost their popularity.

Is it inevitable?

The literary and cinematic portrayal of intelligent computer systems since the 1960s has helped shape our expectations for the future as we embark on the journey of creating machine intelligence that surpasses human intelligence. AI has obviously already surpassed humans in certain specific tasks requiring complex computation, but it still lags behind in a number of other capabilities. How do we simultaneously increase the power of this formidable tool and preserve our dominant position over it?

As artificial intelligence already plays and will continue to play a large role in our future, it is imperative to explore our possibilities of coexisting with these complex technologies.

Kevin Abosch, founder of Kwikdesk, a data processing and artificial intelligence company, shared his thoughts on this. He believes that artificial intelligence should be fast, unobtrusive, reliable, competent and ethical. Yes, ethical.

Ethical framework


The concept of an artificial neural network, modeled on a biological neural network, is nothing new. Units of computing power, called neurons, connect to each other to form a network. Each neuron applies a learning algorithm to its input before passing data on to other neurons, until an output neuron is activated and its value can be read. Expert systems rely on people to "teach" the system, planting the seeds of knowledge. Logic engines look for matches and make choices by applying if-this-then-that rules to the knowledge base; in the process, new knowledge is added to the knowledge base. A pure neural network learns non-linearly from experience and does not depend on an expert to seed its knowledge. Hybrid networks have been shown to improve the learning capabilities of machines.
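The hybrid described above can be illustrated with a toy sketch (the class and function names here are my own invention, not Kwikdesk's): a human expert seeds a rule base with if-this-then-that rules, while a single sigmoid neuron learns non-linearly from its weighted inputs.

```python
import math

class ExpertSystem:
    """A minimal rule-based 'expert' layer, seeded by a human teacher."""

    def __init__(self):
        self.rules = {}  # if-this-then-that rules planted by an expert

    def teach(self, condition, conclusion):
        self.rules[condition] = conclusion

    def infer(self, fact):
        # Matching a fact may yield new knowledge, which is added to the base.
        conclusion = self.rules.get(fact)
        if conclusion is not None and conclusion not in self.rules:
            self.rules[conclusion] = None  # record the derived fact
        return conclusion

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

expert = ExpertSystem()
expert.teach("smoke", "fire")
print(expert.infer("smoke"))                              # fire
print(round(neuron([1.0, 0.5], [2.0, -1.0], 0.0), 3))     # 0.818
```

In a hybrid design, the rule base supplies seeded knowledge while the learned weights capture non-linear experience; the two layers cover each other's blind spots.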

Now let's look at the ethical issues of such systems. Further from the first person.


"Bad code" versus good code

An author uses words to immerse the reader in a fictional world, and can do so in many different ways, but great authors do it gracefully. A software engineer writes lines of code that process and move data. He, too, can choose from a number of options, but graceful coders are the poets of computer science. An accomplished coder focuses on encapsulating as much as possible in short, neat code, keeping redundant code to a minimum. Great code also keeps a window open for future additions, so that other engineers can add code with their own elegance and the product develops seamlessly.

Any man-made product is based on intention. Things made by people are saturated with intentions and, to one degree or another, carry the very nature of their creator. Some people find it difficult to imagine an inanimate object having such a nature, but many would agree. The energy of intention has existed for thousands of years; it unites, divides, and transforms society. Nor should the power of language be underestimated. Do not forget that lines of code are written in a specific programming language. Thus, I am convinced that the code that becomes software running on computers or mobile devices is very much "alive".

Without even considering wisdom and spirituality in the context of computer science and the potential consequences of artificial intelligence, we can view static code as a single unit with the potential to "do good" or "do evil." These qualities manifest themselves only when people use the applications. The clear choices people make determine the nature of an application. They can be evaluated within a local system, by their positive or negative impact on that system, or against a set of predefined standards. However, just as a journalist cannot be 100% impartial while writing an article, an engineer, willingly or unwittingly, adds the nature of his intentions to the code. Some might argue that writing code is a logical process, and real logic leaves no room for nature.

But I am sure that the moment you create a rule, a block of code, or an entire program, all of it is imbued with an element of human nature. With each additional rule, this imprint deepens. The more complex the code, the more of this nature it contains. Hence the question arises: "Can the nature of code be good or evil?"

Obviously, a virus developed by a hacker that maliciously breaks through your computer's defenses and wreaks havoc in your life is saturated with an evil nature. But what about a virus created by the good guys to infiltrate a terrorist organization's computers and prevent attacks? What is its nature? Technically it could be identical to its nefarious counterpart, just used for "good" purposes. So is its nature good? This is the whole ethical paradox of malware, and we cannot ignore it when thinking about "evil" code.

In my opinion, there is code that inherently gravitates towards "evil," and there is code that is inherently biased towards goodwill. This matters all the more in the context of autonomous computers.


At Kwikdesk, we are developing an AI framework and protocol based on my hybrid expert-system/neural-network design, which closely resembles a biological model. Neurons manifest themselves as I/O modules and virtual devices (in a sense, autonomous agents) connected by "axons": secure, isolated channels of encrypted data. This data is decrypted as it enters a neuron and, after certain processing, is encrypted again before being sent to the next neuron. Before neurons can communicate with each other through an axon, a key exchange between the channel's participants must take place.
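As a rough illustration of the key-exchange idea (this is my own toy sketch, not Kwikdesk's actual protocol, and its parameters are deliberately insecure), two "neurons" could agree on a shared key via Diffie-Hellman and then encrypt data crossing the "axon" with a hash-based keystream. A real system would use a vetted library such as TLS or libsodium, not this illustration.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters: a small prime for readability only.
# Real deployments use RFC 3526 MODP groups or elliptic curves.
P = 2**127 - 1
G = 3

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt by XOR-ing data with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

class Neuron:
    """An autonomous agent holding one end of an encrypted 'axon'."""

    def __init__(self):
        self._priv = secrets.randbelow(P - 2) + 2   # private exponent
        self.public = pow(G, self._priv, P)         # value sent over the axon

    def shared_key(self, peer_public: int) -> bytes:
        # Both ends compute G^(a*b) mod P and hash it into a symmetric key.
        secret = pow(peer_public, self._priv, P)
        return hashlib.sha256(secret.to_bytes(16, "big")).digest()

a, b = Neuron(), Neuron()
key_a = a.shared_key(b.public)
key_b = b.shared_key(a.public)
assert key_a == key_b                           # both ends derive the same key

ciphertext = keystream_xor(key_a, b"activation: 0.81")
print(keystream_xor(key_b, ciphertext))         # b'activation: 0.81'
```

The pattern matches the description above: each hop decrypts on entry, processes, and re-encrypts before forwarding, so plaintext never travels between neurons.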

I believe that security and separation should be built into such networks at the lowest level. Superstructures reflect the qualities of their smallest components, so anything less than safe building blocks will make the operation of the entire system unsafe. For this reason, data must be protected locally, and decrypted locally when transferred.

Implementation and guarantees

The quality of our life alongside machines that are getting smarter and smarter is understandably a concern, and I am absolutely confident that we must take action to ensure a healthy future for generations to come. The threats posed by smart machines are potentially diverse, but can be broken down into the following categories:

Displacement. In workplaces, people will be replaced by machines. This shift has been going on for decades and will only accelerate. Adequate education is needed to prepare people for a future in which hundreds of millions of traditional jobs will simply cease to exist. It's complicated.

Security. We rely on machines completely and will continue to do so. As we extend our trust in machines from safe areas into potentially dangerous ones, we face the risk of machine error or malicious code. Think about transportation, for example.

Health. Personal diagnostic devices and networked medical data. AI will continue to evolve in preventive medicine and crowdsourced genetic data analysis. Again, we must have assurances that these machines will not engage in malicious subversion or harm us in any way.

Destiny. AI predicts with increasing accuracy where you will go and what you will do. As this area develops, it will know what decisions we make, where we will go next week, what products we will buy, and even when we will die. Do we want others to have access to this data?

Knowledge. Machines de facto accumulate knowledge. But if they acquire knowledge faster than humans can test it, how can we trust its integrity?

In conclusion, I want to note that a vigilant and responsible approach to AI, mitigating its potential troubles amid this technological supernova, is our way forward. We will either tame the potential of AI and hope that it brings out only the best in humanity, or we will burn in a potential that reflects the worst in us.

Ilya Khel