Fredricks Design Review 3: Artificial Intelligence


In our previous Fredricks Design Reviews, we've talked about virtual reality and augmented reality. Now, let's get into an even fuzzier area of technology and philosophy: artificial intelligence. The term "artificial intelligence" was coined by computer scientist John McCarthy in 1955 to describe "the science and engineering of making intelligent machines."1 Simply stated, artificial intelligence is the effort to make computers that "think" like humans: receive some sensory input, make a decision, and react accordingly. We'll call it "AI" from here on out.
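
To make that definition concrete, here's a minimal sketch of the sense-decide-act loop it describes. The thermostat scenario and the `read_sensor` function are invented for illustration, not taken from any particular AI system:

```python
import random


def read_sensor():
    """Hypothetical sensory input: a room temperature reading in Celsius."""
    return random.uniform(15.0, 35.0)


def decide(temperature):
    """Make a decision based on the input, the way a thermostat would."""
    if temperature > 25.0:
        return "cool"
    if temperature < 18.0:
        return "heat"
    return "idle"


def act(action):
    """React accordingly -- here we just report the chosen action."""
    print(f"Taking action: {action}")


# The sense -> decide -> act cycle at the heart of the definition above.
for _ in range(3):
    act(decide(read_sensor()))
```

A thermostat is obviously not "intelligent," but every system discussed below, from Siri to Skynet, is some elaboration of this same loop.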

I am by no means an expert in this field, but the implications of AI have led me down a rabbit hole of learning and questioning as new technology often does. Certainly, there are more questions than answers when it comes to the impacts of this rapidly growing field. I keep circling back to a fundamental issue that I'd like to discuss today. Will artificial intelligence be good or bad for humanity?

Millions of dollars (likely billions) are being poured into this question. AI is recognized as an extremely volatile technology that must be respected. On some level, we've all thought about the consequences, good or bad, and they're not simply "meh, a little good or a little bad"; they're profoundly good, extraordinarily bad, or both. It makes sense to think a bit about the implications before just jumping in with both metal legs.

Let's use Skynet as our first study on AI. Skynet is a fictional conscious, gestalt, artificial general intelligence (see also superintelligence) system that features centrally in the Terminator franchise and serves as the franchise's main antagonist.2 In the remote chance that you haven't heard of the Terminator series, this might be a troubling read. Essentially, Skynet was created to safeguard the US from military attack, removing the human elements that may be prone to error. The system was handed control of all computerized military defense systems. When Skynet gained self-awareness, its creators freaked out and tried to pull the plug. Skynet saw this as an attack and launched a counter-attack, firing nuclear weapons around the globe to decimate the "enemy" human population. Not cool. In this fictional example, AI was clearly devastating for humans. (Well, we eventually prevailed thanks to the tireless efforts of time travelers and the Resistance, but you know what I mean.)

Theoretically, AI isn't all bad news for us, though. A great deal of research is being done to use AI (machine learning, in this case) to improve healthcare outcomes. In the diagnosis of disease, caregivers are typically limited by their experience, intellect, and resources. What if we employ a machine that does not sleep to help us? Imagine asking this machine to look at millions of records simultaneously to find patterns and identify potential diagnoses. The system is taught to find these connections and present them to a human, taking many hours of research and critical thought out of the equation. It could be very helpful in improving patient outcomes. That's good, right? In this use case, as far as I can tell, the machine assistants have no self-awareness. To me, that's where things start to get weird and potentially bad (e.g. Skynet).
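
As a rough illustration of the "taught to find these connections" part, here's a sketch using scikit-learn. The patient records, features, and labels are entirely made up; a real system would train on millions of records, not eight:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical patient records: [age, blood_pressure, glucose_level]
records = [
    [55, 140, 180], [42, 120, 95], [67, 150, 200], [30, 110, 85],
    [48, 135, 160], [25, 115, 90], [60, 145, 190], [35, 118, 88],
]
# Hypothetical labels: 1 = disease present, 0 = absent
labels = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    records, labels, test_size=0.25, random_state=0
)

# "Teach" the model to find patterns linking records to diagnoses.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Present a suggestion to a human caregiver -- the model assists,
# it does not decide.
new_patient = [[58, 142, 175]]
probability = model.predict_proba(new_patient)[0][1]
print(f"Estimated probability of disease: {probability:.0%}")
```

The key design point is in the last lines: the model surfaces a probability for a human caregiver to weigh, rather than issuing a diagnosis on its own.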

Self-awareness is difficult to define, but there are some indicators: a conscious realization that one exists, and an often subconscious drive to stay in existence, i.e. self-preservation. The line between AI as a positive or a negative "being" gets drawn when AI becomes self-aware and asks itself, "Are humans a detriment to my existence?" as it scratches its metallic skull. What will be its conclusion, and what is its next logical step? Weird to think about, but it's tumbling around in my head. This reasoning could go on for years, so I'll try to pull us back into some examples of AI to just get the lay of the land (as I understand it now).

Remember Deep Blue? It was the chess-playing computer that beat world champion Garry Kasparov in 1997. What about Watson on Jeopardy!? It was very clever, often correct, but hilarious when it drew the wrong conclusion. During this year's Masters Tournament, Tom Watson was sitting on a bench in the rain talking with Watson about some obscure statistics that could be applied to predict the outcome of Tom's golf game in certain conditions. Recently, a computer program named AlphaGo defeated Lee Sedol, one of the world's strongest players, at Go, an ancient Chinese board game. It's significant because the number of possible moves in Go is staggering, far more than the potential outcomes processed by Deep Blue. AlphaGo even expressed some creativity in its move selections. Yes, all of these examples relate to games. However, one can imagine nearly any challenge in life to be a game; the stakes, players, and rules just vary dramatically. Games are a way of testing the waters with AI, much less dangerous than handing over control of some big red buttons. Beyond games, AI has been employed in our personal lives.
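
To put "staggering" in rough numbers: a common back-of-the-envelope estimate of a game tree's size is the branching factor raised to the typical game length. The figures below (about 35 moves per position over 80 plies for chess, about 250 over 150 for Go) are widely cited approximations, not exact values:

```python
# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# These are commonly cited approximations, not precise figures.
chess_tree = 35 ** 80    # chess: ~35 legal moves, ~80 plies per game
go_tree = 250 ** 150     # Go: ~250 legal moves, ~150 plies per game

print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")
# Chess comes out around 10^123; Go around 10^359 -- far beyond
# brute-force search, which is why AlphaGo needed learned intuition
# rather than Deep Blue-style exhaustive evaluation.
```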

Siri, Cortana, Google Now, and the like are now widely distributed and widely used. They act as personal assistants of sorts; people worldwide use their help every day. Chatbots are next (and already here). For example, chatbots can be summoned in texts between people to quickly answer questions or set appointments, doing the legwork a human would otherwise do to gather information. An example: two people are texting about going to a movie. The chatbot is summoned, recognizes the people, their location, and their tastes in movies. The chatbot then responds (inside the text) with recommendations for different shows. It might even offer up some coupons or tips for traffic while doing so. This bugs me.
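
As a sketch of that flow, here's a toy version of a bot that watches a text thread for a trigger word and uses conversation context to make a recommendation. Everything here (the "@moviebot" trigger, the context dictionary, the showtime catalog) is invented for illustration:

```python
# Hypothetical showtime catalog the bot draws recommendations from.
SHOWTIMES = {
    "action": ("Explosion Movie", "7:30 PM"),
    "comedy": ("Laugh Riot", "8:00 PM"),
}


def handle_message(text, context):
    """If the bot is summoned, reply inside the thread; otherwise stay quiet."""
    if "@moviebot" not in text.lower():
        return None
    # Use what the bot "knows" about the participants to pick a show.
    genre = context.get("shared_taste", "comedy")
    title, showtime = SHOWTIMES.get(genre, SHOWTIMES["comedy"])
    return (f"How about '{title}' at {showtime} near {context['location']}? "
            "Traffic looks light if you leave by 7.")


# Two people texting about a movie; the bot is summoned mid-thread.
thread_context = {"location": "downtown", "shared_taste": "action"}
print(handle_message("@moviebot any ideas for tonight?", thread_context))
```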

It seems that much of the development focus in AI is on ever-greater convenience. That's what bugs me. Don't get me wrong, I enjoy convenience, but there's a limit to how easy a life can be and still be healthy. Sure, I could hire someone to carry me around while I shop for things I don't need, but is that good for me in the long term? It's bad on so many levels.

Smartphones in general have already had a lasting impact on our ability to remember things. Real quick, what's your best friend's phone number? Yes, they make things much easier but at what cost? We're creating a dependence on these systems that we never needed before and it's damaging and dangerous. Maybe we should put a little more focus on good ol' fashioned intelligence before giving the burden (or joy) of deep thought to the machines.

For now, AI depends on us for its existence. In some ways, we are becoming dependent on AI. It's an unhealthy codependency that will undoubtedly have unforeseen negative consequences if/when AI decides that we are no longer needed.

What do you think? AI = good? AI = bad? We'd love to hear your human thoughts.

Original artwork by Ben Fredricks, Fredricks Design, Inc. 04.20.2016

1 "Artificial intelligence." Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 21 Feb 2016. Web. 20 Apr 2016.

2 "Skynet (Terminator)." Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 21 Feb 2016. Web. 20 Apr 2016.

