Erica L. Neely

A Round-Up of Issues

Part One: Machines, Data, and the Internet of Things

 

Since a lot of people are not familiar with the philosophy of technology, over the next couple of posts I’m going to take a look at a variety of issues in the field that are not directly related to video games.  Some of these are areas I work in; some of them are simply areas of discussion that I find interesting and important.  Today I am focusing on questions related to machines, data, and the Internet of Things.

 

Artificial Intelligence: In addition to serving as fodder for many science-fiction stories, this is one of the classic issues in philosophy of technology; indeed, it is probably the first question philosophers raised about technology.  Would it be possible to create an artificial intelligence at the same level as (or surpassing) human intelligence?  If so, what follows from that?  Would such a being be conscious?  Would it have moral rights/responsibilities?  Should we strive to create such a being or refrain from doing so?  I have written a little on this topic, but it is a huge field that draws on thinkers from many different academic disciplines.

 

Autonomous Machines: At one point I probably would have simply lumped this in with artificial intelligence.  However, with the increased attention paid to self-driving cars, I think it is worth distinguishing this category; these machines may fall far short of constituting an artificial intelligence but still raise moral questions.  The current concerns center on who would be morally at fault if a self-driving car caused a fatal accident (and, similarly, what the right choice would be for it to make in various situations).  One interesting aspect of this area is that self-driving cars seem to be much safer than human-piloted vehicles, yet the fact that there is no driver to pin moral blame on bothers a lot of people.

 

Robots: The robots we currently create are still a long way from qualifying as artificial intelligences.  Nonetheless, it is interesting to examine how they are treated by humans, what purposes we could put them to, and whether any of those uses are problematic.  If we replace human salesclerks with robots, is that a good thing?  Would it be problematic to create a robot simply for the purpose of sexual satisfaction?  Is there a difference between creating robots that appear humanlike and creating those that are clearly robots?  As we move from developing very basic robots to those creeping closer to artificial intelligence, the number of ethical issues that arise continues to increase.

 

Big Data: Few buzzwords in IT are as hot as “Big Data,” aside possibly from “cloud.”  Big Data is a kind of data aggregation: companies collect all kinds of data on people who use their products, generally for business purposes.  There is a slew of ethical concerns about this, many of them centering on privacy.  For instance, a retailer may track the buying habits of women and note patterns of purchases related to pregnancy and childbirth; it might thus seem beneficial to send targeted fliers to such customers.  However, when the customer is, say, a teenager who has not informed her parents of her pregnancy, the use of Big Data to predict behavior can have serious real-life ramifications.  In addition to privacy concerns, there are worries about using Big Data in ways that are not appropriate: while it can predict overall patterns, it cannot tell you anything certain about a specific user or customer.  Assuming everyone will conform to the same pattern ignores the individuality of these people.

 

Internet of Things: The idea of networking ordinary objects ties into the privacy concerns raised by Big Data and has both benefits and drawbacks.  Having municipal trashcans that can monitor how full they are and tell the city when someone needs to come empty them might well be beneficial.  Having a sex toy monitor what speeds and patterns of vibration a user prefers, while possibly useful to the company for development purposes, may be overly invasive.  A key question will be determining what should be tied into the Internet of Things and how to balance privacy with the utility of collecting information.

 

Next time: Part Two: Drone warfare, information, and access, oh my!
