from using technology to find intimacy with one another, to intimacy with technology – SAG 27 Apr 2016
1. Promises of Artificial Intelligence
Good introduction to AI concepts: Plug and Pray – Von Computern und anderen Menschen – Dokumentarfilm von Jens Schanze 2009
“My main objection was that when the program says ‘I understand’, then that’s a lie. There’s no one there. … I can’t imagine that you can do an effective therapy for someone who is emotionally troubled by systematically lying to him.” Joseph Weizenbaum, Plug and Pray documentary, from ~30’30
“At the beginning of all this computer stuff, in the first 15 years or so, it was very clear to us that you can give the computer a job only if you understand that job very well and deeply. Otherwise you can’t ask it to do it. Now that has changed. If we don’t understand a task, then we give it to the computer, which is then asked to solve it with artificial intelligence. … But there’s a real danger. We’ve seen that in many, in almost all areas where the computer has been introduced, it’s irreversible. At banks, for example. And if we start to rely on artificial intelligence now, and it’s not reversible, and then we discover that this program does something we first don’t understand and then don’t like – where does that leave us?” Plug and Pray documentary, 25’36 – 26’50 (an interview from 1987)
At the other end of the spectrum:
“We already have many examples of what I call narrow artificial intelligence … – and within 20 years, around 2029, computers will really match the full range of human intelligence. So we’ll be more machine-like than biological. So when I say that people say, I don’t want to become a machine. But they’re thinking about today’s machines, like this. Now we don’t want to become machines like this. I’m talking about a different kind of machine. A machine that’s actually just as subtle, just as complex, just as endearing, just as emotional as human beings. Or even more so. Even more exemplary of our moral, emotional and spiritual intelligence. We’re going to merge with this technology. We’re going to become hybrids of biological and non-biological intelligence, and ultimately be millions of times smarter than we are today. And that’s our destiny.” Ray Kurzweil (from minute 6:30)
→ says we are going to MERGE with this technology. A longing to be expanded, connected, rescued from state of being cut off.
Kraftwerk, 1970s: We are the Robots https://www.youtube.com/watch?v=VXa9tXcMhXQ –– a longing to be one with the machine, machine-like (very different from James Brown’s idea though). Is this chiefly a male desire, to be free of the mess of emotional confusion, ambiguity?
→ also see Günther Anders, Die Antiquiertheit des Menschen, Band I, C. H. Beck, München 1956. “Promethean Shame”, man’s feeling of inadequacy in view of his creations.
2. Tell Computers and Humans Apart
Turing Test: https://en.wikipedia.org/wiki/Turing_test
A Turing Test we’re all familiar with: CAPTCHA: “Completely Automated Public Turing test to tell Computers and Humans Apart” https://en.wikipedia.org/wiki/CAPTCHA
reCAPTCHA (human users exploited for machine learning) https://www.google.com/recaptcha/intro/index.html#ease-of-use
Contemporary example: Try anything hosted on Cloudflare via Tor, and you’ll have to prove you’re not a bot: https://support.cloudflare.com/hc/en-us/articles/203306930-Does-CloudFlare-block-Tor-
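All these variants share the same challenge–response mechanic, which can be sketched in a few lines. This is only a toy illustration under my own assumptions (function names, alphabet, the plain string comparison are invented); real CAPTCHAs depend on the distorted image being hard for a machine to read, not on this bookkeeping:

```python
import random
import string

# Toy CAPTCHA sketch: the server generates and remembers a random
# challenge; a matching response is taken as evidence of a human.
def make_challenge(length=6, seed=None):
    """Generate a random challenge string (what the image would show)."""
    rng = random.Random(seed)
    alphabet = string.ascii_uppercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

def verify(challenge: str, response: str) -> bool:
    """Case-insensitive comparison, as most CAPTCHA forms allow."""
    return response.strip().upper() == challenge

c = make_challenge(seed=42)
print(verify(c, c.lower()))  # True
print(verify(c, ""))         # False
```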
3. Live Bots to talk to
ELIZA: designed as an artificial psychiatrist. So from the start the machine aims to make deep contact, to get right into people’s minds. Also described as a “parody” of a therapist.
Emulation on a web server: http://nlp-addiction.com/eliza
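The emulation runs on the same principle Weizenbaum used in 1966: keyword patterns, pronoun reflection, canned templates. A minimal sketch of that technique (the rules below are invented for illustration, not Weizenbaum’s original DOCTOR script):

```python
import re

# Words swapped so that echoing the user's words sounds like a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, template) pairs, tried in order; the catch-all keeps the
# illusion of attention alive when nothing matches.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower().strip(".!?"))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am emotionally troubled"))
# How long have you been emotionally troubled?
```

The whole trick is that the machine never contributes content of its own; it mirrors the user’s words back, which is exactly what made Weizenbaum call the “I understand” a lie.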
Anecdote recounted by Charlie Brooker, author of Black Mirror (watched last time): “Joseph Weizenbaum, the inventor of one of the first chat bots, Eliza, had his secretary test her out. Shortly after, she asked him to leave, because even though she knew it was a machine she was talking to, she was having a very intimate conversation and wanted to be alone.”
You: Are you a human?
Julia: I’m just as real as you are.
You: Why do you say that you’re real?
Julia: Unfortunately I can’t give you any further justification. But you’re welcome to ask me a new question!
You: Are you a human
Julia: I answer, therefore I am! Real, virtual, genuine – does that still make a difference in 2016!?
Tries to sell you access to porn if you mention sex.
→ The training goes both ways: Computers condition us to become more like them.
‘In Turing’s dream scenario, chatbots will actually push us to be better conversationalists and clearer thinkers. As Will put it, reflecting on the chatbot experiment, “having Pinocchio-like robots that can think, feel and discriminate morally will broaden our concept of humanity, challenging us organic humans to be better, more sensitive, imaginative creatures.” Amen to that.’
- Chris Marker: DIALECTOR – The filmmaker disappeared from public view for more than 3 years in the 80s, spending his time instead programming a chat bot on what was then the tinkerer’s machine of choice, an Apple IIc. The resulting program is unlike any other conversational piece of software. At a recent show at HMKV Dortmund the curator explained that the source code is only 30 pages long, but it’s so complex that no one has yet been able to find out how it does what it does. http://dialector.poptronics.fr/index.html
→ DIALECTOR is so far the only bot that on its own decides to end a conversation.
Microsoft’s Tay chatbot: “The interesting and worrying part of the entire test was that it became a plausible, creative racist asshole. A lot of the worst things that Tay is quoted as saying were the result of users abusing the ‘repeat’ function, but not all. It came out with racist statements entirely off its own bat. It even made things that look disturbingly like jokes.” Antipope.org
4. Projections for the Future of Bots
- “tech developments in other areas are about to turn the whole ‘sex with your PC’ deal from ‘crude and somewhat rubbish’ to ‘looks like the AIs just took a really unexpected job away’”.
There is no reason why bots won’t get linked to something like a masturbation machine. Tip: don’t google “AUTOMATED TELEDILDONICS”.
“For a lot of people I suspect the non-human nature of the other party would be a feature, not a bug – much like the non-human nature of a vibrator.”
The update explains what really happened: non-native speakers play predefined voice snippets to have a ‘conversation’. It doesn’t say much about the much-anticipated technological progress in AI, but it says a lot about new ways that people relate to each other.
“Before this goes any further, I think we should get tested. You know, together.” – “Don’t you trust me?” – “I just want to be sure.”
Bottom line: aggressive, disruptive bots will be unavoidable, and might make a lot of the Internet as we know it now very toxic. What does Tay mean for the future of politics and social media?
– Who is responsible if a bot breaks the law? Will robots in the future be able to be punished for their actions? See Asimov’s Laws of Robotics. Also re “Samantha West”:
But the peculiar thing about Samantha West isn’t just that she is automated. It’s that she’s so smartly automated that she’s trained to respond to queries about whether or not she is a robot by telling you she’s a human. I asked [industry expert Chris] Haerich if there is a regulation against robots lying to you.
“I don’t…know…that…,” she said. “That’s one I’ve never been asked before. I’ve never been asked that question. Ever.”
- “I’m sorry, Dave. I’m afraid I can’t do that.” (clearly we can’t talk about AI without referencing HAL)
- Ashley Madison chatbots ‘milking’ unsuspecting horny customers:
→ Ashley Madison hack: it turned out that alongside the millions of men who signed up there were 70,000 fake profiles run by female bots, called “engagers”. This was part of a large fraud operation, tricking male subscribers into paying to see messages and get in “touch”.
- opening soon: !Mediengruppe Bitnik
is anyone home lol http://www.kunsthauslangenthal.ch/index.php/bitnik-huret.en/language/de.htm
In their new project for Kunsthaus Langenthal, !Mediengruppe Bitnik use bots, that is, programs executing automated tasks and thereby often simulating human activity. In their project, tens of thousands of bots, hacked from a dating platform where they feign female users, will emerge as a liberated mass of artificial chat partners.
- Vito Acconci – Theme Song 1973
“Look we’re both grown up, we don’t have to kid each other. I just need a body next to mine. It’s so easy here, so easy. Just fall right into me. Come close to me, just fall right into me. It’s so easy. No trouble. No problems. Nobody even has to know about it. Come on, we both need it, right? Come on.” [stops tape] “Oh I know you need it as much as I do. Dadadadadada that’s all you need. Come on.
Dadadadadada that’s all you need. Come on.
Dadadadadada that’s all you need. Come on.
I know I need it, you know you need it. … we don’t have to kid ourselves. We don’t have to say this is gonna last. All that counts is now, right? My body is here. Your body can be here. That’s all we want. Right?”
For those who missed today’s seminar, here are my notes and some links.
1. dating sites
As an example of a corporation that needs to get to know you intimately. They ask questions to build a (good enough) image of who you are, so they can recommend the right match.
Try it, just sign up https://www.okcupid.com (but use a temporary email and I’d say not when logged in to your everyday browser. I did it via Tor and using Spamgourmet https://www.spamgourmet.com/index.pl?languageCode=EN )
Here’s another way to flesh out your digital double: https://www.okcupid.com/tests/the-are-you-really-an-artist-test
(By the way, here are my results:)
So does that mean that first you’d have the Data Doubles falling in love, then the real life yous just have to follow suit?
Some notes on dating algorithms and methodology by Christian Rudder, statistician
“The ultimate question at OkCupid is, does this thing even work? By all our internal measures, the “match percentage” we calculate for users is very good at predicting relationships. It correlates with message success, conversation length, whether people actually exchange contact information, and so on. But in the back of our minds, there’s always been the possibility: maybe it works just because we tell people it does. …”
“When we tell people they are a good match, they act as if they are. Even when they should be wrong for each other.”
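Rudder has described the mechanic behind the match percentage in public talks: each side scores the other against its own importance-weighted preferences, and the two scores are combined as a geometric mean, so one-sided enthusiasm can’t produce a high match. A sketch under those assumptions – the point values and data shapes here are illustrative, not OkCupid’s actual code:

```python
from math import sqrt

# Hypothetical importance weights (OkCupid's real values are not assumed).
IMPORTANCE_POINTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50}

def satisfaction(my_prefs, their_answers):
    """Fraction of my importance-weighted points the other person earns.

    my_prefs maps question -> (set of acceptable answers, importance).
    """
    earned = possible = 0
    for question, (accepted, importance) in my_prefs.items():
        pts = IMPORTANCE_POINTS[importance]
        possible += pts
        if their_answers.get(question) in accepted:
            earned += pts
    return earned / possible if possible else 0.0

def match_percentage(prefs_a, answers_a, prefs_b, answers_b):
    # Geometric mean: both sides have to be satisfied for a high match.
    return round(100 * sqrt(satisfaction(prefs_a, answers_b)
                            * satisfaction(prefs_b, answers_a)))

prefs_a = {"smoke": ({"no"}, "very")}
prefs_b = {"kids": ({"yes"}, "somewhat"), "smoke": ({"no"}, "a little")}
answers_a = {"kids": "yes", "smoke": "yes"}
answers_b = {"smoke": "no"}
print(match_percentage(prefs_a, answers_a, prefs_b, answers_b))  # 95
```

Note that this number is exactly the kind of opaque score Rudder is questioning: whether it predicts anything, or works because people are told it does.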
Some more about falling in love: The Experimental Generation of Interpersonal Closeness
A psychological study: You only need to answer 36 questions to establish intimacy and trust. “Love didn’t happen to us. We’re in love because we each made the choice to be.”
“I first read about the study when I was in the midst of a breakup. Each time I thought of leaving, my heart overruled my brain. I felt stuck. So, like a good academic, I turned to science, hoping there was a way to love smarter”
Read: a way to make love safer and more convenient (the drive behind all of this IMHO).
2. “Be Right Back” episode of “Black Mirror”
The whole episode is online on Youtube if you want to re-watch. The DVD box set is in our Semesterapparat in the library.
It talks about, among other things, love, death and bereavement. It’s also fairly didactic in the way it explains the limits of Facebook’s way of constructing your Data Double (e.g. when real-life Ash says he’s sharing an image of himself because it’s “funny”, and fleshbot Ash then repeats it at face value).
Charlie Brooker, creator of the series, explains in a panel discussion on the ideas that led to writing this story:
“I was spending a lot of time late at night looking at Twitter, and I was wondering, what if all these people were dead. Would I notice?”
Basically people’s reactions and posts are so formulaic that they’re entirely predictable. So we might as well get a robot to do it.
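Brooker’s “would I notice?” intuition is easy to demonstrate: even a first-order Markov chain trained on a handful of someone’s past posts already emits new ones that sound plausibly like them. A toy sketch (the sample posts are invented; real imitation bots are far more sophisticated):

```python
import random
from collections import defaultdict

# Invented sample posts standing in for someone's timeline.
posts = [
    "so blessed to see the sunrise this morning",
    "so blessed to spend time with family",
    "great workout this morning feeling strong",
]

# Build the chain: each word maps to the words that have followed it.
chain = defaultdict(list)
for post in posts:
    words = post.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def imitate(start: str, length: int = 8) -> str:
    """Walk the chain from a starting word to generate a new 'post'."""
    out = [start]
    while len(out) < length and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

print(imitate("so"))
```

Every generated post recombines only transitions the person has actually used, which is precisely why it is hard to tell apart from the genuine, formulaic article.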
“Also there is the story of the inventor of one of the first chat bots, Eliza. He had his secretary test her out, and shortly after she asked him to leave, because even though she knew it was a machine she was talking to, she was having a very intimate conversation and wanted to be alone.”
3. Intimacy with robots
So it seems there is a huge market for intimacy with robots out there. Presumably it’s going to become a lot more visible soon. David Levy is a computer scientist who has done decades of research in artificial intelligence. In his book “Love and Sex with Robots” he recommends sex robots as the solution to many of our problems.
For press response see e.g. https://www.theguardian.com/technology/2015/dec/13/sex-love-and-robots-the-end-of-intimacy
The most prominent and profound critic of this development is Sherry Turkle, Professor of Social Studies of Science and Technology at MIT. In her work she focuses on human-technology interaction, and used to praise the Internet for the freedom it gave people in re-inventing themselves, trying out different personas. In recent years she has become increasingly critical of the way that technology limits the depth of human communication and interaction.
See her book “Alone Together” (in the library):
Facebook. Twitter. SecondLife. “Smart” phones. Robotic pets. Robotic lovers. Thirty years ago we asked what we would use computers for. Now the question is what don’t we use them for. Now, through technology, we create, navigate, and perform our emotional lives.
We shape our buildings, Winston Churchill argued, then they shape us. The same is true of our digital technologies. Technology has become the architect of our intimacies.
She starts the book with research about the emotional investment people make in toy robots. Describes how her research again and again has shown that people are more than happy to confide in robots and enter into intimate relationships with them.
People often find that robots are actually preferable to a live person. Unlike real pets, robot puppies stay puppies for ever. Your sex robot will always be young, willing, and only be there for you (and won’t think you have strange desires or are a bad performer). According to Turkle, the problem is that this is a reduction of the bandwidth of human experience as we used to know it. She quotes from her research with teenagers: “texting is always better than talking”, as it’s less risky. Risk-avoidance is at the heart of the desire for intimacy with robots. (Again, security and convenience.)
Interaction with robots is sold as “risk free”, whereas “Dependence on a person is risky – it makes us subject to rejection – but it also opens us up to deeply knowing each other.”
“The shock troops of the robotic moment, dressed in lingerie, may be closer than most of us have ever imagined. … this is not because the robots are ready but because we are.”
It’s in the library! From the introduction to the book: http://alonetogetherbook.com/?p=4
A quick TED talk about the ideas and research behind the book: TEDxUIUC – Sherry Turkle – Alone Together: https://www.youtube.com/watch?v=MtLVCpZIiNs
5. the conceptual basics of data doubles explained
A talk by Gemma Galdon-Clavell https://www.youtube.com/watch?v=0eifMYCfuBI. My rough notes:
“who do you think you are? who do you think the person next to you is? …
identities are a complex thing. we mess with our identities, we play with them; we’re not the same person at a job interview as when going out at night to party. we choose to show different things, evolve over time.
data: fixes things. States need fixed things.
not too long ago, the amount of personally identifiable information (PII) was limited. when you crossed a border. when you registered a car. when you got a speeding ticket. that kind of info got stored by the state. back then only the state was big enough to need a UID to track you.
now: it is stored by a large number of actors: shopping (loyalty cards, credit cards), entertainment (video streaming “rental”, music streaming, online game platforms), social media (making up ~60 to 75% of total traffic), smart phones (full of sensors & apps; sensors that can be used for more than you can imagine) … we leave data traces all the time and we have no control. we have no way of knowing where the data goes; it gets sold on, or is held in storage silos because people think it’s tomorrow’s oil. companies might not even know what to do with it, but they gather it anyway now. they keep it just in case.
Data doesn’t just sit there; it’s being used in new and dynamic ways, all to build a model of you that is as exact as possible: the Data Double. You, in data. When you enter any business transaction with companies, they don’t make decisions about you, but about what they can learn about you from their databases. You think you’re sitting down with your banker, talking about that loan, but really the decision he’s going to make is based on your credit scoring. Not how compelling you are in presenting your ideas. The score is presented to them as a color; they’ll just see a green or red light, and won’t even be able to find out how that rating came about. You’re trying to create an interaction, and the decision has been made beforehand. Same with websites that decide how to interact with you based on the cookies in your browser.
example dating site: answer a few questions (on other websites this is usually done via cookies). based on those, the dating site will decide who you are and provide recommendations about what to avoid and what to look out for. so the data double is not only a representation of yourself; it’s also shaping your future self, because suddenly your options have been reduced drastically, and your perspective has narrowed.
states love data doubles, because it’s a lot easier to deal with data than it is to deal with people. people are complex, messy, can be annoying. data is stable, fixed, doesn’t yell back at you. high temptation to substitute data for people (“The data gives me a good idea of what the people I represent in parliament want”)
Increasing pressure to conform to the image that the data has about me. Example credit card fraud detection: do something unusual and it’ll flag it as probably fraudulent and won’t allow it.
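The fraud-detection example boils down to anomaly detection against your own history. A minimal sketch of that idea (the z-score rule and the single amount feature are my simplification; real systems weigh many more signals):

```python
from statistics import mean, stdev

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transaction whose amount is far outside the account's history."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # history is perfectly uniform
    return abs(amount - mu) / sigma > z_threshold

history = [12.50, 40.00, 23.10, 18.75, 31.20]  # usual spending
print(is_suspicious(history, 25.00))   # False: fits the pattern
print(is_suspicious(history, 950.00))  # True: the data double objects
```

The unusual-but-legitimate purchase gets blocked for exactly the reason Galdon-Clavell describes: the model enforces conformity with your past self.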
Ends with: can I ever get out of this cage again? Does the data double forget? Forgive? (Doesn’t look like it.) So: until we have the legal tools to deal with this directly and fairly, the solution is sabotage. Only give up data if it profits you.
Bottom line: most of it is being used for advertising and/or prediction. To read more about the details, see e.g.: Epic.org: Privacy and Consumer Profiling.
Come to our Cryptoparty on May 19 to learn about self-defense measures.
6. heated discussion ensues
7. Illustration: categorization of seminar participants by an unknown author
Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the Wake of NSA Internet Monitoring
Research paper by Elizabeth Stoycheff
Abstract: Since Edward Snowden exposed the National Security Agency’s use of controversial online surveillance programs in 2013, there has been widespread speculation about the potentially deleterious effects of online government monitoring. This study explores how perceptions and justification of surveillance practices may create a chilling effect on democratic discourse by stifling the expression of minority political views. Using a spiral of silence theoretical framework, knowing one is subject to surveillance and accepting such surveillance as necessary act as moderating agents in the relationship between one’s perceived climate of opinion and willingness to voice opinions online. Theoretical and normative implications are discussed.
In “Data and Goliath”, Bruce Schneier says the same (it’s in the library in both English and German, see our Semesterapparat):
Across the US, states are on the verge of reversing decades-old laws about homosexual relationships and marijuana use. If the old laws could have been perfectly enforced through surveillance, society would never have reached the point where the majority of citizens thought those things were okay. There has to be a period where they are still illegal yet increasingly tolerated, so that people can look around and say, “You know, that wasn’t so bad.” Yes, the process takes decades, but it’s a process that can’t happen without lawbreaking. Frank Zappa said something similar in 1971: “Without deviation from the norm, progress is not possible.”
The perfect enforcement that comes with ubiquitous government surveillance chills this process. We need imperfect security — systems that free people to try new things, much the way off-the-record brainstorming sessions loosen inhibitions and foster creativity. If we don’t have that, we can’t slowly move from a thing’s being illegal and not okay, to illegal and not sure, to illegal and probably okay, and finally to legal.
This is an important point. Freedoms we now take for granted were often at one time viewed as threatening or even criminal by the past power structure. Those changes might never have happened if the authorities had been able to achieve social control through surveillance.
This is one of the main reasons all of us should care about the emerging architecture of surveillance, even if we are not personally chilled by its existence. We suffer the effects because people around us will be less likely to proclaim new political or social ideas, or act out of the ordinary. If J. Edgar Hoover’s surveillance of Martin Luther King Jr. had been successful in silencing him, it would have affected far more people than King and his family.
Law Professor Karen Levy writes about the rise of surveillance in our most intimate activities — love, sex, romance — and how it affects those activities.
This article examines the rise of the surveillant paradigm within some of our most intimate relationships and behaviors — those relating to love, romance, and sexual activity — and considers what challenges this sort of data collection raises for privacy and the foundations of intimate life.
Data-gathering about intimate behavior was, not long ago, more commonly the purview of state public health authorities, which have routinely gathered personally identifiable information in the course of their efforts to (among other things) fight infectious disease. But new technical capabilities, social norms, and cultural frameworks are beginning to change the nature of intimate monitoring practices. Intimate surveillance is emerging and becoming normalized as primarily an interpersonal phenomenon, one in which all sorts of people engage, for all sorts of reasons. The goal is not top-down management of populations, but establishing knowledge about (and, ostensibly, concomitant control over) one’s own intimate relations and activities.
After briefly describing some scope conditions on this inquiry, I survey several types of monitoring technologies used across the “life course” of an intimate relationship — from dating to sex and romance, from fertility to fidelity, to abuse. I then examine the relationship between data collection, values, and privacy, and close with a few words about the uncertain role of law and policy in the sphere of intimate surveillance.
(via Bruce Schneier)