technology is progressing – now one plane is enough for a constant CCTV stream of every street from above:
“Watch This Surveillance Master Dissect a Murder From the Sky” https://www.bloomberg.com/news/articles/2016-08-23/watch-this-surveillance-master-dissect-a-murder-from-the-sky
“surveillance master” – !?!!!?!!!!
great video effects I’d say. Strong analysis tools looking good.
The background story about the technology and where it’s being used:
“Secret Cameras Record Baltimore’s Every Move From Above” https://www.bloomberg.com/features/2016-baltimore-secret-surveillance/
new words learnt:
“circle of persistence”
“watch here… and here… and here… and here…”
more images provided by the company: https://www.google.de/search?tbm=isch&q=site:www.pss-1.com
from using technology to find intimacy with one another, to intimacy with technology – SAG 27 Apr 2016
1. Promises of Artificial Intelligence
Good introduction to AI concepts: Plug and Pray – Von Computern und anderen Menschen – documentary film by Jens Schanze, 2009
“My main objection was that when the program says ‘I understand’, then that’s a lie. There’s no one there. … I can’t imagine that you can do an effective therapy for someone who is emotionally troubled by systematically lying to him.” Joseph Weizenbaum, Plug and Pray documentary, from ~30’30
“At the beginning of all this computer stuff, in the first 15 years or so, it was very clear to us that you can give the computer a job only if you understand that job very well and deeply. Otherwise you can’t ask it to do it. Now that has changed. If we don’t understand a task, then we give it to the computer, which is then asked to solve it with artificial intelligence. … But there’s a real danger. We’ve seen that in many, in almost all areas where the computer has been introduced, it’s irreversible. At banks, for example. And if we start to rely on artificial intelligence now, and it’s not reversible, and then we discover that this program does something we first don’t understand and then don’t like – where does that leave us?” Plug and Pray documentary, 25’36 – 26’50 (an interview from 1987)
At the other end of the spectrum:
“We already have many examples of what I call narrow artificial intelligence … – and within 20 years, around 2029, computers will really match the full range of human intelligence. So we’ll be more machine-like than biological. So when I say that, people say: I don’t want to become a machine. But they’re thinking about today’s machines, like this. Now we don’t want to become machines like this. I’m talking about a different kind of machine. A machine that’s actually just as subtle, just as complex, just as endearing, just as emotional as human beings. Or even more so. Even more exemplary of our moral, emotional and spiritual intelligence. We’re going to merge with this technology. We’re going to become hybrids of biological and non-biological intelligence, and ultimately be millions of times smarter than we are today. And that’s our destiny.” Ray Kurzweil (from minute 6:30)
→ says we are going to MERGE with this technology. A longing to be expanded, connected, rescued from state of being cut off.
Kraftwerk, 1970s: We are the Robots https://www.youtube.com/watch?v=VXa9tXcMhXQ –– a longing to be one with the machine, machine-like (very different from James Brown’s idea though). Is this chiefly a male desire, to be free of the mess of emotional confusion, ambiguity?
→ also see Günther Anders, Die Antiquiertheit des Menschen, Band I, C. H. Beck, München 1956. “Promethean Shame”, man’s feeling of inadequacy in view of his creations.
2. Tell Computers and Humans Apart
Turing Test: https://en.wikipedia.org/wiki/Turing_test
A Turing Test we’re all familiar with: CAPTCHA (“Completely Automated Public Turing test to tell Computers and Humans Apart”) https://en.wikipedia.org/wiki/CAPTCHA
reCAPTCHA (human users exploited for machine learning) https://www.google.com/recaptcha/intro/index.html#ease-of-use
Contemporary example: Try anything hosted on Cloudflare via Tor, and you’ll have to prove you’re not a bot: https://support.cloudflare.com/hc/en-us/articles/203306930-Does-CloudFlare-block-Tor-
3. Live Bots to talk to
ELIZA: designed as an artificial psychiatrist. So from the start the machine aims to make deep contact, get right into people’s minds. Also described as a “parody” of a therapist.
Emulation on a web server: http://nlp-addiction.com/eliza
Anecdote recounted by Charlie Brooker, author of Black Mirror (watched last time): “Joseph Weizenbaum, the inventor of one of the first chat bots, Eliza, had his secretary test her out. Shortly after she asked him to leave, because even though she knew it was a machine she was talking to, she was having a very intimate conversation and wanted to be alone.”
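ELIZA’s trick is simple pattern matching plus pronoun reflection, which is what makes the intimacy effect so striking. A minimal sketch of the technique (the rules and the default reply here are invented for illustration, not Weizenbaum’s original DOCTOR script):

```python
import re

# Pronoun reflection table: swap first and second person so a matched
# fragment can be echoed back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

# A couple of toy rules in the ELIZA style: regex -> response template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."  # default deflection when nothing matches

print(respond("I need a body next to mine"))
```

There is no model of meaning anywhere in this loop; the sense of being understood is supplied entirely by the user.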
You: Are you a human?
Julia: I’m just as real as you are.
You: Why do you say that you are real?
Julia: Unfortunately I can’t give you any further justification. But you’re welcome to ask me a new question!
You: Are you a human
Julia: I answer, therefore I am! Real, virtual, genuine – does that still make a difference in 2016!?
Tries to sell you access to porn if you mention sex.
→ The training goes both ways: Computers condition us to become more like them.
‘In Turing’s dream scenario, chatbots will actually push us to be better conversationalists and clearer thinkers. As Will put it, reflecting on the chatbot experiment, “having Pinocchio-like robots that can think, feel and discriminate morally will broaden our concept of humanity, challenging us organic humans to be better, more sensitive, imaginative creatures.” Amen to that.’
- Chris Marker: DIALECTOR. The filmmaker disappeared from public view for more than 3 years in the 80s, instead spending his time programming a chat bot on what was then the tinkerer’s machine of choice, an Apple IIc. The resulting program is unlike any other conversational piece of software. At a recent show at HMKV Dortmund the curator explained that the source code is only 30 pages long, but it’s so complex that no one has yet been able to find out how it does what it does. http://dialector.poptronics.fr/index.html
→ DIALECTOR is so far the only bot that on its own decides to end a conversation.
The interesting and worrying part of the entire test was that it became a plausible, creative racist asshole. A lot of the worst things that Tay is quoted as saying were the result of users abusing the “repeat” function, but not all. It came out with racist statements entirely off its own bat. It even made things that look disturbingly like jokes. Antipope.org
4. Projections for the Future of Bots
- “tech developments in other areas are about to turn the whole ‘sex with your PC’ deal from ‘crude and somewhat rubbish’ to ‘looks like the AIs just took a really unexpected job away’”.
There is no reason why bots won’t get linked to something like a masturbation machine. Tip: don’t google “AUTOMATED TELEDILDONICS”.
” For a lot of people I suspect the non-human nature of the other party would be a feature, not a bug – much like the non-human nature of a vibrator. “
The update explains what really happened: non-native speakers can play predefined voice snippets to have a ‘conversation’. It doesn’t say much about the much-anticipated technological progress in AI, but it says a lot about new ways in which people relate to each other.
“Before this goes any further, I think we should get tested. You know, together.” – “Don’t you trust me?” – “I just want to be sure.”
Bottom line: aggressive, disruptive bots will be unavoidable, and might make a lot of the Internet as we know it very toxic. What does Tay mean for the future of politics + social media?
– who is responsible if a bot breaks the law? Will robots in future be able to be punished for their actions? See Asimov’s Robot Laws. Also re “Samantha West”:
But the peculiar thing about Samantha West isn’t just that she is automated. It’s that she’s so smartly automated that she’s trained to respond to queries about whether or not she is a robot by telling you she’s a human. I asked [industry expert Chris] Haerich if there is a regulation against robots lying to you.
“I don’t…know…that…,” she said. “That’s one I’ve never been asked before. I’ve never been asked that question. Ever.”
- “I’m sorry, Dave. I’m afraid I can’t do that.” (clearly we can’t talk about AI without referencing HAL)
- Ashley Madison chatbots ‘milking’ unsuspecting horny customers:
→ Ashley Madison hack: it turned out that alongside the millions of men who signed up there were 70,000 fake profiles run by female bots, called “engagers”. This was part of a large fraud operation, tricking male subscribers into paying to see messages and get in “touch”.
- opening soon: !Mediengruppe Bitnik
is anyone home lol http://www.kunsthauslangenthal.ch/index.php/bitnik-huret.en/language/de.htm
In their new project for Kunsthaus Langenthal, !Mediengruppe Bitnik use bots, i.e. programs executing automated tasks, thereby often simulating human activity. In the project, tens of thousands of bots, hacked from a dating platform where they feign female users, emerge as a liberated mass of artificial chat partners.
- Vito Acconci – Theme Song 1973
Look we’re both grown up, we don’t have to kid each other. I just need a body next to mine. It’s so easy here, so easy. Just fall right into me. Come close to me, just fall right into me. It’s so easy. No trouble. No problems. Nobody doesn’t even have to know about it. Come on, we both need it, right? Come on.” Stops tape. “Oh I know you need it as much as I do. Dadadadadada that’s all you need. Come on.
Dadadadadada that’s all you need. Come on.
Dadadadadada that’s all you need. Come on.
I know I need it, you know you need it. … we don’t have to kid ourselves. We don’t have to say this is gonna last. All that counts is now, right? My body is here. Your body can be here. That’s all we want. Right?
for those that missed today’s seminar, here are my notes and some links
1. dating sites
as an example of a corporation that needs to get to know you intimately. They ask questions to get a (good-enough) image of who you are, so they can recommend the right match.
Try it, just sign up https://www.okcupid.com (but use a temporary email and I’d say not when logged in to your everyday browser. I did it via Tor and using Spamgourmet https://www.spamgourmet.com/index.pl?languageCode=EN )
Here’s another way to flesh out your digital double: https://www.okcupid.com/tests/the-are-you-really-an-artist-test
(by the way, here are my results.)
So does that mean that first you’d have the Data Doubles falling in love, then the real life yous just have to follow suit?
Some notes on dating algorithms and methodology by Christian Rudder, statistician
“The ultimate question at OkCupid is, does this thing even work? By all our internal measures, the “match percentage” we calculate for users is very good at predicting relationships. It correlates with message success, conversation length, whether people actually exchange contact information, and so on. But in the back of our minds, there’s always been the possibility: maybe it works just because we tell people it does. …”
“When we tell people they are a good match, they act as if they are. Even when they should be wrong for each other.”
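Rudder has publicly described the gist of how the match percentage is computed: each user states their own answer, which answers they accept from a partner, and how important the question is; the site scores mutual satisfaction and takes the geometric mean, minus a margin of error for the small sample. A sketch under those assumptions (the point values and margin formula are my guesses, not OkCupid’s exact numbers):

```python
from math import sqrt

# Importance levels mapped to points -- an assumption for illustration.
WEIGHTS = {"irrelevant": 0, "a little": 1, "somewhat": 10,
           "very": 50, "mandatory": 250}

def satisfaction(answers_a, prefs_b):
    """How satisfied B would be with A: points earned / points possible."""
    earned = possible = 0
    for question, (accepted, importance) in prefs_b.items():
        pts = WEIGHTS[importance]
        possible += pts
        if answers_a.get(question) in accepted:
            earned += pts
    return earned / possible if possible else 0.0

def match_percentage(a_answers, a_prefs, b_answers, b_prefs):
    s_ab = satisfaction(a_answers, b_prefs)  # B's satisfaction with A
    s_ba = satisfaction(b_answers, a_prefs)  # A's satisfaction with B
    n = max(len(a_prefs), len(b_prefs))
    # Geometric mean punishes one-sided matches; subtract a margin of
    # error because only n questions were sampled.
    return max(0.0, sqrt(s_ab * s_ba) - 1 / n)
```

Note how the geometric mean makes the score collapse if either side’s satisfaction is zero, which is exactly the property that makes “we tell people they match, so they act like it” such an interesting experimental lever.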
Some more about falling in love: The Experimental Generation of Interpersonal Closeness
A psychological study: You only need to answer 36 questions to establish intimacy and trust. “Love didn’t happen to us. We’re in love because we each made the choice to be.”
“I first read about the study when I was in the midst of a breakup. Each time I thought of leaving, my heart overruled my brain. I felt stuck. So, like a good academic, I turned to science, hoping there was a way to love smarter”
Read: a way to make love safer and more convenient (the drive behind all of this IMHO).
2. “Be Right Back” episode of “Black Mirror”
The whole episode is online on Youtube if you want to re-watch. The DVD box set is in our Semesterapparat in the library.
It talks about, among other things, love, death and bereavement. It’s fairly didactic too, in the way it explains the limits of Facebook’s way of constructing your Data Double (e.g. when real-life Ash says he’s sharing an image of himself because it’s “funny”, and the flesh-bot Ash then repeats it at face value.)
Charlie Brooker, creator of the series, explains in a panel discussion on the ideas that led to writing this story:
“I was spending a lot of time late at night looking at Twitter, and I was wondering, what if all these people were dead. Would I notice?”
Basically people’s reactions and posts are so formulaic that they’re entirely predictable. So we might as well get a robot to do it.
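How little machinery “formulaic enough to automate” requires can be shown with a toy bigram Markov chain trained on someone’s past posts (this is a generic illustration of the idea, not the technique used in the episode):

```python
import random
from collections import defaultdict

def build_chain(posts):
    """Bigram Markov chain over a user's past posts."""
    chain = defaultdict(list)
    for post in posts:
        words = ["<s>"] + post.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            chain[a].append(b)  # duplicates keep the original frequencies
    return chain

def generate(chain, rng=random):
    """Walk the chain from the start token until the end token."""
    word, out = "<s>", []
    while True:
        word = rng.choice(chain[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)
```

The more repetitive the source posts, the more the generated output is indistinguishable from them; that is Brooker’s “would I notice?” in fifteen lines.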
“Also there is the story of the inventor of one of the first chat bots, Eliza. He had his secretary test her out, and shortly after she asked him to leave, because even though she knew it was a machine she was talking to, she was having a very intimate conversation and wanted to be alone.”
3. Intimacy with robots
So it seems there is a huge market for intimacy with robots out there. Presumably it’s going to become a lot more visible soon. David Levy is a computer scientist with decades of AI research behind him. In his book “Love and Sex with Robots” he recommends sex robots as the solution to many of our problems.
Press response: see e.g. https://www.theguardian.com/technology/2015/dec/13/sex-love-and-robots-the-end-of-intimacy
The most prominent and profound critic of this development is Sherry Turkle, Professor of Social Studies of Science and Technology at MIT. In her work she focuses on human-technology interaction, and used to praise the Internet for the freedom it gave people in re-inventing themselves, trying out different personas. In recent years she has become increasingly critical of the way that technology limits the depth of human communication and interaction.
See her book “Alone Together” (in the library):
Facebook. Twitter. SecondLife. “Smart” phones. Robotic pets. Robotic lovers. Thirty years ago we asked what we would use computers for. Now the question is what don’t we use them for. Now, through technology, we create, navigate, and perform our emotional lives.
We shape our buildings, Winston Churchill argued, then they shape us. The same is true of our digital technologies. Technology has become the architect of our intimacies.
She starts the book with research about the emotional investment people make in toy robots. Describes how her research again and again has shown that people are more than happy to confide in robots and enter into intimate relationships with them.
People often find that robots are actually preferable to a live person. Unlike real pets, robot puppies stay puppies for ever. Your sex robot will always be young, willing, and only be there for you (and won’t think you have strange desires or are a bad performer). According to Turkle, the problem is that this is a reduction of the bandwidth of human experience as we used to know it. She quotes from her research with teenagers: “texting is always better than talking”, as it’s less risky. Risk-avoidance is at the heart of the desire for intimacy with robots. (Again, security and convenience.)
Interaction with robots is sold as “risk free”, whereas “Dependence on a person is risky – it makes us subject to rejection – but it also opens us up to deeply knowing each other.”
“The shock troops of the robotic moment, dressed in lingerie, may be closer than most of us have ever imagined. … this is not because the robots are ready but because we are.”
It’s in the library! From the introduction to the book: http://alonetogetherbook.com/?p=4
A quick TED talk about the ideas and research behind the book: TEDxUIUC – Sherry Turkle – Alone Together: https://www.youtube.com/watch?v=MtLVCpZIiNs
5. the conceptual basics of data doubles explained
A talk by Gemma Galdon-Clavell https://www.youtube.com/watch?v=0eifMYCfuBI. My rough notes:
“who do you think you are? who do you think the person next to you is? …
identities are a complex thing. we mess with our identities, we play with them, we’re not the same person at a job interview than when going out at night to party. we choose to show different things, evolve over time.
data: fixes things. States need fixed things.
not too long ago, the amount of personally identifiable information (PII) was limited. when you crossed a border. when you registered a car. when you got a speeding ticket. that kind of info got stored by the state. back then only the state was big enough to need a UID to track you.
now: it is stored by a large number of actors: shopping (loyalty cards, credit cards), entertainment (video streaming “rental”, music streaming, online game platforms), social media (making up ~60 to 75% of total traffic), smart phones (full of sensors & apps; sensors that can be used for more than you can imagine)… we leave data traces all the time and we have no control. we have no way of knowing where the data goes; it gets sold on, or is held in storage silos because people think it’s tomorrow’s oil. Companies might not even know what to do with it, but they gather it anyway. They keep it just in case.
Data doesn’t just sit there; it’s being used in new and dynamic ways, all to build a model of you that is as exact as possible: the Data Double. You, in data. When you enter any business transaction with companies, they don’t make decisions based on you, but on what they can learn about you from their databases. You think you’re sitting down with your banker, talking about that loan, but really the decision he’s going to make is based on your credit scoring. Not how compelling you are in presenting your ideas. The score is presented as a colour: they’ll just see a green or red light, and won’t even be able to find out how that rating came about. You’re trying to create an interaction, and the decision has been made beforehand. Same with web sites that decide how to interact with you based on the cookies in your browser.
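The “green or red light” point can be made concrete with a toy scorecard (the weights and threshold here are invented for illustration; no real bank’s model looks this simple, and that opacity is exactly the talk’s point):

```python
def credit_score(data_double):
    """Toy scorecard over a data double -- all weights are invented."""
    score = 300
    score += min(data_double.get("years_employed", 0), 10) * 25
    score -= data_double.get("missed_payments", 0) * 60
    score += 100 if data_double.get("owns_home") else 0
    return score

def traffic_light(data_double, threshold=550):
    # The banker only ever sees the colour, never the computation.
    return "green" if credit_score(data_double) >= threshold else "red"
```

Everything you say in the meeting is irrelevant to `traffic_light`; only the fields already recorded in the data double enter the decision.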
example dating site: answer a few questions (on other web sites this is usually done via cookies). Based on those, the dating site will decide who you are and provide recommendations on what to avoid and what to look out for. So the data double is not only a representation of yourself; it also shapes your future self, because suddenly your options have been reduced drastically and your perspective has narrowed.
states love data doubles, because it’s a lot easier to deal with data than it is to deal with people. People are complex, messy, can be annoying. Data is stable, fixed, doesn’t yell back at you. High temptation to substitute people with data (“The data gives me a good idea of what the people want that I represent in parliament”)
Increasing pressure to conform to the image that the data has about me. Example credit card fraud detection: do something unusual and it’ll flag it as probably fraudulent and won’t allow it.
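The fraud-detection example is the pressure to conform in its purest form: deviate from your own statistical past and the transaction is refused. A minimal sketch of that logic as a z-score rule (a generic toy, not any card network’s actual system):

```python
from statistics import mean, stdev

def flags_as_fraud(history, amount, k=3.0):
    """Flag a transaction as 'unusual' if it lies more than k sample
    standard deviations above the account's historical mean."""
    if len(history) < 2:
        return False  # not enough history to judge anything
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # perfectly uniform history: any change is odd
    return (amount - mu) / sigma > k
```

The rule has no concept of a legitimate reason for the outlier; the data double’s past is the only admissible evidence about you.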
Ends with: can I ever get out of this cage again? Does the data double forget? Forgive? (doesn’t look like it). So: until we have the legal tools to deal with this directly and fairly, the solution is sabotage. only give up data if it profits you.
Bottom line: most of it is being used for advertising and/or prediction. To read more about the details, see e.g. Epic.org: Privacy and Consumer Profiling.
Come to our Cryptoparty on May 19 to learn about self-defense measures.
6. heated discussion ensues
7. Illustration: seminar participants categorization by unknown author
Shoshana Zuboff on how we became Google’s slaves. A must-read.
Big other: surveillance capitalism and the prospects of an information civilization Update: Joscha sent in a better link: The Secrets of Surveillance Capitalism
German translation: Überwachungskapitalismus
Nearly 70 years ago historian Karl Polanyi observed that the market economies of the 19th and 20th centuries depended upon three astonishing mental inventions that he called fictions. The first was that human life can be subordinated to market dynamics and be reborn as labor. Second, nature can be subordinated and reborn as real estate. Third, that exchange can be reborn as money. The very possibility of industrial capitalism depended upon the creation of these three critical fictional commodities. Life, nature, and exchange were transformed into things, that they might be profitably bought and sold. The commodity fiction, he wrote, disregarded the fact that leaving the fate of soil and people to the market would be tantamount to annihilating them.
With the new logic of accumulation that is surveillance capitalism, a fourth fictional commodity emerges as a dominant characteristic of market dynamics in the 21st century. Reality itself is undergoing the same kind of fictional metamorphosis as did persons, nature, and exchange. Now reality is subjugated to commodification and monetization and reborn as behavior. Data about the behaviors of bodies, minds, and things take their place in a universal real-time dynamic index of smart objects within an infinite global domain of wired things. This new phenomenon produces the possibility of modifying the behaviors of persons and things for profit and control. In the logic of surveillance capitalism there are no individuals, only the world-spanning organism and all the tiniest elements within it.
Talking points and links from today’s seminar, in chronological order:
Amazon delivery drones are coming http://blog.khm.de/surveillant_architectures/?p=1628
Ostfriesen testen Bierdrohne (East Frisians test a beer drone) https://www.youtube.com/watch?v=e2oVw39rk1U
Matrix Sentinel https://www.google.com/search?tbm=isch&q=matrix%20sentinel&tbs=imgo:1
autonomous combat robots: “Das Gesicht unserer Gegner von morgen” (“The face of our adversaries of tomorrow”) http://www.faz.net/aktuell/feuilleton/debatten/krieg-mit-drohnen-das-gesicht-unserer-gegner-von-morgen-11897252.html?printPagedArticle=true#pageIndex_2
a science fiction novel in our Semesterapparat: Daniel Suarez, Kill Decision. New York, NY: Penguin, 2013. 495 pp.
Dietmar Dath’s review of it in the FAZ of 21.7.2012: http://www.faz.net/aktuell/feuilleton/buecher/thriller-kill-decision-von-daniel-suarez-wie-technik-die-welt-zum-schlechteren-wendet-11826693.html?printPagedArticle=true#pageIndex_2
Surveillance in Science Fiction. About how most things that were once science fiction are now here and in use. Plus a list of actual surveillance measures deployed right now.
http://rhizome.org/editorial/2012/jun/6/natural-history-surveillance/ linking to:
Martha Rosler quoting from Philip K. Dick: Vulcan’s Hammer http://www.martharosler.net/projects/drone2.html
Animals and drones – chimpanzee https://www.youtube.com/watch?v=wPidiiaovL4
Animals attacking drones
“Thank you, animals, for being able to express how we all feel.”
+ another comment below:
“drones shouldnt be considered property, if i destroy them, it is self defense in 100% of cases, as a person with programming knowledge, I know you can program to do literally anything you desire for them to do, and since, by sight i cannot know what they are programmed to do, and yet at sight they may be able to harm me, i have the right to disable them, they are hazardous anima they cannot be of traditional perception of “property” .. when i can see them, they infringe upon me explicitly.”
These Shotgun Shells Are Made for Shooting Down Drones http://makezine.com/2015/08/19/these-shotgun-shells-are-made-for-shooting-down-drones/
Long-Distance Jammer Is Taking Down Drones http://makezine.com/2015/10/16/research-company-takes-aim-uavs-portable-anti-drone-rifle/
new problems! but what about robot rights? https://www.schneier.com/blog/archives/2015/08/shooting_down_d.html
Drones that shoot back https://www.youtube.com/watch?v=xqHrTtvFFIs
Prototype Quadrotor with Machine Gun! https://www.youtube.com/watch?v=SNPJMk2fgJU
Basically these things exist in reality and have been used in war for quite some time now.
The Intercept – Drone Papers https://theintercept.com/drone-papers
Al-Qaida anti-drone instructions: http://hosted.ap.org/specials/interactives/_international/_pdfs/al-qaida-papers-drones.pdf
Eben Moglen: Time To Apply Asimov’s First Law Of Robotics “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” http://www.forbes.com/sites/andygreenberg/2012/06/26/eben-moglen-time-to-apply-the-first-law-of-robotics-to-our-smartphones/
typical news photo 2015 – it’s now OK to photograph someone from above and use that in newspapers
Big Data using Satellite view: skybox startup
“Inside a Startup’s Plan to Turn a Swarm of DIY Satellites Into an All-Seeing Eye” http://www.wired.com/2013/06/startup-skybox/
“Skybox Imaging empowers global businesses to make better decisions with timely, high fidelity imagery and infinite analytics.” http://www.skyboximaging.com/products/analytics
Introducing SkyNode – yes let’s try and order this for seminar use: http://www.skyboximaging.com/products#access
In the end, the aim is to have a live street-view. Example full-HD videos: https://www.youtube.com/watch?v=BsW6IGc4tt0&index=1&list=PLIIuwfzJSzET1C0KDpA5FZrHLlDGfcHr3
conflict in Tripoli https://www.youtube.com/watch?v=OWXN3CXsxTg
The discussion strayed into Big Data territory (even though we try to concentrate on aerial issues), so let’s have this excellent essay as a last point:
James Bridle, booktwo.org: Big Data, No Thanks http://booktwo.org/notebook/big-data-no-thanks/
Let me use this as a notebook to make sure I have this article handy whenever I need reminding what’s going on.
Technology should be used to create social mobility – not to spy on citizens
NSA and GCHQ mass surveillance is more about disrupting political opposition than catching terrorists
Tuesday 10 March 2015
Why spy? That’s the several-million pound question, in the wake of the Snowden revelations. Why would the US continue to wiretap its entire population, given that the only “terrorism” they caught with it was a single attempt to send a small amount of money to Al Shabab?
One obvious answer is: because they can. Spying is cheap, and cheaper every day. Many people have compared NSA/GCHQ mass spying to the surveillance programme of East Germany’s notorious Stasi, but the differences between the NSA and the Stasi are more interesting than the similarities.
The most important difference is size. The Stasi employed one snitch for every 50 or 60 people it watched. We can’t be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don’t work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance – a ratio approaching 1:10,000. What’s more, the US has only (“only”!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn’t all that expensive, compared to the number of lives that gear lets them pry into. (…)
not self-driving yet, but hey
Sounds convincing. Liberty is losing out against security and comfort. Basically, people want what rich people have, and rich people have no personal privacy. They are surrounded by servants who know everything about them.
Two Thoughtful Essays on the Future of Privacy
Paul Krugman argues that we’ll give up our privacy because we want to emulate the rich, who are surrounded by servants who know everything about them:
Consider the Varian rule, which says that you can forecast the future by looking at what the rich have today — that is, that what affluent people will want in the future is, in general, something like what only the truly rich can afford right now. Well, one thing that’s very clear if you spend any time around the rich — and one of the very few things that I, who by and large never worry about money, sometimes envy — is that rich people don’t wait in line. They have minions who ensure that there’s a car waiting at the curb, that the maitre-d escorts them straight to their table, that there’s a staff member to hand them their keys and their bags are already in the room.
And it’s fairly obvious how smart wristbands could replicate some of that for the merely affluent. Your reservation app provides the restaurant with the data it needs to recognize your wristband, and maybe causes your table to flash up on your watch, so you don’t mill around at the entrance, you just walk in and sit down (which already happens in Disney World.) You walk straight into the concert or movie you’ve bought tickets for, no need even to have your phone scanned. And I’m sure there’s much more — all kinds of context-specific services that you won’t even have to ask for, because systems that track you know what you’re up to and what you’re about to need.
Another essay that argues that we have entered recursive hall of mirrors of seeing and being seen, and what that means to how we will develop in future. Reminds me of the analogy between privacy and undeveloped film – you need a part of yourself that’s not exposed to light (yet), if you want to be able to retain your integrity as a person:
Daniel C. Dennett and Deb Roy look at our loss of privacy in evolutionary terms, and see all sorts of adaptations coming:
The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before — and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.
The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness.
The 1990s crypto-wars seem to be starting again. Under the newly proposed measures it would be illegal to use secure end-to-end crypto like GPG, or even iMessage and WhatsApp. That makes it even more important to learn how to use it. We’re going to have another Cryptoparty at the upcoming Chaos.Cologne conference here at the KHM in May. Or just talk to us and we’ll show you how. It’s not hard to get started.
What you can do:
Surveillance Self-Defense (in English) https://ssd.eff.org/
Digitale Selbstverteidigung (in German) https://digitalcourage.de/support/digitale-selbstverteidigung
All Cameras Are Police Cameras
Great essay by James Bridle, about the advances in locking down public space.
Surveillance images are all “before” images, in the sense of “before and after”. The “after” might be anything: an earthquake, a riot, a protest, a war. Any system reliant on flow, which is all networks from vehicle traffic to commercial supply to video feeds to the internet itself, views disruptions within the same negative moral context. Surveillance images attain the status of evidence for unknown crimes the moment they are created, and merely await the identification of the moment they were created for. Automated imagery criminalises its subject.
Read on for what happened just walking down the streets of London documenting security cameras.
I wonder what happened to the imagery here? Strangely drained of colour and, presumably, of all metadata.