Her, a movie about A.I., reality and emotions

 It’s not just an OS. It’s a consciousness

Recently I watched Spike Jonze’s movie “Her”, one of the many very good movies that (paradoxically) were released in 2013. In the movie, a self-aware operating system called “Samantha” develops emotions over time, and Theodore, a young man coming out of a recent break-up, gradually falls in love with her.

I found many references to other sci-fi movies and scientific concepts that more or less ask the same question: how can we distinguish and define what is real?

“What is your relationship with your mother like?”

The question is asked by the installer of the OS in order to personalise it to best fit Theodore’s needs. It reminded me of the Voight-Kampff test, where a Blade Runner asks a replicant to “Describe in single words only the good things that come into your mind about… your mother.”. The polygraph-like machine, alongside emotionally loaded questions, is used to distinguish “real” humans from replicants. Again, in Philip K. Dick’s iconic novel “Do Androids Dream of Electric Sheep?“, a human (ok, that’s left open-ended) falls in love with a conscious machine, a replicant.

For me, the underlying question of the movie is about what is real. The very question that Philip K. Dick himself was exploring in all of his books. In his words,

But I consider that the matter of defining what is real — that is a serious topic, even a vital topic. And in there somewhere is the other topic, the definition of the authentic human. Because the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans — as fake as the data pressing at them from all sides. My two topics are really one topic; they unite at this point. Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. So we wind up with fake humans inventing fake realities and then peddling them to other fake humans. It is just a very large version of Disneyland.

The Turing test & Eliza

In science, the defining moment for the field of AI came in 1950, when the great British mathematician and father of computing, Alan Turing, in his paper “Computing Machinery and Intelligence”, posed the question of whether machines can think and proposed the “Turing test”, which a program passes if we are unable to distinguish it from a human. Many argue (myself included) that no program has yet passed this test.

An exception may be a program called Eliza. In 1966 Joseph Weizenbaum, a computer scientist at MIT, created a program that was supposed to pass the Turing test. The program read the user’s input, analysed the natural language, matched keywords and produced realistic-sounding replies. Some people were fooled and could not say whether Eliza was a human or a program.
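
To make the mechanism concrete, here is a minimal Python sketch of that keyword-matching idea; the patterns and canned replies are invented for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import random
import re

# Toy, Eliza-style responder: scan the input for a keyword pattern and
# answer from a canned template. Purely illustrative rules, not the
# original DOCTOR script.
RULES = [
    (r"\bmother\b", ["Tell me more about your family.",
                     "How do you feel about your mother?"]),
    (r"\bI feel (.+)", ["Why do you feel {0}?",
                        "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            # Fill the captured phrase back into the template, if any
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel lonely since the break-up"))
    # e.g. "Why do you feel lonely since the break-up?"
```

Simple pattern-reflection like this is enough to keep a conversation going for a surprisingly long time, which is exactly what startled Weizenbaum.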

According to Weizenbaum, the most spectacular result of the Eliza experiment was how quickly many people would form emotional relationships with the program.

“I was startled to see how quickly and how very deeply people conversing with Eliza became emotionally involved with the computer and how unequivocally they anthropomorphised it”.

AI then and now

Going back to the film industry, in 1968 Stanley Kubrick’s thinking machine HAL 9000 from “2001: A Space Odyssey” was the first AI program to develop emotions and doom a whole space mission to Jupiter. At that point, such a machine was placed in the far future, on a futuristic space mission, far away from what is possible, a purely science-fiction concept.

Back to “Her”, this is exactly where Samantha differs. Jonze’s film is set in the near future, and we are much closer to a program that can seamlessly communicate with humans. We are trained by Turing-test wannabes: Google Now, Siri, Wolfram Alpha and others. We are familiar with these scenes; our generation is closer to Theodore’s friends in the movie, who easily accept the fact that he’s dating a computer program and even treat it as a real human, getting it involved in purely human activities like going out or having a picnic.

Just after the installation, Theodore asks the OS its name, and from that self-defining moment it is “Samantha”: it is self-aware and her feelings start growing. This growing emotional awareness, and her emotional needs, are the saddest part of the story. Will AI ever be able to handle emotions?

Big Data, beyond the buzz

The term “Big Data” is nowadays one of the top buzzwords in the technology world. It is used mainly for marketing reasons, and personally, when I come across it, I take it with a pinch of salt. Let me explain why.

Big Data comes with the promise that, since we have such vast amounts of data, we no longer need theory: data is all we need in order to infer the answers we are looking for. We can record everything now, every little piece of information, so who needs sampling anymore? Let’s use whole populations instead of small samples; that should give us even better results, right? Not necessarily.

Statistics, on the contrary, focuses on developing robust methods to avoid the different biases that emerge from data, such as sampling bias, and to reach a conclusion – a theory – with some confidence. In statistics, it doesn’t matter how big your sample is; the most important thing is for it to be as unbiased as possible.

There’s no harm in having a bigger sample, and for sure what is called “big data” can serve that purpose. However, we still fall for sampling bias: big datasets are not guaranteed to be unbiased per se. The most famous example is probably the Literary Digest poll of 1936, which completely failed to predict the election result despite collecting answers from 2.4m people, while Gallup, at the same time and with a sample orders of magnitude smaller (about 3k), got it right.
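
To see how this plays out, here is a small, hypothetical Python simulation (the population size and percentages are made up, loosely inspired by the 1936 story) in which a huge but biased sample misses the true value badly, while a far smaller random sample lands close to it:

```python
import random

# Hypothetical numbers for illustration only.
random.seed(0)
POPULATION = 1_000_000

# True support for candidate A: ~55% overall, but only 40% among the
# "reachable" third of the population the biased poll can contact
# (think telephone and car owners in 1936).
people = []
for i in range(POPULATION):
    reachable = i < POPULATION // 3
    support = random.random() < (0.40 if reachable else 0.625)
    people.append((reachable, support))

true_share = sum(s for _, s in people) / POPULATION

big_biased = [s for r, s in people if r]                      # ~333k biased respondents
small_random = random.sample([s for _, s in people], 3_000)   # 3k random respondents

print(f"true share:          {true_share:.3f}")
print(f"big biased sample:   {sum(big_biased) / len(big_biased):.3f}")
print(f"small random sample: {sum(small_random) / len(small_random):.3f}")
```

The huge sample confidently reports roughly 40% support; the 3,000-person random sample gets within a percentage point or two of the truth.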

This brings back into the picture the maths behind the buzzword. Statistical inference is crucial.

In statistics, statistical inference is the process of drawing conclusions from data that are subject to random variation, for example, observational errors or sampling variation.

As technology progresses, the so-called “Data Science” will no longer be the privilege of people like me who got an MSc on the subject. In a few years my MSc may be as irrelevant as saying that I’ve got a degree in multiplication. Different scientific areas (e.g. Biology, Medicine) will depend so much on data analysis that students will be equally trained, and friendly user interfaces will bring data analysis to the masses.

The latter can be a good and a bad thing at the same time. Good, since it will help many people without the technical background (or the technical handicap, if you like) to perform robust and useful analysis, so fields like journalism could make their arguments more robust. Bad, because for people with no background in statistics it’s amazingly easy to fall for the different biases in data that statisticians have been trying to avoid all these years.

And it’s far from easy not to fall for all these biases and the errors that exist in the data. Even academics and people who have done research for years cannot avoid them. Read the following abstract from John Ioannidis’ famous paper, published in 2005 under the title “Why Most Published Research Findings Are False“:

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

In addition, people tend to think that correlation implies causation, and that’s a huge, convenient mistake which helps us find the results we want and make beautiful headlines with lots of buzz.
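
As a toy illustration, here is a minimal Python sketch (variables and numbers invented) in which a hidden confounder makes two causally unrelated quantities look strongly correlated:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

# Toy confounder: hot weather (z) drives both ice-cream sales (x) and
# drownings (y). x and y end up strongly correlated without any causal
# link between them.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(10_000)]      # temperature
x = [zi + random.gauss(0, 0.5) for zi in z]          # ice-cream sales
y = [zi + random.gauss(0, 0.5) for zi in z]          # drownings

print(f"corr(x, y) = {statistics.correlation(x, y):.2f}")  # ~0.8, yet x does not cause y
```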

Statistics will become even more crucial with this data revolution. Big data will help in the process of getting answers but we still need the insight, the theories behind it. And probably a new term to describe the field.

*Extra: A very good article by Tim Harford in today’s FT (paywall), “Big data: are we making a big mistake?”, where he analyses the issue in more depth.

We’re still in the beta phase of digital currencies

At the time of writing this post, Mt.Gox, Bitcoin’s oldest exchange, has suspended transactions, the central bank of Russia (following that of China) has banned the currency, and Bitcoin is once again tumbling. I believe it’s the right time to add my views on the matter.

I won’t go through the history of the crypto-currency, or why many people find it hard to take seriously a technology established by someone under the pseudonym “Satoshi Nakamoto”; you probably know all that. I will argue that up until now we’ve seen Bitcoin primarily as a speculative asset, but it’s time to get serious and consider the paradigm shift that is taking place. Let me elaborate.

Bitcoin has been seen by the online community in one of the following three ways:

  • By the idealists of this world as a method to bypass government control, regulation, central banks etc. A step towards freedom.
  • By the speculators of this world as a method to make money, a very large Ponzi scheme.
  • By the drug dealers of this world as a method to get access to the biggest market ever, the internet.

I believe that all of these are irrelevant to what Bitcoin actually is. As Simon Johnson points out in his article for MIT Technology review “Bitcoin’s Political Problem“:

First, money is valuable only to the extent that it can be converted into goods and services. And at the moment of conversion, governments will have a lot to say about the matter, such as whether you have paid tax and whether the transaction is legal.

Second, it is very hard to come up with a technology that will completely hide transactions from governments. The movement of goods, people, or information can all be tracked, as the Silk Road case has shown.

Third, there will be a political reaction led by powerful banking interests. They will point out any illegality within the Bitcoin system and lobby for restrictive regulations.

The financial system has remained largely undisrupted by the internet, compared to other industries (news, music, retail, travel, to name but a few) that have been totally transformed over the last few years. This is because in the financial system the stakes are much higher and any change takes time. However, in the age of the internet the banking system will change: cash will be abolished, and even the plastic cards we have in our wallets will be a thing of the past.

Bitcoin is actually doing all the dirty work of getting people familiar with the concept of digital currency, something central banks could never do. At the same time, thousands of geeks and enthusiasts are out there fixing bugs and using it with great enthusiasm, while they are basically conducting a huge experiment on how a digital currency could work – something like free R&D.

In her must-read article for FT Alphaville, “The time for official e-money is NOW!”, Izabella Kaminska nails it:

Central bankers, after all, have had an explicit interest in introducing e-money from the moment the global financial crisis began…

Bitcoin has helped to de-stigmatise the concept of a cashless society by generating the perception that digital cash can be as private and anonymous as good old fashioned banknotes. It’s also provided a useful test-run of a digital system that can now be adopted universally by almost any pre-existing value system.

This is important because, in the current economic climate, the introduction of a cashless society empowers central banks greatly. A cashless society, after all, not only makes things like negative interest rates possible, it transfers absolute control of the money supply to the central bank, mostly by turning it into a universal banker that competes directly with private banks for public deposits. All digital deposits become base money.

Consequently, anyone who believes Bitcoin is a threat to fiat currency misunderstands the economic context.

The true value of Bitcoin lies in the fact that it is the first solid, practical and large-scale way to pay over the internet, successfully solving the lack-of-trust problem between the parties involved in a transaction. Every deal is recorded and authenticated over the network. Nakamoto, in the original paper, refers to it as an “electronic payment system“.
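
To give a flavour of the idea, here is a minimal, hypothetical Python sketch of a hash-chained ledger in which tampering with a past transaction is immediately detectable. It is only an illustration of the concept, not the actual Bitcoin protocol (no proof-of-work, signatures or peer-to-peer network):

```python
import hashlib
import json
import time

# Each block commits to its transactions and to the hash of the previous
# block, so changing history invalidates every later hash.
def make_block(prev_hash: str, transactions: list) -> dict:
    block = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "transactions": transactions,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Recompute each hash and check every block points at the previous one."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, [{"from": "alice", "to": "bob", "amount": 1.5}])]
chain.append(make_block(chain[-1]["hash"], [{"from": "bob", "to": "carol", "amount": 0.7}]))

print(verify_chain(chain))                       # True
chain[0]["transactions"][0]["amount"] = 100      # tamper with history
print(verify_chain(chain))                       # False
```

In the real network this record is replicated and extended by thousands of independent nodes, which is what removes the need to trust any single party.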

However, I don’t believe that Bitcoin itself will be the new norm – it’s pioneering the way money will “change hands” as we progress through the internet era. Many alternative crypto-currencies are already out there, with small tweaks in their algorithms or rules, trying to catch the momentum. At some point, the need for an official digital currency will become clear and the central banks will step in to regulate or ban the existing crypto-currencies, introducing their own official e-money.

Bitcoin has been around in the most unregulated form we will ever see for digital currencies, an analogy to the early years of the internet, when there was little to no regulation and anonymity was feasible. When such a disruptive technology is introduced (and I strongly believe Bitcoin is that kind of technology), it takes a bit of time for governments to step in and regulate it.

That being said, we’re still in the beta phase of digital currencies.

The internet is making us stupid

I remember when I was 16: although I had a huge workload at school, I used to read many books. I loved reading. At that time my dial-up internet connection was too slow and expensive.

I see myself now. I still proclaim that I enjoy reading and I continue to buy books. But how many of them do I actually finish? That’s a different story. Yesterday I started reading “Slaughterhouse-Five” by Kurt Vonnegut, a book I had wanted to read for a couple of years. I found myself unable to concentrate. After about 10 minutes of trying, I gave up, turned on my laptop and happily browsed the internet.

In the early days of the internet I remember being an avid blogger. There was no Facebook or Twitter at the time, and I thought blogging was the best thing that ever happened. But since micro-blogging and sharing services became central to the way we communicate on the internet, I have found it difficult to keep blogging; it’s easier to re-blog an image, write something witty on Twitter or just share a link. Just by clicking.

These are just two examples of the ever-growing problem of my lack of concentration. My attention span has been shortening for as long as I can remember. Or, to be more precise, ever since I became a heavy internet user. A truth many of us know but try not to admit.

Driven by the mobility of the post-PC era, with its smartphones and tablets, we tend to be always online. People check their emails, Twitter, Facebook etc. all the time. I’ve found myself unconsciously checking again and again, not only when I’m in front of my laptop but even during conversations with friends or while walking down the street. I tried, unsuccessfully, to quit that habit, and I wondered why it kept recurring.

The French word “frisson” means “a moment of intense excitement; a shudder”. This notion describes the feeling when you find something exciting while browsing the internet. For me it’s usually a witty tweet, a tech headline or a funny video on YouTube. Roger Ebert writes about this “quest of frisson“:

A frisson can be quite a delight. The problem is, I seem to be spending way too much time these days in search of them. In an ideal world, I would sit down at my computer, do my work, and that would be that. In this world, I get entangled in surfing and an hour disappears.

I decided to do a little research into the effects of internet use on our cognitive processes. The first thing I came across was an experiment by the UCLA professor Gary Small. The professor recruited six volunteers, three of whom were experienced web users and three novices, and used MRI scans to observe their brain activity.

The two groups showed marked differences. Brain activity of the experienced surfers was far more extensive than that of the newbies, particularly in areas of the prefrontal cortex associated with problem-solving and decision making.

He then asked the novices to surf the web an hour a day for six days and repeated the experiment. The change in their brain activity was dramatic, now matching that of the experienced internet users.

Five hours on the Internet and the naive subjects had already rewired their brains

I read in this wonderful article on the subject in Wired:

Dozens of studies by psychologists, neurobiologists, and educators point to the same conclusion: When we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning. Even as the Internet grants us easy access to vast amounts of information, it is turning us into shallower thinkers, literally changing the structure of our brain.

I find this terrifyingly true for me. Things I used to enjoy – reading long articles, writing blog posts, watching classic movies, listening to a whole album – have all become more “boring”, and I’m unwilling to finish them in favour of the ease of staying up to date with technology, finance, news, comedy and everything else I care about on the internet. Everything is a couple of clicks away; at the slightest boredom an article is gone, and the next one is there, ready to entertain me. Notifications come from everywhere to add to this entertaining experience of hunting frissons.

Time passes, my brain is overloaded with information, and I get a great feeling of accessing a pool of information unimaginable a few years ago. At the same time there is that strange feeling, deep inside me, that everything stays in the cache memory of my brain, ready to be flushed away. I store only a small, faint proportion of what I read on my brain’s hard drive. But I do nothing; I continue to amuse myself this way, and the time disappears completely.

Psychologists refer to the information flowing into our working memory as our cognitive load. When the load exceeds our mind’s ability to process and store it, we’re unable to retain the information or to draw connections with other memories. We can’t translate the new material into conceptual knowledge. Our ability to learn suffers, and our understanding remains weak. That’s why the extensive brain activity that Small discovered in Web searchers may be more a cause for concern than for celebration. It points to cognitive overload.

The human brain is highly plastic, adaptable. As Michael Merzenich, a pioneer of neuroplasticity, says, when we adapt to a new cultural phenomenon, including the use of a new medium, we end up with a different brain. That means our neurons are being trained in that way even when we are not at a computer. For example, I’ve found it even more difficult to be engaged by conversations, nights out or concerts. I miss the interruptions and the fast pace of the internet; that’s why I keep checking my smartphone.

We’re exercising the neural circuits devoted to skimming and multitasking while ignoring those used for reading and thinking deeply.

On the other hand, what can be done? Turn off notifications, RSS, Twitter, Facebook, IMs? The truth is that we like these interruptions. The possibility of a frisson lies in them. Our worst fear is that by turning them off we will be left behind, we’ll become socially isolated.

I don’t believe that. Or at least I’m willing to take the risk of being wrong in this assumption. I know that most of what I read on the internet is totally pointless, in the sense that it does not improve my thinking, even if it’s about the latest iPhone release. In the moment I get the pleasure of the frisson, but in the long run I lose much more. That’s what worries me the most.

Dazzled by the Net’s treasures, we are blind to the damage we may be doing to our intellectual lives and even our culture.

Inspired by Joe Kraus’ talk on “SlowTech” and the “No Email” post by Harj Taggar, I decided to run the same experiment. I’m turning off all the internet notifications on my Mac and my iPhone. I’m removing my mail account from the Mail app on my iPhone, as well as the Twitter and Facebook apps. I will use my phone to text and call everyone I care about. Even harder, I will also try to quit multitasking. I’ll spend more time offline, read more, blog more.

I’m taking a step back, hoping to get my brain back. I’ll let you know.

The start-up bubble

Some argue that we’re reliving the days of 1999 all over again. I believe this time it’s even worse, because this is a fundamentally different era: if you consider how much internet penetration has changed our lives since the dot-com days, the bubble can only get bigger. Back then it was all about companies that nobody used and everyone was investing in; now it’s about companies that everyone uses and that only a few have the power to invest in.

To begin with, I’d like to share my experience. I used to work for a start-up company here in London. You know, there are start-up hubs away from Silicon Valley. Talking about Europe, there is Silicon Roundabout (seriously?) here in London, the next big thing is supposed to be Berlin, Amsterdam and Barcelona are also supposed to be good places for entrepreneurs, and I guess there will be more (Oslo, Vienna etc.), each one trying to be the next “mecca” of the start-up industry.

So, why did I get the job in the first place? Two reasons. First, because it was easy: for large corporations you have to go through many stages of interviews and assessment days, the competition is high and chances are you’ll get your rejections first. In start-ups it’s quite different. A couple of interviews or a programming test, a bit of geek attitude, proven reading of TechCrunch and start-up awareness packed in a cool t-shirt or a hoodie, and you’re in. Second, it’s cool (or it used to be, a year ago) to work for a start-up. The Mark Zuckerberg role model: no suit, garage-like office, the potential to be the next big thing.

I guess many youngsters will fall for that. I’m not saying it’s always bad, but I believe that the most qualified people usually tend to avoid start-ups, making them a place for not-so-great engineers. To be clear: many people in start-ups may be wonderful developers and programming gurus, but many of them are missing the fundamental theoretical basis that would take an algorithm to the next level of innovation.

Recently I got back into the job-hunting business. This time I had a clear view: I will not work for a start-up. So things got more difficult, but I was prepared to cope with that. (And I’m still coping.) When I announced on LinkedIn that I was looking for a new position, recruiters went crazy. My phone didn’t stop ringing, and recruiters were throwing keywords at me – “amazing”, “hot”, “big investment”, “fantastic opportunity”, “big data” and so on – trying to get me to work for a start-up. All that mattered to them was whether I had good Java/JavaScript/Python experience. That’s really what it comes down to. I bet it wouldn’t take me more than a week to find a job in a start-up, and I’m not even one of the best programmers out there.

In this process a lot of start-ups came to my attention. I can assure you that most of these ideas were just stupid. How can I tell if an idea is stupid? Well, by stupid I mean that it has been done over and over again, stating the obvious and reinventing the wheel with a fake optimism about being the best at it. When I asked about their funding, I got answers about astonishing amounts of millions of pounds. And here is exactly where the bubble is.

We’re back to companies throwing around funny money. The economic values don’t add up.

Companies which make no money and have no plan for how to make money are given lots of money. This isn’t always bad; take Google or Facebook, which had no revenue up to a certain point. The idea of investing in a company, even one with no revenue, is that at some point it will become profitable and you will be rewarded for your support. However, the way it works now is that a company fits a certain mould which will not enable it to make money, but may allow it to be purchased by a larger company. Basically, their projected valuations come out of thin air. Investment in general is partly like betting; in this case, however, it’s only betting (and with really bad odds).

These companies are simply being founded to be bought. With the exception of a select few, Silicon Valley has spawned no real companies over the past decade. Even now, as the value of eyeballs has gone down, people are buying concepts, not companies. Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg built sites that attract crowds of millions, but they don’t completely understand how they did it—and neither does the money backing them. It’s not as if they do market research. So venture funds now bet on hackers the way record labels bet on rising pop stars, hoping that someday soon, they will make something wild, new, and insanely lucrative.

The current state is that the internet has become a wasteland of way too many lame start-ups that are just waiting to be purchased (or acqui-hired), because there is simply nowhere else they could go. Investors are pouring in their money looking for the next Facebook or Twitter. Instagram was bought by Facebook for 1bn dollars, Evernote is valued at billions of dollars, Pinterest is valued at 1.5bn dollars. These are all signs of a huge bubble. But these are the most well-known cases, which I don’t think will be the pets.com of the new tech bubble. I have no idea which will be – maybe something unrelated to tech, like the break-up of the Euro. All I can tell is that the moment the bubble bursts is not far away.