Showing posts with label artificial intelligence. Show all posts

Monday, 25 December 2023

POETRY: Oh, AI!

The ancient Chinese poet Qu Yuan in conversation with the contemporary novelist Mo Yan. Courtesy AI.


By Chengde Chen

Oh, AI, are you the Southern Gate between the known and unknown

Or the Monkey King of humanoid capabilities, unparalleled and bold?

You, on my behalf, think, write, design, and program,

Responding effortlessly, seeking widely, and chatting with ease.

Your literary prowess is like galloping from Qu Yuan* to Mo Yan**,

Your profound knowledge spans from Thales’ to Musk’s domain.

Your ‘deep learning’ leaves me trailing in the dust,

While your ‘algorithmic’ space unifies man and God!



I know you’re a machine, yet I envy your intelligence,

You're clearly my invention, yet exposing my incompetence.

I should celebrate your arrival, yet fear self-destruction,

I want to reject you, yet dread delaying the theory of evolution.

Ah, please tell me, how should I truly treat you?

Is it really your insidious duty to replace us?

I wish I could transform into your ‘artificial intelligence’,

Let you taste the mixed flavours humans experience facing AI!





*Qu Yuan (c. 300 BC) is regarded as the greatest poet in early Chinese history—the first author of verse in China to have his name associated with his work.

**Mo Yan is a Chinese novelist who won the Nobel Prize for Literature in 2012.

 

Monday, 6 November 2023

A Poem comparing Confucius and Marx (with a twist)


By Chengde Chen

I just asked ChatGPT, one of the new artificial intelligence programs, to write a poem about Confucius and Marx. You can see what I got below:

Confucius and Marx 
In ancient halls where wisdom's flame did burn,
Two minds emerged, in different times they'd yearn.
Confucius, sage of old, with noble grace,
His thoughts engraved in time, a lasting trace.

In golden words, he spoke of virtue's way,
Harmony and order, like the break of day.
His Analects, a guide for hearts and minds,
A path to walk, where virtue intertwines.

Marx, a modern thinker, bold and keen,
In times of change, a revolutionary dream.
He saw the struggle, class against class,
A call for justice, echoing like glass.

Confucius sought the moral man's design,
Harmony in life, like a vintage wine.
Marx, the critic, saw the chains of fate,
Called for a revolution, a new state.

Two voices echo through the corridors,
Philosophers distinct, like distant shores.
Confucius with tradition, hand in hand,
Marx, the visionary, reshaping the land.

In ancient scrolls and communist tomes,
Their words live on, in intellectual homes.
Confucius and Marx, an unlikely pair,
Yet their ideas dance in the cosmic air.

I really think we poets, and writers generally, are close to being made redundant by it, and I feel lucky to have got my novel published before it came! A philosopher friend (Donald Gillies) recently talked to me about this question: how should we define AI knowledge – and what is the difference between a computer gaining knowledge through ‘deep learning’ from numerous documents – and humble humans learning from experience?

(Oh, and the image for the poem was also created by artificial intelligence!)

Monday, 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He went on to point out, by way of further explanation, that the preferred technology metaphor for representing and trying to understand the brain has shifted repeatedly over the centuries: from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. Yet none of these, frankly, has served as anything like a good analog of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.
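The mechanics of the room can be caricatured in a few lines of code. The following is purely illustrative, with an invented one-entry phrase table; the point is that the lookup involves no understanding whatsoever:

```python
# A toy caricature of Searle's Chinese room: the "program" maps each
# incoming string of characters to a stored continuation. Nothing here
# understands Chinese; it is pure symbol matching.
phrase_table = {
    # "What is wisdom?" -> "Understanding the limits of knowledge"
    "什么是智慧": "了解知识的界限",
}

def room(slip: str) -> str:
    """Return the stored continuation for a slip of paper, or nothing."""
    return phrase_table.get(slip, "")

print(room("什么是智慧"))  # the room "answers" fluently, minus any understanding
```

The person outside sees a fluent answer; the code above sees only a key and a value.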

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test, designed by the computer scientist and mathematician Alan Turing in 1950 (Turing himself called it ‘the imitation game’). The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell?


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game is for the third party (B) to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently, the result being either back-and-forth conversations with the chatbots, or the use of carefully crafted natural-language prompts to get the bots to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. These end products are based on the bots having been ‘trained’ on the massive body of text on the internet, and output sometimes gets reformulated by the bot based on the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier, with long-term implications for developmental advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, generative AI will expand AI’s reach across many domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, errors, biases, and nonsensicalness in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. It is worth remembering, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underpinnings of developmental engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including unexplainably incomplete and spurious references.

 

The chatbots revealingly produce output by muscularly matching the words in a prompt against strings of words located online, including the words shown to follow them probabilistically, predictively building their answers through a form of pattern recognition. This still mimics computational, rather than thinking, theories of mind. What the bots produce would pass the Turing test, but today that’s surely a pretty low bar.
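The probabilistic next-word matching described above can be sketched, in drastically simplified form, as a bigram model: count which word follows which in a corpus, then always emit the most frequent follower. The tiny corpus here is invented; real systems work over vastly larger text and richer statistics, but the flavour of ‘prediction without understanding’ is the same:

```python
from collections import Counter, defaultdict

# A drastically simplified sketch of next-word prediction: a bigram
# model over a tiny invented corpus.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def next_word(word: str) -> str:
    """Return the most frequent word seen after `word`, or '' if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else ""

print(next_word("the"))  # -> "cat" (follows "the" twice, vs "mat" once)
```

No meaning is consulted at any point; the model simply replays the statistics of its training text.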

 

Meanwhile, people have argued that AI’s writing reveals telltale markers: it lacks the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of appropriate content, that human beings often display when they write. At the moment, anyway, the resulting products from chatbots tend to have a formulaic feel, posing remediation challenges for AI’s algorithms.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program it runs in order to find patterns matching the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts, because what’s really happening, despite the large differences in the programs’ sophistication arising from the passage of time, is little more than a brute-force search of massive amounts of information in order to predict what the next words likely should be — often getting it right, but sometimes getting it wrong, with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will eventually get us there: an analog of the human brain, with an understanding of semantic content. Today’s chatbots will then look like mere novelties (if not entirely obedient in their functional execution), as the ‘neural networks’ of feasibly self-optimising artificial general intelligence match, or elastically stretch beyond, human cognition, and the hotbed issues of what consciousness is get rethought.


Monday, 25 April 2022

The Dark Future of Freedom

by Emile Wolfaardt

Is freedom really our best option as we build a future enhanced by digital prompts, limits, and controls?

We have already surrendered many of our personal freedoms for the sake of safety – and yet we are just on the brink of a general transition to a society totally governed by instrumentation. Stop! Please read that sentence again! 

Consider for example how vehicles unlock automatically as authorised owners approach them, warn drivers when their driving is erratic, alter the braking system for the sake of safety and resist switching lanes unless the indicator is on. We are rapidly moving to a place where vehicles will not start if the driver has more alcohol in their system than is allowed, or if the license has expired or the monthly payments fall into arrears.

There is a proposal in the European Union to equip all new cars with a system that will monitor where people drive, when and, above all, at what speed. The data will be transmitted in real time to the authorities.

Our surrender of freedoms, however, has advantages. Cell-phones alert us if those with contagions are close to us, and Artificial Intelligence (AI) and smart algorithms now land our aeroplanes and park our cars. When it comes to driving, AI has a far better track record than humans. In a recent study, Google claimed that its autonomous cars were ‘10x safer than the best drivers,’ and ‘40x safer than teenagers.’ AI promises, reasonably, to provide health protection and disease detection. Today, hospitals are using solutions based on Machine Learning and Artificial Intelligence to read scans. Researchers from Stanford developed an algorithm to assess chest X-rays for signs of disease. This algorithm can recognise up to fourteen types of medical condition – and was better at diagnosing pneumonia than several expert radiologists working together.

Not only that, but AI promises both to reduce human error and to intervene in criminal behaviour. PredPol is a US-based company that uses Big Data and Machine Learning to predict the time and place of a potential offence. The software looks at existing data on past crimes and predicts when and where the next crime is most likely to happen. It has demonstrated a 7.4% reduction in crime across cities in the US and created a new avenue of study in Predictive Policing. It already knows the type of person who is likely to commit a crime and tracks their movement toward the place of anticipated criminal behaviour.

Here is the challenge – this shift to AI, or ‘instrumentation’ as it is commonly called, has been both surreptitious and ubiquitous. And here are the two big questions about this colossal shift that nobody is talking about.

Firstly, the entire move to the instrumentation of society is predicated on the wholesale surrender of personal data. Phones, watches, GPS systems, voicemails, e-mails, texts, online tracking, transaction records, and countless other instruments capture data about us all the time. This data is used to analyse, predict, influence, and control our behaviour. In the absence of any governing laws or regulation, the Googles, Amazons, and Facebooks of the world have obfuscated the fact that they collect hundreds of billions of bits of personal data every minute – including where you go, when you sleep, what you look at on your watch or phone or other device, which neighbour you speak to across the fence, how your pulse increases when you listen to a particular song, how many exclamation marks you put in your texts, etc. – and they collect your data whether or not you want or allow them to.

Opting out is nothing more than donning the Emperor’s new clothes. Your personal data is collated and interpreted, and then sold on a massive scale to companies without your permission or remuneration. Not only are Google, Amazon and Facebook (etc.) marketing products to you, but they are altering you, based on their knowledge of you, to purchase the products they want you to purchase. Perhaps they know a user has a particular love for animals, and that she bought a Labrador after seeing it in the window of a pet store. She has fond memories of sitting in her living room talking to her Lab while ‘How Much is that Doggy in the Window’ played in the background. She then lost her beautiful Labrador to cancer. And would you know it – an ad ‘catches her attention’ on her phone or her Facebook feed with a Labrador just like hers, with a familiar voice singing a familiar song taking her back to her warm memories, and then the ad turns to collecting money for Canine Cancer. This is known as active priming.

According to Google, an elderly couple recently were caught in a life-threatening emergency and needed to get to the doctor urgently. They headed to the garage and climbed into their car – but because they were late on their payments, AI shut their car down – it would not start. We have moved from active priming into invasive control.

Secondly, data harvesting has become so essential to the business model that it is already past the point of reversal. It is ubiquitous. When challenged about this by the US House recently, Mark Zuckerberg offered that Facebook would be more conscientious about regulating themselves. The fox offered to guard the henhouse. Because this transition was both hidden and wholesale, by the time lawmakers started to see the trend it was too late. And too many Zuckerbucks had been ingested by the political system. The collation of big data has become irreversible – and now practically defies regulation.

We have transitioned from the Industrial Age where products were developed to ease our lives, to the Age of Capitalism where marketing is focused on attracting our attention by appealing to our innate desire to avoid pain or attract pleasure. We are now in what is defined as the Age of Surveillance Capitalism. In this sinister market we are being surveilled and adjusted to buy what AI tells us to buy. While it used to be true that ‘if the service is free, you are the product,’ it is now more accurately said that ‘if the service is free, you are the carcass ravaged of all of your personal data and freedom to choose.’ You are no longer the product, your data is the product, and you are simply the nameless carrier that funnels the data.

And all of this is marketed under the reasonable promise of a more cohesive and confluent society where poverty, disease, crime and human error are minimised, and a Global Base Income is being promised to everyone. We are told we are now safer than in a world where criminals have the freedom to act at will, dictators can obliterate their opponents, and human errors cost tens of millions of lives every year. Human behaviour is regulated and checked when necessary, disease is identified and cured before it ever proliferates, and resources are protected and maximised for the common betterment. We are now only free to act in conformity with the common good.

This is the dark future of freedom we are already committed to – albeit unknowingly. The only question remaining is this – whose common good are we free to act in conformity with? We may have come far in the subtle and ubiquitous loss of our freedoms, but it may not be too late to take back control. We need to self-educate, stand together, and push back against the wholesale surrender of our freedom without our awareness.

Monday, 18 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie; is it in acting, or in not acting?
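The two calculi can be captured, very crudely, in code. This is only a toy rendering with invented function names, not a serious model of moral reasoning: the utilitarian rule compares outcomes, while the deontological rule judges the act itself, regardless of the body count:

```python
# A toy contrast of the two moral calculi described above.
# "act" = redirect the trolley (or push the person); "abstain" = do nothing.

def utilitarian_choice(deaths_if_act: int, deaths_if_abstain: int) -> str:
    """Consequentialist rule: choose whichever option kills fewer people."""
    return "act" if deaths_if_act < deaths_if_abstain else "abstain"

def deontological_choice(act_is_killing: bool) -> str:
    """Deontological rule: refuse any act that is itself a killing."""
    return "abstain" if act_is_killing else "act"

print(utilitarian_choice(1, 5))    # -> "act": one death beats five
print(deontological_choice(True))  # -> "abstain": the act itself is immoral
```

The disagreement between the two functions on the very same scenario is precisely the conundrum the thought experiment exposes.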

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that redirecting the runaway trolley toward the one worker is considered the easier choice (a utilitarian basis), whereas pushing a person off the bridge provokes overwhelming visceral unease (a deontological basis). Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person to push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function and increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially human error (the latter estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

Tuesday, 26 May 2015

How Google and the NSA are creating a Jealous God

Posted by Pierre-Alain (Perig) Gouanvic




Before PRISM was ever dreamed of, under orders from the Bush White House the NSA was already aiming to “collect it all, sniff it all, know it all, process it all, exploit it all.” During the same period, Google—whose publicly declared corporate mission is to collect and “organize the world’s information and make it universally accessible and useful”—was accepting NSA money to the tune of $2 million to provide the agency with search tools for its rapidly accreting hoard of stolen knowledge.
-- Julian Assange, Google Is Not What It Seems

Who is going to process the unthinkable amount of data that's being collected by the NSA and its allies? For now, it seems that the volume of stored data is so enormous that it borders on the absurd.

We know that if someone in the NSA puts a person on notice, his or her record will be retrieved and future actions will be closely monitored (CITIZENFOUR). But who is going to decide who is on notice?

And persons are only significant "threats" if they are related to other persons, to groups, to ideas.

Google, which has enjoyed close proximity to power for the last decade, has now decided to differentiate Good and Bad ideas. Or, in the terms of the New Scientist, truthful content and garbage.
The internet is stuffed with garbage. Anti-vaccination websites make the front page of Google, and fact-free "news" stories spread like wildfire. Google has devised a fix – rank websites according to their truthfulness.
Google's search engine currently uses the number of incoming links to a web page as a proxy for quality, determining where it appears in search results. So pages that many other sites link to are ranked higher. This system has brought us the search engine as we know it today, but the downside is that websites full of misinformation can rise up the rankings, if enough people link to them.
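The link-counting idea described in that excerpt can be sketched in a few lines. The link graph below is invented purely for illustration; real ranking weighs far more signals, but the core proxy is just ‘pages that many other pages link to score higher’:

```python
from collections import Counter

# A toy sketch of link-based ranking: count incoming links per page.
# The link graph here is entirely invented.
links = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "d.example": ["c.example", "b.example"],
}

# Tally how many pages link *to* each page.
incoming = Counter(target for targets in links.values() for target in targets)

def rank(pages):
    """Order pages by incoming-link count, highest first."""
    return sorted(pages, key=lambda p: incoming[p], reverse=True)

print(rank(["a.example", "b.example", "c.example"]))  # c.example ranks first
```

The downside the excerpt notes falls straight out of the code: any page, truthful or not, rises simply by accumulating links.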
Of course, it is not because vaccine manufacturers are exonerated from liability by the US vaccine court that they are necessarily doing those things that anti-vaccine fanatics say. Italian courts don't judge vaccines the same way as US courts do, but well, that's why we need a more truthful Google, isn't it?

Google will determine what's true using the Knowledge-Based Trust, which in turn will rely on sites "such as Snopes, PolitiFact and FactCheck.org, [...] websites [who] exist and profit directly from debunking anything and everything [and] have been previously exposed as highly partisan."

Wikipedia will also be part of the adventure.

What is needed by the intelligence community is an understanding of the constellation of threats to power, and those threats might not be the very useful terrorists of 9/11. What is more problematic is those who can lead masses of people to doubt that 19 novice pilots, alone and undisturbed, could fly planes into the World Trade Center on 9/11, or influential people like Robert F. Kennedy who liken the USA's vaccine program to mass child abuse.

These ideas, and so many other 'garbage' ideas, are the soil in which organized resistance grows. This aggregate of ideas constitutes a powerful, coherent, attractive frame of reference for large, ever-expanding sections of society.

And this is why Google is such an asset to the NSA (and conversely). Google is in charge of arming the NSA with Truth, which, conjoined with power, will create an all-knowing, all-seeing computer-being. Adding private communications to public webpages, Google will identify what's more crucial to 'debunk'. Adding public webpages to private communications, the NSA will be able to connect the personal to the collective.

And this, obviously, will only be possible through artificial intelligence.

Hassabis and his team [of Google's artificial intelligence program (DeepMind)] are creating opportunities to apply AI to Google services. The AI firm is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services [such as truth delivery?]. It could enhance YouTube recommendations for users, for example [...].

But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: 'Eventually, I think human extinction will probably occur, and technology will likely play a part in this.' He adds that he thinks Artificial Intelligence is the 'No.1 risk for this century'. It's ominous stuff. [You can read more on that here.]

May


help us.