
Monday, 23 January 2017

Particles Dreaming

By Perig Gouanvic
Reposted from Pi alpha

Reflecting on the Double Slit Experiment

What do particles know?

The so-called 'double-slit experiment' demonstrates that light and matter can display characteristics of both classically defined waves and particles. It is also said to display the 'fundamentally probabilistic nature' of the universe at the quantum scale.

Thomas Young's original intuition (back in 1802) was to reproduce the cancellation of water waves, but with light; the double slit was simply a way to obtain two exactly identical light sources (the same source, divided in two). Notice the straight lines that seem to radiate from the source of the water waves: they are formed where the waves cancel each other, and are analogous to the dark regions in the five-step picture (below), an actual record of electron impacts from an experiment by Tonomura.

In the de Broglie–Bohm theory (also called the Bohm interpretation) of quantum physics, single particles seem to interfere 'with themselves' because each particle is accompanied by a kind of pilot wave, which can interfere with itself in circumstances like the double-slit apparatus. This is why, in the double-slit experiment, even particles emitted one after the other rather than as beams ultimately form an interference figure (see the five-step process, below). The sonar analogy helps to explain the phenomenon: picture a dolphin that has to echolocate through two holes and you get the picture!
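The buildup described above can be illustrated numerically. This is a sketch of the standard statistical picture, not a de Broglie–Bohm trajectory calculation: each 'particle' is detected one at a time, yet the detections are distributed according to the two-slit wave intensity, so the fringes emerge only in the accumulated record. All parameters are arbitrary, chosen only to make the fringes visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slit spacing, wavelength, slit-to-screen distance (metres); illustrative values.
d, lam, L = 1e-4, 5e-7, 1.0
x = np.linspace(-2e-2, 2e-2, 2000)   # positions on the detection screen (m)

# Two-slit intensity envelope: I(x) proportional to cos^2(pi * d * x / (lam * L)).
intensity = np.cos(np.pi * d * x / (lam * L)) ** 2
prob = intensity / intensity.sum()   # normalise to a probability distribution

# Emit particles one after the other, as in Tonomura's experiment:
# each hit is a single random detection drawn from the wave intensity.
hits = rng.choice(x, size=50_000, p=prob)

# Histogramming the individual impacts reproduces the fringe pattern:
# maxima where the two paths are in phase, dark bands where they cancel.
counts, edges = np.histogram(hits, bins=100)
```

No single detection shows any interference; only the histogram of many one-at-a-time impacts does, which is exactly the five-step picture the post refers to.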

Bohm had many analogies for the quantum potential, his revised version of the pilot wave. The sonar is one of them: the information given by the surroundings guides the dolphin, which is why Bohm called it 'active information'.

However, what this analogy leaves unaddressed is the fact that particles do not "send" signals to their surroundings and do not "wait" for these signals to bounce back. Another analogy, far removed from the sonar, was given by Bohm: each particle is like a piece of a hologram; each contains information about the whole, but each is concretised in a specific context.

The 'echolocation' process would be more like a pulsation between the particle as a located entity and the particle as one concretion of the whole. Pulsating infinitely rapidly between being-discrete and being-the-whole, the particle would be more like a process taking the form of an object.
What kind of "thing" can be everything half of the time and something the rest of the time?

Humans, for starters. We, as particles, tend to forget that we also are the whole, each night. We dream.

Tuesday, 26 May 2015

How Google and the NSA are creating a Jealous God

Posted by Pierre-Alain (Perig) Gouanvic




Before PRISM was ever dreamed of, under orders from the Bush White House the NSA was already aiming to “collect it all, sniff it all, know it all, process it all, exploit it all.” During the same period, Google—whose publicly declared corporate mission is to collect and “organize the world’s information and make it universally accessible and useful”—was accepting NSA money to the tune of $2 million to provide the agency with search tools for its rapidly accreting hoard of stolen knowledge.
-- Julian Assange, Google Is Not What It Seems

Who is going to process the unthinkable amount of data that's being collected by the NSA and its allies? For now, it seems that the volume of stored data is so enormous that it borders on the absurd.
We know that if someone in the NSA puts a person on notice, his or her record will be retrieved and future actions will be closely monitored (CITIZENFOUR). But who is going to decide who is on notice?

And persons are only significant "threats" if they are related to other persons, to groups, to ideas.

Google, which has enjoyed close proximity to power for the last decade, has now decided to differentiate Good from Bad ideas. Or, in the terms of the New Scientist, truthful content and garbage.
The internet is stuffed with garbage. Anti-vaccination websites make the front page of Google, and fact-free "news" stories spread like wildfire. Google has devised a fix – rank websites according to their truthfulness.
Google's search engine currently uses the number of incoming links to a web page as a proxy for quality, determining where it appears in search results. So pages that many other sites link to are ranked higher. This system has brought us the search engine as we know it today, but the downside is that websites full of misinformation can rise up the rankings, if enough people link to them.
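The link-counting idea the excerpt describes can be sketched as a toy version of the original PageRank recursion (this is an illustration, not Google's actual code; the graph and function names are invented for the example): pages gain rank from incoming links, weighted by the rank of the linking page.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank. links: dict mapping each page to the pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A page with many incoming links outranks one with few, regardless of
# whether its content is true -- the weakness the article says the
# truthfulness ranking is meant to address.
web = {
    "popular": [],
    "a": ["popular"], "b": ["popular"], "c": ["popular"],
    "obscure": ["a"],
}
ranks = pagerank(web)
```

In this toy web, "popular" ends up with the highest rank simply because three pages link to it; nothing in the computation looks at what any page actually says.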
Of course, the fact that vaccine manufacturers are exonerated from liability by the US vaccine court does not mean that they are doing the things anti-vaccine fanatics say they are. Italian courts don't judge vaccines the same way US courts do, but well, that's why we need a more truthful Google, isn't it?

Google will determine what's true using the Knowledge-Based Trust, which in turn will rely on sites "such as Snopes, PolitiFact and FactCheck.org, [...] websites [who] exist and profit directly from debunking anything and everything [and] have been previously exposed as highly partisan."

Wikipedia will also be part of the adventure.

What the intelligence community needs is an understanding of the constellation of threats to power, and those threats might not be the very useful terrorists of 9/11. More problematic are those who can lead masses of people to doubt that 19 novice pilots, alone and undisturbed, could fly planes into the World Trade Center on 9/11, or influential people like Robert F. Kennedy, who liken the USA's vaccine program to mass child abuse.

These ideas, and so many other 'garbage' ideas, are the soil in which organized resistance grows. This aggregate of ideas constitutes a powerful, coherent, attractive frame of reference for large, ever-expanding sections of society.

And this is why Google is such an asset to the NSA (and vice versa). Google is in charge of arming the NSA with Truth, which, conjoined with power, will create an all-knowing, all-seeing computer-being. By adding private communications to public webpages, Google will identify what is most crucial to 'debunk'; by adding public webpages to private communications, the NSA will be able to connect the personal to the collective.

And this, obviously, will only be possible through artificial intelligence.

Hassabis and his team [of Google's artificial intelligence program (DeepMind)] are creating opportunities to apply AI to Google services. The AI firm is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services [such as truth delivery?]. It could enhance YouTube recommendations for users, for example [...].

But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: 'Eventually, I think human extinction will probably occur, and technology will likely play a part in this.' He adds that he thinks Artificial Intelligence is the 'No. 1 risk for this century'. It's ominous stuff. [You can read more on that here.]

May


help us.