An experiment: hold out your right arm and then slowly flex your wrist.

Which comes first – the thought to flex your wrist, or the action itself?

In the 1980s, an experiment by the neurologist Benjamin Libet raised some disturbing questions about how far we are consciously in control of a decision like this.

In his lab, Libet hooked a group of volunteers up to an EEG machine, measuring brain activity through electrodes on the scalp. Using a precise timer, he asked them to record the moment they became aware of the urge to move their wrist. On average, that conscious urge arrived about 200 milliseconds before the action itself.

But the recordings also revealed another signal, a "readiness potential" that began roughly 550 milliseconds before the movement. In other words, the brain was gearing up to act around 350 milliseconds before the volunteers were conscious of any urge at all.
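
The arithmetic is worth laying out explicitly. A minimal sketch of the timeline, using Libet’s reported averages:

```python
# Libet's reported averages, with all times relative to the moment
# of movement (t = 0 ms).
READINESS_POTENTIAL_MS = -550   # EEG readiness potential begins
CONSCIOUS_URGE_MS = -200        # volunteer reports awareness of the urge
ACTION_MS = 0                   # the wrist actually flexes

# Unconscious preparation: brain activity preceding any reportable urge.
unconscious_lead = CONSCIOUS_URGE_MS - READINESS_POTENTIAL_MS
print(f"Unconscious preparation: {unconscious_lead} ms")  # 350 ms

# What Libet later called the veto window: the time left to consciously
# cancel the act after becoming aware of the urge.
veto_window = ACTION_MS - CONSCIOUS_URGE_MS
print(f"Conscious veto window: {veto_window} ms")         # 200 ms
```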

I’ve been thinking about this experiment recently in the context of technology.

I’m convinced we will soon be able to give artificial intelligence access to our intimate thoughts and impulses, allowing biosensors to relay the data to networks where it will be parsed and processed within milliseconds.

We will be presented with a branching tree of decisions about almost every routine in our day – what to eat, what to listen to, to whom we speak, the small lessons we might learn to incrementally adapt our lifestyle.

In fact, in recent weeks I’ve been digging into research on technology that directly manipulates the brain.

And there is some fascinating (and disturbing) stuff going on.

I wanted to give you three examples today.

1. Reading your emotions

One of the most interesting areas of brain manipulation right now is in “affective computing”.

Companies in this field are developing tools that track our emotions.

Affectiva, an emotion AI company based in Boston, specialises in advertising research.

Their technology captures visceral, subconscious reactions, which correlate strongly with actual consumer behaviour, such as sharing the ad or buying the product.

With a customer’s consent, the technology uses the person’s phone or laptop camera to capture their reactions while they watch a particular advertisement.

They then track your mood and behaviour.

It could be your facial expression.

Or the way your pupils dilate.

By studying the way your face changes over the course of just a few minutes, the company can build powerful evidence about how interested you are and what you are likely to do next.
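
Mechanically, the pipeline is simpler than it sounds: find the face in each frame, crop it, and score it with a trained classifier. Here is a minimal sketch in that spirit, where OpenCV’s bundled face detector is real but emotion_model is a hypothetical stand-in for the proprietary models such companies train:

```python
import time
import cv2  # pip install opencv-python

def emotion_model(face_crop):
    """Hypothetical placeholder for a trained emotion classifier;
    a real one returns per-frame scores for joy, surprise, etc."""
    return {"joy": 0.0, "surprise": 0.0}

# OpenCV ships a classic Haar-cascade face detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)   # the viewer's webcam, with consent
reactions = []                  # (timestamp, emotion scores)

for _ in range(300):            # roughly ten seconds at 30 fps
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        reactions.append((time.time(), emotion_model(face)))

capture.release()
```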

As Affectiva’s Gabi Zijderveld told the MIT Technology Review recently:

“I think as of this month we’ve probably tested more than 40,000 ads in 87 countries, and we’ve analysed more than seven and a half million faces.

“And that’s enabled us to build a product that can also help these advertisers predict key performance indicators in advertising. And that emotion data can actually help them predict the likelihood of content to go viral, or purchase intent or sales lift.”

The start-up Empath has also been developing AI that can detect emotions through voice analysis over the phone.

By analysing physical properties such as speed, volume, pitch and tone rather than the language itself, they are developing high-fidelity readings of emotions such as joy, anger, calm and sadness.

And companies can use these to check the mood of their employees.

Or to develop apps that appeal to the mood of the customer.
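
To make “analysing physical properties” concrete, here is a sketch that pulls volume, pitch and speaking-rate features from a recording using the open-source librosa library. The filename is hypothetical, and the classifier that would turn these features into joy/anger/calm/sadness scores is the proprietary part, so it is left as a comment.

```python
import numpy as np
import librosa  # pip install librosa

# Hypothetical recording of one side of a phone call.
audio, sr = librosa.load("call_recording.wav", sr=16000)

# Volume: root-mean-square energy per frame.
volume = librosa.feature.rms(y=audio)[0]

# Pitch: fundamental frequency estimated with the YIN algorithm.
pitch = librosa.yin(audio, fmin=60, fmax=400, sr=sr)

# Speed: a rough proxy via the rate of syllable-like onset events.
onsets = librosa.onset.onset_detect(y=audio, sr=sr)
speaking_rate = len(onsets) / (len(audio) / sr)

features = np.array([volume.mean(), volume.std(),
                     np.nanmean(pitch), np.nanstd(pitch),
                     speaking_rate])
# A real system would feed `features`, computed per short window,
# into a trained classifier producing joy/anger/calm/sadness scores.
```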

It’s early days.

But AI is already getting very good at reading and responding to our voices.

There has been huge progress in facial recognition.

And I don’t think it will be long before artificial intelligence becomes even more invasive…

2. Controlling your impulses

Mark Zuckerberg, for one, speaks of an almost telepathic means of communication by 2030: “we are going to be able to communicate our full sensory experience to someone through thought via headwear that can scan our brains and then transmit our thoughts to our friends much as we share baby pictures on Facebook today.”

There is already evidence that machines can read and act on these impulses.

CTRL-labs has demonstrated an electrode-studded wristband that translates the neural impulses destined for your hand into digital commands.

Let’s say you have a thought: I’m going to wave my hand.

The wristband can pick up the resulting signal at your wrist before you are conscious of it.

And it can use that signal to send commands to a computer.

The CTRL-kit uses electromyography (EMG), reading the bursts of electrical activity as they travel from your motor cortex, down your arm, towards your hand.

The company’s aim is to create computers that work as natural extensions of your thoughts and gestures.

Think of the computers in Steven Spielberg’s Minority Report, actively responding to your thoughts and gestures.

You can view a quick demonstration here.
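
The decoding idea is easy to caricature in code. Here is a toy sketch that treats a command as nothing more than a burst of activity crossing a threshold; real systems like the CTRL-kit learn far richer patterns with machine learning, and every name and number below is illustrative.

```python
import numpy as np

SAMPLE_RATE = 1000          # samples per second per electrode
WINDOW = 50                 # 50 ms analysis window
THRESHOLD = 0.8             # burst amplitude, arbitrary units

def detect_commands(emg_stream):
    """Yield a command whenever a window's mean amplitude spikes."""
    for start in range(0, len(emg_stream) - WINDOW, WINDOW):
        window = emg_stream[start:start + WINDOW]
        amplitude = np.abs(window).mean()   # rectified mean amplitude
        if amplitude > THRESHOLD:
            yield ("CLICK", start / SAMPLE_RATE)  # map burst -> action

# Simulated signal: quiet baseline plus one burst of motor activity.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 2000)
signal[1200:1300] += rng.normal(0, 2.0, 100)    # the "I'll wave" impulse

for command, t in detect_commands(signal):
    print(f"{command} at t={t:.2f}s")
```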

Meanwhile, Nissan has been working on “brain-to-vehicle” technology.

In the trial runs I’ve seen, the driver wears a headset that reads electrical activity in the brain, from which the system infers reaction times, alertness, even emotional states.

By recognising when you are about to brake or swerve, the system can shave between 0.2 and 0.5 seconds off your reaction time.
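
In outline this is a detection problem: spot the motor-preparation signature in the headset stream and pre-arm the car before the foot moves. A minimal sketch, in which both the detector and the vehicle interface are hypothetical:

```python
import numpy as np

def motor_preparation_score(eeg_window: np.ndarray) -> float:
    """Hypothetical stand-in for a trained detector of the readiness
    potential that precedes a braking movement."""
    return float(np.abs(eeg_window).mean())  # placeholder heuristic

def on_eeg_window(eeg_window: np.ndarray, vehicle) -> None:
    """Called for each fresh window of headset data, ideally landing
    0.2-0.5 s before the foot reaches the pedal."""
    if motor_preparation_score(eeg_window) > 0.7:  # tuned threshold
        # Hypothetical vehicle API: build brake pressure now, so the
        # car is already responding when the pedal finally moves.
        vehicle.pre_charge_brakes()
```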

The drivers do experience a slight discomfort as control shifts from human to machine.

But they seem to get over it quickly.

3. Editing your IQ

Then there’s the controversial business of enhancing the brain by editing our genes.

Recently, Professor He Jiankui made news by announcing that he had used the gene-editing technology CRISPR to alter two embryos so that they would be resistant to HIV infection later in life.

Since then it has emerged that he may have also “accidentally” improved a number of their cognitive functions.

In fact, researchers have been using CRISPR for some time to alter activity and function in the brains of mice.

Just this week, researchers announced that they had partially restored cellular activity in the disembodied brains of pigs four hours after the animals were slaughtered.

You are going to see a lot of stories like this over the next year.

The Chinese authorities are very serious about trying to learn how to boost the intelligence of their population.

Nick Bostrom, founding director of the Future of Humanity Institute at Oxford University, believes that genetic editing will be key to this.

By using genetic screening of embryos before implantation, parents could find and select the embryos that possess “turned-on” alleles for high intelligence, says Bostrom.

He reckons that selecting embryos after sequencing, coupled with embryonic genetic manipulation, could enhance IQs by 60 points.
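
To get a feel for where figures like that come from, here is a toy simulation of a single round of embryo selection. It assumes, simplistically, that a polygenic score can perfectly rank sibling embryos on a normally distributed trait; the 7.5-point sibling standard deviation is an illustrative assumption, not Bostrom’s actual model.

```python
import numpy as np

rng = np.random.default_rng(42)
SIBLING_SD = 7.5    # assumed SD of the trait among siblings, in IQ points
N_EMBRYOS = 10      # embryos screened per round
TRIALS = 100_000    # simulated families

# For each trial, generate N sibling scores and keep the best one.
scores = rng.normal(0.0, SIBLING_SD, size=(TRIALS, N_EMBRYOS))
gain = scores.max(axis=1).mean()
print(f"Average gain from picking the best of {N_EMBRYOS}: "
      f"{gain:.1f} IQ points")
# Roughly 11.5 points under these assumptions. Gains anywhere near
# 60 points require iterating selection across generations, which is
# where the larger estimates come from.
```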

We can imagine a shopping list of new capacities: super-fast thinking and calculation, powerful visualisation, the ability to execute multiple analyses or trains of thought in parallel.

The last 200 milliseconds

But won’t our brains revolt against all this manipulation?

In the wake of controversy over his experiments, Benjamin Libet spoke of a veto: in the 200 milliseconds between becoming aware of an urge and acting on it, we retain the power to cancel the act.

He even presented a useful decision timeline in his book Mind Time: The Temporal Factor in Consciousness.

In the next decade, as machine learning algorithms become increasingly independent and sophisticated, constantly fed with sensor data and processing it without delay, we will begin to recognise a tension between the conscious impulse that arrives 200 milliseconds before we act and the decision we finally take.

I think that window will get shorter and shorter as we recognise how useful this information is, trusting the AI and inviting it into our domestic routines.