Let’s compare brains…

Mind Workstation is without doubt the most useful tool for experimenters. Over the course of my strange career I have spent many hours staring at oscilloscope signals – from transistor radios to marine VHF, hi-fi to ultrasonic measuring instruments, pocket calculators to mainframe computers. I have very little difficulty visualizing a pattern of sound waves and flashes; in fact, it’s the kind of thing I use instead of counting sheep to amuse my head when it won’t go to sleep. With Mind Workstation I can usually create a pattern I’ve visualized in a matter of minutes.

I’ve mentioned before my impression that sound interacts with visuals. Over the last few days I’ve been refining a session to highlight the effect and isolate the influences involved.

A sidebar here – in the process of setting this session up on my Acer laptop, I noticed a huge amount of crosstalk between channels. This surprised me, because I’ve already blogged here about the importance of turning off soundcard Dolby and environmental effects, and I know perfectly well that this had all been dealt with on this laptop. Into the Acer HDSound Setup via the control panel, and large as life, Dolby was on again. I have yet to find out in what strange hole this default behaviour is hidden, but the Acer resets to Dolby On with every restart. Lesson – check with the Mind Workstation or Neuroprogrammer Headphone Testing Tool often!
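If you want to sanity-check crosstalk yourself without either program, something as crude as the sketch below will do the job. It’s only an illustration (Python, using numpy and the third-party sounddevice library; the tone frequency and duration are arbitrary choices of mine): play a tone in the left channel only, and if you can hear anything in the right ear, channel mixing is happening somewhere.

```python
# Crude crosstalk check: a tone in the LEFT channel only.
# Anything audible in the right ear means the channels are being mixed
# (Dolby, environmental effects, etc.).
import numpy as np
import sounddevice as sd

RATE = 44100                                # samples per second
t = np.arange(RATE * 3) / RATE              # three seconds of samples
tone = 0.5 * np.sin(2 * np.pi * 440 * t)    # 440 Hz test tone

stereo = np.zeros((len(t), 2), dtype=np.float32)
stereo[:, 0] = tone                         # left channel only; right stays silent

sd.play(stereo, RATE)                       # listen for leakage in the right ear
sd.wait()
```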

Right, a session to highlight the influence of sound on visuals.

You can do everything required with Neuroprogrammer, but I’ve done it with Mind Workstation.

What we’re aiming to do is provide a complex visual pattern that will reveal any interference between the auditory centres and the visual centres. To do this we’ll prepare two entrainment tracks, two Audiostrobe tracks and one sound track. (In NP2 there are no separate entrainment tracks – the entrainment rate is set directly in the AS tracks.) Entrainment links in MWS are set so that one entrainment track controls each AS track, as sketched below.
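Both programs are GUI tools, so there is nothing to run here, but a rough sketch in Python notation (with field names entirely of my own invention) may make the track wiring clearer:

```python
# Paper sketch of the session layout described above; all names invented.
session = {
    "entrainment_tracks": [                  # MWS only; absent in NP2
        {"name": "ent_A", "rate_hz": None},  # rates chosen in the next step
        {"name": "ent_B", "rate_hz": None},
    ],
    "audiostrobe_tracks": [
        # one MWS "entrainment link" per AS track; in NP2 the rate
        # is set directly on the AS track instead
        {"name": "as_A", "linked_to": "ent_A"},
        {"name": "as_B", "linked_to": "ent_B"},
    ],
    "sound_tracks": [
        {"name": "music"},                   # the ordinary audio content
    ],
}
```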

I’ve found the best visuals occur in the 8-18Hz range. I decided on a high alpha/SMR session and set one entrainment track (AS track in NP2) to 11Hz and the other to 13Hz. My thinking was that 11 and 13, being prime, share no common factor, so the two flash trains never lock into a simple repeating relationship: the phase difference between them sweeps continuously (through a full cycle at the 2Hz difference rate), and the trains only come fully back into step once per second, after 11 cycles of one against 13 of the other. Slower drifts could be achieved by choosing frequencies closer together. With one AS track panned hard left and the other panned hard right, each frequency will control one colour (one frame with monocolour glasses), and interference effects will be seen as the phase relationship changes. This will create a recognisable, slowly changing background for the audio effect I hope your brain creates too. The way it works is very different depending on whether you’re using colour-mapped or frame-mapped glasses – check back on the Left Eye/Right Eye post here somewhere.
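To make the arithmetic concrete, here are a few lines of Python – nothing to do with either program, just a back-of-envelope check that works for any pair of integer flash rates:

```python
# Check when two integer-rate flash trains realign, and how fast
# their phase difference drifts.
from math import gcd

f1, f2 = 11, 13                    # flash rates in Hz
realign_s = 1 / gcd(f1, f2)        # joint pattern repeats every 1 s
drift_hz = abs(f2 - f1)            # phase difference sweeps at 2 Hz

print(f"trains realign every {realign_s:.0f}s: "
      f"{f1 * realign_s:.0f} cycles against {f2 * realign_s:.0f}")
print(f"phase difference completes a full sweep {drift_hz} times per second")
```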

In the first stage you’ll almost certainly see changes in the visual pattern that correspond to aspects of the audio. I did my testing with a Procyon set to red/green AS, and a sound track consisting of a few Nihilist tracks (including my all-time forever favourite track, Sunbeam from Fornax 4). These tracks contain quite a lot of high-frequency components – hi-hats, shakers, noise synthesis, etc. – and all that’s required to trigger Audiostrobe is a bit of audio somewhere close to 19.2kHz. Audiostrobe uses the left and right audio channels separately, so acoustic positioning in the audio track will influence the red or green channels (left/right frames). The occasional (or frequent?) flickers from the audio interact with the complex flash pattern to make very conspicuous changes to the visual imagery.
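For the curious, here is a rough sketch of the job a decoder has to do: watch each channel for energy near 19.2kHz and switch the corresponding LED. This is purely my own illustration (a Goertzel filter over 10ms blocks, with invented names and thresholds), not the actual Audiostrobe decoder:

```python
# Sketch of per-channel detection of the 19.2 kHz Audiostrobe control tone.
import numpy as np

RATE = 44100             # sample rate, Hz
AS_FREQ = 19200.0        # Audiostrobe control frequency, Hz
BLOCK = 441              # 10 ms analysis blocks

def goertzel_power(block: np.ndarray, freq: float, rate: int) -> float:
    """Signal power at a single frequency, via the Goertzel algorithm."""
    w = 2.0 * np.pi * freq / rate
    coeff = 2.0 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for x in block:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def led_states(stereo: np.ndarray, threshold: float):
    """Per 10 ms block: (left_led_on, right_led_on) from 19.2 kHz energy."""
    states = []
    for i in range(0, len(stereo) - BLOCK + 1, BLOCK):
        left = goertzel_power(stereo[i:i + BLOCK, 0], AS_FREQ, RATE)
        right = goertzel_power(stereo[i:i + BLOCK, 1], AS_FREQ, RATE)
        states.append((left > threshold, right > threshold))
    return states
```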

The next trick comes when we isolate the sound from the Audiostrobe tracks. In MWS this is done by applying a lowpass filter with a cutoff around 15kHz, to ensure that there’s no sound energy left around the 19.2kHz AS control frequency. In NP2 it’s done by tweaking Low-Pass Intensity (under Customize, Volume/Intensities) until the audio no longer triggers Audiostrobe. (To check this in MWS, mute the AS tracks so only the music is driving the lights; I can’t think of a quick way to check with NP2.)
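If you want to try the same lowpass step on an audio file outside either program, a sketch like this does the equivalent job (Python with scipy; the filter order, the 15kHz cutoff and the file names are my own choices):

```python
# Roll the music off around 15 kHz so nothing survives near the
# 19.2 kHz Audiostrobe control frequency.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("music.wav")    # hypothetical input file
audio = audio.astype(np.float64)

# 8th-order Butterworth lowpass, cutoff 15 kHz
sos = butter(8, 15000, btype="lowpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio, axis=0)  # zero-phase filtering

wavfile.write("music_lowpassed.wav", rate,
              np.clip(filtered, -32768, 32767).astype(np.int16))
```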

I placed the tracks from Nihilist so they start a couple of minutes into the session, with a few minutes between them, so that I could accustom myself to the AS visuals in silence. Even with the audio having no external effect on the LED control, conspicuous, nay, dramatic changes to the visuals occurred.

For myself, this represents the easiest way to create an Audiostrobe accompaniment to self-chosen music. Shorter pattern periods could be used with faster-changing tracks, longer periods with less structured pieces. With the music volume comfortably high, the rapid changes in the imagery, synchronous with the music, are as good as many I’ve seen on Audiostrobe CDs, where I know the lightshow was designed to accompany the music. What I find truly remarkable is that what I’m seeing is completely uncontrived – it is a natural visual manifestation of the music itself, and its complexity eats any automatic sound-to-light conversion I’ve seen for breakfast.

What I don’t know, of course, is whether anyone else’s brain is wired up to behave like mine. What I do know is that your visual system is pretty much identical to mine, and that brings me to the need to explain how I understand what I’m seeing.

The question of where to look during a session keeps coming up. The answer is to forget your eyes. Although your eyes do some initial image processing, when they’re presented with a diffuse image of a featureless flashing light you might as well consider them nothing more than lightpipes to the visual cortex. When you are “looking”, a whole bunch of automatic processing is enabled and your perceived field of vision changes. For a start, remember that when your eyes are closed, you are not looking at an image projected on your retina – the only air/optic surface of the eye is in immediate contact with the inside of the eyelid, so there are no optics at work here. What you have is a hemispherical surface bathed in fairly even, diffuse light. It’s interesting to think about that hemisphere – isn’t it amazing that we can recognise straight lines?

So, whatever it is that you think you see, it’s been fabricated somewhere in the visual cortex. I’m convinced that it happens no earlier in the visual system, because microelectrode studies of monkey brains have shown strong one-to-one mapping of viewed patterns onto the cortex. Based on this assumption/deduction/guess, one of the things I do once I’m settled into a session is “switch” from forward-looking predator mode into full-field perception of the visual representation of the outside world as formed in my visual cortex. In practice, I imagine that the inner “I” looks towards the back of my head to see what is in front of my face.

A further application of these ideas is in considering the classic OOBE experience – seeing yourself lying below. I have had this repeatedly now, and I’ve come to recognise it as me “looking at” my internal representation of myself and my surroundings. Likewise, in lucid dreaming, my point of view shifts from first to third person constantly, as I “see” my internal representation and then “be” my internal representation.

So, give it a go and let me know how you get on. It will put a lot of what I’ve already said into a whole new light if it turns out that nobody sees what I see!

Cheers,
Craig
