Audio Visualiser

Real-time audio visualization with Processing, the HYPE library, and D3.js

Created a generative audio visualiser in Processing, producing animation driven by dynamic colour, sprite textures, and sound as input.

In this picture: on the left is Yan, my brother from another mother, AKA the Alchemist, an entrepreneur who owns a speciality coffee shop and makes the best coffee in the world (I couldn't have done this without his amazing coffees); in the centre is the legend himself, Joshua Davis, the master of generative art, a wizard of design and creativity, and an inspiration to everyone; and on the right is me, Gaurav Jaikish, a very passionate, fun-loving interaction designer.

During my master's degree at Harbour Space in Barcelona, Spain, I had the privilege of learning from Joshua Davis, a legendary generative artist. His work using Processing and code to create visualizers and digital art is unparalleled. He also created the HYPE library, which makes it easier to create visuals with code.

In a three-week course with Davis, I learned a great deal about generative art, from its fundamentals to bringing visuals to life with sound as input. His personality and teaching inspired me greatly.

For my final work, I mainly used two tools: Midjourney, a generative AI image program, and Processing with the HYPE library. I used Midjourney to create the artwork, then used Processing and HYPE to bring it to life with an uncompressed audio track as input.
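The audio input drives the animation through a small number of frequency bands rather than the full spectrum (the sketch below uses 16). As a rough illustration of how a spectrum gets reduced to a handful of band values, here is a minimal plain-Java sketch, similar in spirit to Minim's `fft.linAverages()`; the class and method names here are illustrative, not the project's actual code.

```java
// Illustrative sketch: reduce an FFT spectrum (one magnitude per bin)
// down to a few averaged bands that can drive animation parameters.
// Assumes spectrum.length is evenly divisible by numBands.
public class BandAverager {
    public static float[] linAverages(float[] spectrum, int numBands) {
        float[] bands = new float[numBands];
        int binsPerBand = spectrum.length / numBands;
        for (int b = 0; b < numBands; b++) {
            float sum = 0;
            for (int i = 0; i < binsPerBand; i++) {
                sum += spectrum[b * binsPerBand + i];
            }
            bands[b] = sum / binsPerBand; // average magnitude of this band
        }
        return bands;
    }
}
```

Each averaged band can then be mapped to an oscillator speed, a scale factor, or a rotation rate, which is how a bass hit or a hi-hat ends up moving a sprite.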

year

2022

timeframe

5 days

tools

Processing & HYPE library

category

Generative sound visualizer

Click this to check out the live website 👉

01

Video of the Audio Visualizer I made

(for the best experience, watch in fullscreen and turn the resolution up to 1080p)

02

A few screenshots of the audio visualizer I made

03

import hype.*;
import hype.extended.behavior.HOscillator;
import hype.extended.layout.HSphereLayout; 

int         stageW         = 1920;
int         stageH         = 1080;
color       clrBG          = #242424;
String      pathAssets     = "../../../assets/";

// ************************************************************************************

// AUDIO VARIABLES

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim       minim;
AudioPlayer myAudioPlayer;
String      whichAudioFile = "Mandragora - Mind Reconfiguration Program.wav";
//String      whichAudioFile = "Audio2-16.wav";
//String      whichAudioFile = "Audio3-16.mp3";
AudioInput  myAudioInput;
boolean     myAudioToggle  = true; // true = myAudioPlayer / false = myAudioInput
boolean     showVisualizer = false;

FFT         myAudioFFT;

int         myAudioRange   = 16; // 256 / 128(2) / 64(4) / 32(8) / 16(16)
int         myAudioMax     = 100;

float       myAudioAmp;
float       myAudioIndex;
float       myAudioIndexAmp;
float       myAudioIndexStep;

boolean     useTimeCodes     = true;
//                              1   2      3      4      5      6      7       8       9       10      11      12      13      14      15      16      17      18      19
int[]       timeCode         = { 0, 14930, 20000, 30000, 50000, 80000, 100000, 160000, 190000, 200000, 240000, 300000, 350000, 400000, 450000, 500000, 550000, 600000, 700000 };
int         timeCodeLength   = timeCode.length;
int         timeCodePosition = 0;
int         timeCodeFuture   = timeCodePosition+1;

float[]     myAudioData    = new float[myAudioRange]; // KEEP A RECORD OF ALL THE NUMBERS IN AN ARRAY

// ************************************************************************************

// COLOR HANDLING

String      whichImg       = pathAssets + "rainbow.png";
PImage      clrs;
int         clrsW;
float       clrCount;
float       clrSpeed       = 0.02; // the speed of the color change
float       clrOffset      = 0.0025; // phase offset between neighbouring sprites in the color cycle

// ************************************************************************************

// VARS TO RENDER SOME IMAGES

boolean     letsRender     = false; // RENDER YES OR NO
int         renderModulo   = 100;   // RENDER AN IMAGE EVERY N FRAMES
int         renderNum      = 0;     // FIRST IMAGE
int         renderMax      = 20;    // HOW MANY IMAGES
String      renderPATH     = "../renders_001/";

// ************************************************************************************

// LOAD IN TEXTURES TO MAP TO THE SPRITES

// back / left / right / top / bottom / front

String[]    texNames       = { "01.png", "02.png", "03.png", "04.png", "05.png", "06.png", "07.png", "08.png", "09.png", "10.png", "11.png", "12.png", "12.png", "13.png", "14.png", "15.png", "16.png", "17.png", "18.png", "19.png", "20.png" };
int         texNamesLen    = texNames.length;
PImage[]    tex            = new PImage[texNamesLen];

// *********************************************************************************************

int         numAssets      = 318; // 100 // 318

int         layoutRadius   = 2000;
int         layoutStartX   = 0;
int         layoutStartY   = 0;
int         layoutStartZ   = 500;

HSphereLayout layout;

PVector[]   pos            = new PVector[numAssets];

// *********************************************************************************************

HOscillator masterRX, masterRY, masterRZ, masterP;

HOscillator[] oscRX        = new HOscillator[numAssets];
HOscillator[] oscRY        = new HOscillator[numAssets];
HOscillator[] oscRZ        = new HOscillator[numAssets];
HOscillator[] oscS         = new HOscillator[numAssets];

// *********************************************************************************************

int[]       myPickedAudio  = new int[numAssets];

// *********************************************************************************************

void settings() {
	size(stageW, stageH, P3D);
	fullScreen();
}

void setup() {
	H.init(this);
	background(clrBG);
	audioSetup();

	clrs = loadImage(whichImg);
	clrsW = clrs.width-1;

	// LOAD THE TEXTURES
	for (int i = 0; i < texNamesLen; ++i) {
		tex[i] = loadImage(pathAssets + texNames[i]);
	}
	textureMode(NORMAL);

	// BUILD THE SPHERE and OSC
	layout = new HSphereLayout().loc(layoutStartX,layoutStartY,layoutStartZ).radius(layoutRadius).ignorePoles().offsetRows(true);

	layout.useSpiral()  // tells layout to use the Fibonacci spiral layout calculations
		.numPoints(numAssets) // how many points to plot on the sphere. This can be the same number as objects in your pool
	;

	for (int i = 0; i < numAssets; ++i) {
		pos[i] = layout.getNextPoint();
		myPickedAudio[i] = (int)random(16);

		oscRX[i] = new HOscillator().range(-180, 180).speed(1).freq(1).currentStep(i*3).waveform(H.SINE);
		oscRY[i] = new HOscillator().range(-180, 180).speed(1).freq(1).currentStep(i*3).waveform(H.SINE);
		oscRZ[i] = new HOscillator().range(-180, 180).speed(1).freq(1).currentStep(i*3).waveform(H.SINE);
		oscS[i]  = new HOscillator().range(10, 500).speed(1).freq(10).currentStep(i).waveform(H.SINE);
	}

	masterRX = new HOscillator().range(-180, 180).speed(0.1).freq(0.9).waveform(H.SINE);
	masterRY = new HOscillator().range(-180, 180).speed(0.1).freq(0.8).waveform(H.SINE);
	masterRZ = new HOscillator().range(-180, 180).speed(0.1).freq(0.7).waveform(H.SINE);
	masterP = new HOscillator().range(1.2, 3.0).speed(0.1).freq(5).waveform(H.SINE);
}

void draw() {
	// background( clrBG );

// ************************************************************************************

	float _MRX = map(myAudioData[ myPickedAudio[2] ], 0, myAudioMax, 0.05, 0.75);
	masterRX.speed(_MRX);
	masterRX.nextRaw();

	float _MRY = map(myAudioData[ myPickedAudio[4] ], 0, myAudioMax, 0.05, 0.75);
	masterRY.speed(_MRY);
	masterRY.nextRaw();

	float _MRZ = map(myAudioData[ myPickedAudio[6] ], 0, myAudioMax, 0.05, 0.75);
	masterRZ.speed(_MRZ);
	masterRZ.nextRaw();

	float _MS = map(myAudioData[ myPickedAudio[0] ], 0, myAudioMax, 0.05, 0.5);
	masterP.speed(_MS);
	masterP.nextRaw();

	//blendMode(LIGHTEST);
	//blendMode(OVERLAY);
	//blendMode(SUBTRACT);
	//blendMode(BURN);
	//blendMode(DODGE);
	//blendMode(SOFT_LIGHT);
	//blendMode(HARD_LIGHT);
	//blendMode(SCREEN);
	//blendMode(MULTIPLY);
	//blendMode(EXCLUSION);
	//blendMode(DIFFERENCE);



	push();
		translate(stageW/2, stageH/2, 0);

		perspective(PI/masterP.curr(), (float)(width*2)/(height*2), 0.5, 1000000);

		rotateX(radians(masterRX.curr()));
		rotateY(radians(masterRY.curr()));
		rotateZ(radians(masterRZ.curr()));




		// Blend mode per timecode section (replaces a 21-case switch)
		int[] sectionBlend = {
			SCREEN, LIGHTEST, LIGHTEST, DIFFERENCE, LIGHTEST,
			SCREEN, SCREEN, SCREEN, LIGHTEST, LIGHTEST,
			DIFFERENCE, LIGHTEST, SCREEN, SCREEN, SCREEN,
			SCREEN, SCREEN, SCREEN, SCREEN, SCREEN, SCREEN
		};
		if (timeCodePosition < sectionBlend.length) blendMode(sectionBlend[timeCodePosition]);


		for (int i = 0; i < numAssets; ++i) {
			HOscillator _oscRX = oscRX[i];
			float _aRX = map(myAudioData[ myPickedAudio[i] ], 0, myAudioMax, 0.0, 0.75);
			_oscRX.speed(_aRX);
			_oscRX.nextRaw(); 

			HOscillator _oscRY = oscRY[i];
			float _aRY = map(myAudioData[ myPickedAudio[i] ], 0, myAudioMax, 0.0, 0.5);
			_oscRY.speed(_aRY);
			_oscRY.nextRaw(); 

			HOscillator _oscRZ = oscRZ[i];
			float _aRZ = map(myAudioData[ myPickedAudio[i] ], 0, myAudioMax, 0.0, 0.25);
			_oscRZ.speed(_aRZ);
			_oscRZ.nextRaw(); 

			HOscillator _oscS = oscS[i];
			float _aS = map(myAudioData[ myPickedAudio[i] ], 0, myAudioMax, 0.0, 1.0);
			_oscS.speed(_aS);
			_oscS.nextRaw(); 

			push();
				translate(pos[i].x, pos[i].y, pos[i].z );
				scale(_oscS.curr());

				rotateX(radians(_oscRX.curr()));
				rotateY(radians(_oscRY.curr()));
				rotateZ(radians(_oscRZ.curr()));

				float wave = sin( clrCount+(i*clrOffset) );
				float waveMap = map(wave, -1, 1, 0, clrsW);
				tint( clrs.get((int)waveMap,0), 255 );


				
				// Texture per timecode section (replaces a 21-case switch)
				if (timeCodePosition < texNamesLen) buildCube(tex[timeCodePosition]);


			pop();
		}
	pop();

	blendMode(BLEND);

// ************************************************************************************

	perspective(PI/3.0, (float)(width*2)/(height*2), 0.5, 1000000);

	strokeWeight(0);
	noStroke();
	fill(0, 15);
	rect(0, 0, width, height);

	noLights();
	audioUpdate();
	clrCount += clrSpeed;

	if(frameCount%(renderModulo)==0 && letsRender) {
		save(renderPATH + renderNum + ".png");
		renderNum++;
		if(renderNum>=renderMax) exit();
	}
}

// ************************************************************************************

void keyPressed() {
	switch (key) {
		case '1': if(!myAudioToggle){myAudioInput.close();} myAudioToggle = true;  minim.stop(); audioSetup(); break; // audioPlayer
		case '2': if(myAudioToggle){myAudioPlayer.close();} myAudioToggle = false; minim.stop(); audioSetup(); break; // audioInput

		case 's': myAudioPlayer.pause();  break;
		case 'p': myAudioPlayer.play();   break;
		case 'm': myAudioPlayer.mute();   break;
		case 'u': myAudioPlayer.unmute(); break;

		case 'v': showVisualizer = !showVisualizer; break;
	}
}
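The sketch above calls `audioSetup()`, `audioUpdate()`, and `buildCube()`, which are defined in other tabs and not shown here. One piece worth spelling out is how `timeCodePosition` advances: given the `timeCode[]` array of millisecond marks, the update step presumably moves to the next section once the player's position passes the next mark. The following plain-Java sketch is my guess at that logic, with variable names mirroring the sketch; it is not the project's actual `audioUpdate()`.

```java
// Illustrative sketch of the timecode advance: when the playback
// position (in ms) passes the next entry in timeCode[], move
// timeCodePosition forward so draw() switches blend mode and texture.
public class TimeCodeTracker {
    int[] timeCode;
    int timeCodePosition = 0;
    int timeCodeFuture = 1;

    public TimeCodeTracker(int[] timeCode) {
        this.timeCode = timeCode;
    }

    // Call once per frame with the player position in milliseconds.
    public void update(int positionMs) {
        while (timeCodeFuture < timeCode.length && positionMs >= timeCode[timeCodeFuture]) {
            timeCodePosition = timeCodeFuture; // enter the next section
            timeCodeFuture++;
        }
    }
}
```

With Minim, `positionMs` would come from `myAudioPlayer.position()`, and the `while` loop keeps the tracker correct even if a frame is dropped and two marks pass at once.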

Screenshots of the Audio Visualizer

.say hello

I'm open to freelance projects. Feel free to email me to see how we can collaborate.
