Search Results

Search found 42 results on 2 pages for 'amplitude'.

Page 1/2 | 1 2  | Next Page >

  • graphing amplitude

    - by John
    I was wondering if someone could point me to a good tutorial or show me how to graph the amplitude from a byte array. The audio format I am using is: ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame.
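
    For this format each byte is a G.711 µ-law (ULAW) code word, so the first step is expanding it to a linear sample before plotting anything. A minimal sketch of the standard µ-law expansion in Java (the method name is illustrative; the constants are the usual G.711 bias and masks):

        // Expand one 8-bit µ-law code word to a linear sample (~16-bit range).
        static int ulawToLinear(byte code) {
            int u = ~code & 0xFF;                 // µ-law bytes are stored inverted
            int t = ((u & 0x0F) << 3) + 0x84;     // mantissa, re-biased
            t <<= (u & 0x70) >> 4;                // apply the 3-bit exponent
            return (u & 0x80) != 0 ? 0x84 - t : t - 0x84; // sign, remove bias
        }

    With the samples decoded, the graph is just sample index (time, at 8000 samples per second) on the x axis against the decoded value on the y axis.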

    Read the article

  • Difference in amplitude from the same source using FFT

    - by Anders
    Hi, I have a question regarding the use of FFT. Using the function getBand(int i) with Minim, I can extract the amplitude of a specific frequency and make pretty maps of it. Works great. However, this is more of a curiosity question: when I look at the values extracted from playing the same song twice, at the same frequency (so the amplitude should be identical), I get very different values - why is this?

        0.0,0.0,0.0,0.0,0.0,0.08706585,0.23708777,0.83046436,0.74603105,0.30447206
        0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.08706585,0.4790409,0.9608221,0.83046436,0.74603105

    Read the article

  • Signal amplitude against time in Java

    - by wsr74ws84
    I'm racking my brain in order to solve a knotty problem (at least for me). While playing an audio file (using Java) I want the signal amplitude to be displayed against time. I mean I'd like to implement a small panel showing a sort of oscilloscope (spectrum analyzer). The audio signal should be viewed in the time domain (vertical axis is amplitude and the horizontal axis is time). Does anyone know how to do it? Is there a good tutorial I can rely on? Since I know very little about Java, I hope someone can help me.
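
    One workable approach, sketched below under some assumptions (16-bit signed little-endian mono PCM; the class name is illustrative): decode the raw bytes into samples, then in a Swing panel map sample index to x and amplitude to y.

        import javax.swing.*;
        import java.awt.*;

        // Minimal oscilloscope-style view: amplitude (y) against time (x).
        class WaveformPanel extends JPanel {
            private short[] samples = new short[0];

            void setAudioBytes(byte[] pcm) { // 16-bit little-endian mono
                short[] s = new short[pcm.length / 2];
                for (int i = 0; i < s.length; i++)
                    s[i] = (short) ((pcm[2 * i] & 0xFF) | (pcm[2 * i + 1] << 8));
                samples = s;
                repaint();
            }

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int w = getWidth(), h = getHeight();
                for (int x = 1; x < w && samples.length > 1; x++) {
                    // Pick the samples that fall at the previous and current pixel.
                    int i0 = (x - 1) * (samples.length - 1) / (w - 1);
                    int i1 = x * (samples.length - 1) / (w - 1);
                    int y0 = h / 2 - samples[i0] * h / 2 / 32768;
                    int y1 = h / 2 - samples[i1] * h / 2 / 32768;
                    g.drawLine(x - 1, y0, x, y1);
                }
            }
        }

    While the clip plays, feed the panel the chunk of bytes around the current playback position (for example from the same stream a SourceDataLine is consuming) and let repaint() redraw it. Note this is a time-domain oscilloscope view; a spectrum analyzer would additionally run an FFT per chunk.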

    Read the article

  • Normalize amplitude and phase with c#

    - by Lehto
    Hey, I'm in a situation where I need to do some math-related stuff in C#, and for that I need some external libraries. The tool I'm looking for should do the following: process sound (wave/mp3), normalize the amplitude, and normalize the phase. Any idea which way to go? And is there a big difference if I do it on mp3 instead of wav? Michael.
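
    Amplitude (peak) normalization is only a few lines once the audio is decoded to raw samples; the sketch below is in Java for concreteness, but the arithmetic carries over directly to C#. ("Normalizing the phase" is not a single standard operation, so it is not shown.)

        // Scale all samples so the loudest peak reaches the target fraction
        // of full scale (e.g. 0.95f).
        static void normalizePeak(float[] samples, float target) {
            float peak = 0f;
            for (float s : samples) peak = Math.max(peak, Math.abs(s));
            if (peak == 0f) return; // silence: nothing to scale
            float gain = target / peak;
            for (int i = 0; i < samples.length; i++) samples[i] *= gain;
        }

    As for mp3 versus wav: wav already holds raw PCM, while mp3 must be decoded to PCM first and re-encoded afterwards, a lossy round trip - the processing itself is identical, but the mp3 path needs a decoder/encoder and costs some quality.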

    Read the article

  • After Effect - Triangle Wave (JavaScript)

    - by PJ Palomaki
    Hi, I'm trying to control a parameter with AE expressions and I need a triangle wave. I've got this so far:

        freq = 20;
        amplitude = 8;
        m = amplitude;
        i = time*freq;
        m - (i % (2*m) - m);

    Unfortunately this gives a saw wave (see below) and my math's a bit rusty - any takers? Thanks! PJ http://i25.photobucket.com/albums/c73/pjburnhill/Screenshot2010-04-09at153815.png
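
    The inner expression (i % (2*m) - m) is a ramp from -m to m, so subtracting it from m just produces another ramp - hence the saw. Folding the ramp with an absolute value turns it into a triangle; one possible fix, written in the same expression syntax (untested in AE, but the math is standard):

        m - Math.abs(i % (2*m) - m);

    This rises from 0 to m and falls back once every 2*m units of i; note the period is 2*m units of i, so the effective repetition rate depends on both freq and amplitude.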

    Read the article

  • Graphing the pitch (frequency) of a sound

    - by Coronatus
    I want to plot the pitch of a sound into a graph. Currently I can plot the amplitude. The graph below is created by the data returned by getUnscaledAmplitude():

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
                new BufferedInputStream(new FileInputStream(file)));
        byte[] bytes = new byte[(int) (audioInputStream.getFrameLength())
                * (audioInputStream.getFormat().getFrameSize())];
        audioInputStream.read(bytes);

        // Get amplitude values for each audio channel in an array.
        graphData = type.getUnscaledAmplitude(bytes, this);

        public int[][] getUnscaledAmplitude(byte[] eightBitByteArray, AudioInfo audioInfo) {
            int[][] toReturn = new int[audioInfo.getNumberOfChannels()]
                    [eightBitByteArray.length / (2 * audioInfo.getNumberOfChannels())];
            int index = 0;

            for (int audioByte = 0; audioByte < eightBitByteArray.length;) {
                for (int channel = 0; channel < audioInfo.getNumberOfChannels(); channel++) {
                    // Do the byte to sample conversion.
                    int low = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int high = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int sample = (high << 8) + (low & 0x00ff);

                    if (sample < audioInfo.sampleMin) {
                        audioInfo.sampleMin = sample;
                    } else if (sample > audioInfo.sampleMax) {
                        audioInfo.sampleMax = sample;
                    }

                    toReturn[channel][index] = sample;
                }
                index++;
            }

            return toReturn;
        }

    But I need to show the audio's pitch, not amplitude. A fast Fourier transform appears to be the way to get the pitch, but it needs to know more variables than the raw bytes I have, and is very complex and mathematical. Is there a way I can do this?
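
    A rough alternative to a full FFT is time-domain autocorrelation on the samples already produced above (e.g. one channel slice of toReturn): for each frame, find the lag at which the signal best matches a shifted copy of itself, then convert that lag to Hz using the sample rate (available from audioInputStream.getFormat().getSampleRate()). A minimal sketch, assuming one mono frame of samples (names illustrative):

        // Naive autocorrelation pitch estimate for one frame of samples.
        static double estimatePitch(int[] frame, float sampleRate,
                                    double fMin, double fMax) {
            int minLag = Math.max(1, (int) (sampleRate / fMax)); // high pitch -> small lag
            int maxLag = (int) (sampleRate / fMin);              // low pitch  -> large lag
            int bestLag = 0;
            double best = 0;
            for (int lag = minLag; lag <= maxLag && lag < frame.length; lag++) {
                double sum = 0;
                for (int i = 0; i + lag < frame.length; i++)
                    sum += (double) frame[i] * frame[i + lag];
                if (sum > best) { best = sum; bestLag = lag; }
            }
            return bestLag == 0 ? 0 : sampleRate / bestLag;
        }

    Plotting the estimate per frame (say 20-50 ms windows) gives pitch against time. It is crude - real pitch trackers add normalization and octave-error handling - but it avoids the FFT machinery entirely.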

    Read the article

  • Blend Interaction Behaviour gives "points to immutable instance" error

    - by kennethkryger
    I have a UserControl that is a base class for other user controls, which are shown in "modal view". I want all the user controls to fade in when shown and fade out when closed. I also want a behavior so that the user can move the controls around. My constructor looks like this:

        var tg = new TransformGroup();
        tg.Children.Add(new ScaleTransform());
        RenderTransform = tg;

        var behaviors = Interaction.GetBehaviors(this);
        behaviors.Add(new TranslateZoomRotateBehavior());

        Loaded += ModalDialogBase_Loaded;

    And the ModalDialogBase_Loaded method looks like this:

        private void ModalDialogBase_Loaded(object sender, RoutedEventArgs e)
        {
            var fadeInStoryboard = (Storyboard)TryFindResource("modalDialogFadeIn");
            fadeInStoryboard.Begin(this);
        }

    When I press a Close button on the control, this method is called:

        protected virtual void Close()
        {
            var fadeOutStoryboard = (Storyboard)TryFindResource("modalDialogFadeOut");
            fadeOutStoryboard = fadeOutStoryboard.Clone();
            fadeOutStoryboard.Completed += delegate { RaiseEvent(new RoutedEventArgs(ClosedEvent)); };
            fadeOutStoryboard.Begin(this);
        }

    The storyboard for fading out looks like this:

        <Storyboard x:Key="modalDialogFadeOut">
            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)"
                                           Storyboard.TargetName="{x:Null}">
                <EasingDoubleKeyFrame KeyTime="0" Value="1">
                    <EasingDoubleKeyFrame.EasingFunction>
                        <BackEase EasingMode="EaseIn" Amplitude="0.3" />
                    </EasingDoubleKeyFrame.EasingFunction>
                </EasingDoubleKeyFrame>
                <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0">
                    <EasingDoubleKeyFrame.EasingFunction>
                        <BackEase EasingMode="EaseIn" Amplitude="0.3" />
                    </EasingDoubleKeyFrame.EasingFunction>
                </EasingDoubleKeyFrame>
            </DoubleAnimationUsingKeyFrames>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)"
                                           Storyboard.TargetName="{x:Null}">
                <EasingDoubleKeyFrame KeyTime="0" Value="1">
                    <EasingDoubleKeyFrame.EasingFunction>
                        <BackEase EasingMode="EaseIn" Amplitude="0.3" />
                    </EasingDoubleKeyFrame.EasingFunction>
                </EasingDoubleKeyFrame>
                <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0">
                    <EasingDoubleKeyFrame.EasingFunction>
                        <BackEase EasingMode="EaseIn" Amplitude="0.3" />
                    </EasingDoubleKeyFrame.EasingFunction>
                </EasingDoubleKeyFrame>
            </DoubleAnimationUsingKeyFrames>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)"
                                           Storyboard.TargetName="{x:Null}">
                <EasingDoubleKeyFrame KeyTime="0" Value="1" />
                <EasingDoubleKeyFrame KeyTime="0:0:0.3" Value="0" />
                <EasingDoubleKeyFrame KeyTime="0:0:0.4" Value="0" />
            </DoubleAnimationUsingKeyFrames>
        </Storyboard>

    If the user control is shown and the user does NOT move it around on the screen, everything works fine. However, if the user DOES move the control around, I get the following error when the modalDialogFadeOut storyboard is started:

        'Children' property value in the path '(0).(1)[0].(2)' points to immutable instance of 'System.Windows.Media.TransformCollection'.

    How can I fix this?

    Read the article

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for a free route, perform frequency analysis and give the parameters of each signal component:
    - time of beginning and ending of each component
    - beginning and ending frequency
    - amplitude (in the time domain) at the beginning and end of each component
    - level of noise in dB
    Assume that the parameters of each component, like amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx=64;
        fs=1000;
        t=1/fs*(0:Nx-1);
        %==========================
        A1=1; A2=4;
        f1=500; f2=1000;
        x1=A1*cos(2*pi*f1*t);
        x2=A2*sin(2*pi*f2*t);
        %==========================
        x=x1+x2;

    Read the article

  • Autocorrelation method for pitch determination: what is the input data form?

    - by harsh
    I have read some code for pitch determination using the autocorrelation method. Can anybody please tell me what the input data (passed as the argument to DetectPitch()) would be here:

        double DetectPitch(short* data) {
            int sampleRate = 2048;

            // Create sine wave
            double *buffer = malloc(1024*sizeof(short));
            double amplitude = 0.25 * 32768; // 0.25 * max length of short
            double frequency = 726.0;
            for (int n = 0; n < 1024; n++) {
                buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate));
            }

            doHighPassFilter(data);

            printf("Pitch from sine wave: %f\n", detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1));
            printf("Pitch from mic: %f\n", detectPitchCalculation(data, 50.0, 1000.0, 1, 1));
            return 0;
        }

    Read the article

  • Microphone input

    - by George
    I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that tells how the shots are distributed in time, and I use an HTC Tattoo for testing. I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected: speech gives me values from getMaxAmplitude in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with a considerably higher level. Does anyone know how these things work? Are there filters that are applied before registering the max amplitude? If so, is it hardware or software? Thanks, /George
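
    One plausible explanation (a guess - behaviour varies by device) is automatic gain control or limiting in the MediaRecorder voice path, which would compress short transients like shots; whether that sits in hardware or software is device-specific. Reading raw PCM with AudioRecord bypasses most of that pipeline; a minimal peak-meter sketch (assumes the RECORD_AUDIO permission; 'recording' is an assumed loop flag):

        int rate = 8000;
        int minBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);
        short[] buf = new short[rate / 100]; // ~10 ms of samples
        rec.startRecording();
        while (recording) {
            int n = rec.read(buf, 0, buf.length);
            int peak = 0;
            for (int i = 0; i < n; i++) peak = Math.max(peak, Math.abs(buf[i]));
            // 'peak' is the raw max amplitude for this ~10 ms window (0..32767)
        }
        rec.stop();
        rec.release();

    If the raw peaks for shots still come out below speech, the limiting is happening before the ADC; if they clip at full scale, it was the recording pipeline.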

    Read the article

  • 802.11ac: the new standard reaches speeds of 1.3 Gbps; the first Wi-Fi Alliance certified devices make their appearance

    Faster than its predecessor, the new 802.11ac wireless standard reaches transfer rates of 1.3 Gbps, and the first Wi-Fi Alliance certified devices are making their appearance. Also known as 5G Wi-Fi, the 802.11ac wireless standard succeeds 802.11n, to which it adds numerous improvements, notably in data transfer rate. Its theoretical maximum throughput is on the order of 1.3 Gbps - enough to transfer an HD film to a tablet in under 4 minutes. The standard owes this throughput to three essential elements, namely: the use of QAM (Quadrature Amplitude Modulation), a wide range of communication channels...

    Read the article

  • Creating simple waveforms with CoreAudio

    - by Koning Baard
    I am new to CoreAudio, and I would like to output a simple sine wave and square wave with a given frequency and amplitude through the speakers using CA. I don't want to use sound files as I want to synthesize the sound. What do I need to do this? And can you give me an example or tutorial? Thanks.
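
    Whatever CoreAudio unit ends up playing the buffer, the synthesis step itself is plain math: each frame is amplitude times a waveform function of the phase. A small sketch of that generation step (shown in Java for concreteness; in CoreAudio the same loop would typically live inside a render callback filling the supplied buffer):

        // Fill one second of samples with a sine or square wave.
        static float[] synthesize(double freq, double amp, int sampleRate, boolean square) {
            float[] out = new float[sampleRate];
            for (int n = 0; n < out.length; n++) {
                double s = Math.sin(2 * Math.PI * freq * n / sampleRate);
                out[n] = (float) (amp * (square ? Math.signum(s) : s));
            }
            return out;
        }

    A naive sign-of-sine square wave like this aliases audibly at higher frequencies; band-limited synthesis fixes that, but for a first test tone it is fine.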

    Read the article

  • Calculate area under FFT graph in Matlab

    - by lytheone
    Hiya, I recently did an FFT of a set of data, which gives me a plot with frequency on the x axis and amplitude on the y axis. I would like to calculate the area under the graph to give me the energy. I am not sure how to determine the area, because I don't have an equation for the curve, and I also only want a certain region of the plot rather than the whole area under it. Is there a way I can do it?
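
    No equation is needed: the FFT output is sampled data, so the area over a chosen frequency range is just a numerical integral over those samples. In Matlab, trapz does this directly - something like trapz(f(i1:i2), A(i1:i2)), where f is the frequency vector, A the amplitude vector, and i1:i2 the indices bracketing the band of interest. One caveat: if "energy" is meant in the signal-processing (Parseval) sense, the quantity to integrate is the squared magnitude, A.^2, not the amplitude itself.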

    Read the article

  • How to process audio in real time?

    - by user1756648
    I am giving some audio input through a microphone. I recorded it in Audacity; it looks something like what is shown below. I want to process this audio in real time. I mainly want to do two things: 1) see a real-time audio amplitude vs. time graph, and 2) perform some actions based on conditions (like if a specific kind of spike is seen in the audio, then do something; else do something else). Is there any Python module or C library that can allow me to do this?

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates terrain textures such as this: (screenshot not shown). From the debug renderer I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (in the environment constructor as "dirt.png") that I want to apply to the hills. I have a vague idea that this seems to involve the Mesh class and possibly a ShapeRenderer, but the little I'm finding online is just confusing me. Below is code from the class that makes and regulates the terrain, followed by the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated.

        public class Environment extends Actor {
            Pixmap sky;
            public Texture groundTexture;
            Texture skyTexture;
            double tankypos; //TODO delete, temp
            public Tank etank; //TODO delete, temp
            int destructionRes; // how wide is a static pixel
            private final float viewWidth;
            private final float viewHeight;
            private ChainShape terrain;
            public Texture dirtTexture;
            private World world;
            public Mesh terrainMesh;
            private static final String LOG = Environment.class.getSimpleName();

            // Constructor
            public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) {
                world = new World(new Vector2(0, -10), true);
                this.destructionRes = destructionRes;
                sky = new Pixmap(sfileHandle);
                viewWidth = w;
                viewHeight = h;
                skyTexture = new Texture(sky);
                terrain = new ChainShape();
                genTerrain((int)w, (int)h, 6);
                Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png"));
                Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png"));
                tank = new Tank(0, true, tankSprite, turretSprite);
                Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45);
                tank.setRect(tankrect);
                BodyDef terrainDef = new BodyDef();
                terrainDef.type = BodyType.StaticBody;
                terrainDef.position.set(0, 0);
                Body terrainBody = world.createBody(terrainDef);
                FixtureDef fixtureDef = new FixtureDef();
                fixtureDef.shape = terrain;
                terrainBody.createFixture(fixtureDef);
                BodyDef tankDef = new BodyDef();
                Rectangle rect = tank.getRect();
                tankDef.type = BodyType.DynamicBody;
                tankDef.position.set(0,0);
                tankDef.position.x = rect.x;
                tankDef.position.y = rect.y;
                Body tankBody = world.createBody(tankDef);
                FixtureDef tankFixture = new FixtureDef();
                PolygonShape shape = new PolygonShape();
                shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX);
                fixtureDef.shape = shape;
                dirtTexture = new Texture(Gdx.files.internal("dirt.png"));
                etank = tank;
            }

            private void genTerrain(int w, int h, int hillnessFactor) {
                int width = w;
                int height = h;
                Random rand = new Random();
                // min and max bracket the freq's of the sin/cos series.
                // The higher the max, the hillier the environment.
                int min = 1;
                // allocating horizon for screen width
                Vector2[] horizon = new Vector2[width+2];
                horizon[0] = new Vector2(0,0);
                double[] skyline = new double[width]; //TODO skyline necessary as an array?
                // ratio of amplitude of screen height to landscape variation
                double r = (int) 2.0/5.0;
                // number of terms to be used in sine/cosine series
                int n = 4;
                int[] f = new int[n*2];
                // calculating omegas for sine series
                for(int i = 0; i < n*2; i++){
                    f[i] = rand.nextInt(hillnessFactor - min + 1) + min;
                }
                // amp is the amplitude of the series
                int amp = (int) (r*height);
                double lastPoint = 0.0;
                for(int i = 0; i < width; i++){
                    skyline[i] = 0;
                    for(int j = 0; j < n; j++){
                        skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) );
                    }
                    skyline[i] *= amp/(n*2);
                    skyline[i] += (height/2);
                    skyline[i] = (int)skyline[i]; //TODO Possible un-necessary float to int to float conversions
                    tankypos = skyline[i];
                    horizon[i+1] = new Vector2((float)i, (float)skyline[i]);
                    if(i == width) lastPoint = skyline[i];
                }
                horizon[width+1] = new Vector2(800, (float)lastPoint);
                terrain.createChain(horizon);
                terrain.createLoop(horizon);
                // I have no idea if the following does anything useful :(
                terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2,
                        new VertexAttribute(Usage.Position, (width+2)*2, "a_position"));
                float[] vertices = new float[(width+2)*2];
                short[] indices = new short[(width+2)*2];
                for(int i=0; i < (width+2); i+=2){
                    vertices[i] = horizon[i].x;
                    vertices[i+1] = horizon[i].y;
                    indices[i] = (short)i;
                    indices[i+1] = (short)(i+1);
                }
                terrainMesh.setVertices(vertices);
                terrainMesh.setIndices(indices);
            }

    Here is the code that is (supposed to) render the terrain:

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            // tell the camera to update its matrices.
            camera.update();
            // tell the SpriteBatch to render in the
            // coordinate system specified by the camera.
            backgroundStage.draw();
            backgroundStage.act(delta);
            uistage.draw();
            uistage.act(delta);
            batch.begin();
            debugRenderer.render(this.ground.getWorld(), camera.combined);
            batch.end();
            //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D);
            ground.dirtTexture.bind();
            ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN); // I'm particularly lost on this
            ground.step();
        }
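
    Two observations, offered as a sketch rather than a definitive fix. First, VertexAttribute's second argument is the number of components per vertex (2 or 3 for a position), not the vertex count, so new VertexAttribute(Usage.Position, (width+2)*2, "a_position") is almost certainly malformed and consistent with the render crash. Second, texturing the mesh requires texture coordinates interleaved with the positions. A hedged sketch of the usual setup - x, y, u, v per vertex, building a triangle strip between each skyline point and the bottom of the screen, tiling the 50x50 dirt.png (exact calls may differ across libgdx versions):

        // One pair of strip vertices per horizon point: skyline and ground level.
        float[] verts = new float[horizon.length * 2 * 4]; // x, y, u, v each
        int v = 0;
        for (Vector2 p : horizon) {
            float u = p.x / 50f; // tile the 50x50 dirt.png
            verts[v++] = p.x; verts[v++] = p.y; verts[v++] = u; verts[v++] = p.y / 50f;
            verts[v++] = p.x; verts[v++] = 0f;  verts[v++] = u; verts[v++] = 0f;
        }
        Mesh mesh = new Mesh(true, horizon.length * 2, 0,
                new VertexAttribute(Usage.Position, 2, "a_position"),
                new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));
        mesh.setVertices(verts);

        dirtTexture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
        dirtTexture.bind();
        mesh.render(GL10.GL_TRIANGLE_STRIP); // a strip needs no index buffer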

    Read the article

  • processing an audio wav file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct header {
            char chunk_id[4];
            int chunk_size;
            char format[4];
            char subchunk1_id[4];
            int subchunk1_size;
            short int audio_format;
            short int num_channels;
            int sample_rate;
            int byte_rate;
            short int block_align;
            short int bits_per_sample;
            short int extra_param_size;
            char subchunk2_id[4];
            int subchunk2_size;
        } header;

        typedef struct header* header_p;

        void scale_wav_file(char * input, float factor, int is_8bit)
        {
            FILE * infile = fopen(input, "rb");
            FILE * outfile = fopen("outfile.wav", "wb");
            int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678;

            // used for processing 8-bit file
            unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE];
            // used for processing 16-bit file
            short int inbuff16[BUFSIZE], outbuff16[BUFSIZE];

            // header_p points to a header struct that contains the file's metadata fields
            header_p meta = (header_p)malloc(sizeof(header));

            if (infile) {
                // read and write header data
                fread(meta, 1, sizeof(header), infile);
                fwrite(meta, 1, sizeof(meta), outfile);

                while (!feof(infile)) {
                    if (is_8bit) {
                        fread(inbuff8, 1, BUFSIZE, infile);
                    } else {
                        fread(inbuff16, 1, BUFSIZE, infile);
                    }

                    // scale amplitude for 8/16 bits
                    for (i=0; i < BUFSIZE; ++i) {
                        if (is_8bit) {
                            outbuff8[i] = factor * inbuff8[i];
                            if ((int)outbuff8[i] > MAX_8BIT_AMP) {
                                outbuff8[i] = MAX_8BIT_AMP;
                            }
                        } else {
                            outbuff16[i] = factor * inbuff16[i];
                            if ((int)outbuff16[i] > MAX_16BIT_AMP) {
                                outbuff16[i] = MAX_16BIT_AMP;
                            } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) {
                                outbuff16[i] = -MAX_16BIT_AMP;
                            }
                        }
                    }

                    // write to output file for 8/16 bit
                    if (is_8bit) {
                        fwrite(outbuff8, 1, BUFSIZE, outfile);
                    } else {
                        fwrite(outbuff16, 1, BUFSIZE, outfile);
                    }
                }
            }

            // cleanup
            if (infile) { fclose(infile); }
            if (outfile) { fclose(outfile); }
            if (meta) { free(meta); }
        }

        int main (int argc, char const *argv[])
        {
            char infile[] = "file.wav";
            float factor = 0.5;
            scale_wav_file(infile, factor, 0);
            return 0;
        }

    I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - it won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
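
    A few things in the posted code stand out and would explain both symptoms. The header write uses sizeof(meta), which is the size of a pointer, not of the struct - it should be fwrite(meta, 1, sizeof(header), outfile), and that alone corrupts the output header (hence the unplayable file). The read loop ignores how many bytes fread actually returned; capturing size_t n = fread(...) and then scaling and writing exactly n elements keeps the last, partial chunk from being padded with stale data (hence the size mismatch) - relying on feof() alone also runs one extra iteration when the file length is a multiple of the buffer size. Finally, the 16-bit branch reads BUFSIZE bytes into a buffer of BUFSIZE shorts but then processes BUFSIZE elements; reading with fread(inbuff16, sizeof(short), BUFSIZE, infile) and looping over the returned count matches the units up.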

    Read the article

  • Do all the network cards use the same frequency to send signals to wire?

    - by smwikipedia
    I am comparing my cable TV wire to my network wire. In a TV cable, different frequencies are used by different TV channels. And since a given channel uses a fixed frequency, I think the only remaining way to represent different signals is with the carrier wave's amplitude. But what about the network wire? For all network cards of the same type, do they also use different frequencies to send signals, just like TV cable? I vaguely remember that they use frequency adjustment to represent signals, so the frequency should not be fixed. And how do all the network cards sharing the same medium differentiate their own signals from the others?

    Read the article

  • which flash 3d particle engine generate such xml file

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

        <effect pos="0 0 0">
            <property cache="1" lifetime="10000"/>
            <mesh blendmode="add">
                <path>
                    <frame y="100" durtime="1000" x="0" z="0"/>
                </path>
                <scale>
                    <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
                    <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
                </scale>
            </mesh>
            <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
            <quad billboard="false" >
            </quad>
            <particle global="false" pos="">
                <scale>
                    <frame y="1" durtime="0" x="1" z="1"/>
                    <frame y="1" durtime="2000" x="1.5" z="1.5"/>
                </scale>
            </particle>
        </effect>

    Read the article

  • Extrapolation using fft in octave

    - by CFP
    Using GNU Octave, I'm computing an fft over a piece of signal, then eliminating some frequencies, and finally reconstructing the signal. This gives me a nice approximation of the signal, but it doesn't give me a way to extrapolate the data. Suppose basically that I have plotted three and a half periods of f: x -> sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x), and then added a piece of low-amplitude, zero-centered random noise. With fft/ifft, I can easily remove most of the noise; but then how do I extrapolate 3 more periods of my signal data (other, of course, than duplicating the signal)? The math way is easy: you have a decomposition of your function as an infinite sum of sines/cosines, and you just need to extract a partial sum and apply it anywhere. But I don't quite get the programmatic way... Thanks!
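
    Programmatically, that partial sum can be evaluated straight from the retained FFT bins. For a real signal of N samples spanning a window of length L, each kept bin k with complex value F_k contributes one cosine, so a sketch of the reconstruction (the DC and Nyquist bins, if kept, take a factor of 1 rather than 2) is:

        f(x) ≈ Σ_{k in K} (2/N) · |F_k| · cos(2·π·k·x/L + arg(F_k))

    Unlike ifft, this expression can be evaluated at any x - including x beyond the original window - which extrapolates the signal under the (strong) assumption that it is exactly periodic with the retained components; that assumption holds for the sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x) example above.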

    Read the article

  • Android PCM Bytes

    - by Pintac
    Hi, I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM bytes into decibels. I have a formula that takes sound pressure in Pa into dB: db = 20 * log10(Pa/ref Pa). So the question is: the bytes I am getting from AudioRecord's buffer - what are they? Amplitude, pascals, sound pressure, or what? I tried putting the value into the formula, but it comes back with a very high dB, so I do not think it's right. Thanks
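
    What AudioRecord delivers are dimensionless PCM sample amplitudes - for 16-bit encoding, integers in -32768..32767 - not pascals, and without a calibrated microphone there is no fixed conversion to sound pressure. What can be computed directly is level relative to full scale (dBFS), typically from the RMS of a window of samples rather than a single value; a sketch:

        // RMS level of a window of 16-bit PCM samples, in dBFS (<= 0).
        static double windowDbfs(short[] buf, int n) {
            double sumSq = 0;
            for (int i = 0; i < n; i++) sumSq += (double) buf[i] * buf[i];
            double rms = Math.sqrt(sumSq / n) / 32768.0;
            return 20.0 * Math.log10(Math.max(rms, 1e-9)); // clamp avoids log(0)
        }

    Feeding full-scale sample values into a sound-pressure formula as if they were pascals yields exactly the inflated dB figures described above.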

    Read the article

  • Confusion testing fftw3 - poisson equation 2d test

    - by user3699736
    I am having trouble explaining/understanding the following phenomenon. To test fftw3 I am using the 2d Poisson test case: laplacian(f(x,y)) = -g(x,y) with periodic boundary conditions. After applying the Fourier transform to the equation we obtain: F(kx,ky) = G(kx,ky)/(kx² + ky²) (1). If I take g(x,y) = sin(x) + sin(y), (x,y) in [0, 2π], I immediately have f(x,y) = g(x,y), which is what I am trying to obtain with the fft: I compute G from g with a forward Fourier transform; from this I can compute the Fourier transform of f with (1); finally, I compute f with the backward Fourier transform (without forgetting to normalize by 1/(nx*ny)). In practice, the results are pretty bad (for instance, the amplitude for N = 256 is twice the amplitude obtained with N = 512). Even worse, if I try g(x,y) = sin(x)*sin(y), the curve does not even have the same form as the solution (note that I must change the equation; I divide the laplacian by two in this case: (1) becomes F(kx,ky) = 2*G(kx,ky)/(kx²+ky²)). Here is the code:

        /*
         * fftw test -- double precision
         */
        #include <iostream>
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <fftw3.h>

        using namespace std;

        int main()
        {
            int N = 128;
            int i, j;
            double pi = 3.14159265359;
            double *X, *Y;
            X = (double*) malloc(N*sizeof(double));
            Y = (double*) malloc(N*sizeof(double));

            fftw_complex *out1, *in2, *out2, *in1;
            fftw_plan p1, p2;
            double L = 2.*pi;
            double dx = L/((N - 1)*1.0);

            in1  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            out2 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            out1 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            in2  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));

            p1 = fftw_plan_dft_2d(N, N, in1, out1, FFTW_FORWARD,  FFTW_MEASURE);
            p2 = fftw_plan_dft_2d(N, N, in2, out2, FFTW_BACKWARD, FFTW_MEASURE);

            for(i = 0; i < N; i++){
                X[i] = -pi + (i*1.0)*2.*pi/((N - 1)*1.0);
                for(j = 0; j < N; j++){
                    Y[j] = -pi + (j*1.0)*2.*pi/((N - 1)*1.0);
                    in1[i*N + j][0] = sin(X[i]) + sin(Y[j]); // row major ordering
                    //in1[i*N + j][0] = sin(X[i]) * sin(Y[j]); // 2nd test case
                    in1[i*N + j][1] = 0;
                }
            }

            fftw_execute(p1); // FFT forward

            for (i = 0; i < N; i++){ // f = g / ( kx² + ky² )
                for (j = 0; j < N; j++){
                    in2[i*N + j][0] = out1[i*N + j][0]/ (i*i+j*j+1e-16);
                    in2[i*N + j][1] = out1[i*N + j][1]/ (i*i+j*j+1e-16);
                    //in2[i*N + j][0] = 2*out1[i*N + j][0]/ (i*i+j*j+1e-16); // 2nd test case
                    //in2[i*N + j][1] = 2*out1[i*N + j][1]/ (i*i+j*j+1e-16);
                }
            }

            fftw_execute(p2); // FFT backward

            // checking the results computed
            double erl1 = 0.;
            for (i = 0; i < N; i++) {
                for (j = 0; j < N; j++){
                    erl1 += fabs( in1[i*N + j][0] - out2[i*N + j][0]/N/N )*dx*dx;
                    cout << i << " " << j << " " << sin(X[i])+sin(Y[j]) << " "
                         << out2[i*N+j][0]/N/N << " " << endl; // > output
                }
            }
            cout << erl1 << endl; // L1 error

            fftw_destroy_plan(p1);
            fftw_destroy_plan(p2);
            fftw_free(out1);
            fftw_free(out2);
            fftw_free(in1);
            fftw_free(in2);
            return 0;
        }

    I can't find any (more) mistakes in my code (I installed the fftw3 library last week), and I don't see a problem with the maths either, but I don't think it's the fft's fault. Hence my predicament. I am all out of ideas and all out of google as well. Any help solving this puzzle would be greatly appreciated. Note: I compile with g++ test.cpp -lfftw3 -lm, execute with ./a.out > output, and use gnuplot to plot the curves: (in gnuplot) splot "output" u 1:2:4 (for the computed solution).
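
    One likely culprit, offered as a diagnosis rather than a certainty: in the division step the raw array indices i and j are used as wavenumbers for every bin, but FFTW stores the negative frequencies in the upper half of the output, so bins above N/2 need remapping before forming kx² + ky²:

        int kx = (i <= N/2) ? i : i - N;   // and likewise ky from j
        double k2 = (double)(kx*kx + ky*ky);

    Dividing by i*i + j*j instead distorts every high-index bin, and since the set of mis-scaled bins changes with N, an amplitude that depends on the resolution is exactly the symptom to expect. The grid is also worth checking: for a periodic domain the points should span [0, L) with dx = L/N, not L/(N - 1), so that x = 0 and x = L are not both sampled.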

    Read the article

  • Learn mp3 format and audio signal processing

    - by Shankhoneer Chakrovarty
    I am trying to learn the following things: 1) What does an MP3 file look like internally? I found this: http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html but it seems old. Have there been any recent changes to the format? I couldn't find any. 2) How do I open an MP3 file in Java and look at the bytes? I tried using audiostream, but I am getting a lot of zeros and signed short integers which in no way resemble the header/body format mentioned in the above link. Am I interpreting the bytes wrongly? 3) How do I get the amplitude, frequency and pitch of an MP3 file? No idea. Can you please suggest some book or tutorial? Can you please help me with the questions above? I am sorry if some of them appear naive; I have just begun to learn about MP3. Thanks
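
    For point 2, the raw bytes can be inspected without any audio API. MP3 files usually begin with an ID3v2 tag, so the first real frame header - 11 set sync bits - often sits well past byte 0, which may be why the bytes did not resemble the documented layout. A small sketch that scans for a plausible frame header (the file name is illustrative):

        import java.nio.file.*;

        public class Mp3SyncScan {
            public static void main(String[] args) throws Exception {
                byte[] b = Files.readAllBytes(Paths.get("song.mp3"));
                // Find the first MPEG audio frame sync: 0xFF, then top 3 bits set.
                for (int i = 0; i + 3 < b.length; i++) {
                    if ((b[i] & 0xFF) == 0xFF && (b[i + 1] & 0xE0) == 0xE0) {
                        System.out.printf("possible frame header at byte %d: %02X %02X %02X %02X%n",
                                i, b[i], b[i + 1], b[i + 2], b[i + 3]);
                        break;
                    }
                }
            }
        }

    Amplitude, frequency and pitch (point 3) only exist after decoding to PCM - they are properties of the decoded samples, not of the compressed bit stream - so that part needs a decoder library first.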

    Read the article

1 2  | Next Page >