Search Results

Search found 271 results on 11 pages for 'pitch'.

Page 1/11

  • Modify audio pitch of recorded clip (m4v)

    - by devcube
    I'm writing an app in which I'm trying to change the pitch of the audio while recording a movie (.m4v), or alternatively by modifying the audio pitch of the movie afterwards. I want the end result to be a movie (.m4v) that has the original length (i.e. the same visuals as the original) but with a modified sound pitch, e.g. a "chipmunk voice". A real-time conversion is preferable if possible. I've read a lot about changing audio pitch in iOS, but most examples focus on playback, i.e. playing the sound with a different pitch. In my app I'm recording a movie (.m4v / AVFileTypeQuickTimeMovie) and saving it using a standard AVAssetWriter. When saving the movie I have access to the following elements, where I've tried to manipulate the audio (e.g. modify the pitch): the audio buffer (CMSampleBufferRef), the audio input writer (AVAssetWriterAudioInput), the audio input writer options (e.g. AVNumberOfChannelsKey, AVSampleRateKey, AVChannelLayoutKey) and the asset writer (AVAssetWriter). I've tried to hook into the above objects to modify the audio pitch, but without success. I've also tried Dirac, as described here: Real Time Pitch Change In iPhone Using Dirac; OpenAL with AL_PITCH, as described here: Piping output from OpenAL into a buffer; and the "BASS" library from un4seen: Change Pitch/Tempo In Realtime. I haven't had success with any of the above libraries, most likely because I don't really know how to use them and where to hook them into the audio-saving code. There seem to be a lot of libraries with similar effects, but they focus on playback or custom recording code. I want to manipulate the audio stream I've already got (AVAssetWriterAudioInput) or modify the saved movie clip (.m4v). I want the video to be visually unmodified, i.e. played at the same speed, but I want the audio to go faster (like a chipmunk) or slower (like a... monster? :)). Do you have any suggestions for how I can modify the pitch, either in real time (while recording the movie) or afterwards by converting the entire movie (.m4v file)? Should I look further into Dirac, OpenAL, SoundTouch, BASS or some other library? I want to be able to share the movie with others with the modified audio; that's the reason I can't rely on modifying the pitch for playback only. Any help is appreciated, thanks!

    Read the article

  • Limit the amount a camera can pitch

    - by ChocoMan
    I'm having problems trying to limit the range in which my camera can pitch. Currently the camera can pitch around a model without restriction, and I'm having a hard time finding the angle (in degrees or radians) the camera is currently at after pitching. Here is what I have so far:

        // Moves camera with thumbstick
        Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);

        // Pitch camera around model
        public void cameraPitch(float pitch)
        {
            pitchAngle = ModelLoad.camTarget - ModelLoad.CameraPos;
            axisPitch = Vector3.Cross(Vector3.Up, pitchAngle); // pitch constrained to model's orientation
            axisPitch.Normalize();

            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

    I've tried restraining the camera's Y position (ModelLoad.CameraPos.Y), but doing so gave me some unwanted results.
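
    A common fix, rather than trying to recover the current angle from the camera's position, is to accumulate the pitch you have applied in a separate field and clamp it before each rotation. A minimal sketch of that idea (the currentPitch field and the 80-degree limit are hypothetical, not from the code above):

        float currentPitch = 0f;                              // accumulated pitch applied so far
        readonly float maxPitch = MathHelper.ToRadians(80f);  // stay short of straight up/down

        public void cameraPitchClamped(float pitch)
        {
            // Clamp the accumulated angle, then apply only the allowed remainder.
            float newPitch = MathHelper.Clamp(currentPitch + pitch, -maxPitch, maxPitch);
            pitch = newPitch - currentPitch;
            currentPitch = newPitch;
            cameraPitch(pitch); // the existing axis-angle rotation, unchanged
        }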

    Read the article

  • Camera won't stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick, making my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates in those directions fine, along with the camera rotating while maintaining its position relatively behind the model. But when I pitch the model up or down and then rotate the model afterwards, the camera slowly rotates in a clock-like fashion behind the model. If I do a few rotations of the model and try to pitch the camera, the camera will eventually be looking at the side, and then the front, of the model, while still rotating in that clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I've got:

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            // Rotates camera with model
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed);

            // Pitches camera around model
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed);

            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) camera around with model (only seeing back of model)
        public void cameraYaw(Vector3 axisYaw, float yaw)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }

        // Raise camera above or below model's shoulders
        public void cameraPitch(Vector3 axisPitch, float pitch)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

        // Call in update method
        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(Vector3.Right, Pitch);
        }

    NOTE: I tried to use an addPitch quaternion just like AddRotation, but it didn't work...
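
    A likely culprit is pitching around the fixed world axis (Vector3.Right) while the yaw keeps changing where the camera's actual right vector points. One hedged sketch of a fix, reusing the poster's field names, derives the pitch axis from the model's current orientation before using it:

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);

            // Rotate the world right axis by the model's current orientation so
            // the pitch axis follows the model around as it yaws, instead of
            // staying world-aligned.
            Vector3 pitchAxis = Vector3.Transform(Vector3.Right,
                Matrix.CreateFromQuaternion(ModelLoad.MRotation));
            pitchAxis.Normalize();

            cameraPitch(pitchAxis, Pitch);
        }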

    Read the article

  • Pitch detection and change java

    - by omegas27
    Hello, I'm French, so I'm sorry if you have trouble understanding some of my sentences. Anyway, I saw in some topics that the pitch can be detected thanks to the Fourier transform, but I didn't really understand how to implement it. Moreover, I didn't find how to change the pitch of a WAV file and, if possible, an MP3 file. I am listening to the music using Java Sound for the WAV and JLayer for the MP3. Thanks

    Read the article

  • Pitch camera around model

    - by ChocoMan
    Currently, my camera rotates with my model's Y axis (yaw) perfectly. What I'm having trouble with is rotating around the X axis (pitch) along with it. I've tried the same method as cameraYaw() in the form of a cameraPitch(), adjusting the axis to Vector3.Right, but the camera wouldn't pitch at all in accordance with the Y axis of the controller. Is there a way, similar to this, to get the same effect for pitching the camera around the model?

        // Rotates model on its own Y axis
        public void modelRotMovement(GamePadState pController)
        {
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) camera around model
        public void cameraYaw(Vector3 axis, float yaw, float pitch)
        {
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axis, yaw)) + ModelLoad.camTarget;
        }

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
        }

    Read the article

  • Graphing the pitch (frequency) of a sound

    - by Coronatus
    I want to plot the pitch of a sound into a graph. Currently I can plot the amplitude. The graph below is created from the data returned by getUnscaledAmplitude():

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
            new BufferedInputStream(new FileInputStream(file)));
        byte[] bytes = new byte[(int) (audioInputStream.getFrameLength())
            * (audioInputStream.getFormat().getFrameSize())];
        audioInputStream.read(bytes);

        // Get amplitude values for each audio channel in an array.
        graphData = type.getUnscaledAmplitude(bytes, this);

        public int[][] getUnscaledAmplitude(byte[] eightBitByteArray, AudioInfo audioInfo)
        {
            int[][] toReturn = new int[audioInfo.getNumberOfChannels()]
                [eightBitByteArray.length / (2 * audioInfo.getNumberOfChannels())];
            int index = 0;

            for (int audioByte = 0; audioByte < eightBitByteArray.length;)
            {
                for (int channel = 0; channel < audioInfo.getNumberOfChannels(); channel++)
                {
                    // Do the byte-to-sample conversion.
                    int low = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int high = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int sample = (high << 8) + (low & 0x00ff);

                    if (sample < audioInfo.sampleMin)
                    {
                        audioInfo.sampleMin = sample;
                    }
                    else if (sample > audioInfo.sampleMax)
                    {
                        audioInfo.sampleMax = sample;
                    }
                    toReturn[channel][index] = sample;
                }
                index++;
            }
            return toReturn;
        }

    But I need to show the audio's pitch, not its amplitude. A Fast Fourier transform appears to get the pitch, but it needs to know more variables than the raw bytes I have, and is very complex and mathematical. Is there a way I can do this?
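
    For a rough pitch plot, a common recipe is to split the samples into fixed-size windows, FFT each window, and plot the frequency of the strongest bin, where frequency = binIndex * sampleRate / windowSize. A sketch of the peak-picking step in C# (Fft is a placeholder for whatever FFT library is used; note the strongest bin tracks the dominant partial, which is not always the perceived pitch):

        // Returns the dominant frequency (Hz) of one window of samples.
        // Fft is a placeholder for any FFT routine that takes a real-valued
        // window of length N and returns 2N interleaved re/im values.
        static double DominantFrequency(double[] window, int sampleRate)
        {
            double[] spectrum = Fft(window);
            int bestBin = 0;
            double bestMag = 0;
            for (int bin = 1; bin < window.Length / 2; bin++) // bins below Nyquist only
            {
                double re = spectrum[2 * bin];
                double im = spectrum[2 * bin + 1];
                double mag = re * re + im * im; // squared magnitude is enough for argmax
                if (mag > bestMag) { bestMag = mag; bestBin = bin; }
            }
            return (double)bestBin * sampleRate / window.Length;
        }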

    Read the article

  • MIDI - Continuous control of pitch outside the standard pitch bend range?

    - by user1423893
    After I send a note-on message, I can control the pitch of the note within a ±2 semitone range using the pitch bend channel command. How can I continuously update the pitch of a note outside the normal pitch bend range without retriggering the note (i.e. without sending another note-on message with the new pitch)? In other words, the current note is still sounding after a note-on message and its envelope has not reached the end of its release stage. I would like to change the pitch outside the pitch bend range, preferably anywhere within the audible frequency range.
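
    For what it's worth, the standard MIDI answer is RPN 0 (pitch bend sensitivity), which widens the bend range so a single note's 14-bit bend can cover, say, ±24 semitones without retriggering; MPE-style per-note channels are the other common route. A sketch of the control-change sequence involved (SendMidi is a hypothetical raw-output helper, not a real API):

        // Widen the pitch bend range to +/-24 semitones on channel 0 via RPN 0,0.
        byte ch = 0;
        SendMidi(new byte[] { (byte)(0xB0 | ch), 101, 0 });  // RPN MSB = 0
        SendMidi(new byte[] { (byte)(0xB0 | ch), 100, 0 });  // RPN LSB = 0 (bend sensitivity)
        SendMidi(new byte[] { (byte)(0xB0 | ch), 6, 24 });   // data entry MSB: 24 semitones
        SendMidi(new byte[] { (byte)(0xB0 | ch), 38, 0 });   // data entry LSB: 0 cents

        // A pitch bend message now spans +/-24 semitones across its 14-bit range:
        int bend = 8192 + 4096;                              // +12 semitones of +/-24
        SendMidi(new byte[] { (byte)(0xE0 | ch), (byte)(bend & 0x7F), (byte)(bend >> 7) });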

    Read the article

  • C# Audio - How to time stretch (different tempo, same pitch)

    - by heath
    I'm trying to make a WinForms app in C# (VS2008) that can load an MP3 (other formats would be nice, but MP3 at a minimum) and adjust the playback speed (tempo) without affecting pitch. I really don't need any other audio effects. I tried using DirectShow, but that doesn't seem to offer time-stretch capabilities. I was able to incorporate irrKlang, but that does not seem to have the time-stretch capability either. So now I've moved on to SoundTouch. That certainly has the capability, but I'm very unclear on how to use it from C#. After a few days of this, about all I've accomplished is using DllImport on the SoundTouch DLL and successfully retrieving a version number. At this point, I'm not even sure I can do what I'm trying to do with SoundTouch. Can anyone offer some guidance, either on how to use SoundTouch or on a different library with the capabilities I'm looking for? Thank you.
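
    For reference, SoundTouch ships a plain C wrapper (SoundTouchDLL) that is much easier to P/Invoke than the C++ classes. The sketch below assumes that wrapper's entry points; verify the names and signatures against the header in your build, and adjust the DLL name to whatever binary you compiled:

        using System;
        using System.Runtime.InteropServices;

        // Hedged sketch of the SoundTouchDLL C API; check these against your build.
        static class SoundTouchNative
        {
            const string Dll = "SoundTouch.dll"; // often named SoundTouchDLL.dll

            [DllImport(Dll)] public static extern IntPtr soundtouch_createInstance();
            [DllImport(Dll)] public static extern void soundtouch_destroyInstance(IntPtr h);
            [DllImport(Dll)] public static extern void soundtouch_setSampleRate(IntPtr h, uint rate);
            [DllImport(Dll)] public static extern void soundtouch_setChannels(IntPtr h, uint channels);
            [DllImport(Dll)] public static extern void soundtouch_setTempo(IntPtr h, float tempo); // 1.0 = normal speed
            [DllImport(Dll)] public static extern void soundtouch_putSamples(IntPtr h, float[] samples, uint numSamples);
            [DllImport(Dll)] public static extern uint soundtouch_receiveSamples(IntPtr h, float[] outBuffer, uint maxSamples);
        }

        // Usage idea: decode the MP3 to float PCM with a decoder library, push
        // blocks through soundtouch_putSamples, pull the stretched audio back out
        // with soundtouch_receiveSamples, and feed that to your audio output.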

    Read the article

  • RockBand-like voice app for PC/OSX / Real time pitch display software

    - by Sai Emrys
    I played Rock Band 2 for the first time a little while ago (at Notacon). One thing I enjoyed about it was getting real-time feedback about my singing. I think it'd be neat to have something like that to run alongside my usual music, so that I can sing to random stuff in my music collection and know when I'm hitting the notes. Is there something like this for PC - ideally for OSX, and ideally that can just operate on arbitrary songs? I don't really care if it's game-like (though that's neat too); I just want it for the singing feedback. And I have no need for pitch correction - ideally what I'd see is just the pitches of the notes in the music and (on the same scale, differently displayed) of the live microphone. I tried to STFW but got no salient hits. :-/ Thanks!

    Read the article

  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, sorry to ask a similar question to the one I asked before (FFT Problem (Returns random results)), but I've looked up pitch detection and autocorrelation, and have found some code for pitch detection using autocorrelation. I'm trying to do pitch detection of a user's singing. The problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which I've converted to C++ and modified (below). My sample rate is 2048, and the data size is 1024. I'm detecting the pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and it is detected as 722.950820 (which I'm OK with), but the pitch of the mic is detected as a random number from around 100 to around 1050. I'm now using a high-pass filter to remove the DC offset, but it's not working. Am I doing it right, and if so, what else can I do to fix it? Any help would be greatly appreciated!

        double* doHighPassFilter(short *buffer)
        {
            // Do FFT:
            int bufferLength = 1024;
            float *real = malloc(bufferLength*sizeof(float));
            float *real2 = malloc(bufferLength*sizeof(float));
            for (int x = 0; x < bufferLength; x++)
            {
                real[x] = buffer[x];
            }
            fft(real, bufferLength);
            for (int x = 0; x < bufferLength; x += 2)
            {
                real2[x] = real[x];
            }

            // Set freqs lower than 30hz to zero to attenuate the low frequencies
            for (int i = 0; i < 30; i++)
                real2[i] = 0;

            // Do inverse FFT:
            inversefft(real2, bufferLength);
            double* real3 = (double*)real2;
            return real3;
        }

        double DetectPitch(short* data)
        {
            int sampleRate = 2048;

            // Create sine wave
            double *buffer = malloc(1024*sizeof(short));
            double amplitude = 0.25 * 32768; // 0.25 * max value of short
            double frequency = 726.0;
            for (int n = 0; n < 1024; n++)
            {
                buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate));
            }

            doHighPassFilter(data);
            printf("Pitch from sine wave: %f\n", detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1));
            printf("Pitch from mic: %f\n", detectPitchCalculation(data, 50.0, 1000.0, 1, 1));
            return 0;
        }

        // These work by shifting the signal until it seems to correlate with itself.
        // In other words, if the signal looks very similar to (signal shifted 200 samples),
        // then the fundamental period is probably 200 samples.
        // Note that the algorithm only works well when there's only one prominent fundamental.
        // This could be optimized by looking at the rate of change to determine a maximum
        // without testing all periods.
        double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution)
        {
            //-------------------------1-------------------------//
            // note that higher frequency means lower period
            int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048);
            int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048);
            if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection.");
            if (1024 < nHiPeriodInSamples) printf("Not enough data.");
            double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples];

            //-------------------------2-------------------------//
            for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution)
            {
                double sum = 0;
                // for each sample, find correlation. (If they are far apart, small)
                for (int i = 0; i < 1024 - period; i++)
                    sum += data[i] * data[i + period];
                double mean = sum / 1024.0;
                results[period - nLowPeriodInSamples] = mean;
            }

            //-------------------------3-------------------------//
            // find the best indices
            int *bestIndices = findBestCandidates(nCandidates, results,
                nHiPeriodInSamples - nLowPeriodInSamples - 1); // note findBestCandidates modifies parameter

            // convert back to Hz
            double *res = new double[nCandidates];
            for (int i = 0; i < nCandidates; i++)
                res[i] = periodInSamplesToHz(bestIndices[i] + nLowPeriodInSamples, 2048);
            double pitch2 = res[0];
            free(res);
            free(results);
            return pitch2;
        }

        // Finds n "best" values from an array. Returns the indices of the best parts.
        // (One way to do this would be to sort the array, but that could take too long.)
        // Warning: changes the contents of the array!!! Do not use the input array afterwards.
        int* findBestCandidates(int n, double* inputs, int length)
        {
            //int length = inputs.Length;
            if (length < n) printf("Length of inputs is not long enough.");
            int *res = new int[n];
            double minValue = 0;
            for (int c = 0; c < n; c++)
            {
                // find the highest.
                double fBestValue = minValue;
                int nBestIndex = -1;
                for (int i = 0; i < length; i++)
                {
                    if (inputs[i] > fBestValue)
                    {
                        nBestIndex = i;
                        fBestValue = inputs[i];
                    }
                }
                // record this highest value
                res[c] = nBestIndex;
                // now blank out that index.
                if (nBestIndex != -1) inputs[nBestIndex] = minValue;
            }
            return res;
        }

        int hzToPeriodInSamples(double hz, int sampleRate)
        {
            return (int)(1 / (hz / (double)sampleRate));
        }

        double periodInSamplesToHz(int period, int sampleRate)
        {
            return 1 / (period / (double)sampleRate);
        }

    Thanks, Niall.

    Edit: Changed the code to implement a high-pass filter with a cutoff of 30Hz (from What Are High-Pass and Low-Pass Filters?; can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?), but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option for me, unfortunately.
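
    As an aside, a much cheaper way to kill a DC offset before autocorrelation is to subtract the window's mean, which avoids the FFT round trip entirely (shown in C# for brevity; the arithmetic ports directly to C++):

        // Remove the DC offset in place by subtracting the mean of the window.
        static void RemoveDc(double[] window)
        {
            double mean = 0;
            foreach (double s in window) mean += s;
            mean /= window.Length;
            for (int i = 0; i < window.Length; i++) window[i] -= mean;
        }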

    Read the article

  • C# - .WAV Playback Randomly High Pitch

    - by Nate Shoffner
    For some reason, when a WAV file is played back using the snippet below, it randomly plays back screwy, like a high-pitched noise. It doesn't happen all the time, just randomly, and it seems to happen more often when the file is played back more frequently. The WAV properties are below, along with the code snippet I am using.

    WAV properties:

        Bit rate - 750 kbps
        Audio sample size - 16 bit
        Channels - 1 (mono)
        Audio sample rate - 44 kHz
        Audio format - PCM

    Snippet:

        System.Media.SoundPlayer myPlayer = new System.Media.SoundPlayer(Captcha.Properties.Resources.sound1);
        myPlayer.Play();

    Is this because of the way I am playing the file, or because of the file itself? Thank you.

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically:

        1. take the FFT
        2. take the log of the square of the absolute value (can be done with a lookup table)
        3. take another FFT
        4. take the absolute value

    I am attempting this using vDSP. I can't understand how I didn't come across this technique earlier; I did a lot of hunting and asking questions, several weeks' worth. More to the point, I can't understand why I didn't think of it. I am attempting to achieve this with the vDSP library; it looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima. When it encounters one, it uses a cunning technique (the change in phase since the last FFT) to more accurately place the actual peak within the bin. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2. As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases. So, my questions:

        - Does this approach make sense? Can it be improved?
        - I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon. However, there is no indication that it precalculates a log table -- I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this one doesn't. Is there some danger of harmonics being picked up?
        - Is there any cunning way of making vDSP pull out the maxima, biggest first?
        - Can anyone point me towards some research or literature on this technique?
        - The main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy IS INDEED not sufficient. Is this the end of the line?

    Pi

    PS I get SO annoyed (npi) when I want to create tags, but cannot. :| I have suggested to the maintainers that SO keep track of attempted tags, but I'm sure I was ignored. We need tags for vDSP, the Accelerate framework, and cepstral analysis.
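
    For reference, the peak of that second transform lives in "quefrency" (units of samples) and maps back to pitch as f0 = sampleRate / quefrency. A compact sketch of the four steps above (in C#; Fft is a placeholder that takes a real-valued array of length N and returns 2N interleaved re/im values, and the epsilon guards the log):

        // Cepstral pitch estimate: FFT -> log|X|^2 -> FFT -> strongest quefrency.
        static double CepstralPitch(double[] frame, int sampleRate, double fMin, double fMax)
        {
            double[] X = Fft(frame);
            double[] logPower = new double[frame.Length];
            for (int k = 0; k < frame.Length; k++)
            {
                double re = X[2 * k], im = X[2 * k + 1];
                logPower[k] = Math.Log(re * re + im * im + 1e-12); // epsilon avoids log(0)
            }

            double[] cep = Fft(logPower); // second transform; magnitude taken below
            int qMin = Math.Max(1, (int)(sampleRate / fMax)); // search plausible pitches only
            int qMax = (int)(sampleRate / fMin);

            int best = qMin;
            double bestVal = 0;
            for (int q = qMin; q <= qMax && q < frame.Length / 2; q++)
            {
                double mag = Math.Sqrt(cep[2 * q] * cep[2 * q] + cep[2 * q + 1] * cep[2 * q + 1]);
                if (mag > bestVal) { bestVal = mag; best = q; }
            }
            return (double)sampleRate / best; // quefrency in samples -> Hz
        }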

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get the frequency of the music played in the WAV file. For now, I will have one frequency in the WAV file, and I would like to find a library that is compatible with Android. I understand that I need to use an FFT to get the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want?

    EDIT: I tried to use JTransforms to get the FFT of the WAV file, but always failed at getting the correct frequency of the file. Currently, the WAV file contains a 440Hz sine curve, music note A4. However, I got the result as 441. Then I tried to get the frequency of G4, and I got the result as 882Hz, which is incorrect. The frequency of G4 is supposed to be 783Hz. Could it be due to not enough samples? If yes, how many samples should I take?

        // DFT
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copying audio data to the fft data buffer, imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        for (int i = 0; i < fftData.length; i += 2) {
            // complex numbers -> vectors: compute the length of the vector,
            // which is sqrt(realpart^2 + imaginarypart^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            //fd.append(Double.toString(vlen));
            //fd.append(",");
            if (max_fftval < vlen) {
                // if this length is bigger than our stored biggest length
                max_fftval = vlen;
                max_i = i;
            }
        }
        //double dominantFreq = ((double)max_i / fftData.length) * sampleRate;
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out?

    EDIT2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use the Harmonic Product Spectrum or autocorrelation algorithms?
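
    On the 441 vs 440 point: the FFT bin spacing is sampleRate / numOfFrames, so with short clips the true peak usually lands between bins. A cheap refinement is parabolic interpolation over the peak bin and its two neighbours (a sketch in C#; the arithmetic ports straight to Java). For the overtone problem, the Harmonic Product Spectrum or autocorrelation are indeed the usual next steps:

        // Fit a parabola through the log-magnitudes of the peak bin and its
        // neighbours; returns a fractional bin index for the frequency formula.
        static double InterpolatedPeakBin(double[] magnitude, int peak)
        {
            double a = Math.Log(magnitude[peak - 1]);
            double b = Math.Log(magnitude[peak]);
            double c = Math.Log(magnitude[peak + 1]);
            double delta = 0.5 * (a - c) / (a - 2 * b + c); // offset in [-0.5, 0.5]
            return peak + delta;
        }

        // dominantFreq = InterpolatedPeakBin(magnitude, peak) * sampleRate / numOfFrames;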

    Read the article

  • Ray picking - get direction from pitch and yaw

    - by Isaac Waller
    I am attempting to cast a ray from the center of the screen and check for collisions with objects. When rendering, I use these calls to set up the camera:

        GL11.glRotated(mPitch, 1, 0, 0);
        GL11.glRotated(mYaw, 0, 1, 0);
        GL11.glTranslated(mPositionX, mPositionY, mPositionZ);

    I am having trouble creating the ray, however. This is the code I have so far:

        ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
        ray.direction = new Vector(?, ?, ?);

    My question is: what should I put in the question mark spots? I.e. how can I create the ray direction from the pitch and yaw? Any help would be much appreciated!
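
    With that rotation order (pitch about X, then yaw about Y, then the translation), inverting the view rotation gives the usual FPS look vector. A sketch, written here in C#-style syntax with the poster's Vector type assumed; sign conventions vary between setups, so flip terms if the ray comes out mirrored, and remember to convert mYaw/mPitch from degrees to radians:

        double yawRad = mYaw * Math.PI / 180.0;
        double pitchRad = mPitch * Math.PI / 180.0;

        // Unit direction of the view ray in world space.
        ray.direction = new Vector(
            Math.Sin(yawRad) * Math.Cos(pitchRad),
            -Math.Sin(pitchRad),
            -Math.Cos(yawRad) * Math.Cos(pitchRad));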

    Read the article

  • Ubuntu Audio Pitch Shifting filter

    - by user777305
    I'm currently developing a video player intended to be an embedded player, using Java with the VLCJ library. What I'm looking for is a way to transform the audio output to make it sound like an old man or a kid (pitch shifting, I guess, is the name). VLC can do this when enabling time stretch, but the video playback speed is affected (slower to get the old-man sound, fast-forward-like to get the kid voice effect). Is there any solution for this? I can't find this feature in VLC(J), so I think what I need is to process the audio output of Ubuntu itself (Ubuntu 12.04), something like filtering the audio output system-wide. Is there any software or setting to do this? It also needs to be controllable via the command line to provide real-time effect changes. Thanks in advance.
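
    If a command-line tool is acceptable, SoX ships a pitch effect that shifts pitch in cents without changing the playback speed, which is handy at least for prototyping the effect on files; for system-wide, real-time filtering, a LADSPA/PulseAudio effect chain is the usual suggestion instead:

        # shift up 5 semitones (500 cents), duration unchanged
        sox input.wav output.wav pitch 500

        # audition the same effect live
        play input.wav pitch 500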

    Read the article

  • Going up, please. With the Oracle Communities.

    - by A&C Redaktion
    Imagine you run into a familiar, trusted business partner in the elevator. In theory, you only have the ride from the ground floor to the third floor to explain an important matter. This is called an "elevator pitch". Anwar Behan and Bernhard Adelmann have re-enacted this popular scenario for you, and use it to explain in an entertaining way the added value the Oracle Communities offer Oracle partners. With the Oracle Communities, things are looking up for partners - watch the video here.

    Read the article

  • WebGL First Person Camera - Matrix issues

    - by Ryan Welsh
    I have been trying to make a WebGL FPS camera. I have all the inputs working correctly (I think), but when it comes to applying the position and rotation data to the view matrix I am a little lost. The results can be viewed here: http://thistlestaffing.net/masters/camera/index.html and the code is here:

        var camera = {
            yaw: 0.0,
            pitch: 0.0,
            moveVelocity: 1.0,
            position: [0.0, 0.0, -70.0]
        };
        var viewMatrix = mat4.create();
        var rotSpeed = 0.1;

        camera.init = function(canvas) {
            var ratio = canvas.clientWidth / canvas.clientHeight;
            var left = -1;
            var right = 1;
            var bottom = -1.0;
            var top = 1.0;
            var near = 1.0;
            var far = 1000.0;
            mat4.frustum(projectionMatrix, left, right, bottom, top, near, far);

            viewMatrix = mat4.create();
            mat4.rotateY(viewMatrix, viewMatrix, camera.yaw);
            mat4.rotateX(viewMatrix, viewMatrix, camera.pitch);
            mat4.translate(viewMatrix, viewMatrix, camera.position);
        }

        camera.update = function() {
            viewMatrix = mat4.create();
            mat4.rotateY(viewMatrix, viewMatrix, camera.yaw);
            mat4.rotateX(viewMatrix, viewMatrix, camera.pitch);
            mat4.translate(viewMatrix, viewMatrix, camera.position);
        }

        // prevent camera pitch from going above 90 and reset yaw when it goes over 360
        camera.lockCamera = function() {
            if (camera.pitch > 90.0) { camera.pitch = 90; }
            if (camera.pitch < -90) { camera.pitch = -90; }
            if (camera.yaw < 0.0) { camera.yaw = camera.yaw + 360; }
            if (camera.yaw > 360.0) { camera.yaw = camera.yaw - 0.0; }
        }

        camera.translateCamera = function(distance, direction) {
            // calculate where we are looking at in radians and add the direction
            // we want to go in, ie WASD keys
            var radian = glMatrix.toRadian(camera.yaw + direction);
            //console.log(camera.position[3], radian, distance, direction);

            // calc X coord
            camera.position[0] = camera.position[0] - Math.sin(radian) * distance;
            // calc Z coord
            camera.position[2] = camera.position[2] - Math.cos(radian) * distance;
            console.log(camera.position[2] - (Math.cos(radian) * distance));
        }

        camera.rotateUp = function(distance, direction) {
            var radian = glMatrix.toRadian(camera.pitch + direction);
            // calc Y coord
            camera.position[1] = camera.position[1] + Math.sin(radian) * distance;
        }

        camera.moveForward = function() {
            if (camera.pitch != 90 && camera.pitch != -90) {
                camera.translateCamera(-camera.moveVelocity, 0.0);
            }
            camera.rotateUp(camera.moveVelocity, 0.0);
        }

        camera.moveBack = function() {
            if (camera.pitch != 90 && camera.pitch != -90) {
                camera.translateCamera(-camera.moveVelocity, 180.0);
            }
            camera.rotateUp(camera.moveVelocity, 180.0);
        }

        camera.moveLeft = function() { camera.translateCamera(-camera.moveVelocity, 270.0); }
        camera.moveRight = function() { camera.translateCamera(-camera.moveVelocity, 90.0); }

        camera.lookUp = function() { camera.pitch = camera.pitch + rotSpeed; camera.lockCamera(); }
        camera.lookDown = function() { camera.pitch = camera.pitch - rotSpeed; camera.lockCamera(); }
        camera.lookLeft = function() { camera.yaw = camera.yaw - rotSpeed; camera.lockCamera(); }
        camera.lookRight = function() { camera.yaw = camera.yaw + rotSpeed; camera.lockCamera(); }

    If there is no problem with my camera, then I am doing some matrix calculations within my draw function where the problem might be:

        // position cube 1
        worldMatrix = mat4.create();
        mvMatrix = mat4.create();
        mat4.translate(worldMatrix, worldMatrix, [-20.0, 0.0, -30.0]);
        mat4.multiply(mvMatrix, worldMatrix, viewMatrix);
        setShaderMatrix();
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
        gl.vertexAttribPointer(shaderProgram.attPosition, 3, gl.FLOAT, false, 8*4, 0);
        gl.vertexAttribPointer(shaderProgram.attTexCoord, 2, gl.FLOAT, false, 8*4, 3*4);
        gl.vertexAttribPointer(shaderProgram.attNormal, 3, gl.FLOAT, false, 8*4, 5*4);
        gl.activeTexture(gl.TEXTURE0);
        gl.bindTexture(gl.TEXTURE_2D, myTexture);
        gl.uniform1i(shaderProgram.uniSampler, 0);
        gl.useProgram(shaderProgram);
        gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems);

        // position cube 2
        worldMatrix = mat4.create();
        mvMatrix = mat4.create();
        mat4.multiply(mvMatrix, worldMatrix, viewMatrix);
        mat4.translate(worldMatrix, worldMatrix, [40.0, 0.0, -30.0]);
        setShaderMatrix();
        gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems);

        // position cube 3
        worldMatrix = mat4.create();
        mvMatrix = mat4.create();
        mat4.multiply(mvMatrix, worldMatrix, viewMatrix);
        mat4.translate(worldMatrix, worldMatrix, [20.0, 0.0, -100.0]);
        setShaderMatrix();
        gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems);

        camera.update();

    Read the article

  • How to rotate 3D axes (XYZ) using yaw, pitch and roll angles in OpenGL

    - by user3639338
    I am working on pose estimation, capturing from a camera with OpenCV. My code now gives me three angles (yaw, pitch, roll) for each frame (head). How do I rotate a 3D axis (XYZ) by those three angles using OpenGL? I draw the 3D axes using OpenGL, but I am having trouble rotating them continuously from the three angles returned for each frame (head) by the VideoCapture camera input in my code.

    Read the article

  • Change in pitch of voice

    - by user340007
    Hi, I am creating an iPhone application in which, when I make a call to anyone, I should be able to change the pitch of my call voice in real time. Which framework or third-party library should I use for that? Thanks, Sunil.

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try to make a 3D Asteroids game. I'm really close to having my space ship fly around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations; for instance, when I pitch forward 45 degrees and then start to turn, rotation should be applied about all three axes to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn... :( So my question is: how do you calculate the X, Y, and Z rotations for an object in terms of pitch and yaw? Here are my current (almost working) calculations for the rotation acceleration:

        float accel = .75f;

        // Thrust +Y / Forward
        if (currentKeyboardState.IsKeyDown(Keys.I))
        {
            this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
            this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
            this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
        }

        // Rotation +Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.J))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
        }

        // Rotation -Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.K))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
        }

        // Rotation +X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.F))
        {
            this.ship.RotationAccelerationX += accel;
        }

        // Rotation -X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.D))
        {
            this.ship.RotationAccelerationX -= accel;
        }

    I'm combining that with drawing code that applies the rotation to the model:

        public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elapsedTime)
        {
            float seconds = (float)elapsedTime.TotalSeconds;

            // update velocity based on acceleration
            this.VelocityX += this.AccelerationX * seconds;
            this.VelocityY += this.AccelerationY * seconds;
            this.VelocityZ += this.AccelerationZ * seconds;

            // update position based on velocity
            this.PositionX += this.VelocityX * seconds;
            this.PositionY += this.VelocityY * seconds;
            this.PositionZ += this.VelocityZ * seconds;

            // update rotational velocity based on rotational acceleration
            this.RotationVelocityX += this.RotationAccelerationX * seconds;
            this.RotationVelocityY += this.RotationAccelerationY * seconds;
            this.RotationVelocityZ += this.RotationAccelerationZ * seconds;

            // update rotation based on rotational velocity
            this.RotationX += this.RotationVelocityX * seconds;
            this.RotationY += this.RotationVelocityY * seconds;
            this.RotationZ += this.RotationVelocityZ * seconds;

            Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
            Matrix rotation = Matrix.CreateRotationX(RotationX) * Matrix.CreateRotationY(RotationY) * Matrix.CreateRotationZ(RotationZ);
            model.Root.Transform = rotation * translation * world;
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = boneTransforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }
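
    For what it's worth, the standard cure for this wobble is to stop accumulating separate X/Y/Z Euler angles and keep a single orientation quaternion, applying each frame's pitch and yaw as small rotations about the ship's local axes. A sketch under that assumption (field names are hypothetical, and the quaternion multiply order may need swapping depending on convention):

        Quaternion orientation = Quaternion.Identity;   // replaces RotationX/Y/Z

        void ApplyControls(float pitchInput, float yawInput, float seconds)
        {
            // Local axes come from the current orientation, so a yaw while pitched
            // forward automatically becomes the "diagonal yaw" described above.
            Vector3 localRight = Vector3.Transform(Vector3.Right, orientation);
            Vector3 localUp = Vector3.Transform(Vector3.Up, orientation);

            orientation = Quaternion.CreateFromAxisAngle(localRight, pitchInput * seconds)
                        * Quaternion.CreateFromAxisAngle(localUp, yawInput * seconds)
                        * orientation;
            orientation.Normalize();                    // avoid drift from float error
        }

        // In Draw: Matrix rotation = Matrix.CreateFromQuaternion(orientation);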

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (...up for some suggestions though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations; although, as Steve suggests in his code, his algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... Attached is the code below with my efforts. The code includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using 'bt' command. Clearly, I'm trampimg on memory some where but not sure exactly where. When I comment out the *amdf_pitch* function, the code runs without crashing... int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal. 
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average mean difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceeding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of sythetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend real synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a sythetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at leat 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...

    Read the article

  • Complex sound handling (i.e. pitch change while looping)

    - by Matthew
    Hi everyone, I've been meaning to learn Java for a while now (I usually keep to languages like C and Lua), but buying an Android phone seems like an excellent time to start. After going through the lovely set of tutorials and a while spent buried in source code, I'm beginning to get the feel for it, so what's my next step? To dive in with a fully featured application with graphics, sound, sensor use, touch response and a full menu. Now there's a slight conundrum, since I can continue to use cryptic references to my project or risk telling you what the application is, but at the same time it's going to make me look like a raving sci-fi nerd, so bear with me for the brief... A semi-working sonic screwdriver (oh yes!). My grand idea was to make an animated screwdriver where sliding the controls up and down modulates the frequency, and that frequency dictates the sensor data it returns. Now I have a semi-working sound system, but it's pretty poor for what it's designed to represent, and I just wouldn't be happy producing a sub-par end product, whether it's my first or not.

    The problem:

        - sound must begin looping when the user presses down on the control
        - the sound must stop when the user releases the control
        - when moving the control up or down, the sound effect must change pitch accordingly
        - if the user doesn't remove their finger before backing out of the application, it must plate the casing of their device with gold (Easter egg ;P)

    I'm aware of how monolithic the first three look, and that's why I would really appreciate any help I can get. Sorry for how bad this code looks, but my general plan is to create the functional components and then refine the code later; no good painting the walls if the roof's not finished. Here's my user input; the setSlide stuff is used in the graphics for the control:

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // motion event for the screwdriver view
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                // make sure the user's at least trying to touch the slider
                if (event.getY() > SonicSlideYTop && event.getY() < SonicSlideYBottom) {
                    // power setup; I'm using 1.5 to help out the rate on SoundPool,
                    // since it likes 0.5 to 1.5
                    SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                    // just goes into a method which sets a private variable in my sound pool class
                    mSonicAudio.setPower(1, SonicPower);
                    // this handles the slide's graphics
                    setSlideY((int) event.getY());
                    // this is from my latest attempt at loop pitch change;
                    // look for this in my sound pool class below
                    mSonicAudio.startLoopedSound();
                }
            }
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                if (event.getY() > SonicSlideYTop && event.getY() < SonicSlideYBottom) {
                    SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                    mSonicAudio.setPower(1, SonicPower);
                    setSlideY((int) event.getY());
                }
            }
            if (event.getAction() == MotionEvent.ACTION_UP) {
                mSonicAudio.stopLoopedSound();
                SonicPower = 1.5f - ((event.getY() - SonicSlideYTop) / SonicSlideLength);
                mSonicAudio.setPower(1, SonicPower);
            }
            return true;
        }

    And here's where those methods end up in my sound pool class. It's horribly messy, but that's because I've been trying a ton of variants to get this to work; you'll also notice that I begin to hard-code the index, again because I was trying to get the methods to work before making them work well.

        package com.mattster.sonicscrewdriver;

        import java.util.HashMap;
        import android.content.Context;
        import android.media.AudioManager;
        import android.media.SoundPool;

        public class SoundManager {

            private float mPowerLvl = 1f;
            private SoundPool mSoundPool;
            private HashMap<Integer, Integer> mSoundPoolMap;
            private AudioManager mAudioManager;
            private Context mContext;
            private int streamVolume;
            private int LoopState;
            private long mLastTime;

            public SoundManager() {
            }

            public void initSounds(Context theContext) {
                mContext = theContext;
                mSoundPool = new SoundPool(2, AudioManager.STREAM_MUSIC, 0);
                mSoundPoolMap = new HashMap<Integer, Integer>();
                mAudioManager = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
                streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
            }

            public void addSound(int index, int SoundID) {
                mSoundPoolMap.put(1, mSoundPool.load(mContext, SoundID, 1));
            }

            public void playUpdate(int index) {
                if (LoopState == 1) {
                    long now = System.currentTimeMillis();
                    if (now > mLastTime) {
                        mSoundPool.play(mSoundPoolMap.get(1), streamVolume, streamVolume, 1, 0, mPowerLvl);
                        mLastTime = System.currentTimeMillis() + 250;
                    }
                }
            }

            public void stopLoopedSound() {
                LoopState = 0;
                mSoundPool.setVolume(mSoundPoolMap.get(1), 0, 0);
                mSoundPool.stop(mSoundPoolMap.get(1));
            }

            public void startLoopedSound() {
                LoopState = 1;
            }

            public void setPower(int index, float mPower) {
                mPowerLvl = mPower;
                mSoundPool.setRate(mSoundPoolMap.get(1), mPowerLvl);
            }
        }

    Ah ha! I almost forgot: that looks pretty ineffective, but I omitted the thread which actually updates it. Nothing fancy, it just calls:

        mSonicAudio.playUpdate(1);

    Thanks in advance, Matthew

    Read the article
