Search Results

Search found 3993 results on 160 pages for 'audio'.

  • Correct way to convert 16-bit PCM WAVE data to float

    - by fredley
    I have a wave file in 16-bit PCM form. I've got the raw data in a byte[] and a method for extracting samples, and I need them in float format, i.e. a float[], to do a Fourier transform. Here's my code; does this look right? I'm working on Android, so javax.sound.sampled etc. is not available.

        private static short getSample(byte[] buffer, int position) {
            return (short) (((buffer[position + 1] & 0xff) << 8) | (buffer[position] & 0xff));
        }
        ...
        float[] samples = new float[samplesLength];
        for (int i = 0; i < input.length / 2; i += 2) {
            samples[i / 2] = (float) getSample(input, i) / (float) Short.MAX_VALUE;
        }
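    For what it's worth, one likely problem in the loop above: it steps i by 2 but only runs while i < input.length / 2, so it stops halfway through the byte array and converts only the first half of the samples. A minimal corrected sketch, assuming little-endian mono 16-bit PCM and reusing the question's getSample():

        float[] samples = new float[input.length / 2];
        for (int i = 0; i + 1 < input.length; i += 2) {
            // getSample() assembles a little-endian 16-bit sample from bytes i and i + 1
            samples[i / 2] = getSample(input, i) / (float) Short.MAX_VALUE;
        }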

    Read the article

  • Why do calls to waveOutGetPosition hang?

    - by MusiGenesis
    I'm using the winmm.dll API method waveOutGetPosition to get the current position of the playback of a WAV file. Sometimes this works as expected for me, but eventually one of the calls never returns and my application locks up. I found this thread with a few users who have experienced the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/c6a1e80e-4a18-47e7-af11-56a89f638ad7 but no solution. Has anyone run into this problem before?

    Read the article

  • What does the LAME text do in an MP3 file?

    - by Dims
    I see here http://en.wikipedia.org/wiki/MP3 that an MP3 file consists of MP3 headers interleaved with MP3 data, and that each MP3 header is only a few bytes. But here is a dump of my MP3 file with the ID3 tag cut off. The header is highlighted in blue, and you can see that the text "LAME3.96" is highlighted in green. What is it doing there? Is it part of the MP3 elementary stream, or part of some header I didn't identify?
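    An educated guess at what's going on: the LAME encoder writes a metadata block (the "Xing"/"Info" header) into the otherwise-silent first MPEG frame of the file and appends its own version string, e.g. "LAME3.96", right after it, so the text is metadata inside a valid frame rather than part of the audio proper. A small Java sketch that scans the start of a file for these markers (the 4 KB window is an arbitrary choice, not part of any spec):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class LameTagScan {
            public static void main(String[] args) throws IOException {
                byte[] data = Files.readAllBytes(Paths.get(args[0]));
                // Look through the first few KB (after any ID3 tag) for the
                // Xing/Info and LAME markers embedded in the first MPEG frame.
                String head = new String(data, 0, Math.min(data.length, 4096), "ISO-8859-1");
                for (String marker : new String[] { "Xing", "Info", "LAME" }) {
                    int at = head.indexOf(marker);
                    System.out.println(marker + (at >= 0 ? " found at byte " + at : " not found"));
                }
            }
        }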

    Read the article

  • What exactly does raw microphone data represent?

    - by esperantist
    I'm using PyAudio, a PortAudio wrapper for Python, to get data from a microphone as a continuous stream of bytes divided into chunks (of a size determined by me). I've tried to plot the signal, assuming the bytes represent the current signal amplitude, but I get an interesting image that I can't easily describe. ^^ It seems to be composed of two waves, one shifted from the other. What exactly do the particular bytes represent, and how does this change when I'm recording only one channel instead of two? Any explanations, suggestions, code snippets, anything, very welcome! (I'm new at this.) Thanks!
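    A plausible reading of that plot: with 16-bit PCM every sample is two bytes (usually little-endian), so plotting the raw byte stream draws the low bytes and high bytes as two interleaved "waves"; with two channels, left and right frames are interleaved on top of that. A sketch of the unpacking in Java for illustration (the question uses Python, but the byte layout is the same):

        // Unpack interleaved little-endian 16-bit stereo PCM into per-channel samples.
        static short[][] unpackStereo16(byte[] raw) {
            int frames = raw.length / 4; // 2 bytes per sample * 2 channels
            short[][] channels = new short[2][frames];
            for (int f = 0; f < frames; f++) {
                int base = f * 4;
                channels[0][f] = (short) ((raw[base] & 0xff) | (raw[base + 1] << 8));     // left
                channels[1][f] = (short) ((raw[base + 2] & 0xff) | (raw[base + 3] << 8)); // right
            }
            return channels;
        }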

    Read the article

  • Rapid calls to fread crash the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2) {
            params.leftChannelData = new unsigned char[params.dataSize / 2];
            params.rightChannelData = new unsigned char[params.dataSize / 2];
            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample / 8);
            for (int i = 0; i < params.dataSize; i += stride) {
                std::cout << "i = " << i << " ";
                if (isLeft) {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                } else {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        } else {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
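    Two likely culprits, offered tentatively: fread advances the stream position by itself, so `file + i` does pointer arithmetic on the FILE* handle rather than seeking and quickly hands fread a garbage pointer; and `isLeft != isLeft;` compares instead of toggling (it would need to be `isLeft = !isLeft;`), so the right-channel branch and its offset increment are never reached. The one-pass approach the question suggests sidesteps all of this; a sketch of that de-interleaving in Java for illustration (the original is C++, and the sample width is taken from the question's bitsPerSample):

        // Read the whole data chunk once, then split the interleaved stereo frames.
        // bytesPerSample = bitsPerSample / 8; left and right are each data.length / 2 long.
        static void splitChannels(byte[] data, int bytesPerSample, byte[] left, byte[] right) {
            int offset = 0;
            for (int i = 0; i + 2 * bytesPerSample <= data.length; i += 2 * bytesPerSample) {
                System.arraycopy(data, i, left, offset, bytesPerSample);
                System.arraycopy(data, i + bytesPerSample, right, offset, bytesPerSample);
                offset += bytesPerSample;
            }
        }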

    Read the article

  • byte[] to wav file

    - by John
    Hi, it would be great if you could tell me how I could save a byte[] to a WAV file. Sometimes I need to set a different sample rate, number of bits, and number of channels. Thanks for your help.
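    For context (the question is about C#, but the format is language-neutral): a WAV file is, in its simplest form, a 44-byte RIFF header recording the sample rate, bit depth, and channel count, followed by the raw PCM bytes. A minimal sketch in Java, where the standard library builds the header for you; it assumes the byte[] already holds signed little-endian PCM matching the parameters:

        import javax.sound.sampled.*;
        import java.io.*;

        static void saveWav(byte[] pcm, float sampleRate, int bits, int channels, File out)
                throws IOException {
            AudioFormat fmt = new AudioFormat(sampleRate, bits, channels,
                    true /* signed */, false /* little-endian */);
            long frames = pcm.length / fmt.getFrameSize();
            try (AudioInputStream ais =
                    new AudioInputStream(new ByteArrayInputStream(pcm), fmt, frames)) {
                AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
            }
        }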

    Read the article

  • How to play extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];
        int p = 0;
        for (int c = 0; c < 43; c++) {
            song_left[p] = header[c];
            p++;
        }
        int q = 0;
        for (s = startByte; s < length; s = s + 3) {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }
        p = 0;
        for (int c = 0; c < 43; c++) {
            song_right[p] = header[c];
            p++;
        }

    This part is reading the header and data from both the right and left channels and saving them to the arrays sLeft[] and sRight[]. This part is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    Added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e) {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?
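    Two observations, hedged as guesses at the corruption: a canonical WAV header is 44 bytes, but `c < 43` copies only 43 of them; and even with all 44 copied, the header still describes the original stereo file, so the channel count and size fields need patching for a mono half. A sketch of that patching in Java (the question is C#; the offsets are the standard RIFF/WAVE layout):

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        // Patch a canonical 44-byte WAV header to describe a mono split of a stereo file.
        static void patchMonoHeader(byte[] wav, int dataSize) {
            ByteBuffer b = ByteBuffer.wrap(wav).order(ByteOrder.LITTLE_ENDIAN);
            b.putInt(4, 36 + dataSize);                   // RIFF chunk size
            b.putShort(22, (short) 1);                    // channels = 1
            b.putInt(28, b.getInt(28) / 2);               // byte rate, halved with the channels
            b.putShort(32, (short) (b.getShort(32) / 2)); // block align, halved with the channels
            b.putInt(40, dataSize);                       // data chunk size
        }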

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's the right name for it). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cutoff frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to map the values returned by the linear function directly to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it looked as if the sweep ran much faster when moving through low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency in a non-linear manner. My next attempt was to move the filter's cutoff frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
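    The usual answer, offered as a suggestion rather than a guarantee: pitch perception is roughly logarithmic in frequency (equal ratios sound like equal steps), so a constant perceived speed comes from interpolating the cutoff exponentially between the endpoints instead of linearly:

        // Constant-ratio sweep: lf(x) in [0, 1] maps to [fMin, fMax] so that
        // equal steps in x cover equal musical intervals.
        static double cutoff(double x, double fMin, double fMax) {
            return fMin * Math.pow(fMax / fMin, x);
        }

    If the sweep still feels uneven near the extremes, interpolating on a perceptual scale such as mel or Bark is the usual next refinement.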

    Read the article

  • Simple sound effect loop using AudioToolbox

    - by Typeoneerror
    I've created a few sounds for use in my game. I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle;
        mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);

        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism which allows you to shake the device to activate some functionality. I've got the shaking code done, so I get a "shaking started" and a "shaking ended" message in my game. What I need to have happen is to start playing "shake.wav" when shaking starts and loop it until it stops. Is there a way to do this with AudioToolbox/AudioServices? How could I do this if not?

    Read the article

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then under certain conditions can begin or end writing out a file from that AudioQueue that is already instantiated. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen() {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this,
                                                 NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                                                    &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            }
            catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            }
            catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue "from now until the stop signal, start writing out this queue to tmp as a file". I understand how to tell an AudioQueue to write out as a file at the time that it's created, how to set the file's format, etc., but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks.

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. Here is an example using some code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                         FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ...
            // process the data
        }

    This works on the HTC Dream and the HTC Magic perfectly, without any log warnings or errors, but causes problems on the emulators and on a Nexus One. On the Nexus One it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2) I get weird errors from AudioFlinger and buffer overflows in the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
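    One thing worth checking, offered as a guess: the AudioRecord constructor's last argument is the internal buffer size in bytes, and a hard-coded 8000 may fall below the hardware minimum on some devices (the HTC models apparently tolerate it). Querying the device instead is the usual fix:

        // Ask the device for its minimum internal buffer instead of hard-coding one.
        int minBuf = AudioRecord.getMinBufferSize(FREQUENCY, CHANNEL, ENCODING);
        if (minBuf <= 0) {
            throw new IllegalStateException("unsupported recording parameters: " + minBuf);
        }
        // Some headroom above the minimum helps avoid overruns on slower devices.
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                         FREQUENCY, CHANNEL, ENCODING, minBuf * 2);

    Reading in chunks larger than 50 shorts would also cut the per-call overhead that can starve the UI.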

    Read the article

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but is a file on the storage card. Even though neither the load nor the play function of SoundPool throws any exception or prints anything to the console, the sound is not played. Using the same code but passing the file path string to load() works perfectly. This is how I have tried the loading (start equals 0 and length is the length of the file in milliseconds):

        FileInputStream fileIS = new FileInputStream(new File(mFile));
        mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    If I use this instead, it works:

        mStreamID = mSoundPool.load(mFile, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    Any ideas anyone? Thanks
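    A detail that may explain the silence: in SoundPool.load(FileDescriptor, long, long, int) the offset and length are byte positions within the file, not milliseconds, so a millisecond-based length usually describes a region that isn't a playable sound. A hedged sketch of the byte-based call (mSoundID replaces the question's mStreamID, since load() returns a sound ID, not a stream ID); note too that load() is asynchronous, so playing immediately after it can silently do nothing:

        File f = new File(mFile);
        FileInputStream fileIS = new FileInputStream(f);
        // Play only once the asynchronous load has finished.
        mSoundPool.setOnLoadCompleteListener((pool, sampleId, status) -> {
            if (status == 0) pool.play(sampleId, 1f, 1f, 1, 0, 1f);
        });
        // offset and length are in BYTES; this loads the whole file.
        mSoundID = mSoundPool.load(fileIS.getFD(), 0, f.length(), 1);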

    Read the article

  • Is there an easy way to stream an m3u on the iPhone?

    - by marty
    I can have a UIWebView with the .m3u file opened, which shows the web view with a play button displayed; tapping that automatically opens the QuickTime player and starts playing the stream. But when I press the Done button, it goes back to the UIWebView with a little play button in the middle, and from there you can go back to the previous screen (it was selected from a table view). So I just want it to automatically load the QuickTime player in the view. How can I do that?

    Read the article

  • Java M4A atom tagging free space issue

    - by Brett
    Hey, I've been trying to read and write iTunes-style M4A atoms, and while I've successfully done the reading part, I've come to a bit of a halt with the free-space atoms. I figured that I should be able to edit and shift the padding around to accommodate writing an atom with more data than it originally had. I've been stuck on this for about a day now, trying to figure out how to determine the closest free-space atom with enough size to accommodate the new data. So far I have:

        private freeAtom acquireFreeSpaceAtom(long position) {
            long atomStart = Long.MAX_VALUE;
            freeAtom atom = null;
            for (freeAtom a : freeSpace) {
                if (Math.abs(position - atomStart) > Math.abs(position - a.getAtomStart()))
                    atomStart = (atom = a).getAtomStart();
            }
            return atom;
        }

    That code only takes into account the closest free-space atom and completely disregards the fact that it should be greater than or equal to a certain size, but I can't quite figure out how I should check for both closeness and size efficiently.
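    One straightforward extension, sketched on the assumption that freeAtom exposes a size accessor (getAtomSize() here is hypothetical): filter by size first, then keep the nearest qualifying atom. With the handful of free atoms a typical file contains, a linear scan is efficient enough:

        private freeAtom acquireFreeSpaceAtom(long position, long requiredSize) {
            freeAtom best = null;
            long bestDistance = Long.MAX_VALUE;
            for (freeAtom a : freeSpace) {
                if (a.getAtomSize() < requiredSize)
                    continue; // too small to hold the new data
                long distance = Math.abs(position - a.getAtomStart());
                if (distance < bestDistance) {
                    bestDistance = distance;
                    best = a;
                }
            }
            return best; // null if no free atom is large enough
        }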

    Read the article

  • How to embed an MP3 into an .exe file and play it?

    - by afriza
    I am used to embedding WAV files into the .exe and playing them with PlaySound(). However, using this method causes the .exe to become pretty big. Is it possible to do the same with MP3 files, and how would I do it? I have taken a look at DirectShow, but it seems to be able to play from files only? I am developing for Windows Mobile 6 Series.

    Read the article

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object-oriented model to wrap OpenAL and have a little problem understanding the devices, buffers and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I wanted to change its listener, if I got that right.) So far, so good. What irritates me though is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work, then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)

    Read the article

  • TI-99 speech effect?

    - by kotlinski
    Hi, I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?

    Read the article
