Search Results

Search found 4165 results on 167 pages for 'pulse audio'.

  • Rapid calls to fread crash the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they're just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if(params.channels == 2)
        {
            params.leftChannelData = new unsigned char[params.dataSize/2];
            params.rightChannelData = new unsigned char[params.dataSize/2];
            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample/8);
            for(int i = 0; i < params.dataSize; i += stride)
            {
                std::cout << "i = " << i << " ";
                if(isLeft)
                {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                }
                else
                {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        }
        else
        {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
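
    A minimal sketch of the one-pass approach the post itself suggests, assuming the same params fields (WaveParams below is a hypothetical stand-in for the question's struct) and a FILE* already positioned at the start of the data chunk; memcpy stands in for memmove since the source and destination buffers never overlap:

        #include <cstdio>
        #include <cstring>

        struct WaveParams {                    // mirrors the fields the question uses
            int            channels;
            unsigned int   dataSize;           // size of the data chunk in bytes
            int            bitsPerSample;
            unsigned char* leftChannelData;
            unsigned char* rightChannelData;
        };

        bool splitStereo(FILE* file, WaveParams& params) {
            // Read the whole interleaved data chunk in one call...
            unsigned char* interleaved = new unsigned char[params.dataSize];
            if (fread(interleaved, 1, params.dataSize, file) != params.dataSize) {
                delete[] interleaved;
                return false;                  // short read: bail out
            }
            // ...then split each left+right sample pair into per-channel buffers.
            const unsigned int stride = params.bitsPerSample / 8;  // bytes per sample
            const unsigned int frame  = stride * 2;                // one L+R pair
            params.leftChannelData  = new unsigned char[params.dataSize / 2];
            params.rightChannelData = new unsigned char[params.dataSize / 2];
            unsigned int offset = 0;
            for (unsigned int i = 0; i + frame <= params.dataSize; i += frame) {
                memcpy(params.leftChannelData  + offset, interleaved + i,          stride);
                memcpy(params.rightChannelData + offset, interleaved + i + stride, stride);
                offset += stride;
            }
            delete[] interleaved;
            return true;
        }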

  • byte[] to wav file

    - by John
    Hi, it would be great if you could tell me how I can save a byte[] to a wav file. Sometimes I need to set a different sample rate, number of bits, and number of channels. Thanks for your help.
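
    The question doesn't name a language, so here is a hedged C++ sketch of the canonical 44-byte PCM WAV header; the sample rate, bit depth, and channel count are exactly the fields the post wants to vary. It assumes a little-endian host, since the format stores multi-byte fields little-endian:

        #include <cstdint>
        #include <cstdio>

        void writeWav(const char* path, const uint8_t* data, uint32_t dataSize,
                      uint32_t sampleRate, uint16_t bitsPerSample, uint16_t channels) {
            FILE* f = std::fopen(path, "wb");
            if (!f) return;
            const uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
            const uint16_t blockAlign = channels * bitsPerSample / 8;
            const uint32_t riffSize   = 36 + dataSize;   // file size minus first 8 bytes
            const uint32_t fmtSize    = 16;              // "fmt " chunk size for PCM
            const uint16_t pcm        = 1;               // format tag 1 = uncompressed PCM
            std::fwrite("RIFF", 1, 4, f); std::fwrite(&riffSize, 4, 1, f);
            std::fwrite("WAVE", 1, 4, f);
            std::fwrite("fmt ", 1, 4, f); std::fwrite(&fmtSize, 4, 1, f);
            std::fwrite(&pcm, 2, 1, f);        std::fwrite(&channels, 2, 1, f);
            std::fwrite(&sampleRate, 4, 1, f); std::fwrite(&byteRate, 4, 1, f);
            std::fwrite(&blockAlign, 2, 1, f); std::fwrite(&bitsPerSample, 2, 1, f);
            std::fwrite("data", 1, 4, f); std::fwrite(&dataSize, 4, 1, f);
            std::fwrite(data, 1, dataSize, f);
            std::fclose(f);
        }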

  • Why do calls to waveOutGetPosition hang?

    - by MusiGenesis
    I'm using the winmm.dll API method waveOutGetPosition to get the current playback position of a WAV file. Sometimes this works as expected, but eventually one of the calls never returns and my application locks up. I found this thread with a few users who have experienced the same problem, but no solution: http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/c6a1e80e-4a18-47e7-af11-56a89f638ad7 Has anyone run into this problem before?

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's the right name for it). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to map the values returned by the linear function directly to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it sounded as if the sweep ran much faster when moving through the low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency non-linearly. My next attempt was to move the filter's cut-off frequency logarithmically. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
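
    A worked form of the mapping being reached for, offered as an assumption rather than anything from the thread: pitch perception is roughly logarithmic in frequency, so a sweep feels uniform when the cutoff moves by a constant ratio per unit time, i.e. exponential interpolation between the endpoints:

        \[
            cf(x) \;=\; f_{\min}\left(\frac{f_{\max}}{f_{\min}}\right)^{lf(x)},
            \qquad lf(x) \in [0, 1]
        \]

    Equal increments of lf(x) then move the cutoff by equal numbers of octaves; for example, with f_min = 20 Hz and f_max = 20 kHz, lf = 0.5 lands at about 632 Hz (the geometric midpoint), not 10 kHz.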

  • How to play an extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];
        int p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_left[p] = header[c];
            p++;
        }
        int q = 0;
        for (s = startByte; s < length; s = s + 3)
        {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }
        p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_right[p] = header[c];
            p++;
        }

    This part reads the header and data from both the right and left channels and saves them to the arrays sLeft[] and sRight[]. This part is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    Then I added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e)
        {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?

  • Simple sound effect loop using AudioToolbox

    - by Typeoneerror
    I've created a few sounds for use in my game. I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle;
        mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);

        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism that allows you to shake the device to activate some functionality. I've got the shaking code done, so I get a "shaking started" and a "shaking ended" message in my game. What I need is to start playing "shake.wav" when shaking starts and loop it until it stops. Is there a way to do this with AudioToolbox/AudioServices? If not, how could I do it?
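
    System Sound Services has no loop parameter, so here is a hedged sketch (an assumption, not anything from the post): re-trigger the sound from a completion callback until a flag is cleared. Note the restart is not gapless; for seamless looping, AVAudioPlayer with numberOfLoops = -1 is the usual alternative.

        #include <AudioToolbox/AudioToolbox.h>

        static bool gShakeLooping = false;                   // hypothetical loop flag

        static void ShakeSoundDone(SystemSoundID soundId, void* /*clientData*/) {
            if (gShakeLooping)
                AudioServicesPlaySystemSound(soundId);       // play it again
        }

        void StartShakeLoop(SystemSoundID soundId) {
            gShakeLooping = true;
            AudioServicesAddSystemSoundCompletion(soundId, NULL, NULL,
                                                  ShakeSoundDone, NULL);
            AudioServicesPlaySystemSound(soundId);
        }

        void StopShakeLoop(SystemSoundID soundId) {
            gShakeLooping = false;
            AudioServicesRemoveSystemSoundCompletion(soundId);
        }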

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then under certain conditions can begin or end writing out a file from that already-instantiated AudioQueue. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen() {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this,
                                                 NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                                                    &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            }
            catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            }
            catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue, "from now until the stop signal, start writing this queue out to tmp as a file". I understand how to tell an AudioQueue to write out to a file at the time it's created, how to set the file's format, etc., but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks.
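
    One hedged way to approach it, mirroring SpeakHere's structure (the mRecordFile and mIsWriting members are hypothetical additions, not part of the original class): leave the queue running permanently and gate the file writes with a flag that the input buffer callback checks.

        void AQRecorder::StartWriting(CFURLRef url) {
            XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat,
                                                 kAudioFileFlags_EraseFile, &mRecordFile),
                          "AudioFileCreateWithURL failed");
            mRecordPacket = 0;
            mIsWriting = true;             // checked by MyInputBufferHandler below
        }

        void AQRecorder::StopWriting() {
            mIsWriting = false;
            AudioFileClose(mRecordFile);
        }

        // Inside MyInputBufferHandler, before the buffer is re-enqueued:
        //     if (pThis->mIsWriting && inNumPackets > 0) {
        //         XThrowIfError(AudioFileWritePackets(pThis->mRecordFile, false,
        //                           inBuffer->mAudioDataByteSize, inPacketDesc,
        //                           pThis->mRecordPacket, &inNumPackets,
        //                           inBuffer->mAudioData),
        //                       "AudioFileWritePackets failed");
        //         pThis->mRecordPacket += inNumPackets;
        //     }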

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of the SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but on the storage card. Even though neither the load nor the play function of the SoundPool throws any exception or prints anything to the console, the sound is not played. The same code, but passing the file path string to load instead, works perfectly. This is how I tried the loading (start equals 0 and length is the length of the file in milliseconds):

        FileInputStream fileIS = new FileInputStream(new File(mFile));
        mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    If I use this instead, it works:

        mStreamID = mSoundPool.load(mFile, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    Any ideas, anyone? Thanks

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. Here's an example using some code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;

        ...

        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                         FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ... // process the data
        }

    This works perfectly on the HTC Dream and HTC Magic without any log warnings/errors, but causes problems on the emulators and on a Nexus One. On the Nexus One, it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1, and 2.2), I get weird errors from the AudioFlinger and buffer overflows in the AudioRecord thread. I also get a major slowdown in UI responsiveness, even though the recording takes place in a separate thread from the UI. Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?

  • Is there an easy way to stream an m3u on iPhone?

    - by marty
    I can load the .m3u file in a UIWebView, which shows a play button; tapping it opens the QuickTime player and starts playing the stream. But when I press the Done button, it goes back to the UIWebView with a little play button in the middle, and from there you can go back to the previous screen (it was selected from a table view). So I just want it to load the QuickTime player in the view automatically. How can I do that?

  • Java M4A atom tagging free space issue

    - by Brett
    Hey, I've been trying to read and write iTunes-style M4A atoms, and while I've done the reading part successfully, I've come to a bit of a halt with the free-space atoms. I figured I should be able to edit and shift the padding around to accommodate writing an atom with more data than it originally had. I've been stuck on this for about a day now, trying to figure out how to determine the closest free-space atom with enough size to accommodate the new data. So far I have:

        private freeAtom acquireFreeSpaceAtom( long position ) {
            long atomStart = Long.MAX_VALUE;
            freeAtom atom = null;
            for( freeAtom a : freeSpace ) {
                if( Math.abs( position - atomStart ) > Math.abs( position - a.getAtomStart() ) )
                    atomStart = ( atom = a ).getAtomStart();
            }
            return atom;
        }

    That code only takes the closest free-space atom into account and completely disregards the fact that it should also be greater than or equal to a certain size, but I can't quite figure out how to check for both closeness and size efficiently.
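
    A hedged sketch of one way to satisfy both constraints (illustrative C++ with hypothetical names, not the poster's Java): reject atoms that are too small first, then minimize distance among the survivors. It is still a single linear pass, so no efficiency is lost:

        #include <cstdlib>
        #include <vector>

        struct FreeAtom { long start; long size; };    // hypothetical stand-in

        const FreeAtom* acquireFreeSpace(const std::vector<FreeAtom>& atoms,
                                         long position, long needed) {
            const FreeAtom* best = nullptr;
            for (const FreeAtom& a : atoms) {
                if (a.size < needed)
                    continue;                          // too small: skip outright
                if (!best || std::labs(position - a.start) <
                             std::labs(position - best->start))
                    best = &a;                         // closest so far that fits
            }
            return best;                               // null if nothing fits
        }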

  • How to embed an mp3 into an .exe file and play it?

    - by afriza
    I'm used to embedding WAV files into the .exe and playing them with PlaySound(). However, using this method makes the .exe pretty big. Is it possible to do the same with MP3 files, and how? I have taken a look at DirectShow, but it seems to be able to play from files only. I am developing for the Windows Mobile 6 series.

  • TI-99 speech effect?

    - by kotlinski
    Hi, I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object-oriented model to wrap OpenAL and have a little problem understanding devices, buffers, and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I want to change its listener, if I got that right.) So far, so good. What irritates me, though, is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work, then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)
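
    A minimal sketch of the usual reading of the OpenAL 1.1 spec (worth verifying against a given implementation): alGenBuffers operates through whichever context is current, and the buffers it creates belong to that context's device. They are shared by every context on that same device, but not across devices, and closing the device invalidates them.

        #include <AL/al.h>
        #include <AL/alc.h>

        int main() {
            // A context must be current before alGenBuffers can do anything.
            ALCdevice*  device  = alcOpenDevice(nullptr);            // default device
            ALCcontext* context = alcCreateContext(device, nullptr);
            alcMakeContextCurrent(context);

            ALuint buffer = 0;
            alGenBuffers(1, &buffer);    // belongs to `device`, not to `context`
            // ...alBufferData(buffer, ...), then attach it to sources in any
            // context created on this same device.

            alDeleteBuffers(1, &buffer);
            alcMakeContextCurrent(nullptr);
            alcDestroyContext(context);
            alcCloseDevice(device);      // after this, the buffer would be gone too
            return 0;
        }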

  • Using Finch for the first time: how to play mp3, ogg, or other formats (wav files too big)?

    - by Allisone
    My *.wav files work as expected, but wav files are too big, so I want to play *.mp3 or *.ogg instead; that doesn't work. I use these lines of code, found in the Finch demo project:

        engine = [[Finch alloc] init];
        sitar = [[Sound alloc] initWithFile:RSRC(@"sitar.wav")];
        [sitar play];

    So I only change sitar.wav to my .mp3 filename. Note 1: it doesn't have to be mp3 or ogg; any file format not as huge as wav would be fine, but which? Note 2: I didn't know how to use sound, so I searched and found Finch here at Stack Overflow. It looks easy, so I would like to use it, but if you know some other easy way to play those sound files (ambient plus effects sounds with a compressed codec), I would also switch to that other technique.

  • Is it possible to programmatically edit a sound file based on frequency?

    - by K-RAN
    Just wondering if it's possible to go through a flac, mp3, wav, etc. file and edit portions of it, or the entire file, by removing sections based on a specific frequency range. So, for example, I have a recording of a friend reciting a poem with a few percussion instruments in the background. Could I write a C program that goes through the entire file and cuts out everything except the vocals (the human voice ranges from 85-255 Hz, from what I've been reading)? Thanks in advance for any ideas!
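
    As a hedged illustration of the core operation (my example, not from the post): once the audio is decoded to raw PCM (flac/mp3 need a decoder such as libsndfile or minimp3 in front), a band-pass filter can attenuate everything outside a chosen range. An RBJ-cookbook biquad is a common choice, though note that isolating vocals this way is crude, since speech and percussion overlap in frequency:

        #include <cmath>
        #include <vector>

        // RBJ-cookbook band-pass (constant 0 dB peak gain) over float PCM samples.
        std::vector<float> bandpass(const std::vector<float>& x,
                                    float sampleRate, float centerHz, float q) {
            const float kPi   = 3.14159265f;
            const float w0    = 2.0f * kPi * centerHz / sampleRate;
            const float alpha = std::sin(w0) / (2.0f * q);
            const float a0    = 1.0f + alpha;
            const float b0    =  alpha / a0;                  // b1 is zero for this filter
            const float b2    = -alpha / a0;
            const float a1    = -2.0f * std::cos(w0) / a0;
            const float a2    = (1.0f - alpha) / a0;

            std::vector<float> y(x.size());
            float x1 = 0, x2 = 0, y1 = 0, y2 = 0;             // filter state
            for (size_t n = 0; n < x.size(); ++n) {
                y[n] = b0 * x[n] + b2 * x2 - a1 * y1 - a2 * y2;
                x2 = x1; x1 = x[n];
                y2 = y1; y1 = y[n];
            }
            return y;
        }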

  • Android - can't read TXT files from SD card on a real machine?

    - by JustMe
    Hello! When I run the code below in the Android emulator (1.5), it works well: the TextSwitcher shows the first 80 chars of each txt file in /sdcard/documents/. But when I run it on my Samsung Galaxy i7500 (1.6), no contents appear in the TextSwitcher, although the filenames of the txt files do show up in LogCat. My code:

        public void getTxtFiles() {
            // Scan /sdcard/documents and put .txt files in array File TxtFiles[]
            String path = Environment.getExternalStorageDirectory().toString() + "/documents/";
            String files;
            File folder = new File(path);
            if (folder.exists() == false) {
                if (!folder.mkdirs()) {
                    Log.e("TAG", "Create dir in sdcard failed");
                    return;
                }
            } else {
                File listOfFiles[] = folder.listFiles();
                for (int i = 0; i < listOfFiles.length; i++) {
                    if (listOfFiles[i].isFile()) {
                        files = listOfFiles[i].getName();
                        if (files.endsWith(".txt") || files.endsWith(".TXT")) {
                            if ((files.length() - 1) > i) { resizeArray(TxtFiles, files.length() + 10); }
                            TxtFiles[i] = listOfFiles[i];
                            System.out.println(TxtFiles[i]);
                        }
                    }
                }
            }
        }

        private void updateCounter(int Pozicija) {
            if (Pozicija < 0) {
                Toast.makeText(getApplicationContext(), R.string.LastTxt, 5).show();
                mCounter++;
            } else if (TxtFiles[mCounter] != null) {
                TextToShow = getContents(TxtFiles[mCounter]);
                if (TextToShow.length() > 80) TextToShow = TextToShow.substring(0, 80);
                mSwitcher.setText(TextToShow);
                System.out.println(Pozicija);
            } else mCounter--;
        }

        static public String getContents(File aFile) {
            // ...checks on aFile are elided
            StringBuilder contents = new StringBuilder();
            try {
                // use buffering, reading one line at a time
                // FileReader always assumes default encoding is OK!
                BufferedReader input = new BufferedReader(new FileReader(aFile));
                try {
                    String line = null; // not declared within while loop
                    /*
                     * readLine is a bit quirky:
                     * it returns the content of a line MINUS the newline.
                     * it returns null only for the END of the stream.
                     * it returns an empty String if two newlines appear in a row.
                     */
                    while ((line = input.readLine()) != null) {
                        contents.append(line);
                        contents.append(System.getProperty("line.separator"));
                    }
                } finally {
                    input.close();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            return contents.toString();
        }

    And I am able to write the contents of those files through LogCat! Any ideas?
