Search Results

Search found 7178 results on 288 pages for 'audio playing'.

  • getting error while converting wav to amr using ffmpeg

    - by sohilvassa
Hello friends, I am using ffmpeg to convert AMR to WAV and WAV to AMR. It successfully converts AMR to WAV, but not vice versa: even though my ffmpeg build supports the AMR encoder and decoder, the WAV-to-AMR direction gives an error. Running

        ffmpeg -i testwav.wav audio.amr

    fails with:

        Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
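    The AMR-NB encoder only accepts 8 kHz mono input at one of its fixed bitrates (4.75-12.2 kbit/s), so if testwav.wav is not already in that shape, a likely fix is to resample explicitly. A hedged sketch reusing the file names above (older ffmpeg builds spell the bitrate flag -ab, newer ones -b:a):

        ffmpeg -i testwav.wav -ar 8000 -ac 1 -ab 12.2k audio.amr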

  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.

    It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave in a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load, and it takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, and again seeking forward, leaving the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers or whatever piling up until it's choked.

    I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
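    The underlying technique — rendering exactly N samples of sine into a buffer, so duration is fixed by the sample count rather than by timers — is language-agnostic. A minimal sketch of that idea in Java (not the poster's C#/NAudio code; the 700 Hz pitch and the amplitude are illustrative):

        import javax.sound.sampled.*;

        public class ToneBurst {
            static final float RATE = 44100f;

            // Render exactly (ms/1000)*RATE frames of sine: the burst's length is
            // fixed by the sample count, not by Sleep() or other wall-clock timers.
            static byte[] render(double freqHz, int ms) {
                int frames = Math.round(RATE * ms / 1000f);
                byte[] pcm = new byte[frames * 2];                // 16-bit mono PCM
                for (int i = 0; i < frames; i++) {
                    short s = (short) (12000 * Math.sin(2 * Math.PI * freqHz * i / RATE));
                    pcm[2 * i]     = (byte) s;                    // little-endian
                    pcm[2 * i + 1] = (byte) (s >> 8);
                }
                return pcm;
            }

            public static void main(String[] args) throws Exception {
                AudioFormat fmt = new AudioFormat(RATE, 16, 1, true, false);
                SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
                line.open(fmt);
                line.start();
                byte[] dit = render(700, 40);                     // one 40 ms burst at 700 Hz
                line.write(dit, 0, dit.length);
                line.drain();                                     // returns when playback is done
                line.close();
            }
        }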

  • How to open wav file with Lua

    - by Pete Webbo
Hello, I am trying to do some WAV processing using Lua, but have fallen at the first hurdle! I cannot find a function or library that will allow me to load a WAV file and access the raw data. There is one library, but it only allows playing of WAVs, not access to the raw data. Are there any out there? Cheers, Pete.
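    Since WAV is just a RIFF container around raw PCM, one fallback is to parse the header yourself and read the rest as sample data. A sketch of the canonical 44-byte header layout, shown in Java only to illustrate the byte offsets (real files can carry extra chunks, and the same offsets would apply from Lua via string.byte):

        import java.io.*;

        public class WavPeek {
            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
                byte[] h = new byte[44];                  // canonical PCM WAV header
                in.readFully(h);
                // WAV fields are little-endian: channels at offset 22, rate at 24, bits at 34.
                int channels   = (h[22] & 0xff) | ((h[23] & 0xff) << 8);
                int sampleRate = (h[24] & 0xff) | ((h[25] & 0xff) << 8)
                               | ((h[26] & 0xff) << 16) | ((h[27] & 0xff) << 24);
                int bits       = (h[34] & 0xff) | ((h[35] & 0xff) << 8);
                System.out.println(channels + " ch, " + sampleRate + " Hz, " + bits + "-bit");
                byte[] samples = in.readAllBytes();       // everything after the header is raw PCM
                System.out.println(samples.length + " bytes of sample data");
                in.close();
            }
        }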

  • iPhone: how to make key click sound for custom keypad?

    - by Kaffeine Coma
    Is there a way to programmatically invoke the keypad "click" sound? My app has a custom keypad (built out of UIButtons) and I'd like to provide some audio feedback when the user taps on the keys. I tried creating my own sounds in Garageband, but wasn't happy with any of my creations. If there isn't a standard way to invoke the key click, can anyone point me to a library of sounds that might have such a gem?

  • PlaySystemSound with mute switch on

    - by Sam V
I know I have to set the AudioSession to the 'playback' category, which allows audio even when the mute switch is on. This is what I do, but sound still gets muted when the switch is on:

        UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(sessionCategory), &sessionCategory);

        SystemSoundID soundID;
        NSString *path = [[NSBundle mainBundle] pathForResource:soundString ofType:@"wav"];
        AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:path], &soundID);
        AudioServicesPlaySystemSound(soundID);

  • Blackberry buffered playback demo??

    - by Bohemian
Can someone help me buffer an MP3 file from a server using the BlackBerry Buffered Playback demo app provided with the JDE? I have loaded it in the simulator and my MDS is started, but I am unable to play the audio. There is no error, but it doesn't play or load. The code looks fine. Thanks
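    A hedged guess worth checking (the URL below is a placeholder): when running in the simulator, HTTP connections on BlackBerry usually need the ;deviceside=false suffix appended to the locator so the request is routed through MDS. A minimal Java ME sketch:

        import javax.microedition.media.Manager;
        import javax.microedition.media.Player;

        public class StreamTest {
            public void playFromServer() throws Exception {
                // ";deviceside=false" routes the HTTP request through MDS.
                Player p = Manager.createPlayer(
                        "http://myserver.example.com/clip.mp3;deviceside=false");
                p.realize();   // locate the media
                p.prefetch();  // fill the initial buffer
                p.start();
            }
        }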

  • I write bad wave files using Java

    - by Cliff
I'm writing out wave files in Java using:

        AudioInputStream output = new AudioInputStream(
                new ByteArrayInputStream(rawPCMSamples),
                new AudioFormat(22000, 16, 1, true, false),
                rawPCMSamples.length);
        AudioSystem.write(output, AudioFileFormat.Type.WAVE,
                new FileOutputStream("somefile.wav"));

    And I get what appears to be corrupt wave files on OS X: they won't play from Finder. However, the same code behind a servlet, writing directly to the response stream with the Content-Type set to audio/wave, seems to play fine in QuickTime. What gives?
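    A hedged observation about the code above: the third argument to the AudioInputStream constructor is a length in sample frames, not bytes. With 16-bit mono audio each frame is two bytes, so passing rawPCMSamples.length declares twice as much data as exists, producing a WAV header whose sizes don't match the payload — some players tolerate that (which could explain the streaming case working) while others don't. A sketch of the corrected construction:

        import javax.sound.sampled.*;
        import java.io.*;

        public class WavOut {
            // rawPCMSamples holds 16-bit mono little-endian PCM, so one frame = 2 bytes.
            static void writeWav(byte[] rawPCMSamples) throws IOException {
                AudioFormat fmt = new AudioFormat(22000, 16, 1, true, false);
                AudioInputStream out = new AudioInputStream(
                        new ByteArrayInputStream(rawPCMSamples), fmt,
                        rawPCMSamples.length / fmt.getFrameSize());   // length in FRAMES
                AudioSystem.write(out, AudioFileFormat.Type.WAVE, new File("somefile.wav"));
            }
        }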

  • HOW-TO Make computer sing

    - by Ofir
    Hi, I'm trying to develop an online application where the user writes some text and the software sings it back to the user. I can currently generate the audio file with the words spoken by the computer using espeak, but I have no idea how to make it sound like a song, how to add rhythm to it. I'm able to change the pitch and tempo using rubberband, but that's as far as I've gotten. Does anyone have a clue how to make this happen?
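    A hedged sketch of one possible pipeline (the file names and parameter values are made up): render each word or syllable to a WAV with espeak, then pitch-shift and time-stretch it to the target note with the rubberband command-line tool, and concatenate the results:

        espeak -w syllable.wav "twinkle"
        rubberband --pitch 4 --time 1.5 syllable.wav note.wav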

  • calculating time duration of a file

    - by RV
Dupe of calculate playing time of a .mp3 file. I'm reading an audio file (for example WAV, MP3, etc.) and get a long value as its duration. Now I want to convert that long value into a correctly formatted time duration (like 00:05:32).
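    Assuming the long is a duration in milliseconds (if the API reports microseconds, as Java Sound's getMicrosecondLength does, divide by 1000 first), the conversion is plain integer arithmetic. A minimal Java sketch:

        // Format a millisecond duration as HH:MM:SS.
        static String formatDuration(long ms) {
            long totalSeconds = ms / 1000;
            return String.format("%02d:%02d:%02d",
                    totalSeconds / 3600,           // hours
                    (totalSeconds % 3600) / 60,    // minutes
                    totalSeconds % 60);            // seconds
        }
        // formatDuration(332000) -> "00:05:32"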

  • How to programmatically generate an MP3 podcast file with chapters and text track?

    - by adib
Hi, does anybody know how to programmatically generate MP3 files with bookmarks that can be used in iTunes / iPod / iPhone / iPod touch? Specifically, text bookmarks (bookmarks with titles) that let the listener skip to a specific point in time in the audio file. Also, how do I add the text transcription of the podcast's content? Even better if you have example Cocoa code or a library to write the MP3 file. Thanks.

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
Hi Codegurus, I have a problem with the function AudioConverterConvertBuffer. Basically, I want to convert from this format:

        _streamFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel   = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket   = 4;
        _streamFormat.mBytesPerFrame    = 4;
        _streamFormat.mFramesPerPacket  = 1;
        _streamFormat.mSampleRate       = 44100;
        _streamFormat.mReserved         = 0;

    to this format:

        _streamFormatOutput.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
        _streamFormatOutput.mBitsPerChannel   = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket   = 2;
        _streamFormatOutput.mBytesPerFrame    = 2;
        _streamFormatOutput.mFramesPerPacket  = 1;
        _streamFormatOutput.mSampleRate       = 44100;
        _streamFormatOutput.mReserved         = 0;

    What I want to do is extract one audio channel (the left or the right) from an LPCM buffer, based on the input format, to make it mono in the output format. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL,
            &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            // frames = audioBuffer.mData;
            NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize,
                                                 audioBuffer.mData, &numBytesIO, convertedBuf);
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);

            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);
            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But I get the 'insz' error (kAudioConverterErr_InvalidInputSize), so it can't convert. I would appreciate any input. Thanks in advance.

  • Is there an easy way to stream a m3u in iPhone?

    - by marty
I can have a UIWebView with the .m3u file opened, which loads the web view with a play button displayed; that automatically goes to the QuickTime player and starts playing the stream. But when I press the Done button, it goes back to the UIWebView with a little play button in the middle, and from there you can go back to the previous screen (it was selected from a table view). So I just want it to automatically load the QuickTime player in the view. How can I do that?

  • Simple wave generator with SDL in c++

    - by Vlad Popescu
I am having problems understanding how the audio part of the SDL library works. Now, I know that when you initialize it, you have to specify the frequency and a callback function, which I think is then called automatically at the given frequency. Can anyone who has worked with the SDL library write a simple example that would use SDL_audio to generate a 440 Hz square wave (since it is the simplest waveform) at a sampling frequency of 44000 Hz? Thanks in advance
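    One clarification: the callback is not called at the audio frequency; SDL calls it whenever the device needs another buffer, and the callback's only job is to fill that buffer with the next batch of samples. The fill logic itself is language-agnostic — sketched here in Java for illustration rather than SDL C++ (an SDL callback would run the same loop over its stream pointer; the amplitude is arbitrary):

        // Fill `buf` with an 8-bit square wave; `phase` persists across calls,
        // exactly as it would across SDL audio-callback invocations.
        static int fillSquare(byte[] buf, int phase, int sampleRate, int freqHz) {
            int period = sampleRate / freqHz;        // samples per cycle (100 at 44000/440)
            for (int i = 0; i < buf.length; i++) {
                buf[i] = (byte) (phase < period / 2 ? 100 : -100);  // high half, low half
                phase = (phase + 1) % period;
            }
            return phase;                            // carry phase into the next buffer
        }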

  • Best way to play wav files in the browser?

    - by Splatzone
I have no choice but to play WAV files directly in the browser (server-side encoding to MP3 isn't an option, unfortunately). What's the best way to do this? I'd really like to take advantage of the HTML5 audio tag, but my target audience includes many, many teens using IE6. As far as I'm aware, Flash isn't an option, but speedy playback really is critical. Thanks.
