Search Results

Search found 4165 results on 167 pages for 'pulse audio'.

Page 50 of 167

  • android spectrum analysis of streaming input

    - by TheBeeKeeper
    For a school project I am trying to make an Android application that, once started, performs a spectrum analysis of live audio received from the microphone or a Bluetooth headset. I know I should be using an FFT, and I have been looking at moonblink's open source audio analyzer ( http://code.google.com/p/moonblink/wiki/Audalyzer ), but I am not familiar with Android development and his code is turning out to be too difficult for me to work with. So I suppose my questions are: are there any simpler Java-based or open source Android apps that do spectrum analysis I can reference? Or is there any helpful information on the steps needed to get the microphone input, put it into an FFT algorithm, and then display a graph of frequency and pitch over time from its output? Any help would be appreciated, thanks.
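
    A rough sketch of the pipeline being asked about, under stated assumptions (the class name and constants below are illustrative, not from Audalyzer; a real app would swap the naive DFT for a proper FFT library, and recording needs the RECORD_AUDIO permission): read 16-bit PCM windows from AudioRecord, compute magnitudes, and hand them to a view for drawing.

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        public class MicSpectrum {
            static final int SAMPLE_RATE = 8000; // 8 kHz keeps the naive DFT cheap
            static final int WINDOW = 512;       // ~64 ms of audio per analysis window

            // Naive DFT magnitudes, shown only because it is short; use a real
            // FFT (e.g. a radix-2 implementation) for anything interactive.
            static double[] magnitudes(short[] pcm) {
                double[] mag = new double[WINDOW / 2];
                for (int k = 0; k < WINDOW / 2; k++) {
                    double re = 0, im = 0;
                    for (int n = 0; n < WINDOW; n++) {
                        double angle = 2 * Math.PI * k * n / WINDOW;
                        re += pcm[n] * Math.cos(angle);
                        im -= pcm[n] * Math.sin(angle);
                    }
                    mag[k] = Math.hypot(re, im);
                }
                return mag;
            }

            public static void capture() {
                int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, WINDOW * 2));
                short[] buf = new short[WINDOW];
                rec.startRecording();
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        int read = rec.read(buf, 0, buf.length); // blocking read
                        if (read == WINDOW) {
                            double[] mag = magnitudes(buf);
                            // hand mag[] to a View/SurfaceView to draw the spectrum
                        }
                    }
                } finally {
                    rec.stop();
                    rec.release();
                }
            }
        }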

  • background music stops after 3/4 runs in iphone app

    - by amy
    I am playing sounds in a loop in my app, so the sound should continue playing throughout the app, but sometimes it stops after playing three or four times and I don't understand what is happening. I am using the Audio Toolbox framework for playing sound: I create an audio queue and then play sounds in a loop. I am also playing a song from the iPod library using MediaPlayer, and the same thing happens with the iPod song. I have set [musicPlayer setRepeatMode: MPMusicRepeatModeOne]; but it still stops after three or four plays.

  • Voices disappear when using headphones. [closed]

    - by James
    I have a pair of SteelSeries Siberia headphones. I've noticed that when watching some films the voices are completely silent, yet when I unplug the headset and listen through my speakers they are there and sound normal. I have no other software that could be interfering with it, and it happens regardless of the software I use for playback (I've tried VLC, WMP and QuickTime). It is so strange, and it almost sounds deliberate: the rest of the audio is untouched but the voices disappear. The films only have single audio tracks, and it doesn't happen with every film. Can anyone give me any hints as to what could possibly cause this? I am stumped!

  • AudioOutputUnitStart takes time

    - by tokentoken
    Hello, I'm making an iPhone game application using Core Audio and Extended Audio File Services. It works OK, but the first time I call AudioOutputUnitStart it takes about 1-2 seconds; after the second call there is no problem. For a game application, 1-2 seconds is very noticeable. (I tested this on the iPhone simulator and an iPhone 3GS.) Also, if I leave the game alone for about 10 seconds, the next call to AudioOutputUnitStart again takes time. Should I call AudioOutputUnitStart at the beginning of the application to avoid the start-up delay?

  • BufferedInputStream.read(byte[]) Causes problems. Anyone have this problem before?

    - by K-RAN
    Hello! I've written a Java program that downloads audio files for me, and I'm using BufferedInputStream. The single-byte read() works fine but is really slow, so I've tried the overloaded version that takes a byte[]. For some reason the audio becomes lossy and strange after download. I'm not totally sure what I'm doing wrong, so any help is appreciated! Here's the simplified, sloppy version of the working single-byte code:

        BufferedInputStream bin = new BufferedInputStream(
                (new URL(url)).openConnection().getInputStream());
        File file = new File(fileName);
        FileOutputStream fop = new FileOutputStream(file);
        int rd = bin.read();
        while (rd != -1) {
            fop.write(rd);
            rd = bin.read();
        }
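
    The corruption described here is most often caused by writing the whole buffer regardless of how many bytes the read actually returned; read(byte[]) may fill the array only partially. A minimal sketch of the byte[] variant, assuming the same URL/file setup as above (the helper name is illustrative and the method belongs inside a class):

        import java.io.BufferedInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.net.URL;

        static void download(String url, String fileName) throws IOException {
            try (BufferedInputStream bin = new BufferedInputStream(new URL(url).openStream());
                 FileOutputStream fop = new FileOutputStream(fileName)) {
                byte[] buffer = new byte[8192];
                int n;
                // Write exactly n bytes per pass, not buffer.length, or the
                // output file gets padded with stale bytes and sounds garbled.
                while ((n = bin.read(buffer)) != -1) {
                    fop.write(buffer, 0, n);
                }
            }
        }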

  • iPad MPMoviePlayerController only hearing audio, no videos!

    - by Steph Moreau
    I am currently rebuilding my app for the iPad. I would like to play videos sourced online. I display the information, but when I go to play the video all I get is the audio; no video is shown at all. My page looks exactly the same, except that I have some "background" noise. These are the same videos I use in the iPhone app, where they work perfectly. This is the code that I call to play my videos:

        - (IBAction)playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:url];
            [moviePlayer play];
        }

    I am using this on a button in the right-side view of a splitViewController. I get the same result in the simulator as on an iPad. Not sure if I'm missing something, but if anyone can help it would be greatly appreciated!

  • How to burn an Audio CD programmatically in Mac OS X

    - by Adion
    All the info I can find about burning CDs is either for Windows or about full programs that burn CDs. I would, however, like to be able to burn an audio CD directly from within my program. I don't mind using Cocoa or Carbon, and if there are no APIs available to do this directly, a command-line program that accepts a wav/aiff file as input would be a possibility too, as long as it can be distributed with my application. Because it will be used to burn DJ mixes to CD, it would also be great if it is possible to create different tracks without a gap between them.

  • Highlighting effect on text and/or images synchronized with audio

    - by Irfan Mulic
    I am looking at how to approach the following problem: we have an application that displays text along with recorded audio material. We use a Browser Control (Internet Explorer) in a Delphi app to do this, and we respond to events in Delphi code, setting innerHTML on elements when we have to update the style. Now the request is to add an option to dynamically move a cursor or highlight the words being spoken in the paragraph. It doesn't need to match the exact spoken word absolutely, so we will have to dynamically update the position of the highlighted word based on a timer or something similar (because it is not text-to-speech). What would be the most practical and easy approach to this kind of problem? All answers are greatly appreciated. Thanks.
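
    One common shape for the timing side of this (a sketch only; the poster's app is Delphi driving an IE control, and Java is used here just to show the logic; wordStartTimes is a hypothetical, hand-authored per-word timetable, which is needed because the audio is recorded rather than synthesized): on each timer tick, map elapsed playback time to a word index, then update that word's style in the page.

        // Given hand-authored start times (in seconds) for each word, return
        // the index of the word to highlight at elapsedSeconds.
        static int wordIndexAt(double elapsedSeconds, double[] wordStartTimes) {
            int idx = 0;
            while (idx + 1 < wordStartTimes.length
                    && wordStartTimes[idx + 1] <= elapsedSeconds) {
                idx++;
            }
            return idx;
        }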

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the javastreamingaudio class in the FreeTTS package and bypass the write method, which normally sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so in my application I have to group the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
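
    Static after a byte-to-float conversion usually means the sample layout was guessed wrong. A minimal sketch, assuming 16-bit signed little-endian mono PCM (an assumption; check the AudioFormat the FreeTTS player actually uses, especially its endianness and signedness):

        // Convert 16-bit signed little-endian PCM bytes to floats in [-1, 1].
        // One 2-byte frame per sample; swap lo/hi if the format is big-endian.
        static float[] bytesToFloats(byte[] pcm, int length) {
            float[] out = new float[length / 2];
            for (int i = 0; i < out.length; i++) {
                int lo = pcm[2 * i] & 0xFF;         // low byte, treated as unsigned
                int hi = pcm[2 * i + 1];            // high byte keeps its sign
                out[i] = ((hi << 8) | lo) / 32768f; // reassemble, then normalize
            }
            return out;
        }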

  • Problem with recording audio in Flash (Red5, ffmpeg)

    - by AT
    I'm trying to implement a small program with Flash and PHP that records audio and converts it to MP3. Currently I have a Red5 server up and running; I can connect to it with no problems and I can publish FLV recordings to the server. When I listen to the FLV with the Wimpy FLV player it seems to be fine. The problem comes when I try to convert it with ffmpeg on the command line. I'm simply using a command of the form ffmpeg -i ..., but the output wav is about 50% slower than the input: when I record 10 seconds, the output is 15 seconds long and pitched down. I've also tried all kinds of bitrate settings, the -nv option, etc., but nothing seems to work. I have a recent version of ffmpeg that supports the Nellymoser format. I don't know what to do. Anyone have any ideas?

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck on the fade effect. I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?

  • Convert Audio File to text using System.Speech

    - by Kushal Kalambi
    I am looking to convert a .wav file, recorded through an Android phone at 16000 Hz, to text using C#, namely the System.Speech namespace. My code is mentioned below:

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        label1.Text = result.Text;

    This works perfectly with the sample "Hello world" .wav file. However, when I record something on the phone and try to convert it on the PC, the converted text is nowhere close to what I recorded. Is there some way to make sure the audio file is transcribed accurately?

  • Playing audio from a wav file in iPhone SpeakHere example

    - by Mo
    I'm working with the iPhone SpeakHere example, and I would like to be able to play audio either from the mic (as in the example) or from a wav file. I have working code to play from a particular wav file, which looks like this:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"basketBall" ofType:@"wav"];
        AVAudioPlayer *theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
        theAudio.delegate = self;
        [theAudio play];

    So I'm fine with actually getting the wav to play in the application (I can hook it up to a button, etc.), but I would also like it to behave the same way pushing the "Play" button does after recording speech, in that it should be connected to the same visualization (which I have modified quite a bit, but which essentially shows the current volume, among other things). Thanks for your help!

  • Linux, C++ audio capturing (just microphone) library

    - by TheOm3ga
    I'm developing a musical game. It's like SingStar but, instead of singing, you have to play the recorder. It's called oFlute, and it's still in an early development stage. In the game, I capture the microphone input, then run a simple FFT analysis and compare the results to typical recorder frequencies, thus getting the played note. At the beginning, the audio library I was using was RtAudio, but I don't remember why I switched to PortAudio, which is what I'm currently using. The problem is that, from time to time, it either crashes randomly or stops capturing, as if no sound were coming from the microphone. My question is: what's the best option for capturing microphone input on Linux? I just need to open, read, and close a flow of bytes from the microphone. I've been reading this guide, and (un)surprisingly it says: "I don't think that PortAudio is very good API for Unix-like operating systems." So, what do you recommend?

  • Online audio stream using ruby on rails

    - by Avdept
    I'm trying to write a small website that can stream audio online (a radio station) and have a few questions:

    1. Do I have to index all my music files in a database, or can I randomly pick a file from the file system and play it?
    2. When should I use AJAX to load a new song: right after the last one finishes, or a few seconds before, to get the response from the server with the link to the file?
    3. Is it worth using AJAX at all, or is it better to make a playlist that plays all the way through and then starts over?

  • How to play an audio file on iOS

    - by Camus
    I am trying to play an audio file but I can't get it working. I imported the AVFoundation framework. Here is the code:

        NSString *fileName = [[NSBundle mainBundle] pathForResource:@"Alarm" ofType:@"caf"];
        NSURL *url = [[NSURL alloc] initFileURLWithPath:fileName];
        NSLog(@"Test: %@ ", url);
        AVAudioPlayer *audioFile = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
        audioFile.delegate = self;
        audioFile.volume = 1;
        [audioFile play];

    I am getting a "nil string parameter" error. I copied the file into the Supporting Files folder, so the file is there. Can you guys help me? Thanks

  • Audio File continues to play even on leaving the view

    - by Swastik
    What I am doing is:

        - (void)viewWillAppear:(BOOL)animated {
            [NSTimer scheduledTimerWithTimeInterval:0.3
                                             target:self
                                           selector:@selector(clickEvent:)
                                           userInfo:nil
                                            repeats:YES];
        }

        - (void)clickEvent:(NSTimer *)aTimer {
            NSDate *finishDate = [NSDate date];
            if ([finishDate timeIntervalSinceDate:self.startDate] > 11 && touched == NO) {
                NSString *mp3Path = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"test.mp3"];
                [self playMusicFile:mp3Path];
                NSLog(@"Timer from First Page");
                [aTimer invalidate];
                //[touchCheckTimer release];
                aTimer = nil;
            } else {
            }
        }

        - (void)playMusicFile:(NSString *)mp3Path {
            NSURL *mp3Url = [NSURL fileURLWithPath:mp3Path];
            NSError *err;
            AVAudioPlayer *audPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:mp3Url error:&err];
            [self setAudioPlayer1:audPlayer];
            if (audioPlayer1)
                [audioPlayer1 play];
            [audPlayer release];
        }

    Now, on pushing another view, this audio file keeps playing in the background. Please help!

  • iphone - Images (slide show) and audio synchronization

    - by Qaiser
    I have 20 images and some audio. I would like to show a single image at a time and change the images at (unequal) intervals. For example, I want to show image 1 for 1.44 seconds, image 2 for 1.67 seconds, and so on. Can someone suggest how to go about doing this, please? The examples I have seen show how to set up an array of images with one field that denotes the total time, which causes each image to show for an equal amount of time, and that's not what I am looking for.
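
    The usual pattern here is a chained one-shot timer: when each image appears, schedule the next change after that image's own duration rather than a fixed interval. A sketch of the idea in Java (the question is about the iPhone, where the analogue is re-arming an NSTimer per image; the durations array is illustrative):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class UnevenSlideshow {
            // Per-image display times in milliseconds (1.44 s, 1.67 s, ...).
            static final long[] DURATIONS_MS = {1440, 1670, 1500};

            public static void main(String[] args) {
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                showImage(timer, 0);
            }

            // Show the image at `index`, then schedule the next change after
            // that image's own duration instead of a fixed interval.
            static void showImage(ScheduledExecutorService timer, int index) {
                if (index >= DURATIONS_MS.length) {
                    timer.shutdown();
                    return;
                }
                System.out.println("showing image " + index); // real drawing code here
                timer.schedule(() -> showImage(timer, index + 1),
                               DURATIONS_MS[index], TimeUnit.MILLISECONDS);
            }
        }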

  • Background audio not working in windows 8 store / metro app

    - by roryok
    I've tried setting background audio both through a MediaElement in XAML:

        <MediaElement x:Name="MyAudio"
                      Source="Assets/Sound.mp3"
                      AudioCategory="BackgroundCapableMedia"
                      AutoPlay="False" />

    and programmatically:

        async void setUpAudio()
        {
            var package = Windows.ApplicationModel.Package.Current;
            var installedLocation = package.InstalledLocation;
            var storageFile = await installedLocation.GetFileAsync("Assets\\Sound.mp3");
            if (storageFile != null)
            {
                var stream = await storageFile.OpenAsync(Windows.Storage.FileAccessMode.Read);
                _soundEffect = new MediaElement();
                _soundEffect.AudioCategory = AudioCategory.BackgroundCapableMedia;
                _soundEffect.AutoPlay = false;
                _soundEffect.SetSource(stream, storageFile.ContentType);
            }
        }

        // and later...
        _soundEffect.Play();

    But neither works for me: as soon as I minimise the app, the music fades out.

  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and play back audio. What I would like to do is record some audio, allow the user to pause the record queue, and let them seek back and forward through the audio to select a position from which they can start recording again. I have gotten over the seeking issue by making the playback AudioQueueBuffer sizes small enough that the playback audio queue callback fires at a rate that lets the user hear the audio as they move a slider back and forth. I think I can achieve recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the audio recording queue callback. The trouble is that this only writes audio over the previously recorded audio; the file length obviously doesn't change, so if the user goes backwards and records less audio than before, the old audio still remains after the end of the newly recorded audio. Is there a way to get the AudioFile to truncate at the point where the user starts to insert the new audio? Is there some other way to remove the old audio starting at the insert position, or a better way of going about this task? Thanks

  • audio CDs not burning to mp3 format - burning to wav format in k3b and brasero using ubuntu 12.04.2

    - by robert
    It started in Ubuntu 13.04. I was doing what I usually do: I opened Brasero to make an audio CD from a few mp3 audio files. When it was burned, I noticed the files on the CD were in wav format. I then tried K3b, with the same result. At that point, and because of several issues with 13.04, I formatted my hard drive and dropped back to Ubuntu 12.04. On 12.04 I tried Brasero and K3b once again, with the same results. I know that when I used to burn CDs using Brasero, they were burned to CD in mp3 format, not wav. Can anyone tell me a fix for this? I have the restricted codecs installed.

  • Using Audio Queue Services to play PCM data over a socket connection

    - by Rohan
    I'm writing a remote desktop client for the iPhone and I'm trying to implement audio redirection. The client is connected to the server over a socket connection, and the server sends 32K chunks of PCM data at a time. I'm trying to use AQS to play the data, and it plays the first two seconds (one buffer's worth). However, since the next chunk of data hasn't come in over the socket yet, the next AudioQueueBuffer is empty. When the data comes in, I fill the next available buffer with the data and enqueue it with AudioQueueEnqueueBuffer. However, it never plays these buffers. Does the queue stop playing if there are no buffers in the queue, even if you later add a buffer? Here's the relevant part of the code:

        void wave_out_write(STREAM s, uint16 tick, uint8 index)
        {
            if (items_in_queue == NUM_BUFFERS) {
                return;
            }
            if (!playState.busy) {
                OSStatus status;
                status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback,
                                             &playState, CFRunLoopGetCurrent(), NULL, 0,
                                             &playState.queue);
                if (status == 0) {
                    for (int i = 0; i < NUM_BUFFERS; i++) {
                        AudioQueueAllocateBuffer(playState.queue, 40000, &playState.buffers[i]);
                    }
                    AudioQueueAddPropertyListener(playState.queue, kAudioQueueProperty_IsRunning,
                                                  MyAudioQueuePropertyListenerProc, &playState);
                    status = AudioQueueStart(playState.queue, NULL);
                    if (status == 0) {
                        playState.busy = True;
                    } else {
                        return;
                    }
                } else {
                    return;
                }
            }
            playState.buffers[queue_hi]->mAudioDataByteSize = s->size;
            memcpy(playState.buffers[queue_hi]->mAudioData, s->data, s->size);
            AudioQueueEnqueueBuffer(playState.queue, playState.buffers[queue_hi], 0, 0);
            queue_hi++;
            queue_hi = queue_hi % NUM_BUFFERS;
            items_in_queue++;
        }

        void AudioOutputCallback(void *inUserData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer)
        {
            PlayState *playState = (PlayState *)inUserData;
            items_in_queue--;
        }

    Thanks!

  • Play and record streaming audio

    - by Igor
    I'm working on an iPhone app that should be able to play and record streaming audio data simultaneously. Is it actually possible? I'm trying to mix the SpeakHere and AudioRecorder samples and I'm getting an empty file with no audio data... Here is my .m code:

        #import "AzRadioViewController.h"

        @implementation azRadioViewController

        static const CFOptionFlags kNetworkEvents = kCFStreamEventOpenCompleted |
            kCFStreamEventHasBytesAvailable | kCFStreamEventEndEncountered |
            kCFStreamEventErrorOccurred;

        void MyAudioQueueOutputCallback(void *inClientData,
                                        AudioQueueRef inAQ,
                                        AudioQueueBufferRef inBuffer,
                                        const AudioTimeStamp *inStartTime,
                                        UInt32 inNumberPacketDescriptions,
                                        const AudioStreamPacketDescription *inPacketDesc)
        {
            NSLog(@"start MyAudioQueueOutputCallback");
            MyData *myData = (MyData *)inClientData;
            NSLog(@"--- %i", inNumberPacketDescriptions);
            if (inNumberPacketDescriptions == 0 && myData->dataFormat.mBytesPerPacket != 0) {
                inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / myData->dataFormat.mBytesPerPacket;
            }
            OSStatus status = AudioFileWritePackets(myData->audioFile, FALSE,
                                                    inBuffer->mAudioDataByteSize, inPacketDesc,
                                                    myData->currentPacket,
                                                    &inNumberPacketDescriptions,
                                                    inBuffer->mAudioData);
            if (status == 0) {
                myData->currentPacket += inNumberPacketDescriptions;
            }
            NSLog(@"status:%i curpac:%i pcdesct: %i", status, myData->currentPacket, inNumberPacketDescriptions);
            unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);
            pthread_mutex_lock(&myData->mutex);
            myData->inuse[bufIndex] = false;
            pthread_cond_signal(&myData->cond);
            pthread_mutex_unlock(&myData->mutex);
        }

        OSStatus StartQueueIfNeeded(MyData *myData)
        {
            NSLog(@"start StartQueueIfNeeded");
            OSStatus err = noErr;
            if (!myData->started) {
                err = AudioQueueStart(myData->queue, NULL);
                if (err) {
                    PRINTERROR("AudioQueueStart");
                    myData->failed = true;
                    return err;
                }
                myData->started = true;
                printf("started\n");
            }
            return err;
        }

        OSStatus MyEnqueueBuffer(MyData *myData)
        {
            NSLog(@"start MyEnqueueBuffer");
            OSStatus err = noErr;
            myData->inuse[myData->fillBufferIndex] = true;
            AudioQueueBufferRef fillBuf = myData->audioQueueBuffer[myData->fillBufferIndex];
            fillBuf->mAudioDataByteSize = myData->bytesFilled;
            err = AudioQueueEnqueueBuffer(myData->queue, fillBuf, myData->packetsFilled, myData->packetDescs);
            if (err) {
                PRINTERROR("AudioQueueEnqueueBuffer");
                myData->failed = true;
                return err;
            }
            StartQueueIfNeeded(myData);
            return err;
        }

        void WaitForFreeBuffer(MyData *myData)
        {
            NSLog(@"start WaitForFreeBuffer");
            if (++myData->fillBufferIndex >= kNumAQBufs) myData->fillBufferIndex = 0;
            myData->bytesFilled = 0;
            myData->packetsFilled = 0;
            printf("->lock\n");
            pthread_mutex_lock(&myData->mutex);
            while (myData->inuse[myData->fillBufferIndex]) {
                printf("... WAITING ...\n");
                pthread_cond_wait(&myData->cond, &myData->mutex);
            }
            pthread_mutex_unlock(&myData->mutex);
            printf("<-unlock\n");
        }

        int MyFindQueueBuffer(MyData *myData, AudioQueueBufferRef inBuffer)
        {
            NSLog(@"start MyFindQueueBuffer");
            for (unsigned int i = 0; i < kNumAQBufs; ++i) {
                if (inBuffer == myData->audioQueueBuffer[i]) return i;
            }
            return -1;
        }

        void MyAudioQueueIsRunningCallback(void *inClientData, AudioQueueRef inAQ, AudioQueuePropertyID inID)
        {
            NSLog(@"start MyAudioQueueIsRunningCallback");
            MyData *myData = (MyData *)inClientData;
            UInt32 running;
            UInt32 size;
            OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
            if (err) {
                PRINTERROR("get kAudioQueueProperty_IsRunning");
                return;
            }
            if (!running) {
                pthread_mutex_lock(&myData->mutex);
                pthread_cond_signal(&myData->done);
                pthread_mutex_unlock(&myData->mutex);
            }
        }

        void MyPropertyListenerProc(void *inClientData,
                                    AudioFileStreamID inAudioFileStream,
                                    AudioFileStreamPropertyID inPropertyID,
                                    UInt32 *ioFlags)
        {
            NSLog(@"start MyPropertyListenerProc");
            MyData *myData = (MyData *)inClientData;
            OSStatus err = noErr;
            printf("found property '%c%c%c%c'\n",
                   (inPropertyID >> 24) & 255, (inPropertyID >> 16) & 255,
                   (inPropertyID >> 8) & 255, inPropertyID & 255);
            switch (inPropertyID) {
                case kAudioFileStreamProperty_ReadyToProducePackets: {
                    AudioStreamBasicDescription asbd;
                    UInt32 asbdSize = sizeof(asbd);
                    err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &asbdSize, &asbd);
                    if (err) {
                        PRINTERROR("get kAudioFileStreamProperty_DataFormat");
                        myData->failed = true;
                        break;
                    }
                    err = AudioQueueNewOutput(&asbd, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->queue);
                    if (err) {
                        PRINTERROR("AudioQueueNewOutput");
                        myData->failed = true;
                        break;
                    }
                    for (unsigned int i = 0; i < kNumAQBufs; ++i) {
                        err = AudioQueueAllocateBuffer(myData->queue, kAQBufSize, &myData->audioQueueBuffer[i]);
                        if (err) {
                            PRINTERROR("AudioQueueAllocateBuffer");
                            myData->failed = true;
                            break;
                        }
                    }
                    UInt32 cookieSize;
                    Boolean writable;
                    err = AudioFileStreamGetPropertyInfo(inAudioFileStream, kAudioFileStreamProperty_MagicCookieData, &cookieSize, &writable);
                    if (err) {
                        PRINTERROR("info kAudioFileStreamProperty_MagicCookieData");
                        break;
                    }
                    printf("cookieSize %d\n", cookieSize);
                    void *cookieData = calloc(1, cookieSize);
                    err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_MagicCookieData, &cookieSize, cookieData);
                    if (err) {
                        PRINTERROR("get kAudioFileStreamProperty_MagicCookieData");
                        free(cookieData);
                        break;
                    }
                    err = AudioQueueSetProperty(myData->queue, kAudioQueueProperty_MagicCookie, cookieData, cookieSize);
                    free(cookieData);
                    if (err) {
                        PRINTERROR("set kAudioQueueProperty_MagicCookie");
                        break;
                    }
                    err = AudioQueueAddPropertyListener(myData->queue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
                    if (err) {
                        PRINTERROR("AudioQueueAddPropertyListener");
                        myData->failed = true;
                        break;
                    }
                    break;
                }
            }
        }

        static void ReadStreamClientCallBack(CFReadStreamRef stream, CFStreamEventType type, void *clientCallBackInfo)
        {
            NSLog(@"start ReadStreamClientCallBack");
            if (type == kCFStreamEventHasBytesAvailable) {
                UInt8 buffer[2048];
                CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
                if (bytesRead < 0) {
                } else if (bytesRead) {
                    OSStatus err = AudioFileStreamParseBytes(globalMyData->audioFileStream, bytesRead, buffer, 0);
                    if (err) {
                        PRINTERROR("AudioFileStreamParseBytes");
                    }
                }
            }
        }

        void MyPacketsProc(void *inClientData,
                           UInt32 inNumberBytes,
                           UInt32 inNumberPackets,
                           const void *inInputData,
                           AudioStreamPacketDescription *inPacketDescriptions)
        {
            NSLog(@"start MyPacketsProc");
            MyData *myData = (MyData *)inClientData;
            printf("got data. bytes: %d packets: %d\n", inNumberBytes, inNumberPackets);
            for (int i = 0; i < inNumberPackets; ++i) {
                SInt64 packetOffset = inPacketDescriptions[i].mStartOffset;
                SInt64 packetSize = inPacketDescriptions[i].mDataByteSize;
                size_t bufSpaceRemaining = kAQBufSize - myData->bytesFilled;
                if (bufSpaceRemaining < packetSize) {
                    MyEnqueueBuffer(myData);
                    WaitForFreeBuffer(myData);
                }
                AudioQueueBufferRef fillBuf = myData->audioQueueBuffer[myData->fillBufferIndex];
                memcpy((char *)fillBuf->mAudioData + myData->bytesFilled,
                       (const char *)inInputData + packetOffset, packetSize);
                myData->packetDescs[myData->packetsFilled] = inPacketDescriptions[i];
                myData->packetDescs[myData->packetsFilled].mStartOffset = myData->bytesFilled;
                myData->bytesFilled += packetSize;
                myData->packetsFilled += 1;
                size_t packetsDescsRemaining = kAQMaxPacketDescs - myData->packetsFilled;
                if (packetsDescsRemaining == 0) {
                    MyEnqueueBuffer(myData);
                    WaitForFreeBuffer(myData);
                }
            }
        }

        - (IBAction)buttonPlayPressed:(id)sender
        {
            label.text = @"Buffering";
            [self connectionStart];
        }

        - (IBAction)buttonSavePressed:(id)sender
        {
            NSLog(@"save");
            AudioFileClose(myData.audioFile);
            AudioQueueDispose(myData.queue, TRUE);
        }

        bool getFilename(char *buffer, int maxBufferLength)
        {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *docDir = [paths objectAtIndex:0];
            NSString *file = [docDir stringByAppendingString:@"/rec.caf"];
            return [file getCString:buffer maxLength:maxBufferLength encoding:NSUTF8StringEncoding];
        }

        - (void)connectionStart
        {
            @try {
                MyData *myData = (MyData *)calloc(1, sizeof(MyData));
                globalMyData = myData;
                pthread_mutex_init(&myData->mutex, NULL);
                pthread_cond_init(&myData->cond, NULL);
                pthread_cond_init(&myData->done, NULL);
                NSLog(@"Start");
                myData->dataFormat.mSampleRate = 16000.0f;
                myData->dataFormat.mFormatID = kAudioFormatLinearPCM;
                myData->dataFormat.mFramesPerPacket = 1;
                myData->dataFormat.mChannelsPerFrame = 1;
                myData->dataFormat.mBytesPerFrame = 2;
                myData->dataFormat.mBytesPerPacket = 2;
                myData->dataFormat.mBitsPerChannel = 16;
                myData->dataFormat.mReserved = 0;
                myData->dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
                int i, bufferByteSize;
                UInt32 size;
                AudioQueueNewInput(&myData->dataFormat, MyAudioQueueOutputCallback, &myData,
                                   NULL /* run loop */, kCFRunLoopCommonModes /* run loop mode */,
                                   0 /* flags */, &myData->queue);
                size = sizeof(&myData->dataFormat);
                AudioQueueGetProperty(&myData->queue, kAudioQueueProperty_StreamDescription, &myData->dataFormat, &size);
                CFURLRef fileURL;
                char path[256];
                memset(path, 0, sizeof(path));
                getFilename(path, 256);
                fileURL = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path, strlen(path), FALSE);
                AudioFileCreateWithURL(fileURL, kAudioFileCAFType, &myData->dataFormat, kAudioFileFlags_EraseFile, &myData->audioFile);
                OSStatus err = AudioFileStreamOpen(myData, MyPropertyListenerProc, MyPacketsProc, kAudioFileMP3Type, &myData->audioFileStream);
                if (err) {
                    PRINTERROR("AudioFileStreamOpen");
                    return;
                }
                CFStreamClientContext ctxt = {0, self, NULL, NULL, NULL};
                CFStringRef bodyData = CFSTR(""); // Usually used for POST data
                CFStringRef headerFieldName = CFSTR("X-My-Favorite-Field");
                CFStringRef headerFieldValue = CFSTR("Dreams");
                CFStringRef url = CFSTR(RADIO_LOCATION);
                CFURLRef myURL = CFURLCreateWithString(kCFAllocatorDefault, url, NULL);
                CFStringRef requestMethod = CFSTR("GET");
                CFHTTPMessageRef myRequest = CFHTTPMessageCreateRequest(kCFAllocatorDefault, requestMethod, myURL, kCFHTTPVersion1_1);
                CFHTTPMessageSetBody(myRequest, bodyData);
                CFHTTPMessageSetHeaderFieldValue(myRequest, headerFieldName, headerFieldValue);
                CFReadStreamRef stream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, myRequest);
                if (!stream) {
                    NSLog(@"Creating the stream failed");
                    return;
                }
                if (!CFReadStreamSetClient(stream, kNetworkEvents, ReadStreamClientCallBack, &ctxt)) {
                    CFRelease(stream);
                    NSLog(@"Setting the stream's client failed.");
                    return;
                }
                CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                if (!CFReadStreamOpen(stream)) {
                    CFReadStreamSetClient(stream, 0, NULL, NULL);
                    CFReadStreamUnscheduleFromRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
                    CFRelease(stream);
                    NSLog(@"Opening the stream failed.");
                    return;
                }
            } @catch (NSException *exception) {
                NSLog(@"main: Caught %@: %@", [exception name], [exception reason]);
            }
        }

        - (void)viewDidLoad
        {
            [[UIApplication sharedApplication] setIdleTimerDisabled:YES];
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
        }

        - (void)viewDidUnload
        {
        }

        - (void)dealloc
        {
            [super dealloc];
        }

        @end

  • Stopping and Play button for Audio (Android)

    - by James Rattray
    I have some audio I wish to play, and I have two buttons for it: 'Play' and 'Stop'. The problem is that after I press the Stop button and then press the Play button, nothing happens. The Stop button stops the song, but I want the Play button to play the song again (from the start). Here is my code:

        final MediaPlayer mp = MediaPlayer.create(this, R.raw.megadeth);

    And then the two onClick handlers. For playing:

        button.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // Perform action on click
                button.setText("Playing!");
                try {
                    mp.prepare();
                } catch (IllegalStateException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                mp.start();
            }
        });

    And for stopping the track:

        final Button button2 = (Button) findViewById(R.id.cancel);
        button2.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                mp.stop();
                mp.reset();
            }
        });

    Can anyone see the problem with this? If so, could you please fix it? Thanks a lot... James
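
    A plausible cause (an inference, not confirmed in the question): MediaPlayer.reset() returns the player to the idle state with no data source attached, so the later prepare() throws IllegalStateException and start() plays nothing. A hedged sketch of one fix, reusing mp, button and button2 from the snippets above: "stop" by pausing and rewinding, so the player stays prepared and never needs prepare() again.

        // "Stop": pause and rewind; the player stays prepared, so no
        // prepare() call is needed before the next play.
        button2.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                if (mp.isPlaying()) {
                    mp.pause();
                    mp.seekTo(0);
                }
            }
        });

        // "Play": the player is still prepared, so start() alone restarts
        // the track (seekTo(0) above already rewound it to the beginning).
        button.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                if (!mp.isPlaying()) {
                    button.setText("Playing!");
                    mp.start();
                }
            }
        });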

  • WPF Storyboard delay in playing wma files

    - by Rita
    I'm a complete beginner in WPF and have an app that uses a Storyboard to play a sound:

        public void PlaySound()
        {
            MediaElement m = (MediaElement)audio.FindName("MySound.wma");
            m.IsMuted = false;
            FrameworkElement audioKey = (FrameworkElement)keys.FindName("MySound");
            Storyboard s = (Storyboard)audioKey.FindResource("MySound.wma");
            s.Begin(audioKey);
        }

        <Storyboard x:Key="MySound.wma">
            <MediaTimeline d:DesignTimeNaturalDuration="1.615"
                           BeginTime="00:00:00"
                           Storyboard.TargetName="MySound.wma"
                           Source="Audio\MySound.wma"/>
        </Storyboard>

    I get a horrible lag, and sometimes it takes a good 10 seconds for the sound to play. I suspect this has something to do with the fact that, no matter how long I wait, the sound doesn't get played until after I leave the function. I don't understand it: I call Begin, and nothing happens. Is there a way to replace this method, or the Storyboard object, with something that plays instantly and without a lag?
