Search Results

Search found 4698 results on 188 pages for 'audio recording'.

Page 56 of 188

  • Novocaine - How to loop file playback? (iOS)

    - by lppier
    I'm using alexbw's Novocaine library for my audio project, and I'm playing around with its example code for file reading. The file plays back with no problem. I would like to loop this recording, with a gap between the loops. Any suggestions as to how I can do so? Thanks, Pier.

        // AUDIO FILE READING OHHH YEAHHHH
        // ========================================
        NSArray *pathComponents = [NSArray arrayWithObjects:
            [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject],
            @"testrecording.wav", nil];
        NSURL *inputFileURL = [NSURL fileURLWithPathComponents:pathComponents];
        NSLog(@"URL: %@", inputFileURL);

        fileReader = [[AudioFileReader alloc] initWithAudioFileURL:inputFileURL
                                                      samplingRate:audioManager.samplingRate
                                                       numChannels:audioManager.numOutputChannels];
        [fileReader play];
        [fileReader setCurrentTime:0.0];
        //float duration = fileReader.getDuration;

        [audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
            [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
            NSLog(@"Time: %f", [fileReader getCurrentTime]);
        }];
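
    A minimal sketch of one way to do it, assuming the AudioFileReader API shown above (getDuration, getCurrentTime, setCurrentTime:); silentFramesLeft and gapFrames are names invented for the example. The output block watches for the end of the file, plays a fixed stretch of silence, then rewinds:

        __block UInt32 silentFramesLeft = 0;
        const UInt32 gapFrames = 44100;   // about 1 s of gap (assumption)
        float duration = [fileReader getDuration];

        [audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
            if (silentFramesLeft > 0) {
                // inside the gap: output silence and count down
                memset(data, 0, numFrames * numChannels * sizeof(float));
                silentFramesLeft = (silentFramesLeft > numFrames)
                                 ? silentFramesLeft - numFrames : 0;
                if (silentFramesLeft == 0)
                    [fileReader setCurrentTime:0.0];  // rewind for the next pass
                return;
            }
            [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
            if ([fileReader getCurrentTime] >= duration)
                silentFramesLeft = gapFrames;         // file finished: start the gap
        }];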

    Read the article

  • How do I convert text to a WAV file with an inaudible waveform?

    - by Scott
    I am trying to create an audio watermarking system. I figure the best solution is to create an audio file (WAV) based on a unique string of text and then combine this with the original WAV. The parts that make this tricky (for me, anyway) are: How do I convert the text string to a WAV? And how do I ensure that the resulting waveform is inaudible (or at least barely noticeable to the listener)? I would prefer this be done server-side (via PHP, etc.), but if the processing load isn't too much I would be OK with something in Flash or JavaScript. I'd be willing to pay someone to create a workable solution for me (complete source code that functions as described). Thanks, Scott!
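
    A naive illustration of the first step (sketched in Python for brevity; the same arithmetic ports to PHP): encode each bit of the text as the presence or absence of a short burst of a high-frequency, very low-amplitude tone, and write the result as a WAV. Mixing it with the original is then per-sample addition. All constants here are assumptions to tune by ear, and a robust watermark (spread spectrum, echo hiding) is considerably more involved than this:

        import math
        import struct
        import wave

        RATE = 44100           # sample rate, Hz
        CARRIER = 18000.0      # near the edge of hearing (assumption)
        BIT_LEN = RATE // 50   # samples per bit, i.e. 50 bits per second
        AMP = 0.005            # very low amplitude so the tone stays unobtrusive

        def text_to_bits(text):
            return [(ord(c) >> i) & 1 for c in text for i in range(8)]

        def watermark_wav(text, path):
            samples = []
            for bit in text_to_bits(text):
                for n in range(BIT_LEN):
                    v = AMP * math.sin(2 * math.pi * CARRIER * n / RATE) if bit else 0.0
                    samples.append(int(v * 32767))
            with wave.open(path, "wb") as w:
                w.setnchannels(1)      # mono
                w.setsampwidth(2)      # 16-bit samples
                w.setframerate(RATE)
                w.writeframes(struct.pack("<%dh" % len(samples), *samples))

        watermark_wav("unique-id-1234", "mark.wav")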

    Read the article

  • How to configure the frame size using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture microphone samples and encode them to MP3 with ffmpeg. First I configure the audio:

        /**
         * We need to specify the format we want to work with.
         * We use linear PCM because it is uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame, at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData)
        {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;

            // a variable where we check the status
            OSStatus status;

            /** This is the reference to the object that owns the callback. */
            AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;

            /** Here we define the number of channels, which is mono for the iPhone.
                The number of frames is usually 512 or 1024. */
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // sample size
            buffer.mNumberChannels = 1;                             // one channel
            buffer.mData = malloc(inNumberFrames * sizeof(SInt16)); // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags,
                                     inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);
            //NSLog(@"RECORD");
            return noErr;
        }

    The callback reports:

        inBusNumber = 1
        inNumberFrames = 1024
        inTimeStamp = 80444304 // always the same inTimeStamp, which is strange

    However, the frame size I need to encode MP3 is 1152. How can I configure that? Buffering implies a delay, which I would like to avoid because this is a real-time app. With the configuration above, each buffer leaves me with trailing garbage: 1152 - 1024 = 128 bad samples. All samples are SInt16.
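
    You generally cannot force the remote I/O unit to hand you 1152-frame buffers; the hardware delivers its own slice sizes. The usual fix is a small FIFO between the callback and the encoder, which repackages whatever arrives into exact 1152-sample frames at a cost of at most one MP3 frame of extra latency (1152 samples is 144 ms at 8 kHz). A minimal C sketch; encodeMp3Frame is a hypothetical stand-in for your ffmpeg call and the FIFO size is an illustrative choice:

        #include <string.h>

        #define MP3_FRAME 1152

        void encodeMp3Frame(const SInt16 *frame, size_t n); /* hypothetical: your ffmpeg hookup */

        static SInt16 fifo[4 * MP3_FRAME];   /* headroom for several callbacks */
        static size_t fifoCount = 0;

        /* Call this from recordingCallback with the rendered samples. */
        static void feedEncoder(const SInt16 *samples, size_t n)
        {
            /* append the new samples (with 1024-frame callbacks this never
               overflows the 4 * MP3_FRAME headroom) */
            memcpy(fifo + fifoCount, samples, n * sizeof(SInt16));
            fifoCount += n;

            /* drain every complete 1152-sample frame */
            size_t offset = 0;
            while (fifoCount - offset >= MP3_FRAME) {
                encodeMp3Frame(fifo + offset, MP3_FRAME);
                offset += MP3_FRAME;
            }

            /* keep the remainder for the next callback */
            memmove(fifo, fifo + offset, (fifoCount - offset) * sizeof(SInt16));
            fifoCount -= offset;
        }

    This removes the 128 "trash" samples entirely: nothing is encoded until a full 1152-sample frame exists, and leftovers are carried over rather than padded.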

    Read the article

  • Signal amplitude against time in Java

    - by wsr74ws84
    I'm racking my brain trying to solve a knotty problem (at least for me). While playing an audio file in Java, I want the signal amplitude to be displayed against time: I'd like to implement a small panel showing a sort of oscilloscope (rather than a spectrum analyzer). The audio signal should be viewed in the time domain, with amplitude on the vertical axis and time on the horizontal axis. Does anyone know how to do this? Is there a good tutorial I can rely on? Since I know very little about Java, I hope someone can help me.
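
    A minimal Swing sketch of the time-domain display, assuming a 16-bit little-endian mono PCM file (class and variable names are illustrative, not from any tutorial). It decodes the whole file up front and paints one vertical line per pixel; a live oscilloscope would redraw a sliding window on a timer while the clip plays:

        import javax.sound.sampled.*;
        import javax.swing.*;
        import java.awt.*;
        import java.io.*;

        public class WaveformPanel extends JPanel {
            private final float[] samples;

            public WaveformPanel(File wav) throws Exception {
                AudioInputStream in = AudioSystem.getAudioInputStream(wav);
                byte[] raw = in.readAllBytes();              // Java 9+; loop read() on older JDKs
                samples = new float[raw.length / 2];
                for (int i = 0; i < samples.length; i++) {
                    int lo = raw[2 * i] & 0xff, hi = raw[2 * i + 1];
                    samples[i] = ((hi << 8) | lo) / 32768f;  // scale to -1.0 .. 1.0
                }
            }

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int w = getWidth(), mid = getHeight() / 2;
                for (int x = 0; x < w; x++) {
                    int i = (int) ((long) x * samples.length / w); // pixel -> sample index
                    g.drawLine(x, mid, x, mid - (int) (samples[i] * mid));
                }
            }

            public static void main(String[] args) throws Exception {
                JFrame f = new JFrame("Waveform");
                f.add(new WaveformPanel(new File(args[0])));
                f.setSize(800, 300);
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.setVisible(true);
            }
        }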

    Read the article

  • Preferred method for looping sound in Flash AS3

    - by Brian Heylin
    Hi there, I'm having some issues with looping a sound in Flash AS3: when I tell the sound to loop, I get a slight delay at the end/beginning of the audio. The audio is clipped correctly and plays without a gap in GarageBand. I know there are issues with sound in Flash in general: bugs with encodings and inaccuracies in the SOUND_COMPLETE event (and Adobe should be embarrassed by its handling of these issues). I have tried the built-in loop argument of the Sound class's play method, and also reacting to the SOUND_COMPLETE event, but both cause a delay. Has anyone come up with a technique for looping a sound without any noticeable gap?
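
    The technique that has become standard for this is to bypass play()'s loop argument entirely: create an empty Sound, subscribe to its SAMPLE_DATA event, and pull samples out of the loaded sound yourself with extract(), wrapping the read position when you hit the end. A sketch with illustrative names; 'source' stands for your loaded, correctly clipped Sound:

        import flash.media.Sound;
        import flash.events.SampleDataEvent;

        var source:Sound;                 // assign your loaded Sound before output.play()
        var position:Number = 0;          // read position, in sample frames
        var output:Sound = new Sound();
        output.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
        output.play();

        function onSampleData(e:SampleDataEvent):void {
            var needed:Number = 8192;     // frames Flash wants per callback
            while (needed > 0) {
                var got:Number = source.extract(e.data, needed, position);
                if (got < needed) {
                    position = 0;         // reached the end: wrap to the start
                } else {
                    position += got;
                }
                needed -= got;
            }
        }

    One caveat: MP3 files carry encoder padding at the start, so a truly seamless MP3 loop usually also needs to skip a fixed number of priming samples; with a correctly trimmed uncompressed source the sketch above should loop cleanly.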

    Read the article

  • HTML5 - Callback when media is ready on iPad won't work

    - by Kap
    I'm trying to add a callback to an HTML5 audio element on an iPad. I added an event listener to the element; myOtherThing() starts, but there is no sound. If I pause and then play the sound again, the audio starts. This works in Chrome. Does anyone have an idea how I can do this?

        myAudioElement.src = "path_to_file";
        myAudioElement.addEventListener("canplay", function () {
            myAudioElement.play();
            myOtherThing.start();
        });

    SOLVED: Just wanted to share my solution here, in case someone else needs it. As far as I understand, the iPad does not trigger any media events without user interaction. So to be able to use "canplay", "playing" and all the other events, you need to use the built-in media controller. Once you press play in that controller, the events get triggered. After that you can use your custom interface.
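
    For reference, a sketch of the other common workaround: start playback from an explicit user gesture (iOS allows play() inside a touch or click handler), after which the media events fire normally. The playButton element and myOtherThing are illustrative names:

        var myAudioElement = new Audio("path_to_file");
        myAudioElement.addEventListener("canplay", function () {
            myOtherThing.start();      // follow-up logic once data is ready
        });
        document.getElementById("playButton").addEventListener("click", function () {
            myAudioElement.play();     // the user gesture unlocks playback
        });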

    Read the article

  • Question on ExtAudioFileRead and AudioBuffer for iPhone SDK

    - by backspacer
    I'm developing an iPhone app that uses Extended Audio File Services. I use ExtAudioFileRead to read the audio file and store the data in an AudioBufferList structure. AudioBufferList is defined as:

        struct AudioBufferList {
            UInt32      mNumberBuffers;
            AudioBuffer mBuffers[1];
        };
        typedef struct AudioBufferList AudioBufferList;

    and AudioBuffer is defined as:

        struct AudioBuffer {
            UInt32 mNumberChannels;
            UInt32 mDataByteSize;
            void   *mData;
        };
        typedef struct AudioBuffer AudioBuffer;

    I want to manipulate mData, but I wonder what the void* means. Why is it void*? How can I decide what data type is actually stored in mData?
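
    mData is void* because the element type is whatever you asked the API to deliver: the client data format you set on the ExtAudioFile decides how the bytes should be interpreted, and mDataByteSize tells you how many there are. A sketch, assuming you request 16-bit signed integer PCM (so mData holds SInt16 samples):

        AudioStreamBasicDescription clientFormat = {0};
        clientFormat.mSampleRate       = 44100.0;
        clientFormat.mFormatID         = kAudioFormatLinearPCM;
        clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        clientFormat.mChannelsPerFrame = 2;
        clientFormat.mBitsPerChannel   = 16;
        clientFormat.mBytesPerFrame    = clientFormat.mChannelsPerFrame * sizeof(SInt16);
        clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
        clientFormat.mFramesPerPacket  = 1;
        ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(clientFormat), &clientFormat);

        /* ... after ExtAudioFileRead(extFile, &frameCount, &bufferList) ... */
        SInt16 *samples = (SInt16 *)bufferList.mBuffers[0].mData;  /* safe: we chose SInt16 */
        UInt32 sampleCount = bufferList.mBuffers[0].mDataByteSize / sizeof(SInt16);

    Had you requested floating-point output instead (kAudioFormatFlagIsFloat, 32 bits per channel), the same bytes would be cast to float*; the struct stays generic so one definition serves every format.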

    Read the article

  • Sound loop breaks after some time in background music in iPhone app

    - by amy
    I am playing sounds in a loop in my app, so the sound should continue playing throughout the app. But sometimes it stops after playing the sound three or four times, and I don't understand what is happening. I am using the Audio Toolbox framework for playing sound: creating an audio queue and then playing sounds in a loop. I am also playing songs from the iPod library using MediaPlayer, and the same thing happens with those: I have set [musicPlayer setRepeatMode:MPMusicRepeatModeOne]; but it still stops after three or four plays.
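
    If the audio-queue loop keeps stalling, one low-friction alternative for a simple endless background loop is AVAudioPlayer, whose numberOfLoops = -1 means "repeat until stopped". A minimal sketch (the file name is illustrative; keep a strong reference to the player, e.g. in an ivar, or it will be deallocated and playback will stop):

        #import <AVFoundation/AVFoundation.h>

        NSURL *url = [[NSBundle mainBundle] URLForResource:@"background"
                                             withExtension:@"caf"];
        AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url
                                                                       error:NULL];
        player.numberOfLoops = -1;   // negative count = loop indefinitely
        [player prepareToPlay];
        [player play];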

    Read the article

  • Can FLV AAC stream be played in Android

    - by HariKJ
    Hi, I'm trying to build a radio player, and the client is providing a stream that is an FLV container with AAC audio. When I read the headers, it shows up as audio/aacp. I have tried every approach I could find:

    1) Streaming through MediaPlayer (does not work).
    2) Using the NPR-style approach of a proxy stream (I get a broken pipe exception).
    3) Playing it in chunks (plays, but I need the SD card and the playback is not very good).
    4) Using the GPL'd FAAD2 library (but I would have to pay the royalty fee).

    Can someone help me figure this issue out? The last option I have is to ask my client to change the stream to an MP3 container (which I know works). Regards, Hari

    Read the article

  • Android spectrum analysis of streaming input

    - by TheBeeKeeper
    For a school project I am trying to make an Android application that, once started, performs a spectrum analysis of live audio received from the microphone or a Bluetooth headset. I know I should be using an FFT, and I have been looking at moonblink's open-source audio analyzer (http://code.google.com/p/moonblink/wiki/Audalyzer), but I am not familiar with Android development and his code is turning out to be too difficult for me to work with. So I suppose my questions are: are there any simpler Java-based or open-source Android apps that do spectrum analysis which I could reference? Or is there any helpful information on the steps needed to get the microphone input, put it through an FFT algorithm, and then display a graph of frequency and pitch over time from its output? Any help would be appreciated, thanks.
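
    A bare-bones capture-and-analyze loop, sketched with AudioRecord and a naive DFT so every step is visible (a real app would swap in a proper FFT and draw the magnitudes on a view, and needs the RECORD_AUDIO permission). Class and constant names are illustrative:

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        public class MicSpectrum {
            static final int RATE = 8000, N = 512;

            public static void run() {
                int minBuf = AudioRecord.getMinBufferSize(RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                        Math.max(minBuf, N * 2));
                short[] frame = new short[N];
                double[] mag = new double[N / 2];
                rec.startRecording();
                while (!Thread.interrupted()) {
                    rec.read(frame, 0, N);                 // one analysis window
                    for (int k = 0; k < N / 2; k++) {      // naive DFT, O(N^2)
                        double re = 0, im = 0;
                        for (int n = 0; n < N; n++) {
                            double a = 2 * Math.PI * k * n / N;
                            re += frame[n] * Math.cos(a);
                            im -= frame[n] * Math.sin(a);
                        }
                        mag[k] = Math.sqrt(re * re + im * im); // bin k is k*RATE/N Hz
                    }
                    // hand 'mag' to the UI thread for drawing here
                }
                rec.stop();
                rec.release();
            }
        }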

    Read the article

  • Voices disappear when using headphones. [closed]

    - by James
    I have a pair of SteelSeries Siberia headphones. I've noticed that when watching some films the voices are completely silent, yet when I unplug the headset and listen through my speakers they are there and sound normal. I have no other software that could be interfering with it, and it happens regardless of the software I use for playback (I've tried VLC, WMP and QuickTime). It is so strange, and it almost sounds deliberate: the rest of the audio is untouched, but the voices disappear. The films only have single audio tracks, and it doesn't happen with every film. Can anyone give me any hints as to what could possibly cause this? I am stumped!

    Read the article

  • AudioOutputUnitStart takes time

    - by tokentoken
    Hello, I'm making an iPhone game using Core Audio and Extended Audio File Services. It works OK, but the first call to AudioOutputUnitStart takes about 1-2 seconds; after the second call there is no problem. For a game, 1-2 seconds is very noticeable. (I tested this on the iPhone simulator and an iPhone 3GS.) Also, if I leave the game idle for about 10 seconds, the next call to AudioOutputUnitStart takes time again. Should I call AudioOutputUnitStart at the beginning of the application to avoid the start-up delay?
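
    The usual pattern is exactly what the poster suspects: start the output unit once at launch and never stop it, rendering silence whenever the game has nothing to play, so later sounds begin instantly. A sketch of such a render callback (gGameAudioActive is a hypothetical flag your game sets):

        static OSStatus renderCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber, UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
        {
            if (!gGameAudioActive) {
                // nothing to play: emit silence but keep the unit running
                for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
                    memset(ioData->mBuffers[i].mData, 0,
                           ioData->mBuffers[i].mDataByteSize);
                *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
                return noErr;
            }
            /* ... mix game audio into ioData as usual ... */
            return noErr;
        }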

    Read the article

  • BufferedInputStream.read(byte[]) Causes problems. Anyone have this problem before?

    - by K-RAN
    Hello! I've written a Java program that downloads audio files for me, using BufferedInputStream. The single-byte read() works fine but is really slow, so I've tried the overloaded version that takes a byte[]. For some reason, the audio becomes lossy and strange after download. I'm not totally sure what I'm doing wrong, so any help is appreciated! Here's the simplified, sloppy version of the code:

        BufferedInputStream bin = new BufferedInputStream(
                (new URL(url)).openConnection().getInputStream());
        File file = new File(fileName);
        FileOutputStream fop = new FileOutputStream(file);

        int rd = bin.read();
        while (rd != -1) {
            fop.write(rd);
            rd = bin.read();
        }
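
    The usual cause of "lossy" audio with read(byte[]) is writing the whole buffer regardless of how many bytes the call actually filled: read() may return fewer bytes than the buffer holds, and the stale tail then corrupts the file. A sketch of the correct loop, which is also fast:

        byte[] buf = new byte[8192];
        int n;
        while ((n = bin.read(buf)) != -1) {
            fop.write(buf, 0, n);   // write only the bytes this read() returned
        }
        fop.close();
        bin.close();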

    Read the article

  • iPad MPMoviePlayerController only hearing audio, no videos!

    - by Steph Moreau
    I am currently rebuilding my app for the iPad. I would like to play videos sourced online. I display the information, but when I go to play the video all I get is the audio; no video is shown at all. My page looks exactly the same, except that I have some "background" noise. These are the same videos I use in the iPhone app, where they work perfectly. This is the code I call to play my videos:

        - (IBAction)playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerController *moviePlayer =
                [[MPMoviePlayerController alloc] initWithContentURL:url];
            [moviePlayer play];
        }

    I am using this on a button in the right-hand view of a UISplitViewController. I get the same result in the simulator as on an iPad. Not sure if I'm missing something, but if anyone can help it would be greatly appreciated!
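
    A likely culprit: as of iOS 3.2, MPMoviePlayerController no longer presents itself full-screen automatically, so on the iPad you either add its view to your hierarchy with a frame, or wrap it in MPMoviePlayerViewController. A sketch of the wrapper route, hedged since the surrounding controller setup isn't shown:

        - (IBAction)playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerViewController *playerVC =
                [[MPMoviePlayerViewController alloc] initWithContentURL:url];
            [self presentMoviePlayerViewControllerAnimated:playerVC];
        }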

    Read the article

  • How to burn an Audio CD programmatically in Mac OS X

    - by Adion
    All the info I can find about burning CDs is either for Windows or about complete CD-burning applications. I would, however, like to burn an audio CD directly from within my program. I don't mind using Cocoa or Carbon, and if there are no APIs available to do this directly, a command-line program that accepts a WAV/AIFF file as input would be a possibility too, if it can be distributed with my application. Because it will be used to burn DJ mixes to CD, it would also be great if it were possible to create separate tracks without a gap between them.
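
    Two native routes exist on Mac OS X: the DiscRecording framework (Cocoa/Carbon APIs, including per-track pregap control, which a gapless DJ mix needs) and the drutil command-line tool that ships with the OS and can be invoked from your app. A shell sketch of the latter; treat the exact flags as an assumption and verify them against man drutil on your target OS version:

        # Burn the audio files in a folder as an audio CD.
        drutil burn -audio /path/to/mix-folder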

    Read the article

  • Highlighting effect on text and/or images, synchronized with audio

    - by Irfan Mulic
    I am looking at how to approach the following problem: we have an application that displays text along with recorded audio. We use a browser control (Internet Explorer) in a Delphi app to do this, and we respond to events in Delphi code, setting innerHTML on elements when we have to update the style. Now the request is to add an option to dynamically move a cursor through, or dynamically highlight, the words spoken in the paragraph. It does not need to match the spoken words exactly, so we will update the position of the highlighted word from a timer or something similar (because it is not text-to-speech). What would be the most practical and easy approach to this kind of problem? All answers are greatly appreciated. Thanks.
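
    Since the page lives in a browser control, the highlighting itself can be plain JavaScript driven by a timer: wrap each word in a span with an id, keep a table of word start times (produced however you align them to the recording), and advance a CSS class as the clock runs. A sketch with illustrative ids, class names and timings:

        // word start times in seconds, aligned to the recording by hand or tool
        var times = [0.0, 0.4, 0.9, 1.3];
        var start = new Date().getTime();
        var current = -1;

        setInterval(function () {
            var t = (new Date().getTime() - start) / 1000;
            var next = current;
            while (next + 1 < times.length && times[next + 1] <= t) next++;
            if (next !== current) {
                if (current >= 0)
                    document.getElementById("w" + current).className = "";
                document.getElementById("w" + next).className = "hl";
                current = next;
            }
        }, 100);

    The Delphi side then only has to inject the spans (<span id="w0">First</span> <span id="w1">word</span> ...) and start the audio and the timer together.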

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the javastreamingaudio class in the FreeTTS package and bypass its write method, which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and the subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
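
    Static after a byte-to-float conversion is very often an endianness or signedness mistake: each pair of bytes must be reassembled into one signed 16-bit sample in the byte order the stream actually uses, then scaled. A sketch, assuming 16-bit PCM (check the AudioFormat FreeTTS hands you for the real byte order and sample size):

        static float[] bytesToFloats(byte[] raw, boolean bigEndian) {
            float[] out = new float[raw.length / 2];
            for (int i = 0; i < out.length; i++) {
                int b0 = raw[2 * i], b1 = raw[2 * i + 1];
                short s = bigEndian ? (short) ((b0 << 8) | (b1 & 0xff))
                                    : (short) ((b1 << 8) | (b0 & 0xff));
                out[i] = s / 32768f;    // scale to -1.0 .. 1.0
            }
            return out;
        }

    Frames only matter for channel interleaving: with mono audio one frame is one sample, so the loop above already walks the stream frame by frame.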

    Read the article

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck on the fade effect; I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?
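
    KVolumeFader is KDE-specific; plain Qt/PyQt4 exposes the equivalent as Phonon.VolumeFaderEffect, which can be inserted into a path and told to ramp. A hedged sketch of a two-deck crossfade, assuming your Phonon backend supports inserting the effect (backend support varies); FADE_MS and the deck layout are illustrative:

        from PyQt4.phonon import Phonon

        FADE_MS = 3000

        def make_deck(parent):
            media = Phonon.MediaObject(parent)
            out = Phonon.AudioOutput(Phonon.MusicCategory, parent)
            path = Phonon.createPath(media, out)
            fader = Phonon.VolumeFaderEffect(parent)
            path.insertEffect(fader)
            return media, fader

        def crossfade(old_fader, new_media, new_fader, source):
            old_fader.fadeOut(FADE_MS)        # ramp the playing deck down
            new_fader.setVolume(0.0)
            new_media.setCurrentSource(source)
            new_media.play()
            new_fader.fadeIn(FADE_MS)         # ramp the incoming deck up

    The price of this approach is running two MediaObject/AudioOutput pairs and alternating between them, since a single MediaObject cannot play the outgoing and incoming track at once.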

    Read the article

  • Convert Audio File to text using System.Speech

    - by Kushal Kalambi
    I am looking to convert a .wav file, recorded on an Android phone at 16,000 Hz, to text using C#, namely the System.Speech namespace. My code is below:

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        label1.Text = result.Text;

    This works perfectly with a sample "Hello world" .wav file. However, when I record something on the phone and try to convert it on the PC, the resulting text is nowhere close to what I recorded. Is there some way to make sure the audio file is transcribed accurately?
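
    One way to see why the dictation goes wrong is to inspect per-word confidence and the recognizer's alternate phrases rather than just result.Text; consistently low confidence usually points at the recording itself (sample rate, noise, clipping) rather than the code. A sketch, with illustrative names, and note that Recognize() can return null:

        using System;
        using System.Speech.Recognition;

        var recognizer = new SpeechRecognitionEngine();
        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        if (result != null)
        {
            foreach (RecognizedWordUnit w in result.Words)
                Console.WriteLine("{0} ({1:F2})", w.Text, w.Confidence);
            foreach (RecognizedPhrase alt in result.Alternates)
                Console.WriteLine("alt: {0}", alt.Text);
        }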

    Read the article
