Search Results

Search found 5073 results on 203 pages for 'audio jack'.

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app and I need to capture mic samples to encode into MP3 with FFmpeg. First I configure the audio:

        /**
         * We need to specify the format we want to work with.
         * We use linear PCM because it is uncompressed and we work on raw data.
         *
         * We want 16 bits (2 bytes, a short) per packet/frame at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData)
        {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;

            // a variable where we check the status
            OSStatus status;

            // This is the reference to the object that owns the callback.
            AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

            // At this point we define the number of channels, which is mono for the iPhone.
            // The number of frames is usually 512 or 1024.
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);   // sample size
            buffer.mNumberChannels = 1;                               // one channel
            buffer.mData = malloc(inNumberFrames * sizeof(SInt16));   // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp,
                                     inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);

            //NSLog(@"RECORD");
            return noErr;
        }

    With this I get: inBusNumber = 1, inNumberFrames = 1024, inTimeStamp = 80444304 (always the same inTimeStamp, which is strange). However, the frame size I need to encode MP3 is 1152. How can I configure it? If I do buffering, that implies a delay, but I would like to avoid that because this is a real-time app. With this configuration each buffer leaves 1152 - 1024 = 128 trailing garbage samples. All samples are SInt16.
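    One way to reconcile the 1024-frame render size with the 1152-sample MP3 frame size is to repacketize through a small FIFO: append every render buffer to it and hand complete 1152-sample frames to the encoder as soon as they are available. The added latency is bounded by one MP3 frame (1152 samples, i.e. 144 ms at 8 kHz), and no samples are thrown away. A minimal sketch of the idea (written in Java for illustration only; the class and method names are hypothetical, and inside the Objective-C callback an equivalent C ring buffer would be used):

        /** Minimal FIFO that repacketizes 16-bit sample blocks into fixed 1152-sample frames. */
        class FrameRepacketizer {
            private static final int MP3_FRAME = 1152;   // samples per MPEG-1 Layer III frame
            private short[] pending = new short[0];      // samples received but not yet emitted

            /** Append one render buffer (e.g. the 1024 samples from the record callback). */
            void push(short[] samples, int count) {
                short[] merged = new short[pending.length + count];
                System.arraycopy(pending, 0, merged, 0, pending.length);
                System.arraycopy(samples, 0, merged, pending.length, count);
                pending = merged;
            }

            /** Return one complete 1152-sample frame for the encoder, or null if not enough data yet. */
            short[] pollFrame() {
                if (pending.length < MP3_FRAME) return null;
                short[] frame = new short[MP3_FRAME];
                System.arraycopy(pending, 0, frame, 0, MP3_FRAME);
                short[] rest = new short[pending.length - MP3_FRAME];
                System.arraycopy(pending, MP3_FRAME, rest, 0, rest.length);
                pending = rest;
                return frame;
            }
        }

    With 1024-sample input blocks a full frame becomes available at least every second callback, so the extra buffering stays well under the real-time budget and there are no trailing garbage samples.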

    Read the article

  • Preferred method for looping sound in Flash AS3

    - by Brian Heylin
    Hi there, I'm having some issues with looping a sound in Flash AS3: when I tell the sound to loop I get a slight delay at the end/beginning of the audio. The audio is clipped correctly and plays without a gap in GarageBand. I know there are issues with sound in Flash in general, bugs with encodings and inaccuracies in the SOUND_COMPLETE event (and Adobe should be embarrassed by their handling of these issues). I have tried the built-in loop argument of the play method on the Sound class and also reacting to the SOUND_COMPLETE event, but both cause a delay. Has anyone come up with a technique for looping a sound without any noticeable gap?

    Read the article

  • Signal amplitude against time in Java

    - by wsr74ws84
    I'm racking my brain trying to solve a knotty problem (at least for me). While playing an audio file in Java I want to display the signal amplitude against time: a small panel showing a sort of oscilloscope view. The audio signal should be shown in the time domain (the vertical axis is amplitude, the horizontal axis is time). Does anyone know how to do this? Is there a good tutorial I can rely on? Since I know very little about Java, I hope someone can help me.
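    A common starting point (a sketch only: it assumes a 16-bit signed, little-endian, mono WAV, and the file name and window size are placeholders) is to pull raw bytes from an AudioInputStream, decode them into sample values, and keep one peak amplitude per window to plot against time:

        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        public class WaveformExtractor {
            public static void main(String[] args) throws Exception {
                AudioInputStream in = AudioSystem.getAudioInputStream(new File("input.wav"));
                int window = 1024;                      // samples per plotted point
                byte[] raw = new byte[window * 2];      // 16-bit mono: 2 bytes per sample
                List<Double> peaks = new ArrayList<>();

                int read;
                while ((read = in.read(raw)) > 0) {
                    double peak = 0;
                    for (int i = 0; i + 1 < read; i += 2) {
                        // little-endian 16-bit signed sample, normalised to [-1, 1]
                        int sample = (raw[i + 1] << 8) | (raw[i] & 0xff);
                        peak = Math.max(peak, Math.abs(sample / 32768.0));
                    }
                    peaks.add(peak);                    // one amplitude value per time slice
                }
                in.close();
                // 'peaks' can now be drawn left to right on a JPanel: x = window index, y = amplitude.
                System.out.println("windows: " + peaks.size());
            }
        }

    For a live display, process the same byte blocks you write to the SourceDataLine during playback and repaint the panel after each window.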

    Read the article

  • HTML5 - Callback when media is ready on iPad won't work

    - by Kap
    I'm trying to add a callback to an HTML5 audio element on an iPad. I added an event listener to the element; myOtherThing.start() runs, but there is no sound. If I pause and then play the sound again, the audio starts. This works in Chrome. Does anyone have an idea how I can do this?

        myAudioElement.src = "path_to_file";
        myAudioElement.addEventListener("canplay", function () {
            myAudioElement.play();
            myOtherThing.start();
        });

    SOLVED: Just wanted to share my solution here, in case someone else needs it. As far as I understand, the iPad does not trigger any media events without user interaction. So to be able to use "canplay", "playing" and all the other events, you need to use the built-in media controller. Once you press play in that controller, the events get triggered. After that you can use your custom interface.

    Read the article

  • As an IT contractor, is it better to be a specialist or a jack-of-all-trades? [on hold]

    - by alimac83
    I've just entered the contracting market as a web developer and I'm having a tough time figuring out how to plan for the future. Several developers I've worked with in the past have told me to become a specialist in one technology or area in order to secure the big contracts. However, I've also heard from other sources that it's better to spread your expertise so that you're not limited in the types of work you can go for. Personally I've been involved in both back-end and front-end technologies over the course of my career, with slight variations in the weighting of each depending on the job. I don't really have a favourite; I enjoy it all. My question is mainly to the experienced contractors: do you feel specialising has helped your career, or is it better to know a bit of everything? Thanks

    Read the article

  • Is there a way to make the speaker silent while the headphone-jack keeps working? 12.04LTS

    - by Cees
    The PC I am working on is in a loud environment. If I need sound, I use headphones. On my own account this is easy: I mute the speaker in the sound settings. But I am not the only user; others use the guest session, and that is what this question is about: is it possible to turn off the speaker by default in a guest session AND leave the headphone output working? If yes, how can I fix it? I tried to disconnect the speaker (in hardware), but it is soldered to the mainboard. The sound card in the PC is:

        HDA Intel at 0xfea78000 irq 44

    and /proc/asound/pcm shows:

        00-00: ALC662 rev1 Analog : ALC662 rev1 Analog : playback 1 : capture 1
        00-02: ALC662 rev1 Analog : ALC662 rev1 Analog : capture 1

    Ubuntu 12.04 LTS is running on the system, and my account has all the admin rights.

    Read the article

  • Question on ExtAudioFileRead and AudioBuffer for iPhone SDK

    - by backspacer
    I'm developing an iPhone app that uses the Extended Audio File Services. I try to use ExtAudioFileRead to read the audio file and store the data in an AudioBufferList structure. AudioBufferList is defined as:

        struct AudioBufferList {
            UInt32       mNumberBuffers;
            AudioBuffer  mBuffers[1];
        };
        typedef struct AudioBufferList AudioBufferList;

    and AudioBuffer is defined as:

        struct AudioBuffer {
            UInt32  mNumberChannels;
            UInt32  mDataByteSize;
            void*   mData;
        };
        typedef struct AudioBuffer AudioBuffer;

    I want to manipulate mData, but I wonder what the void* means. Why is it void*? How can I decide what data type is actually stored in mData?

    Read the article

  • Can FLV AAC stream be played in Android

    - by HariKJ
    Hi, I'm trying to build a radio player, and the client is providing a stream that is an FLV container with AAC audio. When I read the headers it shows up as audio/aacp. I have tried all the approaches I could find:
    1) streaming it through MediaPlayer (does not work);
    2) using the NPR approach of a proxy stream (I get a broken-pipe exception);
    3) playing it in chunks (plays, but I need the SD card and the playback is not very good);
    4) using the GPL'd FAAD2 library, but then I would have to pay the royalty fee.
    Can someone help me figure this out? The last option I have is to ask the client to change the stream to an MP3 container (which I know works). Regards, Hari

    Read the article

  • sound loop breaks after some time in background music in iphone app

    - by amy
    I am playing sounds in a loop in my app, so they should keep playing throughout the app, but sometimes the sound stops after playing 3 or 4 times. I don't understand what is happening. I am using the Audio Toolbox framework to play sound: I create an audio queue and then play sounds in a loop. I am also playing a song from the iPod library using the media player, and the same thing happens with the iPod song: I have set [musicPlayer setRepeatMode: MPMusicRepeatModeOne]; but it still stops after 3 or 4 plays.

    Read the article

  • android spectrum analysis of streaming input

    - by TheBeeKeeper
    For a school project I am trying to make an Android application that, once started, performs a spectrum analysis of live audio received from the microphone or a Bluetooth headset. I know I should be using an FFT, and I have been looking at moonblink's open-source audio analyzer ( http://code.google.com/p/moonblink/wiki/Audalyzer ), but I am not familiar with Android development and his code is turning out to be too difficult for me to work with. So I suppose my questions are: are there any simpler Java-based or open-source Android apps that do spectrum analysis that I can reference? Or can anyone outline the steps needed to get the microphone input, put it through an FFT algorithm, and then display a graph of frequency and pitch over time from its output? Any help would be appreciated, thanks.
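    For the capture side, a minimal sketch of the usual Android approach (it assumes the RECORD_AUDIO permission is already granted; the 8 kHz rate, block size and naive DFT are illustrative only, and a real app would use a proper FFT library and keep recording in a loop on a background thread):

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        public class MicSpectrum {
            static final int RATE = 8000;                          // sample rate in Hz
            static final int N = 1024;                             // samples per analysis block

            public static double[] captureAndAnalyze() {
                int minBuf = AudioRecord.getMinBufferSize(RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                        Math.max(minBuf, N * 2));
                short[] block = new short[N];
                rec.startRecording();
                rec.read(block, 0, N);                             // one block of live samples
                rec.stop();
                rec.release();
                return magnitudeSpectrum(block);
            }

            /** Naive DFT magnitudes; O(N^2), so replace with an FFT for real-time use. */
            static double[] magnitudeSpectrum(short[] x) {
                double[] mag = new double[x.length / 2];
                for (int k = 0; k < mag.length; k++) {
                    double re = 0, im = 0;
                    for (int n = 0; n < x.length; n++) {
                        double angle = 2 * Math.PI * k * n / x.length;
                        re += x[n] * Math.cos(angle);
                        im -= x[n] * Math.sin(angle);
                    }
                    mag[k] = Math.hypot(re, im);                   // bin k is roughly k * RATE / N Hz
                }
                return mag;
            }
        }

    Plotting one magnitude array per block, left to right, gives a single spectrum; stacking successive blocks as columns gives frequency content over time (a spectrogram).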

    Read the article

  • background music stops after 3/4 runs in iphone app

    - by amy
    I am playing sounds in a loop in my app, so they should keep playing throughout the app, but sometimes the sound stops after playing 3 or 4 times. I don't understand what is happening. I am using the Audio Toolbox framework to play sound: I create an audio queue and then play sounds in a loop. I am also playing a song from the iPod library using the media player, and the same thing happens with the iPod song: I have set [musicPlayer setRepeatMode: MPMusicRepeatModeOne]; but it still stops after 3 or 4 plays.

    Read the article

  • Voices disappear when using headphones. [closed]

    - by James
    I have a pair of SteelSeries Siberia headphones. I've noticed that when watching some films the voices are completely silent, yet when I unplug the headset and listen through my speakers they are there and sound normal. I have no other software that could be interfering with it, and it happens regardless of the software I use for playback (I've tried VLC, WMP and QuickTime). It is so strange, and it almost sounds deliberate: the rest of the audio is untouched, but the voices disappear. The films only have single audio tracks, and it doesn't happen with every film. Can anyone give me any hints as to what could possibly cause this? I am stumped!

    Read the article

  • AudioOutputUnitStart takes time

    - by tokentoken
    Hello, I'm making an iPhone game using Core Audio and Extended Audio File Services. It works OK, but the first time I call AudioOutputUnitStart it takes about 1-2 seconds; from the second call on there is no problem. For a game, 1-2 seconds is very noticeable. (I tested this on the iPhone simulator and on an iPhone 3GS.) Also, if I leave the game idle for about 10 seconds, the next call to AudioOutputUnitStart again takes time. Should I call AudioOutputUnitStart at the beginning of the application to avoid the start-up delay?

    Read the article

  • BufferedInputStream.read(byte[]) Causes problems. Anyone have this problem before?

    - by K-RAN
    Hello! I've written a Java program that downloads audio files for me, and I'm using BufferedInputStream. The single-byte read() works fine but is really slow, so I've tried the overloaded version that takes a byte[]. For some reason, the audio becomes lossy and strange after the download. I'm not totally sure what I'm doing wrong, so any help is appreciated! Here's the simplified, sloppy version of the (working but slow) code:

        BufferedInputStream bin = new BufferedInputStream(
                (new URL(url)).openConnection().getInputStream());
        File file = new File(fileName);
        FileOutputStream fop = new FileOutputStream(file);

        int rd = bin.read();
        while (rd != -1) {
            fop.write(rd);
            rd = bin.read();
        }
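    The usual cause of a corrupted download when switching to read(byte[]) is writing the whole buffer even when a read only filled part of it. A sketch of the block-based loop that writes exactly the number of bytes each read returned (the 8 KB buffer size is arbitrary):

        import java.io.BufferedInputStream;
        import java.io.FileOutputStream;
        import java.net.URL;

        public class Downloader {
            public static void download(String url, String fileName) throws Exception {
                BufferedInputStream bin = new BufferedInputStream(
                        new URL(url).openConnection().getInputStream());
                FileOutputStream fop = new FileOutputStream(fileName);

                byte[] buffer = new byte[8192];
                int read;
                // read(byte[]) reports how many bytes it actually filled; write exactly that many.
                // Writing buffer.length instead repeats stale bytes from earlier reads and corrupts the audio.
                while ((read = bin.read(buffer)) != -1) {
                    fop.write(buffer, 0, read);
                }
                fop.close();
                bin.close();
            }
        }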

    Read the article

  • iPad MPMoviePlayerController only hearing audio, no videos!

    - by Steph Moreau
    I am currently rebuilding my app for the iPad. I would like to play videos sourced online. I display the information, but when I go to play the video all I get is the audio; no video is shown at all. My page looks exactly the same, except that I have some "background" noise. These are the same videos I use in the iPhone app, where they work perfectly. This is the code I call to play my videos:

        - (IBAction)playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerController *moviePlayer =
                [[MPMoviePlayerController alloc] initWithContentURL:url];
            [moviePlayer play];
        }

    I am using this on a button in the right-side view of a splitViewController. I get the same result in the simulator as on an iPad. Not sure if I'm missing something, but if anyone can help it would be greatly appreciated!

    Read the article

  • How to burn an Audio CD programmatically in Mac OS X

    - by Adion
    All the info I can find about burning CDs is either for Windows or about complete burning applications. I would, however, like to be able to burn an audio CD directly from within my program. I don't mind using Cocoa or Carbon, and if there are no APIs available to do this directly, a command-line program that can take a WAV/AIFF file as input would be a possibility too, as long as it can be distributed with my application. Because it will be used to burn DJ mixes to CD, it would also be great if it were possible to create separate tracks without a gap between them.

    Read the article

  • Highlighting effect on text and/or images synchronized with audio

    - by Irfan Mulic
    I am looking at how to approach the following problem. We have an application that displays text along with recorded audio; we use a browser control (Internet Explorer) in a Delphi app to do this, and we respond to events in Delphi code by setting innerHTML on elements when we have to update their style. Now the request is to add an option that dynamically moves a cursor through, or dynamically highlights, the spoken words of the paragraph. It does not need to match the spoken word exactly, so we will have to update the position of the highlighted word dynamically based on some timer or something similar (because it is not text-to-speech). What would be the most practical and simple approach to this kind of problem? All answers are greatly appreciated. Thanks.
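    Since the highlighting only needs to be approximately in sync, one workable approach is a lookup table of estimated word start times driven by a playback timer: on every tick, map the elapsed time to a word index and update that element's style through innerHTML as the application already does. A language-agnostic sketch of the mapping (written in Java purely for illustration; the names are hypothetical and the same few lines translate directly to Delphi):

        import java.util.List;

        /** Maps the elapsed playback time to the word that should currently be highlighted. */
        class WordHighlighter {
            /** Estimated start time of each word in milliseconds, ascending
                (e.g. the paragraph's audio duration spread evenly over its words). */
            private final List<Integer> wordStartMs;

            WordHighlighter(List<Integer> wordStartMs) {
                this.wordStartMs = wordStartMs;
            }

            /** Index of the word to highlight at the given elapsed playback time. */
            int wordIndexAt(int elapsedMs) {
                int idx = 0;
                while (idx + 1 < wordStartMs.size() && wordStartMs.get(idx + 1) <= elapsedMs) {
                    idx++;
                }
                return idx;
            }
        }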

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the javastreamingaudio class in the FreeTTS package and bypass its write method, which normally sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and the subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames and that each frame is a group of bytes, so in my application I have to assemble the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
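    The usual culprit when a byte-to-float conversion produces static is ignoring that each 16-bit sample spans two bytes with a specific byte order. A sketch of the conversion, assuming 16-bit signed little-endian mono PCM (check the AudioFormat of the original SourceDataLine and swap the byte order if it is big-endian):

        /** Converts 16-bit signed little-endian PCM bytes into floats in the range [-1, 1]. */
        public class PcmToFloat {
            public static float[] convert(byte[] pcm, int length) {
                float[] out = new float[length / 2];     // two bytes per 16-bit sample
                for (int i = 0; i < out.length; i++) {
                    int lo = pcm[2 * i] & 0xff;          // low byte, treated as unsigned
                    int hi = pcm[2 * i + 1];             // high byte, keeps the sign
                    out[i] = ((hi << 8) | lo) / 32768f;
                }
                return out;
            }
        }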

    Read the article

  • Problem with recording audio in Flash (Red5, ffmpeg)

    - by AT
    I'm trying to implement a small program with Flash and PHP that records audio and converts it to MP3. Currently I have a Red5 server up and running, I can connect to it with no problems, and I can publish FLV recordings to the server. When I listen to the FLV with the Wimpy FLV player it seems to be fine. The problem comes when I try to convert it with ffmpeg on the command line. I'm simply using an ffmpeg -i command, but the output WAV is about 50% slower than the input: when I record 10 seconds, the output is 15 seconds long and pitched down. I've also tried all kinds of bitrate settings, the -nv option, etc., but nothing seems to work. I have a recent version of ffmpeg that supports the Nellymoser format. I don't know what to do; does anyone have any ideas?

    Read the article

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck on the fade effect. I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?

    Read the article

  • Convert Audio File to text using System.Speech

    - by Kushal Kalambi
    I am looking to convert a .wav file, recorded on an Android phone at 16000, to text using C#, namely the System.Speech namespace. My code is below:

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        label1.Text = result.Text;

    This works perfectly with the sample "Hello world" .wav file. However, when I record something on the phone and try to convert it on the PC, the transcribed text is nowhere close to what I recorded. Is there some way to make sure the audio file is transcribed accurately?

    Read the article
