Search Results

Search found 5088 results on 204 pages for '3.5mm audio jack'.


  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue, analyzes its characteristics, and then, under certain conditions, can begin or end writing out a file from that already-instantiated AudioQueue. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen() {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this, NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription, &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            } catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            } catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue "from now until the stop signal, start writing this queue out to tmp as a file". I understand how to tell an AudioQueue to write out to a file at the time it's created, how to set the file's format, and so on, but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks.

    Read the article

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but on the storage card. Even though neither the load nor the play function of SoundPool throws any exception or prints anything to the console, the sound is not played. The same code, but passing the file path string to load() instead, works perfectly. This is how I have tried the loading (start equals 0 and length is the length of the file in milliseconds):

        FileInputStream fileIS = new FileInputStream(new File(mFile));
        mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    If I use this instead, it works:

        mStreamID = mSoundPool.load(mFile, 0);
        mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f);

    Any ideas, anyone? Thanks
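
    Two things are worth checking here (assumptions on my part, not confirmed in the thread): the FileDescriptor overload of load() takes a byte offset and byte length into the file rather than milliseconds, and load() is asynchronous, so a play() issued immediately afterwards can fire before the sample is ready. A minimal Java sketch along those lines, using the file's byte length and an OnLoadCompleteListener (available from API level 8); soundPool and path stand in for the question's mSoundPool and mFile:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;

        import android.media.SoundPool;

        // Sketch only: load the whole file by byte range and play once loading finishes.
        public class SoundPoolHelper {
            public static void loadAndPlay(final SoundPool soundPool, String path) throws IOException {
                File f = new File(path);
                FileInputStream fileIS = new FileInputStream(f);

                // offset and length are byte positions within the file, not milliseconds;
                // 0 .. f.length() covers the whole file.
                final int soundID = soundPool.load(fileIS.getFD(), 0, f.length(), 1);

                // play() before the asynchronous load completes does nothing audible.
                soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
                    @Override
                    public void onLoadComplete(SoundPool pool, int sampleId, int status) {
                        if (status == 0 && sampleId == soundID) {
                            pool.play(soundID, 1f, 1f, 1, 0, 1f);
                        }
                    }
                });
            }
        }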

    Read the article

  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. An example using some of the code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ... // process the data
        }

    This works perfectly on the HTC Dream and the HTC Magic, without any log warnings or errors, but causes problems on the emulators and on a Nexus One. On the Nexus One it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows from the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
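
    One device-dependent detail worth ruling out (an assumption, not a confirmed diagnosis): the constructor above hard-codes an 8000-byte record buffer, and the minimum AudioRecord will accept varies by device. A sketch that derives the buffer size from AudioRecord.getMinBufferSize() instead:

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        // Sketch: size the record buffer from the device's reported minimum rather
        // than a fixed 8000 bytes, since the minimum differs between devices.
        public class RecorderFactory {
            public static AudioRecord createRecorder() {
                int minBufferBytes = AudioRecord.getMinBufferSize(
                        8000,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);

                if (minBufferBytes == AudioRecord.ERROR_BAD_VALUE || minBufferBytes == AudioRecord.ERROR) {
                    throw new IllegalStateException("8 kHz mono 16-bit PCM is not supported on this device");
                }

                return new AudioRecord(
                        MediaRecorder.AudioSource.MIC,
                        8000,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT,
                        minBufferBytes * 2);  // a little headroom above the minimum
            }
        }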

    Read the article

  • Is there an easy way to stream an m3u on the iPhone?

    - by marty
    I can have a UIWebView with the .m3u file opened, which shows the web view with a play button, and that automatically goes to the QuickTime player and starts playing the stream. But when I press the Done button, it goes back to the UIWebView with a little play button in the middle, and from there you can go back to the previous screen (it was selected from a table view). So I just want it to automatically load the QuickTime player in the view. How can I do that?

    Read the article

  • Java M4A atom tagging free space issue

    - by Brett
    Hey, I've been trying to read and write iTunes-style M4A atoms, and while I've successfully done the reading part, I've come to a bit of a halt with the free-space atoms. I figured that I should be able to edit and shift the padding around to accommodate writing an atom with more data than it originally had. I've been stuck on this for about a day now, trying to figure out how to determine the closest free-space atom with enough size to accommodate the new data. So far I have:

        private freeAtom acquireFreeSpaceAtom( long position ) {
            long atomStart = Long.MAX_VALUE;
            freeAtom atom = null;
            for( freeAtom a : freeSpace ) {
                if( Math.abs( position - atomStart ) > Math.abs( position - a.getAtomStart() ) )
                    atomStart = ( atom = a ).getAtomStart();
            }
            return atom;
        }

    That code only takes into account the closest free-space atom and completely disregards the fact that it should be greater than or equal to a certain size, but I can't quite figure out how to check for both closeness and size efficiently.
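
    For what it's worth, a minimal sketch (mine, not from the thread) that keeps the nearest-atom search but skips atoms too small for the new data. It assumes freeAtom also exposes a getAtomSize() accessor for its payload size, which is not shown in the snippet above:

        // Sketch: nearest free-space atom that is also big enough for requiredSize bytes.
        private freeAtom acquireFreeSpaceAtom(long position, long requiredSize) {
            freeAtom best = null;
            long bestDistance = Long.MAX_VALUE;
            for (freeAtom a : freeSpace) {
                if (a.getAtomSize() < requiredSize) {
                    continue;  // too small to hold the new data
                }
                long distance = Math.abs(position - a.getAtomStart());
                if (distance < bestDistance) {
                    bestDistance = distance;
                    best = a;
                }
            }
            return best;  // null if no free-space atom is large enough
        }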

    Read the article

  • How to embed an MP3 into an .exe file and play it?

    - by afriza
    I am used to embedding WAV files in the .exe and playing them with PlaySound(). However, using this method makes the .exe pretty big. Is it possible to do the same with MP3 files, and how? I have taken a look at DirectShow, but it seems to be able to play from files only. I am developing for Windows Mobile 6 series.

    Read the article

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object-oriented model to wrap OpenAL and have a little problem understanding devices, buffers and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I want to change its listener, if I got that right.) So far, so good. What irritates me, though, is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work, then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)

    Read the article

  • TI-99 speech effect?

    - by kotlinski
    Hi, I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?

    Read the article

  • Using Finch for the first time: how to play mp3, ogg or other formats (wav files too big)?

    - by Allisone
    My *.wav files work as expected. But wav files are too big, so I want to play *.mp3 or *.ogg instead, and that doesn't work. I use these lines of code found in the Finch demo project:

        engine = [[Finch alloc] init];
        sitar = [[Sound alloc] initWithFile:RSRC(@"sitar.wav")];
        [sitar play];

    So I only change sitar.wav into my .mp3 filename. Note 1: it doesn't have to be mp3 or ogg; any file format not as huge as wav would be fine, but which? Note 2: I didn't know how to use sound, so I searched and found Finch here at Stack Overflow. It looks easy, so I would like to use it, but if you know some other easy way to play these sound files (ambient + effect sounds with a compressed codec), I would also switch to that other technique.

    Read the article

  • Android - can't read TXT files from SD card on a real machine?

    - by JustMe
    Hello! When I run the code below in the Android emulator (1.5) it works well: the TextSwitcher shows the first 80 characters of each txt file from /sdcard/documents/. But when I run it on my Samsung Galaxy i7500 (1.6), no content appears in the TextSwitcher, even though LogCat shows the file names of the txt files. My code:

        public void getTxtFiles() {
            // Scan /sdcard/documents and put .txt files in the array File TxtFiles[]
            String path = Environment.getExternalStorageDirectory().toString() + "/documents/";
            String files;
            File folder = new File(path);
            if (folder.exists() == false) {
                if (!folder.mkdirs()) {
                    Log.e("TAG", "Create dir in sdcard failed");
                    return;
                }
            } else {
                File listOfFiles[] = folder.listFiles();
                for (int i = 0; i < listOfFiles.length; i++) {
                    if (listOfFiles[i].isFile()) {
                        files = listOfFiles[i].getName();
                        if (files.endsWith(".txt") || files.endsWith(".TXT")) {
                            if ((files.length() - 1) > i) { resizeArray(TxtFiles, files.length() + 10); }
                            TxtFiles[i] = listOfFiles[i];
                            System.out.println(TxtFiles[i]);
                        }
                    }
                }
            }
        }

        private void updateCounter(int Pozicija) {
            if (Pozicija < 0) {
                Toast.makeText(getApplicationContext(), R.string.LastTxt, 5).show();
                mCounter++;
            } else if (TxtFiles[mCounter] != null) {
                TextToShow = getContents(TxtFiles[mCounter]);
                if (TextToShow.length() > 80) TextToShow = TextToShow.substring(0, 80);
                mSwitcher.setText(TextToShow);
                System.out.println(Pozicija);
            } else mCounter--;
        }

        static public String getContents(File aFile) {
            // ...checks on aFile are elided
            StringBuilder contents = new StringBuilder();
            try {
                // use buffering, reading one line at a time
                // FileReader always assumes the default encoding is OK!
                BufferedReader input = new BufferedReader(new FileReader(aFile));
                try {
                    String line = null; // not declared within the while loop
                    /*
                     * readLine is a bit quirky:
                     * it returns the content of a line MINUS the newline,
                     * it returns null only for the END of the stream,
                     * it returns an empty String if two newlines appear in a row.
                     */
                    while ((line = input.readLine()) != null) {
                        contents.append(line);
                        contents.append(System.getProperty("line.separator"));
                    }
                } finally {
                    input.close();
                }
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            return contents.toString();
        }

    And I am able to write the contents of those files to LogCat! Any ideas?
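
    One suspicion (mine; the thread doesn't confirm it): TxtFiles is indexed by the loop counter over every entry in the folder, and the resizeArray call is driven by the file name's length, so on a real device whose folder also contains non-.txt entries the indices no longer line up with mCounter. A sketch that sidesteps the bookkeeping by collecting matches in a growable list:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        import android.os.Environment;

        // Sketch: gather .txt files from /sdcard/documents into a list instead of
        // filling a fixed array by the directory-scan index.
        public class TxtFileLister {
            public static List<File> getTxtFiles() {
                List<File> txtFiles = new ArrayList<File>();
                File folder = new File(Environment.getExternalStorageDirectory(), "documents");
                if (!folder.exists() && !folder.mkdirs()) {
                    return txtFiles;  // could not create the directory
                }
                File[] entries = folder.listFiles();
                if (entries == null) {
                    return txtFiles;  // listFiles() returns null if the path is not a readable directory
                }
                for (File f : entries) {
                    if (f.isFile() && f.getName().toLowerCase().endsWith(".txt")) {
                        txtFiles.add(f);
                    }
                }
                return txtFiles;
            }
        }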

    Read the article

  • Is it possible to programmatically edit a sound file based on frequency?

    - by K-RAN
    Just wondering if it's possible to go through a flac, mp3, wav, etc. file and edit portions, or the entire file, by removing sections based on a specific frequency range? For example, I have a recording of a friend reciting a poem with a few percussion instruments in the background. Could I write a C program that goes through the entire file and cuts out everything except the vocals (the fundamental frequency of the human voice ranges from about 85-255 Hz, from what I've been reading)? Thanks in advance for any ideas!
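
    As a rough illustration of the mechanics (not from the thread, and note that keeping only 85-255 Hz will not cleanly separate a voice from percussion), frequency-range editing on decoded PCM usually comes down to a filter. Below is a band-pass biquad over 16-bit mono samples, written in Java purely as a sketch, with coefficients from the widely used RBJ "audio EQ cookbook" formulas and example band edges:

        // Sketch: band-pass biquad for 16-bit mono PCM. Illustrative only.
        public class BandPassFilter {
            private final double b0, b1, b2, a1, a2;  // normalized coefficients
            private double x1, x2, y1, y2;            // filter state (previous samples)

            public BandPassFilter(double sampleRate, double lowHz, double highHz) {
                double f0 = Math.sqrt(lowHz * highHz);   // geometric center of the band
                double q = f0 / (highHz - lowHz);        // quality factor from the bandwidth
                double omega = 2.0 * Math.PI * f0 / sampleRate;
                double alpha = Math.sin(omega) / (2.0 * q);
                double a0 = 1.0 + alpha;
                b0 = alpha / a0;
                b1 = 0.0;
                b2 = -alpha / a0;
                a1 = -2.0 * Math.cos(omega) / a0;
                a2 = (1.0 - alpha) / a0;
            }

            // Filter one sample; call once per sample, in order.
            public short process(short in) {
                double x0 = in;
                double y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
                x2 = x1; x1 = x0;
                y2 = y1; y1 = y0;
                if (y0 > Short.MAX_VALUE) y0 = Short.MAX_VALUE;  // clamp to 16-bit range
                if (y0 < Short.MIN_VALUE) y0 = Short.MIN_VALUE;
                return (short) y0;
            }
        }

    Used as new BandPassFilter(44100, 85, 255) and applied sample by sample to decoded PCM; compressed formats such as mp3 or flac have to be decoded first and re-encoded afterwards.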

    Read the article

  • Save and play recorded sound

    - by blacksheep
    I'd like to save and play back these recorded sounds:

        @interface Recorder : NSObject {
            NSMutableArray *times;
            NSMutableArray *samples;
        }
        @end

        @implementation Recorder

        - (id) init {
            [super init];
            times = [[NSMutableArray alloc] init];
            samples = [[NSMutableArray alloc] init];
            return self;
        }

        - (void) recordSound: (id) someSound {
            CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
            NSNumber *wrappedTime = [NSNumber numberWithDouble:now];
            [times addObject:wrappedTime];
            [samples addObject:someSound];
        }

        @end

    Thanks, blacksheep

    Read the article

  • flash.media.Sound.play takes long time to return

    - by Fire Lancer
    I'm trying to play some sounds in my Flash project through ActionScript. However, for some reason the call to Sound.play takes from 40 ms to over 100 ms in extreme cases, which is obviously more than enough to be very noticeable whenever a sound is played. This happens every time a sound is played, not just when that sound is first played, so I don't think it's because the Sound object is still loading data or anything like that. At the start I have this to load the sound:

        class MyClass {
            [Embed(source='data/test_snd.mp3')]
            private var TestSound:Class;
            private var testSound:Sound; // flash.media.Sound

            public function MyClass() {
                testSound = new TestSound();
            }
        }

    Then I'm just using the play method of the Sound object to play it later on:

        testSound.play(); // seems to take a long time to return

    This, as far as I can tell, follows the same process as other Flash programs I found, but none of them seem to have this problem. Is there something I've missed that would cause the play() method to be so slow?

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently without duplicating code. What I need is an effective way to cast the matrix, put PCM data into it, and pull it out later. I can cast the matrix to int32_t or int16_t for 32- and 16-bit signed PCM respectively, and I'll probably have to store the 24-bit PCM in an int32_t on 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code that look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff

    Read the article

  • Access files (.wav) in java package

    - by Highmastdon
    I want to access .wav files that live in a package inside my project. For example, I have two packages:

        package program
        package sounds

    From inside program/something.class I'd like to play sounds/asound.wav. How is this possible?

        clip.open(AudioSystem.getAudioInputStream(new File(filename)));
        clip.start();
        // ... something in between
        clip.stop();

    Here filename is C:\\projects\\something\\sounds\\, but how is it possible to just give a relative path to asound.wav in the package?
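
    A common approach (offered as a sketch, not from the thread) is to resolve the file through the class loader instead of a filesystem path, so the .wav is found whether the sounds package sits in a directory tree or inside a jar. Package and file names follow the question:

        import java.io.IOException;
        import java.net.URL;

        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;
        import javax.sound.sampled.LineUnavailableException;
        import javax.sound.sampled.UnsupportedAudioFileException;

        // Sketch: load sounds/asound.wav from the classpath rather than an absolute path.
        public class SoundLoader {
            public static Clip loadClip()
                    throws IOException, LineUnavailableException, UnsupportedAudioFileException {
                // A leading slash makes the path relative to the classpath root,
                // so this works from a directory layout and from inside a jar.
                URL soundUrl = SoundLoader.class.getResource("/sounds/asound.wav");
                Clip clip = AudioSystem.getClip();
                clip.open(AudioSystem.getAudioInputStream(soundUrl));
                return clip;
            }
        }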

    Read the article

  • How to convert an m4a file to an AAC ADTS file in Xcode?

    - by Bird Hsuie
    I have an mp4 file copied from the iPod library and saved to my Documents directory. For my next step I need to convert it to .mp3 or .aac (ADTS type). I use this code and it fails:

        -(IBAction)compressFile:(id)sender {
            NSLog (@"handleConvertToPCMTapped");

            // open an ExtAudioFile
            NSLog (@"opening %@", exportURL);
            ExtAudioFileRef inputFile;
            CheckResult (ExtAudioFileOpenURL((__bridge CFURLRef)exportURL, &inputFile),
                         "ExtAudioFileOpenURL failed");

            // prepare to convert to a plain ol' PCM format
            AudioStreamBasicDescription myPCMFormat;
            myPCMFormat.mSampleRate = 44100; // todo: or use source rate?
            myPCMFormat.mFormatID = kAudioFormatMPEGLayer3;
            myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
            myPCMFormat.mChannelsPerFrame = 2;
            myPCMFormat.mFramesPerPacket = 1;
            myPCMFormat.mBitsPerChannel = 16;
            myPCMFormat.mBytesPerPacket = 4;
            myPCMFormat.mBytesPerFrame = 4;
            CheckResult (ExtAudioFileSetProperty(inputFile, kExtAudioFileProperty_ClientDataFormat,
                                                 sizeof (myPCMFormat), &myPCMFormat),
                         "ExtAudioFileSetProperty failed");

            // allocate a big buffer. size can be arbitrary for ExtAudioFile.
            // you have 64 KB to spare, right?
            UInt32 outputBufferSize = 0x10000;
            void* ioBuf = malloc (outputBufferSize);
            UInt32 sizePerPacket = myPCMFormat.mBytesPerPacket;
            UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

            // set up output file
            NSString *outputPath = [myDocumentsDirectory() stringByAppendingPathComponent:@"m_export.mp3"];
            NSURL *outputURL = [NSURL fileURLWithPath:outputPath];
            NSLog (@"creating output file %@", outputURL);
            AudioFileID outputFile;
            CheckResult(AudioFileCreateWithURL((__bridge CFURLRef)outputURL, kAudioFileCAFType, &myPCMFormat,
                                               kAudioFileFlags_EraseFile, &outputFile),
                        "AudioFileCreateWithURL failed");

            // start convertin'
            UInt32 outputFilePacketPosition = 0; // in bytes
            while (true) {
                // wrap the destination buffer in an AudioBufferList
                AudioBufferList convertedData;
                convertedData.mNumberBuffers = 1;
                convertedData.mBuffers[0].mNumberChannels = myPCMFormat.mChannelsPerFrame;
                convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
                convertedData.mBuffers[0].mData = ioBuf;
                UInt32 frameCount = packetsPerBuffer;

                // read from the extaudiofile
                CheckResult (ExtAudioFileRead(inputFile, &frameCount, &convertedData),
                             "Couldn't read from input file");
                if (frameCount == 0) {
                    printf ("done reading from file");
                    break;
                }

                // write the converted data to the output file
                CheckResult (AudioFileWritePackets(outputFile, false, frameCount, NULL,
                                                   outputFilePacketPosition / myPCMFormat.mBytesPerPacket,
                                                   &frameCount, convertedData.mBuffers[0].mData),
                             "Couldn't write packets to file");
                NSLog (@"Converted %ld bytes", outputFilePacketPosition);

                // advance the output file write location
                outputFilePacketPosition += (frameCount * myPCMFormat.mBytesPerPacket);
            }

            // clean up
            ExtAudioFileDispose(inputFile);
            AudioFileClose(outputFile);

            // show size in label
            NSLog (@"checking file at %@", outputPath);
            [self transMitFile:outputPath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath]) {
                NSError *fileManagerError = nil;
                unsigned long long fileSize = [[[NSFileManager defaultManager] attributesOfItemAtPath:outputPath
                                                                                                error:&fileManagerError] fileSize];
            }
        }

    Any suggestion? Thanks for your great help!

    Read the article

  • Play mp3 stream from http URL on Windows Mobile 6.0

    - by Thyphuong
    After a short period of time learning how to play an mp3 http URL on Windows Mobile 6.0, I found that very few DLLs support it (until now, I've only found that Bass.dll works nicely). So I intend to take another approach to the goal. Here's my idea:

    1. Get a stream from the http URL.
    2. Decode the mp3 stream.
    3. Play the result from step 2.

    Since I'm new to this field, feel free to explain what I'm doing wrong and/or show me the way.

    Read the article
