Search Results

Search found 6501 results on 261 pages for 'audio conversion'.

  • iPhone 3.5mm jack based application

    - by maverick
    I want to encode data via a DTMF encoder and send it back to the iPhone via the 3.5mm jack. Is it possible to send data back into the 3.5mm jack? Conventionally, audio signals are only sent out over the iPhone's 3.5mm jack. Is there any provision for DTMF and 3.5mm-jack-based input applications in the iPhone's External Accessory framework?
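
    The DTMF encoding side of this is plain signal math: each symbol is the sum of one sine tone from a low-frequency group and one from a high-frequency group. A rough, platform-neutral sketch of such an encoder (Python here, writing an illustrative 44.1 kHz mono WAV rather than anything iPhone-specific):

        # Sketch: a minimal DTMF tone generator using only the standard library.
        # Frequencies follow the standard DTMF row/column table; the output
        # format (16-bit mono WAV at 44.1 kHz) is an illustrative assumption.
        import math
        import struct
        import wave

        DTMF = {
            "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
            "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
            "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
            "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
        }

        def dtmf_samples(symbol, rate=44100, duration=0.2, amplitude=0.4):
            low, high = DTMF[symbol]
            for i in range(int(rate * duration)):
                t = i / rate
                # Each DTMF symbol is one low-group sine plus one high-group sine.
                yield amplitude * (math.sin(2 * math.pi * low * t) +
                                   math.sin(2 * math.pi * high * t)) / 2

        def write_dtmf(symbols, path="dtmf.wav", rate=44100):
            with wave.open(path, "wb") as out:
                out.setnchannels(1)
                out.setsampwidth(2)        # 16-bit samples
                out.setframerate(rate)
                for s in symbols:
                    for sample in dtmf_samples(s, rate):
                        out.writeframes(struct.pack("<h", int(sample * 32767)))
                    out.writeframes(b"\x00\x00" * int(rate * 0.05))  # gap between symbols

        write_dtmf("4258675309")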

    Read the article

  • getting error while converting wav to amr using ffmpeg

    - by sohilvassa
    Hello friends, I am using ffmpeg to convert AMR to WAV and WAV to AMR. It successfully converts AMR to WAV but not vice versa: even though ffmpeg supports the AMR encoder and decoder, the WAV-to-AMR direction gives an error.

        ffmpeg -i testwav.wav audio.amr

    (the AMR-to-WAV direction works fine)

        Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
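
    That error from the AMR-NB encoder usually means the output parameters are not ones AMR-NB accepts: it only encodes 8000 Hz mono audio at a small set of fixed bitrates (4.75-12.2 kbit/s), so those have to be forced explicitly. A minimal sketch of doing that from Python, assuming an ffmpeg build with AMR-NB encoding enabled and placeholder file names:

        # Sketch: WAV -> AMR-NB by forcing the sample rate, channel count and
        # bitrate that the AMR-NB encoder requires. Assumes ffmpeg was built
        # with AMR-NB support; flag spellings may vary across ffmpeg versions.
        import subprocess

        def wav_to_amr(src="testwav.wav", dst="audio.amr"):
            subprocess.check_call([
                "ffmpeg",
                "-i", src,
                "-ar", "8000",    # AMR-NB is defined only for 8 kHz audio
                "-ac", "1",       # ... and only for mono
                "-ab", "12.2k",   # one of the fixed AMR-NB bitrates (4.75k ... 12.2k)
                dst,
            ])

        wav_to_amr()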

    Read the article

  • iPhone: how to make key click sound for custom keypad?

    - by Kaffeine Coma
    Is there a way to programmatically invoke the keypad "click" sound? My app has a custom keypad (built out of UIButtons) and I'd like to provide some audio feedback when the user taps on the keys. I tried creating my own sounds in Garageband, but wasn't happy with any of my creations. If there isn't a standard way to invoke the key click, can anyone point me to a library of sounds that might have such a gem?

    Read the article

  • PlaySystemSound with mute switch on

    - by Sam V
    I know I have to set the AudioSession to the 'playback' category, which allows audio even when the mute switch is on. This is what I do, but the sound still gets muted when the switch is on.

        UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(sessionCategory), &sessionCategory);

        SystemSoundID soundID;
        NSString *path = [[NSBundle mainBundle] pathForResource:soundString ofType:@"wav"];
        AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:path], &soundID);
        AudioServicesPlaySystemSound(soundID);

    Read the article

  • Blackberry buffered playback demo??

    - by Bohemian
    Can someone help me buffer an MP3 file on a server using the BlackBerry Buffered Playback Demo app provided with the JDE? I have loaded it in the simulator and my MDS is started, but I am unable to play the audio. There is no error, but it doesn't play or load. The code looks fine. Thanks

    Read the article

  • Creating a music catalog in C# and extracting first 30 seconds as soon as the first words are sung

    - by Rad
    I already read a question: Separation of singing voice from music. I don't need that kind of complex audio processing; I only need a detection mechanism that would detect that some voice/vocal is playing while the music is playing (or not playing), because I need to extract the first 30 seconds from the point where a vocalist starts singing along with the full band music (see question 1 below).

    I want to create a music catalog front store using ASP.NET MVC 2, Silverlight clients and the C#/.NET 4.0 programming language. On the backend I would also like to create a desktop WPF/Windows application to build the music catalog from already existing music files, most of which have metadata in them: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly like to create a web service that would allow catalog contributors to upload a zipped album and trigger extraction of the metadata and of the music segments described below. I would be happy if I achieve no. 1 below.

    Let's say I have thousands of songs in MP3 (or other formats) grouped in subfolders using some classification (genre, artist, album, composer or other groupings). I want to create tables in a database that organize the songs so they can be searched on different criteria (year, length, the classification above, or song title, description, etc.), like what the iTunes Store allows its customers. I want to extract metadata from the various formats (I will try to get songs in MP3 format, but there may be other popular formats) and allow the music catalog manager to add missing data from either the desktop or the web application. He or other contributors can upload zipped music via an HTML, Silverlight or WPF upload. Can anybody suggest open source libraries, articles or code snippets that can do this automatically using .NET and possibly a SQL Server database?

    My main questions are these. This is an audio processing challenge; I want to extract two kinds of music segments:

    1. How to extract a music segment that starts 1-2 seconds before a vocal starts singing and runs up to 30 seconds from that point in time (see the sketch below this question), and
    2. Much more challenging: how to find repeating segments (refrains) - songs are usually recognized and known by these refrains.

    How would I go about creating a list of songs that go great together, like what Genius from iTunes does? Are there any characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate a melody and words with a title/album, so they can make intelligent decisions like buying a song or creating similar-mood song lists. Thanks, Rad
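
    Detecting where the vocal actually starts (question 1's hard part) is a real signal-processing problem in its own right, but once an onset time is known, cutting "about 2 seconds before the vocal through 30 seconds after" is mechanical. A minimal sketch of that extraction step, assuming the pydub library (which wraps ffmpeg) and a hypothetical, already-known onset time:

        # Sketch: cut a preview clip from an MP3 given a known vocal onset time.
        # Uses pydub (which shells out to ffmpeg); vocal_onset_ms is a placeholder
        # that a real system would detect automatically or take from an editor.
        from pydub import AudioSegment

        def extract_preview(src_path, dst_path, vocal_onset_ms,
                            lead_in_ms=2000, length_ms=30000):
            song = AudioSegment.from_mp3(src_path)          # pydub slices by milliseconds
            start = max(0, vocal_onset_ms - lead_in_ms)     # begin ~2 s before the vocal
            clip = song[start:start + lead_in_ms + length_ms]
            clip.export(dst_path, format="mp3")

        extract_preview("track.mp3", "track_preview.mp3", vocal_onset_ms=41500)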

    Read the article

  • I write bad wave files using Java

    - by Cliff
    I'm writing out wave files in Java using:

        AudioInputStream output = new AudioInputStream(
                new ByteArrayInputStream(rawPCMSamples),
                new AudioFormat(22000, 16, 1, true, false),
                rawPCMSamples.length);
        AudioSystem.write(output, AudioFileFormat.Type.WAVE, new FileOutputStream("somefile.wav"));

    and I get what appear to be corrupt wave files on OS X: they won't play from Finder. However, the same code behind a servlet, writing directly to the response stream with the Content-Type set to audio/wave, seems to play fine in QuickTime. What gives?

    Read the article

  • HOW-TO Make computer sing

    - by Ofir
    Hi, I'm trying to develop an online application where the user writes some text and the software sings it back to the user. I can currently generate the audio file with the words spoken by the computer using espeak, but I have no idea how to make it sound like a song or how to add rhythm to it. I'm able to change the pitch and tempo using rubberband, but that's as far as I've gotten. Does anyone have a clue how to make this happen?
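
    One low-tech route, since espeak and rubberband are already in the picture, is to render each word (or syllable) separately and then pitch-shift and time-stretch each piece to a note of a melody before joining them. A rough sketch of that pipeline, assuming both command-line tools are on the PATH (the melody, stretch factor and file names are made-up illustration values, not a polished recipe):

        # Sketch: "sing" a line of text by rendering each word with espeak and
        # pitch-shifting it with the rubberband CLI to a note of a melody.
        import subprocess

        def sing(words, semitones, stretch=1.5, prefix="note"):
            pieces = []
            for i, (word, shift) in enumerate(zip(words, semitones)):
                spoken = f"{prefix}_{i}_raw.wav"
                sung = f"{prefix}_{i}.wav"
                # 1. Plain spoken word from espeak.
                subprocess.check_call(["espeak", "-w", spoken, word])
                # 2. Shift it to the melody note and stretch it toward a steady beat.
                subprocess.check_call(["rubberband", "-p", str(shift),
                                       "-t", str(stretch), spoken, sung])
                pieces.append(sung)
            return pieces  # join these with sox, ffmpeg or the wave module

        # C C G G of "Twinkle, Twinkle" as semitone offsets from espeak's default pitch.
        sing(["twinkle", "twinkle", "little", "star"], semitones=[0, 0, 7, 7])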

    Read the article

  • calculating time duration of a file

    - by RV
    Dupe of: calculate playing time of a .mp3 file. I'm reading an audio file (for example WAV, MP3, etc.) and get a long value as its duration. Now I want to convert that long value into a properly formatted time duration (like 00:05:32).
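
    Assuming the long value is a number of milliseconds (if your API reports microseconds, divide by 1,000 first), the formatting is just integer division and remainders. A minimal sketch:

        # Sketch: format a duration given in milliseconds as HH:MM:SS.
        def format_duration(duration_ms):
            total_seconds = duration_ms // 1000
            hours, remainder = divmod(total_seconds, 3600)
            minutes, seconds = divmod(remainder, 60)
            return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

        print(format_duration(332000))  # -> 00:05:32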

    Read the article

  • How to programmatically generate an MP3 podcast file with chapters and text track?

    - by adib
    Hi, does anybody know how to programmatically generate MP3 files with bookmarks that can be used in iTunes / iPod / iPhone / iPod touch - specifically text bookmarks (bookmarks with titles) that let the listener skip to a specific point in time in the audio file? Also, how do I add the text transcription of the podcast's content? Even better if you have example Cocoa code or a library to write the MP3 file. Thanks.
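
    In MP3 files, chapter bookmarks live in ID3v2 CHAP frames plus a CTOC table of contents, and a transcription can go into a USLT (unsynchronised lyrics) frame; note that Apple's "enhanced podcasts" with chapters are normally AAC/M4A files, so support for ID3 chapters in iTunes/iPod should be verified on the actual devices. A hedged sketch using the mutagen Python library rather than Cocoa, with made-up titles and times:

        # Sketch: write ID3v2 chapter frames (CHAP + CTOC) into an MP3 that
        # already has an ID3 tag, using mutagen. Whether a given player honours
        # ID3 chapters has to be tested separately.
        from mutagen.id3 import ID3, CHAP, CTOC, CTOCFlags, TIT2

        def add_chapters(path, chapters):
            """chapters: list of (title, start_ms, end_ms) tuples."""
            tags = ID3(path)
            ids = []
            for i, (title, start_ms, end_ms) in enumerate(chapters):
                cid = f"chp{i}"
                ids.append(cid)
                tags.add(CHAP(element_id=cid, start_time=start_ms, end_time=end_ms,
                              sub_frames=[TIT2(text=[title])]))
            tags.add(CTOC(element_id="toc",
                          flags=CTOCFlags.TOP_LEVEL | CTOCFlags.ORDERED,
                          child_element_ids=ids,
                          sub_frames=[TIT2(text=["Chapters"])]))
            tags.save()

        add_chapters("episode.mp3", [("Intro", 0, 45000), ("Interview", 45000, 1500000)])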

    Read the article

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi code gurus, I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | 0; //| kAudioFormatFlagIsNonInterleaved | 0;
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (left or right) from an LPCM buffer in the input format, making it mono in the output format. Some of the conversion logic is as follows. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL,
                                                                0, &blockBuffer);

        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);

            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize,
                                                 audioBuffer.mData, &numBytesIO, convertedBuf);
            char errchar[10];
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);

            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);

            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But I get the 'insz' error, so it can't convert. I would appreciate any input. Thanks in advance
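
    Core Audio specifics aside, the operation being requested here is simple at the byte level: interleaved 16-bit stereo PCM is just alternating left/right samples, so keeping every other 16-bit value yields one channel. A minimal, platform-neutral sketch of that deinterleaving (Python, standard library only), purely to illustrate the data layout the channel map above asks the converter to produce:

        # Sketch: extract the left (or right) channel from an interleaved
        # 16-bit stereo little-endian PCM buffer by plain byte manipulation.
        # array("h") uses native endianness, which is little-endian on iOS/x86.
        import array

        def extract_channel(stereo_bytes, channel=0):
            """channel=0 -> left, channel=1 -> right."""
            samples = array.array("h")      # signed 16-bit samples
            samples.frombytes(stereo_bytes)
            mono = samples[channel::2]      # every other sample belongs to one channel
            return mono.tobytes()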

    Read the article

  • Simple wave generator with SDL in c++

    - by Vlad Popescu
    I am having problems understanding how the audio part of the SDL library works. I know that when you initialize it, you have to specify the frequency and a callback function, which I think is then called automatically at the given frequency. Can anyone who has worked with the SDL library write a simple example that uses sdl_audio to generate a 440 Hz square wave (since it is the simplest waveform) at a sampling frequency of 44000 Hz? Thanks in advance
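
    SDL specifics aside, the callback only has to fill the buffer it is handed with square-wave samples, and that part is plain arithmetic: a 440 Hz square wave at 44000 Hz flips sign every 44000 / (2 * 440) = 50 samples. A language-neutral sketch of generating those samples (Python here, writing them to a WAV file for brevity; the per-sample logic is what an SDL audio callback would compute into its buffer):

        # Sketch: one second of a 440 Hz square wave at 44000 Hz, written to a
        # WAV file. In SDL the same per-sample values would be written into the
        # buffer passed to the audio callback instead of to disk.
        import struct
        import wave

        RATE = 44000        # sampling frequency from the question
        FREQ = 440          # square-wave frequency
        AMPLITUDE = 12000   # 16-bit samples, kept well below clipping

        half_period = RATE // (2 * FREQ)   # samples before the wave flips sign (50 here)

        with wave.open("square440.wav", "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)            # 16-bit
            out.setframerate(RATE)
            for i in range(RATE):          # one second of audio
                value = AMPLITUDE if (i // half_period) % 2 == 0 else -AMPLITUDE
                out.writeframes(struct.pack("<h", value))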

    Read the article

  • Best way to play wav files in the browser?

    - by Splatzone
    I have no choice but to play WAV files directly in the browser (server-side encoding to MP3 isn't an option, unfortunately). What's the best way to do this? I'd really like to take advantage of the HTML5 audio tag, but my target audience includes many, many teens using IE6. As far as I'm aware Flash isn't an option, but speedy playback really is critical. Thanks.

    Read the article

  • playing two streams on android

    - by Yanush
    Hello, I'm looking for a way (or at least to be pointed in the right direction) to play two streams of audio on Android simultaneously, but each on a different channel (e.g. one through the speaker and one through the headphones). I'm not even sure it's possible hardware-wise. Any thoughts or clues? Y

    Read the article

  • how to get the css keys and values for any html tag

    - by artsince
    I would like to dump all CSS key/value pairs for an HTML tag. In particular, I would like to learn the CSS properties of the <audio> tag so I can try to customize its look. document.getElementById('myaudio').style returns a CSSStyleDeclaration object, but its length is 0 and I cannot figure out how to iterate over the key/value pairs. Thank you

    Read the article
