Search Results

Search found 5073 results on 203 pages for 'audio jack'.

Page 12/203 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8 kHz mono. This command line downsamples it, but the result is still stereo:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga

    It generates a file that the file utility identifies as stereo:

        $ file /tmp/blah1.wav
        /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz

    Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 so I get a file that's half the size:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga

    But it doesn't! blah2.wav is identical to blah1.wav! What am I doing wrong?

    Read the article

  • Reading audio with Extended Audio File Services (ExtAudioFileRead)

    - by Paperflyer
    I am working on understanding Core Audio, or rather: Extended Audio File Services. Here, I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong, probably the audioBufferList. I cannot use one huge buffer because then dataByteSize would overflow its UInt32 integer range with huge files. Here is the code to create the audioBufferList:

        AudioBufferList *audioBufferList;
        audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers - 1) * sizeof(AudioBuffer));
        audioBufferList->mNumberBuffers = numBuffers;
        for (int bufferIdx = 0; bufferIdx < numBuffers; bufferIdx++) {
            audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
            audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
            audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
        }

        UInt32 numFrames = fileLengthInFrames;
        error = ExtAudioFileRead(extAudioFileRef, &numFrames, audioBufferList);

    Do you know what I am doing wrong here?
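
    A minimal C sketch of an alternative worth considering: instead of one huge buffer or a multi-buffer list, loop ExtAudioFileRead() over a single reusable buffer so mDataByteSize never approaches the UInt32 limit. The chunk size, the ReadInChunks name, and the assumption of an interleaved client format are illustrative, not taken from the question above.

        #include <AudioToolbox/AudioToolbox.h>
        #include <stdlib.h>

        /* Read a long file in fixed-size chunks with one reusable buffer. */
        static OSStatus ReadInChunks(ExtAudioFileRef file, UInt32 channels, UInt32 bytesPerFrame)
        {
            const UInt32 framesPerChunk = 32 * 1024;                 /* arbitrary chunk size */
            UInt32 bufferBytes = framesPerChunk * bytesPerFrame;
            void *buffer = malloc(bufferBytes);

            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;                           /* one interleaved buffer */
            bufferList.mBuffers[0].mNumberChannels = channels;

            OSStatus err = noErr;
            for (;;) {
                bufferList.mBuffers[0].mDataByteSize = bufferBytes;  /* reset before every read */
                bufferList.mBuffers[0].mData = buffer;
                UInt32 frames = framesPerChunk;
                err = ExtAudioFileRead(file, &frames, &bufferList);
                if (err != noErr || frames == 0)
                    break;                                           /* error or end of file */
                /* ... process `frames` frames of interleaved samples from buffer ... */
            }
            free(buffer);
            return err;
        }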

    Read the article

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services document, specifically the following code:

        // Writing an audio queue buffer to disk
        AudioFileWritePackets (              // 1
            pAqData->mAudioFile,             // 2
            false,                           // 3
            inBuffer->mAudioDataByteSize,    // 4
            inPacketDesc,                    // 5
            pAqData->mCurrentPacket,         // 6
            &inNumPackets,                   // 7
            inBuffer->mAudioData             // 8
        );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file. Assuming the sample rate is 44100:

        AudioStreamBasicDescription mDataFormat;
        mDataFormat.mSampleRate = 44100.0f;
        mDataFormat.mBitsPerChannel = 16;
        ...
        NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
        SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real time", where the y axis goes up and down depending on the loudness of my voice. Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? What do I need to do if I just want to plot the range between 50 - 100 Hz? Thanks in advance for your help.
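
    For what the samples are: with 16-bit linear PCM, each SInt16 is a signed amplitude reading taken 44100 times per second, so the values themselves are unitless (dividing by 32768 maps them to [-1, 1]); frequency content such as a 50-100 Hz band only appears after a transform. A hedged C sketch (plain C, with int16_t standing in for SInt16; function names are illustrative) of a Goertzel probe at one frequency, which could be swept from 50 to 100 Hz:

        #include <math.h>
        #include <stddef.h>
        #include <stdint.h>

        /* One PCM sample mapped from [-32768, 32767] to a unitless [-1, 1]. */
        static float normalize(int16_t s) { return (float)s / 32768.0f; }

        /* Goertzel filter: signal power at one target frequency over n samples. */
        static float goertzel_power(const int16_t *samples, size_t n,
                                    float freqHz, float sampleRate)
        {
            float omega = 2.0f * (float)M_PI * freqHz / sampleRate;
            float coeff = 2.0f * cosf(omega);
            float sPrev = 0.0f, sPrev2 = 0.0f;
            for (size_t i = 0; i < n; i++) {
                float s = normalize(samples[i]) + coeff * sPrev - sPrev2;
                sPrev2 = sPrev;
                sPrev  = s;
            }
            return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
        }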

    Read the article

  • Progressive download using Matt Gallagher's audio streamer

    - by Fernando Valente
    I'm a complete n00b when it comes to audio. I'm using Matt Gallagher's audio streamer in my radio app. How can I use progressive download? Also, ExtAudioFile is a good idea too :)

    Edit: I used this:

        length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
        if (!data) data = [[NSMutableData alloc] initWithLength:0];
        [data appendData:[NSData dataWithBytes:bytes length:kAQDefaultBufSize]];

    Now I can save the audio data using NSData's writeToFile:atomically: method, but the audio won't play. Also, if I try to load it into an AVAudioPlayer, I get an error.
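
    One detail worth checking in the snippet above: it appends kAQDefaultBufSize bytes on every pass even though CFReadStreamRead may return fewer. A minimal sketch of a length-checked read loop that writes a stream to disk, in plain CoreFoundation/C rather than the Objective-C of the question; the buffer size and function name are illustrative:

        #include <CoreFoundation/CoreFoundation.h>
        #include <stdio.h>

        static void save_stream(CFReadStreamRef stream, const char *path)
        {
            enum { kBufSize = 2048 };                 /* stand-in for kAQDefaultBufSize */
            UInt8 buffer[kBufSize];
            FILE *out = fopen(path, "ab");            /* append as chunks arrive */
            if (out == NULL)
                return;

            CFIndex bytesRead;
            while ((bytesRead = CFReadStreamRead(stream, buffer, kBufSize)) > 0) {
                fwrite(buffer, 1, (size_t)bytesRead, out);   /* only the bytes actually read */
            }
            fclose(out);                              /* bytesRead < 0 signals a stream error */
        }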

    Read the article

  • audio processing in iPhone

    - by Janaka
    I am writing an iPhone application to apply filters to audio input and output the result in real time. I am new to audio processing; is using an Audio Unit the correct approach? I found out how to output data using an Audio Unit but couldn't figure out how to capture input audio. Is there a sample application showing how to connect input and output using an Audio Unit?
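
    A hedged C sketch of the pattern usually described for this: a RemoteIO audio unit with input enabled on bus 1 and a render callback that pulls the captured samples with AudioUnitRender, filters them, and lets the same buffers feed the speaker. Stream-format setup, error handling, and audio session configuration are omitted; MyRenderCallback and gRemoteIO are illustrative names.

        #include <AudioToolbox/AudioToolbox.h>

        static AudioUnit gRemoteIO;   /* kept global so the callback can call AudioUnitRender */

        static OSStatus MyRenderCallback(void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList *ioData)
        {
            /* Pull this slice of microphone samples from input bus 1 ... */
            OSStatus err = AudioUnitRender(gRemoteIO, ioActionFlags, inTimeStamp,
                                           1, inNumberFrames, ioData);
            /* ... apply the filter to ioData here; these buffers then go to the speaker. */
            return err;
        }

        static void SetUpRemoteIO(void)
        {
            AudioComponentDescription desc = {0};
            desc.componentType = kAudioUnitType_Output;
            desc.componentSubType = kAudioUnitSubType_RemoteIO;
            desc.componentManufacturer = kAudioUnitManufacturer_Apple;

            AudioComponent comp = AudioComponentFindNext(NULL, &desc);
            AudioComponentInstanceNew(comp, &gRemoteIO);

            UInt32 enable = 1;   /* turn the microphone on (input scope, bus 1) */
            AudioUnitSetProperty(gRemoteIO, kAudioOutputUnitProperty_EnableIO,
                                 kAudioUnitScope_Input, 1, &enable, sizeof(enable));

            AURenderCallbackStruct cb = { MyRenderCallback, NULL };
            AudioUnitSetProperty(gRemoteIO, kAudioUnitProperty_SetRenderCallback,
                                 kAudioUnitScope_Input, 0, &cb, sizeof(cb));

            AudioUnitInitialize(gRemoteIO);
            AudioOutputUnitStart(gRemoteIO);
        }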

    Read the article

  • options for producing audio with GWT

    - by Kaffeine Coma
    What options are there for producing audio in a GWT app? I'm thinking of making a simple game, but I'm disappointed to see that there's still not much progress on audio support directly in GWT (yes, I realize that's largely due to lack of underlying browser support; looking forward to HTML5!) This blog post says that "audio support in GWT is rapidly evolving", yet I don't see updates in over a year, at least not at that site. It seems these are the available options: GWT Voices GWT SoundManager GWT Sound GWT Incubator I believe most of these (all of them?) rely on Flash to produce audio. I'm most inclined to go with the GWT Incubator, as that's where features slated for inclusion in GWT get started, but I've no real recommendations to go on. I would appreciate hearing about your experiences with any of these libraries, thanks.

    Read the article

  • html5 audio player - jquery toggle click play/pause???

    - by mathiregister
    Hello guys, I wonder what I'm doing wrong?

        $('.player_audio').click(function() {
            if ($('.player_audio').paused == false) {
                $('.player_audio').pause();
                alert('music paused');
            } else {
                $('.player_audio').play();
                alert('music playing');
            }
        });

    I can't seem to start the audio track if I hit the "player_audio" tag:

        <div class='thumb audio'><audio class='player_audio' src='$path/$value'></audio></div>

    Any idea what I'm doing wrong or what I have to do to get it working?

    Read the article

  • help me pick the right iPhone audio class - MPMoviePlayer vs AVAudioPlayer vs MPMusicPlayer

    - by huevos de oro
    Does anyone know of a good tutorial on the distinction between MPMoviePlayer, AVAudioPlayer and MPMusicPlayer? I want to play audio from an mp3 file available at an external URL, ideally in an iPod-like audio view. I toyed with MPMoviePlayer but it appears to be more suitable for video: when audio starts, a "movie playing" message displays, the controls disappear and a white QuickTime splash page displays. I would like the standard iPod audio controls to display all the time, and to customize the image behind them.

    Read the article

  • Fake "user initiated" <audio> tag on iPad

    - by Alex Ford
    I know that Apple's docs say that an mp3 within an <audio> tag on iPhone OS can't be played without user intervention (they cite bandwidth concerns, totally reasonable). However, has anyone succeeded in faking a user action to play the audio? Perhaps faking events to off-screen native audio controls with JavaScript? I'm using jPlayer right now, which works great on desktop Safari but is silent on my iPad. I'm prototyping a touch interface using WebKit on the iPad, and audio is an integral part of the experience, so yes, I do have a good reason to want to override this convention. I'd appreciate any help. Thanks.

    Read the article

  • Question SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded to a specific audio format.

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm, 8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code there is nothing written to my MemoryStream instance; however when I change from 8000 samples per second up to 11025 the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file...

    Update: Recently discovered that this audio format succeeds for certain installed voices, but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.

    Read the article

  • Audio processing in C# or C++

    - by melculetz
    Hi, I would like to create an application that uses AI techniques and allows the user to record a part of a song, then tries to find that song in a database of wav files. I would like to use existing libraries for the audio processing part. So, could you recommend any libraries in C# which can read a wav file, get input from a microphone, apply audio filters (low pass, high pass, FFT, etc.) and maybe plot the audio signal as well? I would prefer to develop in C#, but if there aren't good libraries for audio processing, I guess I could work in C++ as well. As far as I know, Matlab already has the above-mentioned functionality, but I can't use it in my application.
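
    Not a library recommendation, but since the question mentions low-pass and high-pass filters, here is a hedged sketch of what the simplest such filter looks like in code, in C (close to the C++ the question also allows); the cutoff, sample rate, and buffer handling are illustrative:

        #include <math.h>
        #include <stddef.h>

        /* One-pole low-pass filter applied in place to float samples. */
        static void lowpass_inplace(float *x, size_t n, float cutoffHz, float sampleRate)
        {
            if (n == 0)
                return;
            float rc    = 1.0f / (2.0f * (float)M_PI * cutoffHz);
            float dt    = 1.0f / sampleRate;
            float alpha = dt / (rc + dt);          /* smoothing factor in (0, 1) */
            float y = x[0];
            for (size_t i = 1; i < n; i++) {
                y = y + alpha * (x[i] - y);        /* y[n] = y[n-1] + a*(x[n] - y[n-1]) */
                x[i] = y;
            }
        }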

    Read the article

  • Howto play video with external audio in Silverlight?

    - by Fury
    Hi all, is there any proper way to play video and external audio synchronously, other than simply starting two MediaElements (one for the video source and one for the audio) simultaneously? I need to play video with different soundtracks, but I believe that two separate MediaElements will be out of sync at some point. Maybe there is some way to add an audio source to the existing MediaElement with video? Platform: SL3, but SL4 will be good as well. Thanks in advance.

    Read the article

  • How to initialize audio with Vala/SDL

    - by ioev
    I've been trying to figure this out for a few hours now. In order to start up the audio, I need to create an SDL.AudioSpec object and pass it to SDL.Audio.Open. The problem is, AudioSpec is a class with a private constructor, so when I try to create one I get:

        sdl.vala:18.25-18.43: error: `SDL.AudioSpec' does not have a default constructor
        AudioSpec audiospec = new SDL.AudioSpec();
                              ^^^^^^^^^^^^^^^^^^^

    And if I try to just assign values to its member vars like a struct (it's a struct in normal SDL) I get:

        sdl.vala:20.3-20.25: error: use of possibly unassigned local variable `audiospec'
        audiospec.freq = 22050;
        ^^^^^^^^^^^^^^^^^^^^^^^

    I found the Valadoc page here: http://valadoc.org/sdl/SDL.AudioSpec.html but it isn't much help at all. The offending code block looks like this:

        // setup the audio configuration
        AudioSpec audiospec;
        AudioSpec specback;

        audiospec.freq = 22050;
        audiospec.format = SDL.AudioFormat.S16LSB;
        audiospec.channels = 2;
        audiospec.samples = 512;

        // try to initialize sound with these values
        if (SDL.Audio.open(audiospec, specback) < 0) {
            stdout.printf("ERROR! Check audio settings!\n");
            return 1;
        }

    Any help would be greatly appreciated!
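
    For reference, a hedged sketch of the plain C SDL 1.2 calls that the Vala binding wraps: in C the spec is a struct you zero out and fill, and a callback feeds the device. The values mirror the question; my_audio_callback and the silence it writes are placeholders.

        #include <SDL/SDL.h>
        #include <stdio.h>
        #include <string.h>

        static void my_audio_callback(void *userdata, Uint8 *stream, int len)
        {
            memset(stream, 0, (size_t)len);   /* silence; real code writes samples here */
        }

        int init_audio(void)
        {
            if (SDL_Init(SDL_INIT_AUDIO) < 0)
                return 1;

            SDL_AudioSpec desired, obtained;
            memset(&desired, 0, sizeof(desired));
            desired.freq     = 22050;
            desired.format   = AUDIO_S16LSB;
            desired.channels = 2;
            desired.samples  = 512;
            desired.callback = my_audio_callback;

            if (SDL_OpenAudio(&desired, &obtained) < 0) {
                fprintf(stderr, "ERROR! Check audio settings: %s\n", SDL_GetError());
                return 1;
            }
            SDL_PauseAudio(0);                /* start calling the callback */
            return 0;
        }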

    Read the article

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped using a custom button via JavaScript. (The reason being I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway.) A simplified example (in CoffeeScript):

        # Works fine on all browsers
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          @_audio[0].play()             # Call play() on audio element

    The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays, so I put .play() inside a setTimeout. However I just can't get this to work:

        # Will not play on iPad
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          setTimeout (=>                # Declare a 300ms timeout
            @_audio[0].play()           # Call play() on audio element
          ), 300

    I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?

    Read the article

  • Xcode best audio practices

    - by Zachary Webert
    What is the best practice for creating an audio prompt within my app, which will append different portions of audio together to ask a question? For example: "What is" + "foo"? "What is" + "bar"? I have developed an "AudioQueue" object using AudioToolbox which uses AudioServicesPlaySystemSound(), and it is working perfectly. Is there anything wrong with playing this type of audio through the alert system? If so, what are my alternatives? Thank you.
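
    A hedged sketch of one way to chain short prompt clips with the System Sound API the question mentions, using a completion callback to start the next clip once the current one finishes. The URLs, the two-clip array, and the function names are illustrative only.

        #include <AudioToolbox/AudioToolbox.h>

        static SystemSoundID gClips[2];
        static int gNext = 0;

        /* Called by the system when a clip finishes; start the next one. */
        static void PlayNextClip(SystemSoundID finished, void *clientData)
        {
            if (gNext < 2)
                AudioServicesPlaySystemSound(gClips[gNext++]);
        }

        static void PlayPrompt(CFURLRef whatIsURL, CFURLRef fooURL)
        {
            AudioServicesCreateSystemSoundID(whatIsURL, &gClips[0]);
            AudioServicesCreateSystemSoundID(fooURL, &gClips[1]);

            /* Ask to be notified when the first clip ends so the second can start. */
            AudioServicesAddSystemSoundCompletion(gClips[0], NULL, NULL,
                                                  PlayNextClip, NULL);
            gNext = 1;
            AudioServicesPlaySystemSound(gClips[0]);
        }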

    Read the article

  • HTML5 audio with PHP script does not work on iPad/Iphone

    - by saulob
    OK, I'm trying to play HTML5 audio on the iPad but it does not work. I created a PHP script that serves the MP3 to the HTML5 audio element:

        mp3_file_player.php?n=mp3file.mp3

    The player is here: http://www.avault.com/news/podcast-news/john-romero-podcast-episode-80/ You will see that it works on every HTML5-capable browser, even on my iPod Touch. But it does not work on iPad/iPhone, or in Safari on Mac OS X (I tried Safari/Windows, which worked fine). This is my PHP code:

        header("X-Powered-By: ");
        header("Accept-Ranges: bytes");
        header("Content-Length: ". (string)(filesize($episode_filename)) ."");
        header("Content-type: audio/mpeg");
        readfile($episode_filename);
        exit();

    Everything works fine, and the MP3 gets the same headers as reading the mp3 directly. HTTP headers from direct file access:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:31 GMT
        Server          Apache/2.2.9
        Last-Modified   Wed, 26 May 2010 13:39:19 GMT
        Etag            "dac0039-41d91f8-4877f669cefc0"
        Accept-Ranges   bytes
        Content-Length  50656162
        Content-Range   bytes 18390614-69046775/69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    HTTP headers from my PHP script:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:08 GMT
        Server          Apache/2.2.9
        Accept-Ranges   bytes
        Content-Length  69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    The only thing different is the Content-Range. I even tried to add it, but if I use it the player will not work on my iPod Touch, so I removed it. Thank you very much.

    Read the article

  • is background audio playing enabled in iPhone?

    - by Nareshkumar
    I was able to play audio in the background of the application on the iPhone. However, is there any service that enables audio playback to continue after the user exits the application? I know that SDK 4.0 promises multitasking and background processing for applications, but I would like to know if this is possible for audio playback in earlier versions.

    Read the article

  • MVC Serve audio files while preventing direct linking using HttpResponseBase

    - by VinceGeek
    I need to be able to serve audio files in an MVC app while preventing direct access. Ideally the page would render with a player control so the user can start/stop the audio linked to the database record (the audio files are in a folder, not the database). I have a controller action like this:

        Response.Clear();
        Response.ContentType = "audio/wav";
        Response.TransmitFile(audioFilename);
        Response.End();
        return Response;

    and the view uses the RenderAction method:

        <% Html.RenderAction("ServeAudioFile"); %>

    This works, but it won't display inline on the existing view; it opens a new page with just the media control. Am I totally barking up the wrong tree, or is there a way to embed the response in the existing view? Linking the file directly works exactly as I would like, but then I can't control access to the file.

    Read the article

  • Determining the best audio quality.

    - by The Rook
    How can you determine which file in a list of audio files has the best audio quality, without looking at the audio file's header? What if all of the files came from different encodings and they were all transcoded to the same format and bit rate?

    Read the article

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another one the number of packets you want to read. So I imagine an audio file to look like this internally: it's made up of a lot of packets. If the audio file has a variable bit rate format, then every packet may have a different size. If the file has a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to any kind of file? Is this what files actually look like?
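
    That picture (a truck full of boxes, each box a packet) matches how Core Audio presents audio files: you ask for a run of packets starting at some packet index, and for variable bit rate formats each packet comes back with its own size description. A hedged C sketch around the function the question names; buffer sizing via the maximum-packet-size property is one common approach, error handling is omitted, and read_some_packets is an illustrative name:

        #include <AudioToolbox/AudioToolbox.h>
        #include <stdlib.h>

        static void read_some_packets(AudioFileID file, SInt64 startPacket, UInt32 numPackets)
        {
            UInt32 maxPacketSize = 0;
            UInt32 propSize = sizeof(maxPacketSize);
            AudioFileGetProperty(file, kAudioFilePropertyMaximumPacketSize,
                                 &propSize, &maxPacketSize);

            void *buffer = malloc(maxPacketSize * numPackets);
            AudioStreamPacketDescription *descs =
                malloc(numPackets * sizeof(AudioStreamPacketDescription));

            UInt32 outNumBytes = 0;            /* receives bytes actually read */
            UInt32 ioNumPackets = numPackets;  /* in: packets wanted, out: packets read */
            AudioFileReadPackets(file, 0 /* don't cache */, &outNumBytes, descs,
                                 startPacket, &ioNumPackets, buffer);

            /* For VBR data, descs[i].mStartOffset and descs[i].mDataByteSize locate
               packet i inside buffer; for CBR data every packet has the same size. */
            free(descs);
            free(buffer);
        }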

    Read the article

  • Windows 7 Audio tracks messed up

    - by Crash893
    I'm not an audio guy, so this might not be the most articulate description of the problem I'm having, but it seems like ever since I went to Windows 7, in some video (Netflix and some YouTube) the background track (music, sound effects, etc.) plays appropriately but the foreground track (i.e. the actors and/or narrator) barely comes in at all. Right now I have just a simple set of PC speakers and sometimes a pair of headphones that plug into the PC speakers (no fancy 5.1). I've looked at every setting I can think of but I can't find anything that could be causing this. I've uninstalled and reinstalled the driver and I still get the same results, so I think it's a software issue. Any ideas?

    Read the article

  • Linux command to concatenate audio files and output them to ogg

    - by hasen j
    What command-line tools do I need in order to concatenate several audio files and output them as one ogg (and/or mp3)? If you can provide the complete command to concatenate and output to ogg, that would be awesome.

    Edit: Input files (in my case, currently) are in wma format, but ideally it should be flexible enough to support a wide range of popular formats.

    Edit2: Just to clarify, I don't want to merge all wmas in a certain directory; I just want to concatenate 2 or 3 files into one. Thanks for the proposed solutions, but they all seem to require creating temporary files; if at all possible, I'd like to avoid that.

    Read the article

  • My 5.1 audio system not working properly when I connect to Internet

    - by Gemboz
    I just bought a Genius 5005 5.1 audio system, and I have an issue with it: it won't work properly when I get on the Internet (the center speaker isn't working). I also bought an Asus Xonar DG 5.1 sound card, which works perfectly; I installed it and also installed the driver for it. When I play music/videos from my hard drive, all the speakers work fine, but when I connect to the Internet to play videos/music, the center speaker won't work. How can I fix this issue?

    Read the article

  • converting huge MPEG audio files to something smaller

    - by john
    I've got some large MPEG audio files (144 MB each) that I'm looking to convert to something smaller so I can send them out as attachments to an email. Any suggestions on the software to use? I'm looking for something free that will run on Windows. I don't really care what the destination file is, mp3 would be nice. If there's a web service out there that would do this without the need to download any software to my machine, that would be even better, but I would be more than happy just getting it done any way I can. Thanks!

    Read the article
