Search Results

Search found 4816 results on 193 pages for 'audio redirection'.

Page 12 of 193

  • Live playback of left and right audio channels on Windows

    - by user1254761
    I have a multichannel (4x stereo) audio card (M-Audio Delta 1010LT) and want to play back / play through some of the channels live. But I am only able to play back/play through the left channel of each stereo input (CH1, CH3, CH5, CH7). For CH2, CH4, CH6, CH8 I see the Windows volume indicator going up and down in the Windows recording settings, but I don't hear any playback sound. Is there a way to play back/play through all input channels?

  • Split audio into tracks?

    - by Mark
    I've recorded some music from an internet radio stream. I want to split it into separate audio tracks, preferably automatically (wherever there's a pause, or using some other clever algorithm). Anyone know of some free software that can do this?
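
    For reference, silence detection is also doable from the command line; a minimal sketch, assuming ffmpeg is installed and the recording is in a file named recording.mp3 (a placeholder name). The silencedetect filter only prints the timestamps of the gaps, which can then be used as cut points:

        ffmpeg -i recording.mp3 -af silencedetect=noise=-35dB:d=1 -f null - 2>&1 | grep silence_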

  • Audio organizing via CLI

    - by Radek Šimko
    I'm looking for some software for my openSUSE box with which I could organize my audio files. I've found one that may be good, but it can't run without an X server (i.e., in the CLI): http://musicbrainz.org/doc/MusicBrainz_Picard I'm not looking for ID3 renamers. There are maybe hundreds of them... I'm looking for software that has its own database, or is able to communicate with some database like CDDB, Gracenote, last.fm, etc.

  • Linux: grab audio from a video clip

    - by liori
    Hello, I'd like to take the audio track from a video clip in an FLV container and save it to something playable by portable music players. Are there any easy-to-use tools for that? I know how to do it with console tools (mplayer + lame/oggenc), but I'd like something clickable, preferably for GNOME. Thanks!
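
    For reference, a minimal sketch of the console route mentioned above, assuming mplayer and oggenc are installed (file names are placeholders): dump the decoded audio to a WAV file, then encode it.

        mplayer -vo null -vc null -ao pcm:fast:file=audio.wav clip.flv
        oggenc audio.wav -o audio.ogg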

  • Serving an upload- and streaming-intensive video and audio site

    - by Pollux Khafra
    I'm about to launch a new site that lets users both upload and stream audio and video, and I don't know anything about the server side of things. My original plan was to just use a dedicated server through HostGator, but from what I'm reading, cloud hosting or a load-balanced cluster is the best way to go for what I'm trying to do. All the articles seem to have an agenda to sell you on an affiliate web host, so how should I really do this?

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8 kHz mono. This command line resamples it to 8 kHz, but the result is still stereo:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga

    It generates a file that the file utility identifies as stereo:

        $ file /tmp/blah1.wav
        /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz

    Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 so I get a file that's half the size:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga

    But it doesn't! blah2.wav is identical to blah1.wav! What am I doing wrong?
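
    For reference, mplayer separates filters in the -af chain with commas and filter options with colons, so in the second command above pan=1:0.5:0.5 is most likely being parsed as extra options to resample rather than as its own filter. A sketch of the presumably intended chain (the same command with only the separator changed, untested here):

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1,pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga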

  • Reading audio with Extended Audio File Services (ExtAudioFileRead)

    - by Paperflyer
    I am working on understanding Core Audio, or rather Extended Audio File Services. Here, I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong, probably the audioBufferList. I cannot use one huge buffer because then dataByteSize would overflow its UInt32 range with huge files. Here is the code to create the audioBufferList:

        AudioBufferList *audioBufferList;
        audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers - 1) * sizeof(AudioBuffer));
        audioBufferList->mNumberBuffers = numBuffers;
        for (int bufferIdx = 0; bufferIdx < numBuffers; bufferIdx++) {
            audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
            audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
            audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
        }

        UInt32 numFrames = fileLengthInFrames;
        error = ExtAudioFileRead(extAudioFileRef, &numFrames, audioBufferList);

    Do you know what I am doing wrong here?

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services document, specifically at the following code:

        // Writing an audio queue buffer to disk
        AudioFileWritePackets (                    // 1
            pAqData->mAudioFile,                   // 2
            false,                                 // 3
            inBuffer->mAudioDataByteSize,          // 4
            inPacketDesc,                          // 5
            pAqData->mCurrentPacket,               // 6
            &inNumPackets,                         // 7
            inBuffer->mAudioData                   // 8
        );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file. Assuming the sample rate is 44100:

        AudioStreamBasicDescription mDataFormat;
        mDataFormat.mSampleRate = 44100.0f;
        mDataFormat.mBitsPerChannel = 16;
        ...
        NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
        SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real time", where the y axis goes up and down depending on the loudness of my voice. Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? What do I need to do if I just want to plot the range between 50 - 100 Hz? Thanks in advance for your help.
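
    For context (hedged, based only on the declared format rather than on this specific app): with 16-bit linear PCM, each SInt16 in audioSample is a raw amplitude value in the range -32768 .. 32767, so the plot above is a time-domain waveform, not a frequency plot. Isolating a band such as 50 - 100 Hz needs a DFT/FFT: for an N-point FFT at 44100 Hz, bin k corresponds to k * 44100 / N Hz. For example, with N = 4096 each bin is about 44100 / 4096 = 10.8 Hz wide, so 50 - 100 Hz falls roughly in bins 5 through 9.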

  • Progressive download using Matt Gallagher's audio streamer

    - by Fernando Valente
    I'm a complete n00b when it comes to audio. I'm using Matt Gallagher's audio streamer in my radio app. How may I use progressive download? Also, ExtAudioFile is a good idea too :) Edit: I used this:

        length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
        if (!data)
            data = [[NSMutableData alloc] initWithLength:0];
        [data appendData:[NSData dataWithBytes:bytes length:kAQDefaultBufSize]];

    Now I can save the audio data using NSData's writeToFile:atomically: method, but the audio won't play. Also, if I try to load it into an AVAudioPlayer, I get an error.
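
    One thing that stands out in the snippet above (a hedged observation, not a tested fix): every pass appends kAQDefaultBufSize bytes even when CFReadStreamRead returned fewer, so the saved file can end up padded with garbage. A minimal sketch of the safer pattern, using the same variables as the snippet:

        length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
        if (length > 0) {
            if (!data)
                data = [[NSMutableData alloc] init];
            // append only the bytes actually read, not the whole buffer
            [data appendBytes:bytes length:length];
        }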

  • audio processing in iPhone

    - by Janaka
    I am writing an iPhone application that applies filters to audio input and outputs the result in real time. I am new to audio processing; is using an AudioUnit the correct approach? I found out how to output data using an AudioUnit, but I couldn't figure out how to capture input audio. Is there a sample application showing how to connect input and output using an AudioUnit?

  • options for producing audio with GWT

    - by Kaffeine Coma
    What options are there for producing audio in a GWT app? I'm thinking of making a simple game, but I'm disappointed to see that there's still not much progress on audio support directly in GWT (yes, I realize that's largely due to lack of underlying browser support; looking forward to HTML5!). This blog post says that "audio support in GWT is rapidly evolving", yet I don't see updates in over a year, at least not at that site. It seems these are the available options: GWT Voices, GWT SoundManager, GWT Sound, and the GWT Incubator. I believe most of these (all of them?) rely on Flash to produce audio. I'm most inclined to go with the GWT Incubator, as that's where features slated for inclusion in GWT get started, but I've no real recommendations to go on. I would appreciate hearing about your experiences with any of these libraries, thanks.

  • html5 audio player - jquery toggle click play/pause???

    - by mathiregister
    Hello guys, I wonder what I'm doing wrong?

        $('.player_audio').click(function() {
            if ($('.player_audio').paused == false) {
                $('.player_audio').pause();
                alert('music paused');
            } else {
                $('.player_audio').play();
                alert('music playing');
            }
        });

    I can't seem to start the audio track if I hit the "player_audio" tag.

        <div class='thumb audio'><audio class='player_audio' src='$path/$value'></audio></div>

    Any idea what I'm doing wrong or what I have to do to get it working?
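
    For reference, paused is a property of the DOM media element and play()/pause() are its methods; the jQuery object returned by $('.player_audio') has none of them, so the check above never sees the real state and the play()/pause() calls fail. A minimal sketch using the same selector, going through the underlying element:

        $('.player_audio').click(function () {
            var audio = this;            // the raw <audio> DOM element, not the jQuery wrapper
            if (audio.paused) {
                audio.play();
                alert('music playing');
            } else {
                audio.pause();
                alert('music paused');
            }
        });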

  • help me pick the right iPhone audio class - MPMoviePlayer vs AVAudioPlayer vs MPMusicPlayer

    - by huevos de oro
    Does anyone know of a good tutorial on the distinction between MPMoviePlayer, AVAudioPlayer and MPMusicPlayer? I want to play audio from an mp3 file available at an external URL. Ideally it would be played in an iPod-like audio view. I toyed with MPMoviePlayer, but it appears to be more suitable for video: when audio starts, a "movie playing" message is displayed, the controls disappear, and a white QuickTime splash page appears. I would like the standard iPod audio controls to be displayed all the time, and to customize the image behind them.

  • Fake "user initiated" <audio> tag on iPad

    - by Alex Ford
    I know that Apple's docs say that an mp3 within an <audio> tag on iPhone OS can't be played without user intervention (they cite bandwidth concerns, totally reasonable). However, has anyone succeeded in faking a user action to play the audio? Perhaps by faking events to off-screen native audio controls with JavaScript? I'm using jPlayer right now, which works great in desktop Safari but is silent on my iPad. I'm prototyping a touch interface using WebKit on the iPad, and audio is an integral part of the experience, so yes, I do have a good reason to want to override this convention. I'd appreciate any help. Thanks.

  • How do I stop redirection after form submission?

    - by Noor
    I had a similar question posted here a few hours ago; just now I got the answer that I should look into using AJAX to do this. Since I want to complete this part of the site today, I can't afford to learn AJAX from the basics to do this now. This shouldn't be something difficult, and I thought that I would be able to do it, but I'm not skilled enough... I have a form; when you click submit, it posts to twitter.com/statuses/update.xml, and I need to be able to do so without being redirected there. Is there an easy way to do this or do I need to learn AJAX? Thankful for any answer at all..! Edit: I'm using this to submit:

        $(function() {
            $("#skikka").click(function() {
                var dendar = "http://" + $("#usernam").val() + ":" + $("#passwo").val() + "@twitter.com/statuses/update.xml";
                $("#formen").attr("action", dendar);
                $("#formen").submit();
                alert(dendar);
                return false;
            });
        });
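
    For reference, a minimal sketch of the AJAX route that was suggested, reusing the element IDs from the snippet above; the URL here is a hypothetical same-origin proxy endpoint, because a cross-domain POST straight to twitter.com from the browser is still subject to the same-origin policy and would normally need to go through the server:

        $(function() {
            $("#skikka").click(function() {
                $.ajax({
                    url: "statuses/update.xml",          // hypothetical same-origin proxy endpoint
                    type: "POST",
                    data: $("#formen").serialize(),
                    success: function() { alert("status posted"); },
                    error: function() { alert("post failed"); }
                });
                return false;                            // prevent the normal form submission/redirect
            });
        });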

  • SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded in a specific audio format.

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(System.Speech.AudioFormat.EncodingFormat.Pcm,
                8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second but not at 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound-editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest is that SpeechRecognitionEngine accepts this audio format and successfully recognized the speech in my synthesized wave file... Update: I recently discovered that this audio format succeeds for certain installed voices but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies with certain voice settings defined in the PromptBuilder.

  • Audio processing in C# or C++

    - by melculetz
    Hi, I would like to create an application that uses AI techniques and allows the user to record a part of a song, then tries to find that song in a database of wav files. I would like to use existing libraries for the audio processing part. So, could you recommend any libraries in C# that can read a wav file, get input from the microphone, provide some audio filters (low-pass, high-pass, FFT, etc.), and maybe plot the audio signal as well? I would prefer to develop in C#, but if there aren't good libraries for audio processing, I guess I could work in C++ as well. As far as I know, MATLAB already has the above-mentioned functionality, but I can't use it in my application.

  • How to play video with external audio in Silverlight?

    - by Fury
    Hi all, Is there any proper method to play video and external audio synchronously, other than simply having two MediaElements (one for the video source and one for the audio) started simultaneously? I need to play video with different soundtracks, but I believe that two separate MediaElements will drift out of sync at some point. Maybe there is some way to add an audio source to the existing MediaElement with the video? Platform: SL3, but SL4 will be fine as well. Thanks in advance.

  • How to initialize audio with Vala/SDL

    - by ioev
    I've been trying to figure this out for a few hours now. In order to start up the audio, I need to create an SDL.AudioSpec object and pass it to SDL.Audio.Open. The problem is, AudioSpec is a class with a private constructor, so when I try to create one I get:

        sdl.vala:18.25-18.43: error: `SDL.AudioSpec' does not have a default constructor
        AudioSpec audiospec = new SDL.AudioSpec();
                              ^^^^^^^^^^^^^^^^^^^

    And if I try to just assign values to its member vars like a struct (it's a struct in normal SDL), I get:

        sdl.vala:20.3-20.25: error: use of possibly unassigned local variable `audiospec'
        audiospec.freq = 22050;
        ^^^^^^^^^^^^^^^^^^^^^^^

    I found the Vala doc here: http://valadoc.org/sdl/SDL.AudioSpec.html but it isn't much help at all. The offending code block looks like this:

        // setup the audio configuration
        AudioSpec audiospec;
        AudioSpec specback;

        audiospec.freq = 22050;
        audiospec.format = SDL.AudioFormat.S16LSB;
        audiospec.channels = 2;
        audiospec.samples = 512;

        // try to initialize sound with these values
        if (SDL.Audio.open(audiospec, specback) < 0) {
            stdout.printf("ERROR! Check audio settings!\n");
            return 1;
        }

    Any help would be greatly appreciated!

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped via a custom button in JavaScript. (The reason being that I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway.) A simplified example (in CoffeeScript):

        # Works fine on all browsers
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          @_audio[0].play()             # Call play() on audio element

    The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays, so I put .play() inside a setTimeout. However, I just can't get this to work:

        # Will not play on iPad
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play   # Bind button's click event with jQuery

        _play: (e) =>
          setTimeout (=>                # Declare a 300ms timeout
            @_audio[0].play()           # Call play() on audio element
          ), 300

    I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?
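
    For what it's worth, iOS only lets a media element start playing from inside the call stack of a user gesture, and a setTimeout callback is no longer inside that call stack. A commonly reported workaround (hedged; not something Apple documents as guaranteed) is to "prime" the element synchronously in the click handler and only delay the audible start, sketched here in plain JavaScript with placeholder selectors:

        document.querySelector('.play-button').addEventListener('click', function () {
            var audio = document.querySelector('audio');
            audio.play();                 // runs inside the user gesture, which "unlocks" the element
            audio.pause();
            audio.currentTime = 0;

            setTimeout(function () {
                audio.play();             // now permitted: the element was already started by a gesture
            }, 300);                      // 300ms matches the animation delay in the question
        });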

  • Redirection of XML file for iPhone app

    - by gotye
    Hey guys, I have an XML file on my server, like the one below, which is used by my iPhone application: www.myWebSite.com/myXmlFile.xml In case the address of my XML file changes to www.myOtherWebsite.com/myXmlFile.xml, how can I make my app work anyway? What kind of PHP server-side code do I need to write? Does NSURLConnection support redirections? Thanks for any pointers ;) Gotye.
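
    For reference, a minimal sketch of the server-side piece, assuming the old host can answer requests for myXmlFile.xml with a small PHP script (the URLs are the ones from the question): send a permanent redirect and let the client follow it. NSURLConnection follows HTTP redirects by default, so the app itself shouldn't need any changes for this case.

        <?php
        // old-host script answering requests for myXmlFile.xml
        header('Location: http://www.myOtherWebsite.com/myXmlFile.xml', true, 301);
        exit;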

  • Xcode best audio practices

    - by Zachary Webert
    What is the best practice for creating an audio prompt within my app that appends different portions of audio together to ask a question? For example: "What is" + "foo"? "What is" + "bar"? I have developed an "AudioQueue" object using AudioToolbox, which uses AudioServicesPlaySystemSound(), and it is working perfectly. Is there anything wrong with playing this type of audio through the alert system? If so, what are my alternatives? Thank you

  • Routing redirection decision

    - by programming late night
    I have really no idea why I'm asking this, as it's a really completely irrelevant question for which I should have figured out an answer within milliseconds, yet I'm doing it. So in my project I have a Router class which splits up the request and selects the right page to be loaded. Fine so far. Now I have a page displayed when the user requests a page that doesn't exist - you know, 404. So theoretically, if the user entered mydomain.com/404 (I use mod_rewrite with a request collector via index.php?req=*), the 404 error would be shown to him, but in fact there was no error - the 404 page would be displayed as a perfectly normal page. So if someone tried requesting the 404 page via /404, he would be shown the page, but he couldn't tell whether the 404 page he requested doesn't exist and he is actually getting a, you guessed it, 404 error, or whether he actually found some flaw in the system that lets him see an error page when there is no error. I don't know how dumb this whole thing here is, but I'm sure some of you have in fact run into this problem already. Short version: if the user enters mydomain.com/404, the 404 page is shown even though there is no 404 error. I know this is a completely irrelevant question, please don't tell me, but I just spontaneously wanted to hear your thoughts on it. Strange, eh? Should I redirect direct access to my 404 page to the home page? Should I do nothing? Should I just go to bed and stop asking irrelevant stuff?
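
    For reference, the 404 page's body and the 404 status code are independent, so one low-effort option (a sketch assuming a plain PHP router; the template path is made up) is to always send the real status code whenever the not-found page is rendered, including when someone requests /404 directly. The visitor sees the same page either way, but clients and crawlers get the correct status:

        <?php
        // inside the Router, whenever the "not found" page is selected
        header('HTTP/1.1 404 Not Found');
        include 'pages/404.php';   // hypothetical template path
        exit;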

  • HTML5 audio with PHP script does not work on iPad/iPhone

    - by saulob
    Ok, I'm trying to play HTML5 audio on the iPad but it does not work. I created one PHP script to serve the MP3 to the HTML5 audio element:

        mp3_file_player.php?n=mp3file.mp3

    The player is here: http://www.avault.com/news/podcast-news/john-romero-podcast-episode-80/ You will see that it works in every browser with HTML5 support, even on my iPod Touch, but it does not work on iPad/iPhone, or in Safari on Mac OS X (I tried Safari/Windows, which worked fine). This is my PHP code:

        header("X-Powered-By: ");
        header("Accept-Ranges: bytes");
        header("Content-Length: ". (string)(filesize($episode_filename)) ."");
        header("Content-type: audio/mpeg");
        readfile($episode_filename);
        exit();

    Everything works fine; the MP3 gets the same headers as when reading the file directly. HTTP headers from direct file access:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:31 GMT
        Server          Apache/2.2.9
        Last-Modified   Wed, 26 May 2010 13:39:19 GMT
        Etag            "dac0039-41d91f8-4877f669cefc0"
        Accept-Ranges   bytes
        Content-Length  50656162
        Content-Range   bytes 18390614-69046775/69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    HTTP headers from my PHP script:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:08 GMT
        Server          Apache/2.2.9
        Accept-Ranges   bytes
        Content-Length  69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    The only thing that is different is the Content-Range. I even tried to add it, but then the player would not work on my iPod Touch, so I removed it. Thank you very much.
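
    For reference, the Content-Range difference is the interesting part: iPad/iPhone Safari (and QuickTime on the Mac) fetch media with HTTP Range headers and expect 206 Partial Content responses, which readfile() alone never produces. A minimal sketch of honouring the simple "bytes=start-end" form, reusing the variable names from the script above (suffix ranges and multi-range requests are deliberately ignored here):

        <?php
        $size  = filesize($episode_filename);
        $start = 0;
        $end   = $size - 1;

        header("Accept-Ranges: bytes");
        header("Content-Type: audio/mpeg");

        if (isset($_SERVER['HTTP_RANGE']) &&
            preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
            $start = (int)$m[1];
            if ($m[2] !== '') {
                $end = (int)$m[2];
            }
            header("HTTP/1.1 206 Partial Content");
            header("Content-Range: bytes $start-$end/$size");
        }
        header("Content-Length: " . ($end - $start + 1));

        // stream the requested slice in small chunks
        $fp = fopen($episode_filename, 'rb');
        fseek($fp, $start);
        $remaining = $end - $start + 1;
        while ($remaining > 0 && !feof($fp)) {
            $chunk = fread($fp, min(8192, $remaining));
            echo $chunk;
            $remaining -= strlen($chunk);
        }
        fclose($fp);
        exit();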
