Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

Page 17 of 213

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8 kHz mono. This command line downsamples it to stereo:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga

    It generates a file that the file utility identifies as stereo:

        $ file /tmp/blah1.wav
        /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz

    Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 so I get a file that's half the size:

        mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga

    But it doesn't! blah2.wav is identical to blah1.wav! What am I doing wrong?
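    A note on syntax that may be relevant (my observation, not part of the original question): MPlayer's -af option takes a comma-separated chain of filters, so pan would normally be its own filter rather than an extra colon-suboption tacked onto resample. A sketch of that variant, reusing the question's own gain values:

        # hypothetical variant: pan as a separate filter in the -af chain,
        # mixing both input channels into one output channel at half gain each
        mplayer -quiet -vo null -vc dummy \
            -af volume=0,resample=8000:0:1,pan=1:0.5:0.5 \
            -ao pcm:waveheader:file="/tmp/blah3.wav" ~/from_my_cellphone.3ga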

    Read the article

  • Need advice on how to set up live video streaming to web/mobile devices

    - by jasondewitt
    I have a bunch of live UDP video streams that are currently viewed by set-top boxes in my network. I would like to pick this video up (I can do this with VLC now) and stream it out to other non-STB endpoints (a web page or a phone/tablet of some sort). Right now I am able to pick up the UDP stream with VLC and convert it to an HTTP stream on port 8080 of my VLC box, and then use the VLC client to watch that stream. This is where I'm not sure where to go next. I really doubt I want everyone who is watching the video to connect back to the VLC server that is doing the encoding, so how do I distribute this live video to the people who want to see it?
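    For reference, a sketch of the kind of relay step being described (the multicast address, port, and codec settings below are placeholders, not taken from the question): VLC's stream-output chain can repackage the UDP source as an HTTP/MPEG-TS stream that a separate distribution tier, reverse proxy, or CDN could then pull from, so viewers never connect to the encoding box directly.

        # placeholder source address and output port; transcode settings are an assumption
        cvlc udp://@239.1.1.1:1234 \
            --sout '#transcode{vcodec=h264,vb=1500,acodec=mp4a,ab=128}:std{access=http,mux=ts,dst=:8080/live.ts}'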

    Read the article

  • Video streaming over multi display units

    - by ramdaz
    We have to share video across around 4 to 8 terminals at a public facility, where we need to display live video from within the facility, display messages (advertisements), and also play non-live videos, all controlled centrally from another location. We can handle the central control over the Internet, via ssh. What we want to do is connect cameras to a computer and use that computer to display over multiple display units, with live titling if possible. Once the live local telecast, which usually takes about an hour or two a day, is over, we would like to play other videos locally off the PC server. Preferably everything should run on Linux, since budgets are very constrained. Addendum: it's not over a WAN, it's over a local area. I would prefer not to use a LAN; we would rather use coaxial cable if possible. The reason is that if it's a LAN, I need some kind of networking device, at least a thin client.

    Read the article

  • Streaming from a second computer

    - by techgod52
    I play games on my laptop, and they run at about 30-45 fps, which is bearable for me. But when I try to stream, the frame rate drops to 20 or lower, which is unplayable for me. I have a second computer though (a Mac; the laptop is Win7), and I'm wondering if there is any way to stream the game content (onto Twitch.tv) from my laptop using my Mac. Is this possible, and how would I go about doing it?

    Read the article

  • Reading audio with Extended Audio File Services (ExtAudioFileRead)

    - by Paperflyer
    I am working on understanding Core Audio, or rather: Extended Audio File Services. I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong, probably the audioBufferList. I cannot use one huge buffer because then dataByteSize would overflow its UInt32 range with huge files. Here is the code that creates the audioBufferList:

        AudioBufferList *audioBufferList;
        audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers - 1) * sizeof(AudioBuffer));
        audioBufferList->mNumberBuffers = numBuffers;
        for (int bufferIdx = 0; bufferIdx < numBuffers; bufferIdx++) {
            audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
            audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
            audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
        }

        UInt32 numFrames = fileLengthInFrames;
        error = ExtAudioFileRead(extAudioFileRef, &numFrames, audioBufferList);

    Do you know what I am doing wrong here?
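    Not part of the original question, but a sketch of the chunked-read pattern that sidesteps the UInt32 size limit without a multi-buffer list: keep one modestly sized AudioBuffer and call ExtAudioFileRead() in a loop, resetting mDataByteSize and mData before each call, until it reports zero frames. The helper name and chunk size here are made up for illustration, and numChannels/bytesPerFrame are assumed to come from the usual ExtAudioFileGetProperty() queries.

        #include <AudioToolbox/ExtendedAudioFile.h>
        #include <stdlib.h>

        static OSStatus ReadAllFrames(ExtAudioFileRef extAudioFileRef,
                                      UInt32 numChannels, UInt32 bytesPerFrame)
        {
            const UInt32 framesPerChunk = 32 * 1024;       /* arbitrary chunk size */
            UInt32 chunkBytes = framesPerChunk * bytesPerFrame;
            void *chunk = malloc(chunkBytes);

            AudioBufferList bufferList;                    /* exactly one buffer */
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0].mNumberChannels = numChannels;

            OSStatus err = noErr;
            for (;;) {
                /* ExtAudioFileRead updates these on every call, so reset them each pass. */
                bufferList.mBuffers[0].mDataByteSize = chunkBytes;
                bufferList.mBuffers[0].mData = chunk;

                UInt32 framesRead = framesPerChunk;
                err = ExtAudioFileRead(extAudioFileRef, &framesRead, &bufferList);
                if (err != noErr || framesRead == 0)
                    break;                                 /* error or end of file */

                /* ... process framesRead frames of audio from chunk here ... */
            }
            free(chunk);
            return err;
        }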

    Read the article

  • How to use SQL file streaming win32 API and support WCF streaming

    - by Mahesh
    I'm using the SQL Server FILESTREAM type to store large files in the back end, and I'm trying to use WCF to stream the files across to clients. I'm able to get a handle to the file using SqlFileStream (the API), and I then try to return this stream. I have implemented data chunking on the client side to retrieve the data from the stream. This works for a regular FileStream and for a MemoryStream; it also works if I copy the SqlFileStream into a MemoryStream first. The only thing that doesn't work is returning the SqlFileStream itself. What am I doing wrong? I have tried both netTcpBinding with streaming enabled and an HTTP binding with MTOM encoding. This is the error message I am getting: "Socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network issue. Local socket timeout was 00:09:59..." Here is my sample code:

        RemoteFileInfo info = new RemoteFileInfo();
        info.FileName = "SampleXMLFileService.xml";
        string pathName = DataAccess.GetDataSnapshotPath("DataSnapshot1");

        SqlConnection connection = DataAccess.GetConnection();
        SqlTransaction sqlTransaction = connection.BeginTransaction("SQLSileStreamingTrans");
        SqlCommand command = new SqlCommand();
        command.Connection = connection;
        command.Transaction = sqlTransaction;
        command.CommandText = "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()";
        byte[] transcationContext = command.ExecuteScalar() as byte[];

        SqlFileStream stream = new SqlFileStream(pathName, transcationContext, FileAccess.Read);
        // byte[] bytes = new byte[stream.Length];
        // stream.Read(bytes, 0, (int)stream.Length);
        // Stream returnStream = stream;
        // MemoryStream memoryStream = new MemoryStream(bytes);
        info.FileByteStream = stream;
        info.Length = info.FileByteStream.Length;
        connection.Close();
        return info;

        [MessageContract]
        public class RemoteFileInfo : IDisposable
        {
            [MessageHeader(MustUnderstand = true)]
            public string FileName;

            [MessageHeader(MustUnderstand = true)]
            public long Length;

            [MessageBodyMember(Order = 1)]
            public System.IO.Stream FileByteStream;

            public void Dispose()
            {
                if (FileByteStream != null)
                {
                    FileByteStream.Close();
                    FileByteStream = null;
                }
            }
        }

    Any help is appreciated.

    Read the article

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services document, specifically at the following code:

        // Writing an audio queue buffer to disk
        AudioFileWritePackets (                  // 1
            pAqData->mAudioFile,                 // 2
            false,                               // 3
            inBuffer->mAudioDataByteSize,        // 4
            inPacketDesc,                        // 5
            pAqData->mCurrentPacket,             // 6
            &inNumPackets,                       // 7
            inBuffer->mAudioData                 // 8
        );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file. Assume the sample rate is 44100:

        AudioStreamBasicDescription mDataFormat;
        mDataFormat.mSampleRate = 44100.0f;
        mDataFormat.mBitsPerChannel = 16;
        ...
        NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
        SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real time", where the y axis goes up and down depending on the loudness of my voice. Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? What do I need to do if I just want to plot the range between 50 - 100 Hz? Thanks in advance for your help.
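    A short sketch of what those numbers are (my summary, not from the document being quoted): each SInt16 is one linear-PCM amplitude sample in the range -32768..32767, i.e. the instantaneous signal level at that moment, so plotting them gives a waveform, not a spectrum. A common first step is normalising to -1.0..1.0. Picking out a 50-100 Hz band is a different operation: it needs a window of samples run through an FFT or a band-pass filter, not a slice of the sample array.

        #include <stdint.h>

        /* Sketch: normalise 16-bit PCM samples to [-1.0, 1.0] for plotting.
           numberSamples and audioSample are the values from the buffer above;
           "out" must have room for numberSamples floats. */
        static void NormalizeSamples(const int16_t *audioSample,
                                     long numberSamples, float *out)
        {
            for (long i = 0; i < numberSamples; i++) {
                out[i] = audioSample[i] / 32768.0f;   /* amplitude, not frequency */
            }
        }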

    Read the article

  • Progressive download using Matt Gallagher's audio streamer

    - by Fernando Valente
    I'm a complete n00b when it comes to audio. I'm using Matt Gallagher's audio streamer in my radio app. How can I use progressive download? Also, ExtAudioFile is a good idea too :) Edit: I used this:

        length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
        if (!data) data = [[NSMutableData alloc] initWithLength:0];
        [data appendData:[NSData dataWithBytes:bytes length:kAQDefaultBufSize]];

    Now I can save the audio data using NSData's writeToFile:atomically: method, but the audio won't play. Also, if I try to load it into an AVAudioPlayer, I get an error.

    Read the article

  • audio processing in iPhone

    - by Janaka
    I am writing an iPhone application that applies filters to audio input and outputs the result in real time. I am new to audio processing: is using an Audio Unit the correct approach? I found out how to output data using an Audio Unit, but I couldn't figure out how to capture input audio. Is there a sample application showing how to connect input and output using an Audio Unit?
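    For what it's worth, a sketch of the usual starting point (mine, not from the question): a single RemoteIO audio unit with input explicitly enabled on bus 1, while output stays on bus 0, so one render callback can filter microphone samples on their way to the speaker. Stream formats, the render callback, and audio session setup are omitted here.

        #include <AudioUnit/AudioUnit.h>

        /* Sketch: create a RemoteIO unit and enable its microphone input (bus 1). */
        static AudioUnit CreateIOUnitWithInputEnabled(void)
        {
            AudioComponentDescription desc = { 0 };
            desc.componentType = kAudioUnitType_Output;
            desc.componentSubType = kAudioUnitSubType_RemoteIO;
            desc.componentManufacturer = kAudioUnitManufacturer_Apple;

            AudioComponent comp = AudioComponentFindNext(NULL, &desc);
            AudioUnit ioUnit = NULL;
            AudioComponentInstanceNew(comp, &ioUnit);

            UInt32 enable = 1;                       /* input is off by default */
            AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                                 kAudioUnitScope_Input, 1 /* input bus */,
                                 &enable, sizeof(enable));

            /* ...set stream formats and a render callback, then
               AudioUnitInitialize(ioUnit) and AudioOutputUnitStart(ioUnit)... */
            return ioUnit;
        }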

    Read the article

  • options for producing audio with GWT

    - by Kaffeine Coma
    What options are there for producing audio in a GWT app? I'm thinking of making a simple game, but I'm disappointed to see that there's still not much progress on audio support directly in GWT (yes, I realize that's largely due to lack of underlying browser support; looking forward to HTML5!). This blog post says that "audio support in GWT is rapidly evolving", yet I don't see updates in over a year, at least not at that site. It seems these are the available options:

        - GWT Voices
        - GWT SoundManager
        - GWT Sound
        - GWT Incubator

    I believe most of these (all of them?) rely on Flash to produce audio. I'm most inclined to go with the GWT Incubator, as that's where features slated for inclusion in GWT get started, but I've no real recommendations to go on. I would appreciate hearing about your experiences with any of these libraries, thanks.

    Read the article

  • html5 audio player - jquery toggle click play/pause???

    - by mathiregister
    Hello guys, I wonder what I'm doing wrong?

        $('.player_audio').click(function() {
            if ($('.player_audio').paused == false) {
                $('.player_audio').pause();
                alert('music paused');
            } else {
                $('.player_audio').play();
                alert('music playing');
            }
        });

    I can't seem to start the audio track if I hit the "player_audio" tag.

        <div class='thumb audio'><audio class='player_audio' src='$path/$value'></audio></div>

    Any idea what I'm doing wrong or what I have to do to get it working?
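    A variant worth comparing against (a sketch, not the thread's accepted fix): paused, play(), and pause() are properties of the DOM media element itself, not of the jQuery wrapper object, so the toggle is usually written against the element inside the matched set. Whether the hidden, control-less <audio> element actually receives the click is a separate question.

        $('.player_audio').click(function () {
            var audio = this;          // the underlying DOM <audio> element
            if (audio.paused) {
                audio.play();
                alert('music playing');
            } else {
                audio.pause();
                alert('music paused');
            }
        });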

    Read the article

  • Fake "user initiated" <audio> tag on iPad

    - by Alex Ford
    I know that Apple's docs say that an mp3 within an <audio> tag on iPhone OS can't be played without user intervention (they cite bandwidth concerns, totally reasonable). However, has anyone succeeded in faking a user action to play the audio? Perhaps faking events to off-screen native audio controls with JavaScript? I'm using jPlayer right now, which works great on desktop Safari but is silent on my iPad. I'm prototyping a touch interface using WebKit on the iPad, and audio is an integral part of the experience, so yes, I do have a good reason to want to override this convention. I'd appreciate any help. Thanks.

    Read the article

  • SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded to a specific audio format.

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm, 8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest: the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file. Update: I recently discovered that this audio format succeeds for certain installed voices but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.

    Read the article

  • Audio processing in C# or C++

    - by melculetz
    Hi, I would like to create an application that uses AI techniques and allows the user to record part of a song, then tries to find that song in a database of wav files. I would like to use existing libraries for the audio processing part. So, could you recommend any libraries in C# that can read a wav file, get input from the microphone, apply audio filters (low pass, high pass, FFT, etc.), and maybe plot the audio signal as well? I would prefer to develop in C#, but if there aren't good libraries for audio processing, I guess I could work in C++ as well. As far as I know, MATLAB already has the above-mentioned functionality, but I can't use it in my application.
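    As one concrete illustration of the kind of library being asked about (the choice of NAudio here is my assumption, not something named in the question), reading 16-bit PCM samples out of a wav file into floats ready for an FFT or filter looks roughly like this:

        // Sketch, assuming the NAudio library and a 16-bit mono PCM wav file.
        using System;
        using NAudio.Wave;

        static class WavLoader
        {
            public static float[] LoadSamples(string path)
            {
                using (var reader = new WaveFileReader(path))
                {
                    var bytes = new byte[(int)reader.Length];
                    int read = reader.Read(bytes, 0, bytes.Length);

                    var samples = new float[read / 2];            // 2 bytes per sample
                    for (int i = 0; i < samples.Length; i++)
                        samples[i] = BitConverter.ToInt16(bytes, i * 2) / 32768f;
                    return samples;
                }
            }
        }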

    Read the article

  • How to play video with external audio in Silverlight?

    - by Fury
    Hi all, is there any proper method to play video and external audio in sync, other than simply having two MediaElements (one for the video source and one for the audio) started simultaneously? I need to play video with different soundtracks, but I believe that two separate MediaElements will drift out of sync at some point. Maybe there is some way to add an audio source to the existing MediaElement with the video? Platform: SL3, but SL4 would be fine as well. Thanks in advance.

    Read the article

  • How to initialize audio with Vala/SDL

    - by ioev
    I've been trying to figure this out for a few hours now. In order to start up the audio, I need to create an SDL.AudioSpec object and pass it to SDL.Audio.Open. The problem is, AudioSpec is a class with a private constructor, so when I try to create one I get:

        sdl.vala:18.25-18.43: error: `SDL.AudioSpec' does not have a default constructor
        AudioSpec audiospec = new SDL.AudioSpec();
                              ^^^^^^^^^^^^^^^^^^^

    And if I try to just assign values to its member vars like a struct (it's a struct in normal SDL), I get:

        sdl.vala:20.3-20.25: error: use of possibly unassigned local variable `audiospec'
        audiospec.freq = 22050;
        ^^^^^^^^^^^^^^^^^^^^^^^

    I found the valadoc page here: http://valadoc.org/sdl/SDL.AudioSpec.html but it isn't much help at all. The offending code block looks like this:

        // setup the audio configuration
        AudioSpec audiospec;
        AudioSpec specback;

        audiospec.freq = 22050;
        audiospec.format = SDL.AudioFormat.S16LSB;
        audiospec.channels = 2;
        audiospec.samples = 512;

        // try to initialize sound with these values
        if (SDL.Audio.open(audiospec, specback) < 0) {
            stdout.printf("ERROR! Check audio settings!\n");
            return 1;
        }

    Any help would be greatly appreciated!

    Read the article

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped using a custom button via JavaScript. (The reason being I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway.) A simplified example (in CoffeeScript):

        // Works fine on all browsers
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play    // Bind button's click event with jQuery

        _play: (e) =>
          @_audio[0].play()              // Call play() on audio element

    The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays, so I put .play() inside a setTimeout. However, I just can't get this to work:

        // Will not play on iPad
        constructor: (@_button, @_audio) ->
          @_button.on 'click', @_play    // Bind button's click event with jQuery

        _play: (e) =>
          setTimeout (=>                 // Declare a 300ms timeout
            @_audio[0].play()            // Call play() on audio element
          ), 300

    I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?
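    A workaround often suggested for this restriction (a sketch in plain JavaScript, mirroring the post's @_button/@_audio jQuery objects; not verified against this exact page): iOS only honours play() while it is still inside the user-gesture call stack, so start playback synchronously in the click handler, pause immediately, and let the timeout merely resume the already-unlocked element.

        button.on('click', function () {
            var el = audio[0];           // underlying <audio> DOM element
            el.play();                   // inside the user gesture, so iOS allows it
            el.pause();                  // keep it silent until the animation is done
            setTimeout(function () {
                el.play();               // resuming later is permitted once unlocked
            }, 300);
        });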

    Read the article

  • Xcode best audio practices

    - by Zachary Webert
    What is the best practice for creating an audio prompt within my app that appends different portions of audio together to ask a question? For example: "What is" + "foo"? "What is" + "bar"? I have developed an "AudioQueue" object using Audio Toolbox that uses AudioServicesPlaySystemSound(), and it is working perfectly. Is there anything wrong with playing this type of audio through the alert system? If so, what are my alternatives? Thank you.
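    One alternative to routing prompts through the system-sound/alert path (a sketch of my own, not something endorsed in the thread) is AVFoundation's AVQueuePlayer, which plays a list of items back to back, so the stem and the word can stay as separate files. The resource names below are hypothetical.

        #import <AVFoundation/AVFoundation.h>

        // Sketch: queue "What is" followed by "foo" so they play consecutively.
        static AVQueuePlayer *PlayPrompt(void)
        {
            NSURL *stemURL = [[NSBundle mainBundle] URLForResource:@"what_is" withExtension:@"caf"];
            NSURL *wordURL = [[NSBundle mainBundle] URLForResource:@"foo" withExtension:@"caf"];
            AVPlayerItem *stem = [AVPlayerItem playerItemWithURL:stemURL];
            AVPlayerItem *word = [AVPlayerItem playerItemWithURL:wordURL];

            AVQueuePlayer *prompt = [AVQueuePlayer queuePlayerWithItems:@[stem, word]];
            [prompt play];
            return prompt;   // caller keeps a strong reference while it plays
        }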

    Read the article

  • HTML5 audio with PHP script does not work on iPad/Iphone

    - by saulob
    OK, I'm trying to play HTML5 audio on the iPad, but it does not work. I created a PHP script that serves the MP3 to the HTML5 audio element:

        mp3_file_player.php?n=mp3file.mp3

    The player is here: http://www.avault.com/news/podcast-news/john-romero-podcast-episode-80/ You will see that it works in every HTML5-capable browser, even on my iPod Touch. But it does not work on iPad/iPhone, or even in Safari on Mac OS X (I tried Safari/Windows, which worked fine). This is my PHP code:

        header("X-Powered-By: ");
        header("Accept-Ranges: bytes");
        header("Content-Length: ". (string)(filesize($episode_filename)) ."");
        header("Content-type: audio/mpeg");
        readfile($episode_filename);
        exit();

    Everything looks fine; the response has the same headers as reading the MP3 directly. HTTP headers from direct file access:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:31 GMT
        Server          Apache/2.2.9
        Last-Modified   Wed, 26 May 2010 13:39:19 GMT
        Etag            "dac0039-41d91f8-4877f669cefc0"
        Accept-Ranges   bytes
        Content-Length  50656162
        Content-Range   bytes 18390614-69046775/69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    HTTP headers from my PHP script:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:08 GMT
        Server          Apache/2.2.9
        Accept-Ranges   bytes
        Content-Length  69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    The only difference is the Content-Range. I even tried adding it, but then the player would not work on my iPod Touch, so I removed it. Thank you very much.
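    One difference that may matter but is not visible in those header dumps (my observation, not from the post): iPhone OS media playback relies on HTTP byte-range requests, and the script above always answers with a full 200 response, whereas Apache serving the file directly answers Range requests with 206 Partial Content (hence the Content-Range header in the first dump). A sketch of a range-aware version, reusing the post's $episode_filename variable:

        <?php
        // Sketch: honour a simple "bytes=start-end" Range request with a 206 reply.
        $size  = filesize($episode_filename);
        $start = 0;
        $end   = $size - 1;

        header("Accept-Ranges: bytes");
        header("Content-Type: audio/mpeg");

        if (isset($_SERVER['HTTP_RANGE']) &&
            preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
            $start = (int)$m[1];
            if ($m[2] !== '') {
                $end = (int)$m[2];
            }
            header("HTTP/1.1 206 Partial Content");
            header("Content-Range: bytes $start-$end/$size");
        }
        header("Content-Length: " . ($end - $start + 1));

        $fp = fopen($episode_filename, 'rb');
        fseek($fp, $start);
        $remaining = $end - $start + 1;
        while ($remaining > 0 && !feof($fp)) {
            $chunk = fread($fp, min(8192, $remaining));
            echo $chunk;
            $remaining -= strlen($chunk);
        }
        fclose($fp);
        exit();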

    Read the article

  • is background audio playing enabled in iPhone?

    - by Nareshkumar
    I was able to play audio in the background of the application on the iPhone. However, I would like to know if there is any service that enables audio playback to continue after the user exits the application. I know that SDK 4.0 promises multitasking and background processing for applications, but I would like to know whether this is possible for audio playback in earlier versions.

    Read the article

  • MVC Serve audio files while preventing direct linking using HttpResponseBase

    - by VinceGeek
    I need to be able to serve audio files from an MVC app while preventing direct access. Ideally the page would render with a player control so the user can start/stop the audio linked to the database record (the audio files are in a folder, not the DB). I have a controller action like this:

        Response.Clear();
        Response.ContentType = "audio/wav";
        Response.TransmitFile(audioFilename);
        Response.End();
        return Response;

    and the view uses the RenderAction method:

        <% Html.RenderAction("ServeAudioFile"); %>

    This works, but it won't display inline on the existing view; it opens a new page with just the media control. Am I totally barking up the wrong tree, or is there a way to embed the response in the existing view? Embedding a player that points straight at the file works exactly as I would like, but then I can't control access to the file.
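    A sketch of the split commonly used here (names are placeholders and this is not from the original thread): let a guarded action return a FileResult, and have the view render an HTML5 <audio> element whose src points at that action, so the browser's own player streams the bytes while MVC still gates access.

        // Controller: stream the file through MVC so authorization runs first.
        [Authorize]
        public ActionResult ServeAudioFile(int id)
        {
            string audioFilename = GetAudioPathForRecord(id);   // hypothetical lookup
            var stream = System.IO.File.OpenRead(audioFilename);
            return File(stream, "audio/wav");                   // FileStreamResult
        }

    and in the view, instead of RenderAction:

        <audio controls src="<%= Url.Action("ServeAudioFile", new { id = Model.Id }) %>"></audio>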

    Read the article

  • Determining the best audio quality.

    - by The Rook
    How can you determine the best audio quality in a list of audio files, without looking at the audio file's header? What if all of the files came from different encodings and they were all transcoded to the same format and bit rate?

    Read the article

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another the number of packets you want to read. So I imagine an audio file looks like this internally: it's made up of a lot of packets. If the audio file has a variable bit rate format, then every packet may have a different size; if it has a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to any kind of file? Is this what files actually look like?

    Read the article
