Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

  • Question on ExtAudioFileRead and AudioBuffer for iPhone SDK

    - by backspacer
    I'm developing an iPhone app that uses Extended Audio File Services. I'm trying to use ExtAudioFileRead to read an audio file and store the data in an AudioBufferList structure. AudioBufferList is defined as:

        struct AudioBufferList {
            UInt32 mNumberBuffers;
            AudioBuffer mBuffers[1];
        };
        typedef struct AudioBufferList AudioBufferList;

    and AudioBuffer is defined as:

        struct AudioBuffer {
            UInt32 mNumberChannels;
            UInt32 mDataByteSize;
            void* mData;
        };
        typedef struct AudioBuffer AudioBuffer;

    I want to manipulate mData, but I wonder what the void* means. Why is it void*? How can I decide what data type is actually stored in mData?

    Read the article

  • Streaming multiple TObjects to a TMemoryStream

    - by Altar
    I need to store multiple objects (most of them TObject/non-persistent) in a TMemoryStream, save the stream to disk and load it back. The objects need to be streamed one after another - some kind of universal container. I started writing several functions that store things to a stream, for example:

        function StreamReadString(CONST MemStream: TMemoryStream): string;
        procedure StreamWriteString(CONST MemStream: TMemoryStream; s: string);

    However, it seems that I need to rewrite a lot of code. One of many examples is TStringList.LoadFromStream, which will not work as-is and needs to be rewritten, because TStringList would have to be the last object in the stream (it reads from the current position to the end of the stream). Does anybody know of a library that provides basic functionality like this? I am using Delphi 7, so RTTI won't help much.
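
    One way to build such a universal container, in any language, is to give every value a length or type prefix so each reader knows exactly where its data ends; the next object starts right after it, and nothing has to be "last in the stream". The sketch below shows the idea with Java's DataOutputStream/DataInputStream purely as an illustration of the pattern, not as Delphi code; the names and payloads are made up for the example.

        import java.io.*;

        // Illustration only: write values with explicit lengths so they can be read back
        // in order, without any one of them needing to consume the rest of the stream.
        public class LengthPrefixedDemo {
            static void writeRecord(DataOutputStream out, String name, byte[] payload) throws IOException {
                out.writeUTF(name);              // writeUTF stores its own 2-byte length prefix
                out.writeInt(payload.length);    // explicit length prefix for the raw data
                out.write(payload);
            }

            static void readRecord(DataInputStream in) throws IOException {
                String name = in.readUTF();
                byte[] payload = new byte[in.readInt()];
                in.readFully(payload);
                System.out.println(name + ": " + payload.length + " bytes");
            }

            public static void main(String[] args) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                try (DataOutputStream out = new DataOutputStream(buf)) {
                    writeRecord(out, "first", new byte[] {1, 2, 3});
                    writeRecord(out, "second", new byte[] {4, 5});
                }
                try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
                    readRecord(in);
                    readRecord(in);
                }
            }
        }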

    Read the article

  • sound loop breaks after some time in background music in iphone app

    - by amy
    I am playing sounds in a loop in my app, so they should continue playing throughout the app, but sometimes the sound stops after playing 3 or 4 times. I don't understand what's happening. I am using the AudioToolbox framework for playback: creating an audio queue and then playing the sounds in a loop. I am also playing a song from the iPod library using the MediaPlayer framework, and the same thing happens with it. I have set [musicPlayer setRepeatMode: MPMusicRepeatModeOne]; but it still stops after 3 or 4 times.

    Read the article

  • Can FLV AAC stream be played in Android

    - by HariKJ
    Hi, I'm trying to build a radio player, and the client is providing a stream that is an FLV container with AAC audio; when I read the headers it shows up as audio/aacp. I have tried every approach I could think of:

        1) Streaming through MediaPlayer (does not work)
        2) Using the NPR approach of playing through a proxy stream (I get a broken-pipe exception)
        3) Playing it in chunks (plays, but I need the SD card and the playback is not very good)
        4) Using the GPL'd FAAD2 library (but I would have to pay the royalty fee)

    Can someone help me figure this out? The last option I have is to ask my client to change the stream to an MP3 container (which I know works). Regards, Hari

    Read the article

  • Replacement for java.util.zip for streaming usage?

    - by evilfred
    java.util.zip sucks for stream compression. The longer you leave an Inflater/Deflater open without calling end(), the more native memory it uses up. This is a known issue: http://bugs.sun.com/view_bug.do?bug_id=4797189 which nobody seems to care about fixing. What is a good alternative? Preferably one that is free and is still actively supported by its developers.
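
    For what it's worth, the native memory held by a Deflater or Inflater is released as soon as end() is called, so one common mitigation is to keep each instance short-lived and end it deterministically rather than waiting for finalization. A minimal sketch of that pattern:

        import java.io.ByteArrayOutputStream;
        import java.util.zip.Deflater;

        // Compress one buffer with an explicitly managed Deflater, releasing its native
        // zlib memory in a finally block instead of relying on the finalizer.
        public class DeflateOnce {
            static byte[] compress(byte[] input) {
                Deflater deflater = new Deflater();
                try {
                    deflater.setInput(input);
                    deflater.finish();
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    while (!deflater.finished()) {
                        int n = deflater.deflate(buf);
                        out.write(buf, 0, n);
                    }
                    return out.toByteArray();
                } finally {
                    deflater.end();   // frees the native memory immediately
                }
            }

            public static void main(String[] args) {
                byte[] data = "hello hello hello hello".getBytes();
                System.out.println("compressed to " + compress(data).length + " bytes");
            }
        }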

    Read the article

  • Rails streaming file download

    - by Leonard Teo
    I'm trying to implement a file download with Rails. I want to eventually migrate this code to serving the file from S3. I've copied the Rails send_file code almost verbatim and I cannot seem to get it to stream a file to the user. What happens is that it does send 'a' file to the user, but the downloaded file simply contains the .inspect text of the Proc. What am I doing wrong here?

        options = {}
        options[:length] = File.size(file.path)
        options[:filename] = File.basename(file.path)
        send_file_headers! options
        render :status => 200, :text => Proc.new { |response, output|
          len = 4096
          File.open(file.path, 'rb') do |fh|
            while buf = fh.read(len)
              output.write(buf)
            end
          end
        }

    PS: I've read in a number of other posts that it's not advisable to send files through the Rails stack, and that if possible you should serve them from the web server, or in the case of S3 use the hashed URL it can provide. Yes, we really do want to serve the file through the Rails stack.

    Read the article

  • Local Live Quicktime Video Broadcast, latency?

    - by Snowwire
    I'm looking into the feasibility of using a local server to distribute live video of a conference to delegates in the same room. They would still hear the live audio coming from the speaker, so only the video would be streamed. I was considering Darwin Streaming Server (a lot of iPhone users to support) and encoding with H.264. My main concern is latency across the network. Even with everything running locally, would there be lip-sync issues between the live audio and the 'live' video stream? It feels like there will be problems given the encoding, broadcasting, and decoding to be completed, but I've never done anything like this before, so I thought I would check. Thanks

    Read the article

  • Event Source Live streaming in Ruby on rails onError method

    - by kishorebjv
    I'm trying to implement basic Rails 4 code with the EventSource API and ActionController::Live. Everything seems fine, but I'm not able to reach the event listener.

    Controller code:

        class HomeController < ApplicationController
          include ActionController::Live

          def tester
            response.headers["Content-Type"] = "text/event-stream"
            3.times do |n|
              response.stream.write "message: hellow world! \n\n"
              sleep 2
            end
          end
        end

    JS code:

        var evSource = new EventSource("/home/tester");
        evSource.onopen = function (e) { console.log("OPEN state \n" + e.data); };
        evSource.addEventListener('message', function (e) { console.log("EventListener .. code.."); }, false);
        evSource.onerror = function (e) { console.log("Error State \n\n" + e.data); };

    When I reload the page, my console shows "OPEN state" and then "Error State"; the event-listener callback never runs.

        1. When I curl the page, "message: hellow world!" is displayed.
        2. I changed development.rb to set config.cache_classes = true and config.eager_load = true.
        3. My browsers (Chrome and Firefox) are the latest versions, so no issues there.

    Where am I going wrong? Suggestions please!
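
    One thing worth noting (based on the Server-Sent Events spec, not on this particular app): an SSE stream is made of fields named data, event, id and retry, and a line starting with "message:" is not one of them, so browsers silently drop it and the default 'message' listener never fires. The sketch below shows the wire format in plain Java, purely to illustrate what the bytes on the wire should look like; a Rails controller would write the same "data: ...\n\n" strings.

        import java.io.Writer;

        // Illustration of the SSE wire format only: each event is one or more "data:" lines
        // terminated by a blank line. The default event type is "message", which is what an
        // EventSource 'message' listener receives; "message:" is not a valid field name.
        public class SseFormatSketch {
            static void writeEvents(Writer out) throws Exception {
                for (int n = 0; n < 3; n++) {
                    out.write("data: hello world " + n + "\n\n");  // note: "data:", not "message:"
                    out.flush();
                    Thread.sleep(2000);
                }
            }

            public static void main(String[] args) throws Exception {
                // Print the frames to stdout just to show the exact bytes a server should send
                // (the Content-Type on a real response would be "text/event-stream").
                writeEvents(new java.io.OutputStreamWriter(System.out));
            }
        }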

    Read the article

  • Apple HTML5 progressive streaming

    - by seanc
    Is there a copy of the file saved to the iPad/iPod when using progressive HTTP download (Apple segmented) that the user could potentially copy off the device? Meaning, is this scenario considered 'safe' from a content-protection point of view?

        1. The user comes to a website with movies behind a paywall.
        2. The user selects a movie to watch online (progressively downloaded or 'pseudo-streamed').
        3. Segments are downloaded seamlessly by the HTML5 video tag as part of the .m3u8 playlist.
        4. The movie plays and ends.

    Is there any full copy of this movie on the device that can be transferred off? Does anyone know Apple's stance on this?

    Read the article

  • twitter streaming api instead of search api

    - by user1711576
    I am using Twitter's search API to view all the tweets that use a particular hashtag. However, I want to use the streaming API instead, so that I only get recent tweets and can then store them.

        <?php
        global $total, $hashtag;
        $hashtag = $_POST['hash'];
        $total = 0;

        function getTweets($hash_tag, $page) {
            global $total, $hashtag;
            $url = 'http://search.twitter.com/search.json?q=' . urlencode($hash_tag) . '&';
            $url .= 'page=' . $page;
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
            $json = curl_exec($ch);
            curl_close($ch);
            echo "<pre>";
            $json_decode = json_decode($json);
            print_r($json_decode->results);
            $json_decode = json_decode($json);
            $total += count($json_decode->results);
            if ($json_decode->next_page) {
                $temp = explode("&", $json_decode->next_page);
                $p = explode("=", $temp[0]);
                getTweets($hashtag, $p[1]);
            }
        }

        getTweets($hashtag, 1);
        echo $total;
        ?>

    The code above is what I have been using to search for the tweets I want. What do I need to do to change it so I can stream the tweets instead? I know I would have to use the streaming URL https://api.twitter.com/1.1/search/tweets.json, but I don't know what to change after that. Obviously, I know I'll need to write the database SQL, but I want to just capture the stream first and view it. How would I do this? Is the code I have been using no good for capturing the stream?

    Read the article

  • Starting a process synchronously, and "streaming" the output

    - by Benjol
    I'm looking at trying to start a process from F#, wait until it's finished, but also read its output progressively. Is this the right/best way to do it? (In my case I'm trying to execute git commands, but that is tangential to the question.)

        let gitexecute (logger:string->unit) cmd =
            let procStartInfo = new ProcessStartInfo(@"C:\Program Files\Git\bin\git.exe", cmd)
            // Redirect to the Process.StandardOutput StreamReader.
            procStartInfo.RedirectStandardOutput <- true
            procStartInfo.UseShellExecute <- false
            // Do not create the black window.
            procStartInfo.CreateNoWindow <- true
            // Create a process, assign its ProcessStartInfo and start it
            let proc = new Process()
            proc.StartInfo <- procStartInfo
            proc.Start() |> ignore
            // Get the output into a string
            while not proc.StandardOutput.EndOfStream do
                proc.StandardOutput.ReadLine() |> logger

    What I don't understand is how proc.Start() can return a boolean and also be asynchronous enough for me to get the output out of the while loop progressively. Unfortunately, I don't currently have a large enough repository, or a slow enough machine, to be able to tell what order things are happening in...
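
    For comparison, an illustrative sketch only (in Java rather than F#): Start() returning a Boolean does not make the call blocking; the child runs concurrently, and reading its standard output line by line simply blocks your loop until the child produces the next line or closes the pipe. The same pattern looks like this with ProcessBuilder, where the "git log" command is just a placeholder:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Start a child process and consume its stdout progressively; readLine() blocks
        // until the child writes a line or exits, so output is handled as it arrives.
        public class StreamProcessOutput {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder("git", "log", "--oneline");  // placeholder command
                pb.redirectErrorStream(true);                // fold stderr into stdout
                Process proc = pb.start();                   // returns immediately; child runs concurrently
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(proc.getInputStream()))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println("child> " + line); // stand-in for the logger
                    }
                }
                int exitCode = proc.waitFor();               // now wait for the child to finish
                System.out.println("exit code: " + exitCode);
            }
        }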

    Read the article

  • Python: Streaming Input with Subprocesses

    - by beary605
    Since input() and raw_input() stop the rest of the program from running, I want to use a subprocess to run this program...

        while True:
            print raw_input()

    ...and get its output. This is what I have as my reading program:

        import subprocess
        import sys

        process = subprocess.Popen('python subinput.py',
                                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        while True:
            output = process.stdout.read(12)
            if output == '' and process.poll() != None:
                break
            if output != '':
                sys.stdout.write(output)
                sys.stdout.flush()

    When I run this, the subprocess exits almost as soon as it starts. How can I fix this?

    Read the article

  • Voices disappear when using headphones. [closed]

    - by James
    How do I declare a variable in C? P.S. I have a pair of SteelSeries Siberia headphones. I've noticed that when watching some films the voices are completely silent, yet when I unplug the headset and listen through my speakers they are there and sound normal. I have no other software that could be interfering with it and it happens regardless of the software I use for playback (I've tried VLC, WMP and Quicktime). It is so strange, and it almost sounds deliberate - the rest of the audio is untouched but voices disappear. The films only have single audio tracks, and it doesn't happen with every film. Can anyone give me any hints as to what could possibly cause this? I am stumped!

    Read the article

  • Ruby: Streaming large AWS S3 object freezes

    - by Peter
    Hi, I am using the Ruby aws/s3 library to retrieve files from Amazon S3. I stream an object and write it to a file as per the documentation (with debug output every 100 chunks to confirm progress). This works for small files, but it randomly freezes while downloading large (150MB) files on an Ubuntu VPS. Fetching the same files (150MB) from my Mac on a much slower connection works just fine. When it hangs there is no error thrown, and the last line of debug output is 'Finished chunk'. I've seen it write between 100 and 10,000 chunks before freezing. Has anyone come across this, or have ideas on what the cause might be? Thanks. The code that hangs:

        i = 1
        open(local_file, 'w') do |f|
          AWS::S3::S3Object.value(key, @s3_bucket) do |chunk|
            puts("Writing chunk #{i}")
            f.write chunk.read_body
            puts("Finished chunk #{i}")
            i = i + 1
          end
        end

    Read the article

  • AudioOutputUnitStart takes time

    - by tokentoken
    Hello, I'm making an iPhone game application using Core Audio and Extended Audio File Services. It works OK, but the first time I call AudioOutputUnitStart it takes about 1-2 seconds. After the second call there is no problem. For a game application, 1-2 seconds is very noticeable. (I tested this on the iPhone simulator and an iPhone 3GS.) Also, if I leave the game idle for about 10 seconds, the next call to AudioOutputUnitStart also takes time. Maybe I should call AudioOutputUnitStart at the beginning of the application to avoid the start-up delay?

    Read the article

  • BufferedInputStream.read(byte[]) Causes problems. Anyone have this problem before?

    - by K-RAN
    Hello! I've written a Java program that downloads audio files for me, using BufferedInputStream. The read() function works fine but is really slow, so I've tried the overloaded version that takes a byte[]. For some reason, the downloaded audio then becomes lossy and strange. I'm not totally sure what I'm doing wrong, so any help is appreciated! Here's the simplified, sloppy version of the code:

        BufferedInputStream bin = new BufferedInputStream((new URL(url)).openConnection().getInputStream());
        File file = new File(fileName);
        FileOutputStream fop = new FileOutputStream(file);
        int rd = bin.read();
        while (rd != -1) {
            fop.write(rd);
            rd = bin.read();
        }
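
    The snippet above is the single-byte variant; a common cause of corrupted output with the byte[] overload (an assumption about the unshown code, but it is the classic mistake) is writing the whole buffer on every pass instead of only the bytes that read() actually returned. A sketch of the array-based loop that writes exactly what was read:

        import java.io.BufferedInputStream;
        import java.io.FileOutputStream;
        import java.net.URL;

        // Copy a URL to a file with a byte[] buffer. read() may fill only part of the
        // buffer, so only the first 'n' bytes are valid and must be the only bytes written.
        public class DownloadFile {
            public static void main(String[] args) throws Exception {
                String url = args[0];
                String fileName = args[1];
                try (BufferedInputStream bin = new BufferedInputStream(new URL(url).openStream());
                     FileOutputStream fop = new FileOutputStream(fileName)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = bin.read(buf)) != -1) {
                        fop.write(buf, 0, n);   // not fop.write(buf): that would also write stale bytes
                    }
                }
            }
        }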

    Read the article

  • iPad MPMoviePlayerController only hearing audio, no videos!

    - by Steph Moreau
    I am currently rebuilding my app for the iPad. I would like to play videos sourced online. I display the information, but when I go to play the video all I get is the audio; no video is shown at all. My page looks exactly the same, except that I have some "background" noise. These are the same videos I use in the iPhone app, and there they work perfectly. This is the code that I call to play my videos:

        - (IBAction) playMovie {
            NSURL *url = [NSURL URLWithString:vidMovie];
            MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:url];
            [moviePlayer play];
        }

    I am using this on a button in the right-side view of a split view controller. I get the same result in the simulator as on an iPad. Not sure if I'm missing something, but if anyone can help it would be greatly appreciated!

    Read the article

  • How to burn an Audio CD programmatically in Mac OS X

    - by Adion
    All the info I can find about burning CDs is for Windows, or is about complete CD-burning applications. I would, however, like to be able to burn an Audio CD directly from within my program. I don't mind using Cocoa or Carbon, or, if there are no APIs available to do this directly, using a command-line program that takes a WAV/AIFF file as input would be a possibility too, provided it can be distributed with my application. Because it will be used to burn DJ mixes to CD, it would also be great if it were possible to create separate tracks without a gap between them.

    Read the article

  • Highlighting effect to text and/or image similar to be synchronized with audio

    - by Irfan Mulic
    I am looking at how to approach the following problem: we have an application that displays text along with recorded audio material. We use a browser control (Internet Explorer) in a Delphi app to do this, and we respond to events in Delphi code, setting innerHTML on elements when we have to update the style. Now the request is to add an option to dynamically move a cursor, or dynamically highlight the words being spoken in the paragraph. It doesn't need to match the exact spoken word precisely, so we will have to dynamically update the position of the highlighted word based on a timer or something similar (because it is not text-to-speech). What would be the most practical and easy approach to this kind of problem? All answers are greatly appreciated. Thanks.

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from a javastreamingaudio class in the FreeTTS package and bypass its write method, which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go, but I am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so in my application I have to turn the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
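
    If the buffered bytes are 16-bit signed PCM (an assumption to verify against the AudioFormat FreeTTS is actually using, including its byte order), each mono frame is two bytes, and getting the byte pairing or the endianness wrong produces exactly this kind of static. A minimal conversion sketch:

        // Convert interleaved 16-bit signed PCM bytes into floats in [-1.0, 1.0].
        // The little-endian default here is a placeholder: flip the byte pairing if the
        // source format is big-endian, and adjust if the sample size is not 16 bits.
        public class PcmToFloat {
            static float[] bytesToFloats(byte[] pcm, boolean bigEndian) {
                float[] out = new float[pcm.length / 2];
                for (int i = 0; i < out.length; i++) {
                    int b0 = pcm[2 * i] & 0xFF;
                    int b1 = pcm[2 * i + 1] & 0xFF;
                    short sample = bigEndian
                            ? (short) ((b0 << 8) | b1)
                            : (short) ((b1 << 8) | b0);
                    out[i] = sample / 32768f;       // scale to [-1, 1)
                }
                return out;
            }

            public static void main(String[] args) {
                byte[] demo = {0x00, 0x40, 0x00, (byte) 0xC0};    // two little-endian samples
                float[] floats = bytesToFloats(demo, false);
                System.out.println(floats[0] + ", " + floats[1]); // expect 0.5, -0.5
            }
        }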

    Read the article

  • Problem with recording audio in Flash (Red5, ffmpeg)

    - by AT
    I'm trying to implement a small program with Flash and PHP that records audio and converts it to MP3. Currently I have a Red5 server up and running, I can connect to it with no problems, and I can publish FLV recordings to the server. When I listen to the FLV with the Wimpy FLV player it seems to be fine. The problem comes when I try to convert it with ffmpeg on the command line. I'm simply using a command of the form ffmpeg -i ..., but the output WAV is about 50% slower than the input: when I record 10 seconds, the output is 15 seconds long and pitched down. I've also tried all kinds of bitrate settings, the -nv option, etc., but nothing seems to work. I have a recent version of ffmpeg that supports the Nellymoser format. I don't know what to do. Anyone have any ideas?

    Read the article

  • Crossfading audio with PyQT4 and Phonon

    - by dwelch
    I'm trying to get audio files to crossfade with Phonon. I'm using PyQt4. I have tracks queuing properly, but I'm stuck on the fade effect. I think I need to be using the KVolumeFader effect. Here's my current code:

        def music_play(self):
            self.delayedInit()
            self.m_media.setCurrentSource(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            self.m_media.play()

        def music_stop(self):
            self.m_media.stop()

        def delayedInit(self):
            if not self.m_media:
                self.m_media = Phonon.MediaObject(self)
                audioOutput = Phonon.AudioOutput(Phonon.MusicCategory, self)
                Phonon.createPath(self.m_media, audioOutput)

        def enqueueNextSource(self):
            if len(self.playlist) >= self.playlist_pos + 1:
                self.playlist_pos += 1
                self.m_media.enqueue(Phonon.MediaSource(self.playlist[self.playlist_pos]))
            else:
                self.m_media.stop()

    Can anyone give me some advice on implementing the effect?

    Read the article
