Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

Page 35 of 213

  • Experiences using VLC for video-on-demand streaming? (VLM)

    - by StackedCrooked
    I'm considering my options for implementing a VOD service. Until recently my choices seemed to be either Wowza or Darwin, but I've now discovered VLM, which looks really cool. I am going to stream MPEG-4 H.264 video with AAC audio. I'm probably going to use the RTSP protocol, but I'm willing to use HTTP as well (after reading this article). Can anyone comment on his or her experiences with VLM? How does it compare to Darwin or Wowza? Is it stable and worthy of production use? Are there any limitations or performance problems?

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery with CG and photographic techniques, and our background art is rendered offline by a production-grade renderer. To let the runtime imagery interact correctly with the background art, I wrote a program that converts the depth output of Mental Ray into a texture, and a pixel shader that draws a quad whose Z data comes from that texture. This technique is working out very well, but now we've decided that some of the camera-angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models, and we intend to encode it with some HD video codec such as H.264.

    The problem is that, in order to keep our runtime imagery on screen, the depth buffer needs to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf. The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize the precision of the decoded depths after the YCbCr has undergone video compression, and it allows an existing, unmodified video codec to be used on the back end.

    I'm looking at how to pull this off with the least possible work (this design change was unplanned). Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion-video facilities in the engine yet, but will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion-video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to play two streams simultaneously in lockstep, in some cases with audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately involves per-pixel branching logic. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB.

    Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best suit this use case.
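    (A quick sanity check of that table size, using my own arithmetic rather than anything stated in the question: a lookup indexed by every 8-bit Y/Cb/Cr triple has 256³ = 16,777,216 entries, and at 2 bytes per 16-bit depth value that comes to 33,554,432 bytes, i.e. 32 MiB, consistent with the 32MB figure above.)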

    Read the article

  • what's the "best" approach to creating the UI of an audio plugin that will be both audio unit and VST for OS X and Windows?

    - by SaldaVonSchwartz
    I'm working on a couple of audio plugins. Right now, they are Audio Units. And while the "DSP" code won't change for the most part between implementations/ports, I'm not sure how to go about the GUI. For instance, I was looking at the Apple-supplied AUs in Lion. Does anyone know how they went about the UI? Are the knobs and controls just subclasses of Cocoa controls? Are they using some separate framework, or coding these knobs and such from scratch? The plugins I'm working on are also going to be available as VSTs for Windows. I already have them up and running with generic interfaces, but I'm wondering whether I should just get over it and recreate all my interfaces with the VSTGUI code provided by Steinberg, or whether there's a more practical approach to making the interfaces cross-platform.

    Read the article

  • audio frameworks in iPhone

    - by suse
    Hello, I would like to know the following information about the iPhone audio system: the hierarchy of the audio frameworks in iPhone OS. I know that there are 3 main audio frameworks in iPhone OS, i.e. the AVFoundation framework, the Core Audio framework, and the OpenAL framework. What audio formats are supported in each of the above frameworks? I mean, do all the frameworks support all audio formats, or does each differ in the audio formats it supports? Thank you.

    Read the article

  • Streaming Media with Sony Blu-ray Disc Player

    - by Ben Griswold
    The best gift under the tree this year? A Sony Blu-ray Disc player: The BDP-N460 allows you to instantly stream thousands of movies, videos and music from the largest selection of leading content providers including Netflix, Amazon Video On Demand, YouTube™, Slacker® Radio and many, many more. Plus, enjoy the ultimate in high-definition entertainment and watch Blu-ray Disc movies in Full HD 1080p quality with HD audio. The BDP-N460 includes built-in software that makes it easy to connect this player to your existing wireless network.  So I did… I paired the disc player with the recommended Linksys Wireless Ethernet Bridge (WET-610N) and I was streaming the last season of Lost episodes in no time. Really cool. Highly recommended.

    Read the article

  • AAC Sample Rate and Bit Rate for High Quality Audio?

    - by marco.ragogna
    What AAC sample rate and bit rate settings should I use to encode an audio track with a quality comparable to 320 kbps MP3? I need to back up a DVD movie. The default settings for AAC are Bitrate (kb/s): 128, Sample Rate (Hz): 44100. Should I set Bitrate (kb/s): 320, Sample Rate (Hz): 48000, or are the defaults already good?
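    (Rough size arithmetic, my own numbers rather than anything from the question: 128 kbit/s is about 16 kB per second, so a two-hour track comes to roughly 16 × 7,200 ≈ 115 MB, while 320 kbit/s is about 40 kB/s, roughly 288 MB for the same runtime. The sample rate mainly needs to match the source; DVD audio is typically 48,000 Hz.)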

    Read the article

  • Why does the MaxReceiveMessageSize in WCF matter in case of Streaming

    The default value of MaxReceiveMessageSize in WCF is 65,536. When you choose streaming as the TransferMode, the WCF runtime will create an 8192-byte buffer. So what happens is that the WCF channel stack reads the first 8192 bytes and decodes the first couple of bytes as the size of the entire envelope. It then performs a size check and sends back a fault if the actual size exceeds the limit. According to the MSDN documentation, MaxReceiveMessageSize is something that prevents a DoS attack,...
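    As a concrete illustration of the settings involved, here is a minimal self-hosted sketch of my own (not taken from the article); the service and contract names, the port, the file path, and the 64 MB limit are all assumptions, and note that the binding property is spelled MaxReceivedMessageSize in the .NET API:

        using System;
        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IStreamService
        {
            // Streaming contracts typically expose a single Stream as the message body.
            [OperationContract]
            Stream GetAudio();
        }

        public class StreamService : IStreamService
        {
            public Stream GetAudio()
            {
                // Illustrative only: stream a local file back to the caller.
                return File.OpenRead(@"C:\media\sample.mp3");
            }
        }

        class Program
        {
            static void Main()
            {
                var binding = new BasicHttpBinding
                {
                    TransferMode = TransferMode.Streamed,       // body is streamed rather than buffered
                    MaxReceivedMessageSize = 64L * 1024 * 1024, // limit used by the size check described above
                    MaxBufferSize = 65536                       // only the buffered portion (headers) in streamed mode
                };

                using (var host = new ServiceHost(typeof(StreamService), new Uri("http://localhost:8000/stream")))
                {
                    host.AddServiceEndpoint(typeof(IStreamService), binding, "");
                    host.Open();
                    Console.WriteLine("Listening; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    With TransferMode.Streamed, only the buffered portion is limited by MaxBufferSize, while the envelope-size check described above is applied against MaxReceivedMessageSize.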

    Read the article

  • Linux program to convert audio file of fax transmission to image?

    - by bdk
    I have a number of uncompressed audio files recorded off of an analog (POTS) telephone line of fax transmissions. Is there a Linux utility or library I could use to convert these files into images of the fax they contain? I'm not looking to send/receive a fax via a modem, but just to "replay" the communications tones and parse out the fax message. I'm guessing this may not be possible due to duplex issues and not knowing which end of the conversation is sending what, but thought I'd ask to see if anyone knew of something.

    Read the article

  • How can I split the 5.1 audio channels from an AC3 file into individual streams (preferably on a Mac)?

    - by Drarok
    I have a file that I've pulled from a DVD that is apparently in AC3 5.1 format. The extension is .AC3 and it opens and plays in QuickTime, VLC, etc. What I want is each individual channel in a separate file, but I can't seem to find any tools that will allow me to do that. Is there a way to split the file I have, or alternatively is there a way to pull the audio streams from a 5.1 DVD?

    Read the article

  • Why do my 3GP videos not play audio using VLC?

    - by GiH
    I have a Nexus One phone, and when I record video the container seems to be 3GP. When I try to play it back using VLC I get no audio, and an error saying that there is nothing I can do because VLC does not support the "samr" codec. Is there really nothing I can do to watch my videos in VLC? If not, what's the alternative? I really like VLC specifically because I never have to download codecs...

    Read the article

  • Unity Player Controls Streaming Music Services From Chrome Toolbar

    - by Jason Fitzpatrick
    Chrome: If you’re a frequent Pandora, Grooveshark, or other popular streaming music station listener, Unity Player puts playback control and song info on the Chrome toolbar. Rather than sending you digging through your tabs to find the window with Pandora–or Google Music, Grooveshark, 8Tracks, Hypemachine, or any of the other dozen supported services–Unity Player pulls up a one-click control panel for easy pause/play, skip, and access to other service features like thumbs up/down flagging. Unity Player is free and works wherever Chrome does. Unity Player [via Addictive Tips]

    Read the article

  • help bonding streaming rtp 3g

    - by enrique
    First of all, sorry for contacting you here. I turn to you after reading all the material I could find on the subject without managing to get this set up. My question is: can I configure load balancing across several outgoing connections? I have a hub with 3 USB 3G modems, and I can get all 3 connected simultaneously, each with an upload speed of about 500 kb and a dynamic IP. I am doing a unicast RTP stream with VLC at a bandwidth of 1.5 Mb, that is, the sum of the three modems. I have been looking into ifenslave and iproute. Then I found a draft of VLC's multicat and understood that it might do the job, but however I configure it, traffic only goes out over one card. If more information would help, I will gladly provide it. Thanks in advance.

    Read the article

  • Why is preserving the pitch in audio playback (allegedly) less performant?

    - by Markus Unterwaditzer
    In VLC for Android, I discovered an option to preserve the pitch during faster-than-normal playback; the option is marked as requiring a fast device. That obviously implies that faster playback is cheaper when the pitch is allowed to change as well. Why is that so? What I've tried: before posting this question I did some shallow research on Google. According to Wikipedia, there are several methods for faster playback of audio, and the "simplest" one (resampling) changes the pitch.
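    (For a sense of scale, using simple arithmetic rather than anything from the question: plain resampling at 1.25× playback speed raises every frequency by that same factor, i.e. by 12 × log2(1.25) ≈ 3.9 semitones, which is why the pitch-preserving path needs an extra time-stretching step and hence more processing.)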

    Read the article

  • WAV and MP3 Streaming with ASP.Net and C#

    In this programming tutorial you will learn how to stream WAV and MP3 audio files in ASP.NET 3.5 using the C# server-side language. This is particularly useful for music websites built on the ASP.NET 3.5 platform. The examples used in this article are tested to work in any major browser, including Internet Explorer, Chrome, and Firefox. The scripts were tested on a Windows XP operating system using Visual Web Developer Express. For convenience, a working example can be downloaded at the end of this tutorial. Finally, this tutorial also highlights the use of the Google Flash player when streaming MP3s....
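    To give a rough idea of the kind of code such a tutorial covers, here is a minimal sketch of my own (not the article's actual sample; the handler name, file path, and buffer size are assumptions): an IHttpHandler that pushes an MP3 to the response in chunks.

        using System.IO;
        using System.Web;

        // Minimal streaming handler: reads an MP3 from disk and writes it to the
        // response in 8 KB chunks instead of loading the whole file into memory.
        public class Mp3StreamHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string path = context.Server.MapPath("~/media/sample.mp3"); // assumed location
                context.Response.ContentType = "audio/mpeg";
                context.Response.BufferOutput = false; // send bytes as they are read

                byte[] buffer = new byte[8192];
                using (FileStream fs = File.OpenRead(path))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        context.Response.OutputStream.Write(buffer, 0, read);
                    }
                }
            }
        }

    Registered in web.config as a handler for a given URL, this sends the file in chunks instead of buffering it whole; a WAV version would only need the content type changed to audio/wav.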

    Read the article

  • How much audio latency is noticeable by the human brain?

    - by Borek
    I am choosing a wireless headset for my PC (I hate cables) and am looking at the Sennheiser RS 170 / 180. They supposedly sound great; however, there is a 25 ms audio latency. I've heard that this is OK when watching TV or listening to music but is bad for games. The question is: has there been any research / hard data showing how much of a delay is noticeable by the human brain? 25 ms doesn't sound like a lot, but I may be wrong.
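    (For rough scale, my own arithmetic: sound travels at about 343 m/s, so a 25 ms delay is equivalent to sitting roughly 0.025 × 343 ≈ 8.6 m from the sound source; at 60 fps it also amounts to about 1.5 video frames.)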

    Read the article

  • ATI HDMI Audio disappears in Windows 7 when a TV is connected

    - by jsalonen
    So far I have googled for HOURS with no luck fixing this very annoying problem. The setup is the following: I have a PC running Windows 7 RC (64-bit); the video card is an ATI Radeon 4850 series card (a Sapphire HD 4850 512MB, to be exact); the video card has HDMI out with a built-in audio chip; and I have an HDMI cable connecting the PC to a TV (Sony Bravia series). The problem is that when I connect the HDMI cable to the TV, the ATI HDMI Sound output device disappears completely from the list of playback devices in Windows. As a workaround I can restore the audio by re-installing the HDMI audio driver. However, when I disconnect the TV the driver disappears again. So basically, every time I want to watch stuff on my TV, I have to reinstall the audio driver, which of course is VERY annoying. EDIT: I have figured out that I do not need to re-install the HDMI audio driver to restore sound; I only need to reboot my computer with the HDMI cable plugged in to restore the audio driver. This suggests that the problem has something to do with information passed from the TV to the computer, which makes my HDMI audio driver disappear. Are there any other, more elegant workarounds for this problem? All help is much appreciated!

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: Is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and many times I install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I wanted to have sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in/out of VirtualBox). This kind-of works - the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need to use the other set of programs. But I could live with this scenario: if I need to do more audio-intensive stuff, I'll just boot up to the audio partition and run the graphics programs in a VM, and then when I'm working heavily on the graphics part, I'll just boot the graphics partition as a regular OS directly on the hardware. Is this possible? For example, by booting up a VHD as a regular hard drive? Or by setting up dual-boot, and every time the audio partition is shut down, synchronizing the graphics VM VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.

    Read the article

  • Audio queue start failed

    - by mobapps99
    Hi, I'm developing a project which has both audio streaming and playing audio from a file. For audio streaming I'm using AudioStreamer, and for playing from a file I'm using AVAudioPlayer. Both streaming and playback work perfectly as long as the app is not interrupted by a phone call or SMS. But when a call/SMS comes in, after dismissing the call, when I try to restart streaming I get the error "Audio queue start failed". This happens only when I have used AVAudioPlayer at least once and then used streaming. When the AVAudioPlayer object has not been created, there is no problem resuming streaming after dismissing the call. My guess is that something is wrong with the audio queue. Help is very much appreciated.

    Read the article

  • Objective-c - How to serialize audio file into small packets that can be played?

    - by vfn
    Hi there. I would like to take a sound file, convert it into packets, and send it to another computer, and I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 such that peer2 can play it. The scenario is: peer1 has an audio file, splits it into many small packets, puts them in NSData objects, and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this, or even whether it is possible? EDIT: Here is some code to illustrate what I would like to achieve.

        // This code is part of peer1, the one who sends the data
        - (void)sendData
        {
            int packetId = 0;
            NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
            NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
            NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];

            // Splitting the audio in 2 pieces.
            // This is only an illustration; the idea is to split the data into
            // multiple pieces depending on the size of the file to be sent.
            NSRange soundRange;
            soundRange.length = [soundData length] / 2;
            soundRange.location = 0;
            [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
            soundRange.length = [soundData length] / 2;
            soundRange.location = [soundData length] / 2;
            [arraySoundData addObject:[soundData subdataWithRange:soundRange]];

            for (int i = 0; i < [arraySoundData count]; i++) {
                // (the loop body, which archives each chunk together with packetId
                //  and sends it to peer2, is cut off here)
            }
        }

        // This is the code on peer2 that would receive and play the piece of audio in each packet
        - (void)receiveData:(NSData *)data
        {
            NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
            if ([unarchiver containsValueForKey:PACKET_ID])
                NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);
            if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) {
                NSLog(@"DECODED sound");
                NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];
                if (sound == nil) {
                    NSLog(@"sound is nil!");
                } else {
                    NSLog(@"sound is not nil!");
                    AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc];
                    if ([audioPlayer initWithData:sound error:nil]) {
                        [audioPlayer prepareToPlay];
                        [audioPlayer play];
                    } else {
                        [audioPlayer release];
                        NSLog(@"Player couldn't load data");
                    }
                }
            }
            [unarchiver release];
        }

    So, here is what I am trying to achieve; what I really need to know is how to create the packets so that peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order in which the packets are received or played... I only need to get the sound sliced and be able to play each piece, each slice, without having to wait for the whole file to be received by peer2. Thanks!

    Read the article

  • Upscaling audio from 2.1 to 5.1 in Windows 7

    - by Darth Android
    I'm currently using the onboard sound on my Asus P6T6 WS Revolution motherboard (SoundMAX Integrated Digital Audio) and was wondering if there is any way to make either Windows or the audio drivers upscale 2-channel audio to 5-channel audio (basic duplication would suffice). I was using a Creative sound card but got fed up with the memory leaks and poor sound quality.

    Read the article
