Search Results

Search found 7178 results on 288 pages for 'audio playing'.

Page 2 of 288

  • USB Audio Device Loopback Through Speakers

    - by matto1990
    I have a USB turntable which, when plugged into my Ubuntu 10.10 machine, appears in the audio settings as an input device (USB PnP Audio Device Analog Stereo), like a microphone. What I'd like is to have the sound from that input device played back through the audio output (speakers or whatever). I'm not too worried if there's a slight delay between the audio coming in and it being played out through the speakers. As far as I'm aware this is referred to as software loopback. I can achieve exactly what I want if I open Audacity, enable software loopback and press record. Obviously this isn't ideal, as I don't really want it recording what I'm playing all the time. I know this is possible because of the Audacity example; I'd just like to know if there's a way to do it without recording. I've searched around for a while for a piece of software that does this, but couldn't find anything even close. Any help would be greatly appreciated.
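
    A minimal sketch of one way to do this without recording, assuming PulseAudio (the default sound server on 10.10); the source name below is illustrative, so list yours first:

        # Find the turntable's source name
        pactl list short sources

        # Load a software loopback from that input to the default output;
        # latency_msec trades delay for dropout resistance
        pactl load-module module-loopback source=alsa_input.usb_pnp_device latency_msec=200

        # The load command prints a module number; use it to stop the loopback
        pactl unload-module 24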

    Read the article

  • Play audio in JavaScript with good performance

    - by João
    I'm developing a browser game where the player can shoot. Every time he shoots, it plays a sound. Currently I'm using this code to play sounds in JavaScript:

        var audio = document.createElement("audio");
        audio.src = "my_sound.mp3";
        audio.play();

    I'm worried about performance here. Will 10 simultaneous sounds impact my game performance too much? Will all the audio objects stay in memory even after they are played?
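
    A sketch of the usual low-latency alternative, assuming the Web Audio API is available (older WebKit builds expose it as webkitAudioContext): download and decode the file once, then spawn a cheap one-shot source node per shot; finished nodes are released by the garbage collector:

        var ctx = new AudioContext();
        var shotBuffer = null;

        // Fetch and decode once at startup.
        var request = new XMLHttpRequest();
        request.open("GET", "my_sound.mp3", true);
        request.responseType = "arraybuffer";
        request.onload = function () {
          ctx.decodeAudioData(request.response, function (buffer) {
            shotBuffer = buffer;
          });
        };
        request.send();

        // Each call creates a lightweight, one-shot source node.
        function playShot() {
          if (!shotBuffer) return;
          var src = ctx.createBufferSource();
          src.buffer = shotBuffer;
          src.connect(ctx.destination);
          src.start(0);
        }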

    Read the article

  • How can I select an audio output device in DirectShow

    - by Vibhore Tanwer
    I was wondering how I can select the output device for audio in DirectShow. I am able to get the available audio output devices in DirectShow, but how can I make one of them the audio output device? It always goes to the default audio device, and I want to be able to output audio on a device of my choice. I have been searching Google but couldn't find anything useful; all I could get was this link, but it doesn't really solve my problem. Any help will be really appreciated.
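
    A minimal sketch of the usual approach, offered under the assumption that a graph builder pGraph already exists: enumerate CLSID_AudioRendererCategory, bind the chosen moniker to an IBaseFilter, and add it to the graph before rendering, so the graph builder connects to it instead of the default renderer (error handling and the friendly-name match are omitted):

        #include <dshow.h>

        ICreateDevEnum* pDevEnum = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&pDevEnum);

        IEnumMoniker* pEnum = NULL;
        pDevEnum->CreateClassEnumerator(CLSID_AudioRendererCategory, &pEnum, 0);

        IMoniker* pMoniker = NULL;
        while (pEnum->Next(1, &pMoniker, NULL) == S_OK) {
            // ...read FriendlyName from the moniker's property bag here and
            // skip until it matches the device the user picked...
            IBaseFilter* pRenderer = NULL;
            pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pRenderer);
            pGraph->AddFilter(pRenderer, L"Selected Audio Renderer");
            pMoniker->Release();
            break;
        }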

    Read the article

  • Do you have any additions or alterations to this list of popular audio formats?

    - by roja
    All, I am trying to compile a list of common audio file formats used in both personal storage and peer transmission. I have compiled the following list; do you think there are any significant formats missing? Are any of them not actually common formats? Any advice or alterations would be highly useful. advanced audio coding, apple lossless audio file, atrac3 audio file, atrac audio file, audio interchange file format, core audio file, free lossless audio codec file, mpeg 1 audio layer 3, mpeg 2 audio, mpeg 4 audio book file, musical instrument digital interface, ogg vorbis compressed audio file, open media framework file, real audio, real audio media, waveform audio file format, windows media audio. Kind regards, Roja

    Read the article

  • Boost Audio Input on OS X?

    - by alanstorm
    I'm using my 13" Mac Book Pro's audio input functionality with an external microphone (recent vintage, bought around Thanksgiving). I've increased my input volume to the maximum in system preference, but the resulting recorded volume (using iShowU HD) is very low. Is there anyway to increase the input volume/sensitivity beyond Apple's default settings? I've found plenty on google about increasing the OUTPUT volume, but I want to increase the input volume.

    Read the article

  • Set default system audio output port (for all accounts)

    - by Ludwik Trammer
    The default audio output port in Ubuntu doesn't work on my system. It should be "Analog Mono Output/Amplifier" instead of "Analog Output/Amplifier". I can easily change that in the sound preferences, just by choosing the right port in the "Output" tab. The problem is that this only applies to a single account, and I would like to change it system-wide so that it applies to all accounts on the system (I have more than 100 users...). I'm two hours into Googling, so any help would be appreciated.
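
    A sketch of one system-wide route, assuming PulseAudio is in use: per-user port choices live in each user's profile, but commands appended to /etc/pulse/default.pa run for every user's PulseAudio instance (the sink and port names below are illustrative; read the real ones from pacmd):

        # Find the exact sink and port names
        pacmd list-sinks

        # Then append one line to /etc/pulse/default.pa:
        set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-mono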

    Read the article

  • Playing a Song causing WP7 to crash on phone, but not on emulator

    - by Michael Zehnich
    Hi there, I am trying to implement a song in a game that begins playing and continually loops, on Windows Phone 7 via XNA 4.0. On the emulator this works fine; however, when deployed to a phone, it simply gives a black screen before going back to the home screen. Here is the rogue code in question, and commenting it out makes the app run fine on the phone:

        // in the constructor fields
        private Song song;

        // in the LoadContent() method
        song = Content.Load<Song>("song");

        // in the Update() method
        if (MediaPlayer.GameHasControl && MediaPlayer.State != MediaState.Playing)
        {
            MediaPlayer.Play(song);
        }

    The song file itself is 2:53 long, a 2.28 MB .wma file with a 106 kbps bitrate. Again, this works perfectly on the emulator but does not run at all on the phone. Thanks for any help you can provide!

    Read the article

  • Playing part of a sfx audio file in HTML5 using WebAudio

    - by Matthew James Davis
    I have compiled all of my sound effects into one sequenced .ogg file, and I have the start and stop times for each sound effect. How do I play the individual effects? That is, how do I play part of an audio file? More specifically, I've created a dictionary

        {
            'sword_hit': {
                src: 'sfx.ogg',
                start: 265, // ms
                length: 212 // ms
            }
        }

    that my play_sound() function can use to look up 'sword_hit' and play the correct audio file at the correct start time for the correct duration. I simply need to know how to tell the Web Audio API to start playing at start ms and only play for length ms.
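
    For reference, a minimal sketch assuming sfx.ogg has already been decoded into an AudioBuffer named sfxBuffer: AudioBufferSourceNode.start() takes (when, offset, duration), all in seconds (older WebKit builds call the equivalent noteGrainOn):

        function play_sound(name) {
          var fx = sfxTable[name];        // e.g. the 'sword_hit' entry above
          var src = ctx.createBufferSource();
          src.buffer = sfxBuffer;         // the decoded sfx.ogg
          src.connect(ctx.destination);
          // start(when, offset, duration): play now, from fx.start ms,
          // for fx.length ms.
          src.start(0, fx.start / 1000, fx.length / 1000);
        }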

    Read the article

  • Monitoring an audio line.

    - by Stefan Liebenberg
    I need to monitor my audio line-in in Linux, and in the event that audio is played, the sound must be recorded and saved to a file, similar to how motion monitors a video feed. Is it possible to do this with bash? Something along these lines:

        #!/bin/bash

        # audio device
        device=/dev/audio-line-in

        # below this threshold audio will not be recorded
        noise_threshold=10

        # folder where recordings are stored
        storage_folder=~/recordings

        # run indefinitely, until Ctrl-C is pressed
        while true; do
            # noise_level() represents a function to determine
            # the noise level from device
            if [ "$(noise_level $device)" -gt "$noise_threshold" ]; then
                # stream from device to file; can be encoded to mp3 later
                cat "$device" > "$storage_folder/$(date +%F_%H%M%S).raw"
            fi
        done
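
    A sketch of one ready-made route, assuming SoX is installed: its silence effect implements exactly this wait-for-sound / stop-on-silence behaviour (the 3% thresholds and durations are illustrative, and the recordings folder is assumed to exist):

        #!/bin/bash
        # Record whenever the input rises above 3% volume for 0.1 s,
        # stop after 3 s of silence, then loop and wait for the next burst.
        while true; do
            rec "$HOME/recordings/$(date +%F_%H%M%S).wav" \
                silence 1 0.1 3% 1 3.0 3%
        done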

    Read the article

  • PHP/HTML play button [migrated]

    - by Marian
    I want to make my own small webpage. I've got a domain, Saoo.eu. As you see, there is a small play button in the corner which plays a playlist. Is there any way to keep that play button on each page I add in the future without the audio restarting every time the page changes? Am I forced to use iframes for that? This is my player code:

        <button id="audioControl" style="width:30px;height:25px;"></button>
        <audio id="aud" src="" autoplay autobuffer />

    Script:

        $(document).ready(function() {
          $('#audioControl').html('II');
          if (Modernizr.audio && Modernizr.audio.mp3) {
            audio.setAttribute("src", 'http://daokun.webs.com/play0.mp3');
          } else if (Modernizr.audio && Modernizr.audio.wav) {
            audio.setAttribute("src", 'http://daokun.webs.com/play0.ogg');
          }
        });

        var audio = document.getElementById('aud'), count = 0;

        $('#audioControl').toggle(
          function () {
            audio.pause();
            $('#audioControl').html('>');
          },
          function () {
            audio.play();
            $('#audioControl').html('II');
          }
        );

        audio.addEventListener("ended", function() {
          count++;
          if (count == 4) { count = 0; }
          if (Modernizr.audio && Modernizr.audio.mp3) {
            audio.setAttribute("src", 'http://daokun.webs.com/play' + count + '.mp3');
          } else if (Modernizr.audio && Modernizr.audio.wav) {
            audio.setAttribute("src", 'http://daokun.webs.com/play' + count + '.ogg');
          }
          audio.load();
        });

    Read the article

  • Web Audio API and mobile browsers

    - by Michael
    I've run into a problem while implementing sound and music in an HTML game that I'm building. I'm using the Web Audio API, loading all the sound files with XMLHttpRequests and decoding them into AudioBuffers with AudioContext.prototype.decodeAudioData(). It looks something like this:

        var request = new XMLHttpRequest();
        request.open("GET", "soundfile.ogg", true);
        request.responseType = "arraybuffer";
        request.onload = function() {
          context.decodeAudioData(request.response, function(buffer) {
            // keep the decoded AudioBuffer for playback
          });
        };
        request.send();

    Everything plays fine, but on mobile decodeAudioData takes an absurdly long time for the background music. I then tried using AudioContext.prototype.createMediaElementSource() to load the music from an HTML Audio object, since those support streaming and don't have to load the whole file into memory at once. It looked something like this:

        var audio = new Audio('soundfile.ogg');
        var source = context.createMediaElementSource(audio);
        var mainVolume = context.createGain();
        source.connect(mainVolume);
        mainVolume.connect(context.destination);

    This loads much faster, but the audio volume isn't affected by the gain node. It works fine on desktop, so I'm assuming this is a bug or limitation of mobile Chrome (testing on Android). Is there actually no good, well-performing way to handle sound on mobile browsers, or am I just doing something stupid?
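
    A hedged workaround sketch for the gain problem, in case the media-element route is kept: HTMLMediaElement has its own volume property, applied independently of the Web Audio graph, so it may work where the GainNode is ignored (whether it does depends on the platform):

        // Drive the element's volume directly, e.g. from a music-volume slider.
        function setMusicVolume(v) { // v in [0, 1]
          audio.volume = v;
        }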

    Read the article

  • Audio programming resources

    - by rashleighp
    I've been very interested in the last few months in getting into audio programming (I'm from a musical background). I've been a .NET developer for two years and have also done some Objective-C for an iPhone app recently. I realise I would probably need to work on my C++ chops, and I have been having a play around with FMOD Ex and doing a lot of research into the industry. I was just wondering if anyone could suggest some good resources for audio programming (be they websites, podcasts, books, videos, online courses, etc.). Anything from Fourier analysis, low-level coding, and audio engine creation to audio APIs. I just want to learn as much as possible! Thanks in advance.

    Read the article

  • Setting Up Audio on a Server Install

    - by tdcrenshaw
    I'm running a clean install of 10.10 Server edition and have alsa-base, alsa-tools, alsa-utils, alsaplayer, and alsa-firmware-loader installed. At one point I installed pulseaudio, but I have since removed it. I've tried the following:

        $ lspci | grep audio
        00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
        01:06.0 Multimedia audio controller: Creative Labs [SB Live! Value] EMU10k1X

        $ aplay -l
        aplay: device_list:235: no soundcards found...

        $ alsamixer
        cannot open mixer: No such file or directory

    When I search for modules with find /lib/modules/$(uname -r) | grep snd, I do get a list of modules. I'm not very experienced with ALSA setup, so I'm not sure where to go from here.
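
    A first step worth trying, assuming the drivers simply never loaded (the module names are inferred from the lspci output above):

        # Load the ALSA drivers for the two detected controllers by hand,
        # then re-check whether a card shows up.
        sudo modprobe snd-intel8x0     # ICH4 AC'97 onboard audio
        sudo modprobe snd-emu10k1x     # Creative SB Live! Value (EMU10k1X)
        aplay -l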

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype: I have a constant audio stream on one computer, and I want to recompress it in a format that's suitable for a latent internet connection, receive it on the other end, and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2 GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like to get good audio quality across the line, and the stream should be received without drops.

    The stream may, however, be received with a little delay (a second of delay is acceptable). I imagine that the transport software could first determine the average (and maximum) latency, then start the stream and tell the receiver to wait for that maximum latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my maximum latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I'll first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
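
    A sketch of the probe-then-delay idea from the last paragraph, expressed in Python for brevity (the echoing peer, host, and port are illustrative assumptions; real probes should use the same packet sizes as the audio):

        import socket, time

        def measure_max_latency(host, port, probes=10):
            """Round-trip a few small packets and return the worst RTT seen."""
            worst = 0.0
            with socket.create_connection((host, port)) as s:
                for _ in range(probes):
                    t0 = time.monotonic()
                    s.sendall(b"ping")
                    s.recv(4)              # the peer is assumed to echo the probe
                    worst = max(worst, time.monotonic() - t0)
            return worst

        # playout_delay = measure_max_latency("receiver.example", 9000)
        # The receiver then buffers playout_delay seconds of audio before playing.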

    Read the article

  • Record audio via MediaRecorder

    - by Isuru Madusanka
    I am trying to record audio with MediaRecorder, and I get an error; I have tried to change everything and nothing works. For the last two hours I have tried to find the error. I used the Log class too, and found that the error occurs when the recorder.start() method is called. What could be the problem?

        public class AudioRecorderActivity extends Activity {
            MediaRecorder recorder;
            File audioFile = null;
            private static final String TAG = "AudioRecorderActivity";
            private View startButton;
            private View stopButton;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                startButton = findViewById(R.id.start);
                stopButton = findViewById(R.id.stop);
                setContentView(R.layout.main);
            }

            public void startRecording(View view) throws IOException {
                startButton.setEnabled(false);
                stopButton.setEnabled(true);
                File sampleDir = Environment.getExternalStorageDirectory();
                try {
                    audioFile = File.createTempFile("sound", ".3gp", sampleDir);
                } catch (IOException e) {
                    Toast.makeText(getApplicationContext(), "SD Card Access Error", Toast.LENGTH_LONG).show();
                    Log.e(TAG, "Sdcard access error");
                    return;
                }
                recorder = new MediaRecorder();
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setAudioEncodingBitRate(16);
                recorder.setAudioSamplingRate(44100);
                recorder.setOutputFile(audioFile.getAbsolutePath());
                recorder.prepare();
                recorder.start();
            }

            public void stopRecording(View view) {
                startButton.setEnabled(true);
                stopButton.setEnabled(false);
                recorder.stop();
                recorder.release();
                addRecordingToMediaLibrary();
            }

            protected void addRecordingToMediaLibrary() {
                ContentValues values = new ContentValues(4);
                long current = System.currentTimeMillis();
                values.put(MediaStore.Audio.Media.TITLE, "audio" + audioFile.getName());
                values.put(MediaStore.Audio.Media.DATE_ADDED, (int) (current / 1000));
                values.put(MediaStore.Audio.Media.MIME_TYPE, "audio/3gpp");
                values.put(MediaStore.Audio.Media.DATA, audioFile.getAbsolutePath());
                ContentResolver contentResolver = getContentResolver();
                Uri base = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
                Uri newUri = contentResolver.insert(base, values);
                sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, newUri));
                Toast.makeText(this, "Added File " + newUri, Toast.LENGTH_LONG).show();
            }
        }

    And here is the XML layout:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/RelativeLayout1"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <Button
                android:id="@+id/start"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignParentTop="true"
                android:layout_centerHorizontal="true"
                android:layout_marginTop="146dp"
                android:onClick="startRecording"
                android:text="Start Recording" />

            <Button
                android:id="@+id/stop"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_alignLeft="@+id/start"
                android:layout_below="@+id/start"
                android:layout_marginTop="41dp"
                android:enabled="false"
                android:onClick="stopRecording"
                android:text="Stop Recording" />

        </RelativeLayout>

    And I added the permissions to the AndroidManifest file:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="in.isuru.audiorecorder"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk android:minSdkVersion="8" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >
                <activity
                    android:name=".AudioRecorderActivity"
                    android:label="@string/app_name" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.RECORD_AUDIO" />

        </manifest>

    I need to record high quality audio. Thanks!

    Read the article

  • How to fix these compiler errors?

    - by Sandra Schlichting
    I have this source code from 2001 that I would like to compile. It gives this:

        $ make
        g++ -O99 -Wall -DLINUX -pedantic -c -o audio.o audio.cpp
        In file included from audio.cpp:7:
        audio.h:14: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:15: error: use of enum ‘mad_flow’ without previous declaration
        audio.h:17: error: use of enum ‘mad_flow’ without previous declaration
        audio.cpp: In function ‘mad_flow audio::input(void*, mad_stream*)’:
        audio.cpp:19: error: new declaration ‘mad_flow audio::input(void*, mad_stream*)’
        audio.h:14: error: ambiguates old declaration ‘int audio::input(void*, mad_stream*)’
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:23: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:23: error: within this context
        audio.h:10: error: ‘char* audio::stream::Buffer’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:26: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferPos’ is private
        audio.cpp:27: error: within this context
        audio.h:11: error: ‘size_t audio::stream::BufferSize’ is private
        audio.cpp:27: error: within this context
        audio.cpp: In function ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’:
        audio.cpp:49: error: new declaration ‘mad_flow audio::output(void*, const mad_header*, mad_pcm*)’
        audio.h:15: error: ambiguates old declaration ‘int audio::output(void*, const mad_header*, mad_pcm*)’
        audio.cpp: In function ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’:
        audio.cpp:83: error: new declaration ‘mad_flow audio::error(void*, mad_stream*, mad_frame*)’
        audio.h:17: error: ambiguates old declaration ‘int audio::error(void*, mad_stream*, mad_frame*)’
        audio.cpp: In constructor ‘audio::stream::stream(const char*)’:
        audio.cpp:119: error: ‘input’ was not declared in this scope
        audio.cpp:122: error: ‘output’ was not declared in this scope
        audio.cpp:123: error: ‘error’ was not declared in this scope
        make: *** [audio.o] Error 1

    audio.h contains:

        #include <stdlib.h>
        #include "mad.h"

        namespace audio {
            class stream {
            private:
                char* Buffer;
                size_t BufferSize, BufferPos;
                struct mad_decoder Decoder;

                friend enum mad_flow input(void* Data, struct mad_stream* MadStream);
                friend enum mad_flow output(void* Data, const struct mad_header* Header,
                                            struct mad_pcm* PCM);
                friend enum mad_flow error(void* Data, struct mad_stream* MadStream,
                                           struct mad_frame* Frame);

            public:
                stream(const char* FileName);
                ~stream();
                void play();
            };
        }

    I have tried to just insert enum mad_flow {}; but that just gave a new problem. Can anyone see how to fix this?

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. First, MyNes plays from a ROM, not NSF, so I have to rip out the APU and get it to play NSF files. Second, the MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

    1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.

    2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?

    3. Joining #1 and #2: does the header data from the NSF substitute for some of the ROM data in the emulator code?

    4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

    As an aside, I don't really understand the loading / playing sections of the spec I linked at all. They seem to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • Is there an audio recording application/tool that has Tivo-like functionality?

    - by Bob
    I do a lot of live speech recording that requires me to quickly jump back, transcribe a particular piece of the audio, then go back to recording again, while still maintaining the full audio file. So far I've done this by splitting the audio and running one line to a recorder (for the whole audio) and one to my computer. Then I use something like Audacity to record, and stop/go back whenever I hear something worth transcribing. This requires me to stop the recording and then start it up again, and I end up missing chunks of the speech I'm listening to. Is there a tool that would let me rewind, listen again, and continue listening at a buffered distance behind the live audio, the way TiVo does with television shows?

    Read the article

  • HTML Audio performance

    - by user1888309
    I'm working on an HTML drum machine, and I've hit some performance issues: the rhythm starts to break if the BPM is higher than 110, but I'm expecting to make it work at BPMs over 180. I guess it could be related to the format or codec of the audio files, but it may also be that my code is not very optimised (as I can see from JS CPU profiling, it isn't). So I'm hoping for some code review or some optimisation hints, although all the similar projects I've found on the internet didn't work well either, so maybe it's just a restriction of the Audio API. By the way, it's very raw, and sound works only in Chrome under Mac OS, so any advice on audio encoding for the web would also be great. Project on GitHub pages. Screenshot of the groove which breaks.

    UPDATE: I've found that I was encoding the audio files incorrectly. After fixing that, the rhythm stopped breaking, and it also started working in Mozilla. But there are still issues on Windows.
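
    One optimisation that tends to matter more than codecs at high BPM, sketched under the assumption that the Web Audio API is in use (kickBuffer is an illustrative pre-decoded sample): schedule hits on the sample-accurate AudioContext clock with a small lookahead, instead of starting sounds directly from setInterval/setTimeout callbacks:

        var ctx = new AudioContext();
        var secondsPerBeat = 60 / 180 / 4;   // 180 BPM, 16th-note grid (assumed)
        var nextHit = ctx.currentTime;

        setInterval(function () {
          // Queue every hit that falls inside the next 100 ms window; the JS
          // timer only schedules, so its jitter never reaches the audio clock.
          while (nextHit < ctx.currentTime + 0.1) {
            var src = ctx.createBufferSource();
            src.buffer = kickBuffer;         // illustrative pre-decoded sample
            src.connect(ctx.destination);
            src.start(nextHit);              // sample-accurate start time
            nextHit += secondsPerBeat;
          }
        }, 25);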

    Read the article

  • Multiple audio sources on a single gameObject in Unity

    - by angryInsomniac
    So, I have an audio system set up wherein I have loaded all my audio clips centrally and play them on demand by passing the requesting audioSource into the sound manager. However, there is a complication wherein if I want to overlay multiple looping sounds, I need to have multiple audio sources on an object, which is fine , so I created two in my script instantiated them and played my clips on them and then the world went crazy. For some reason, when I create two audio Sources in an object only the latest one is ever used, even if I explicitly keep objects separated, playing a clip on one or the other plays the clip on the last one that was created, furthermore, either this last one is not created in the right place or somehow messes with the rolloff rules because I can hear it all across my level, havign just one source works fine, but putting a second one on it causes shit to go batshit insane. Does anyone know the reason / solution for this ? Some pseudocode : guardSoundsSource = (AudioSource)gameObject.AddComponent("AudioSource"); guardSoundsSource.name = "Guard_Sounds_source"; // Setup this source guardThrusterSource = (AudioSource)gameObject.AddComponent("AudioSource"); guardThrusterSource.name = "Guard_Thruster_Source"; // setup this source // play using custom Sound manager soundMan.soundMgr.playOnSource(guardSoundsSource,"Guard_Idle_loop" ,true,GameManager.Manager.PlayerType); // this method prints out the name of the source the sound was to be played on and it always shows "Guard_Thruster_Source" even on the "Guard_Idle_loop" even though I clearly told it to use "Guard_Sounds_source"

    Read the article

  • Play an audio file using RemoteIO and Audio Unit

    - by NeilMonday
    I am looking at Apple's 'aurioTouch' example for the iPhone, and I would like to play an mp3 or wav instead of using the built-in mic. I am very new to the audio portion of iPhone programming, but I think I need to modify the SetupRemoteIO(...) function and replace the AudioComponent named 'comp' with a custom AudioComponent that plays a file. Basically, I want the app to function exactly the same as the original, but with an audio file as the input instead of the mic.

    Read the article

  • Audio decoding delay when changing the audio language

    - by mahendiran.b
    My GStreamer pipeline is like this.

    Approach 1:

        souphttpsrc -> tsdemux -+-> input-selector -> Queue -> AudioParser -> AudioSink
                                +-> Queue -> VideoParser -> VideoSink

    In approach 1, there is a delay in audio decoding when I toggle between the various audio languages.

    Approach 2:

        souphttpsrc -> tsdemux -> multiqueue -+-> input-selector -> Queue -> AudioParser -> AudioSink
                                              +-> Queue -> VideoParser -> VideoSink

    But no delay is observed in approach 2. Can anyone please explain the reason behind this? What is the specialty of multiqueue here?

    Read the article
