Search Results

Search found 4010 results on 161 pages for 'audio fingerprinting'.


  • Missing audio and problems playing FLV video converted from 720p .mov file with FFMPEG

    - by undefined
    I have some .mov video files recorded from a JVC GC-FM1 HD video camera in 720p mode. I have FFMPEG running on a Linux box that I upload files to and have them encoded into FLV format. The video appears to be encoding OK, but there is no audio in the resulting FLV file, and when I play it back in Flash Player in a browser or in Adobe Media Player, the video pauses at the start. It appears that Adobe Media Player waits for the progress bar to reach the end of the video before starting playback - i.e. the video loads, the picture pauses, the progress bar seeks to the end as if the video were playing, and when it reaches the end the picture starts. There is no audio on the video. I am seeing the same thing in the video player I have built with Flash 8 using an FLVPlayback component and an attached seek bar: the seek bar starts moving as if the video were playing, but the picture remains paused.

    Here is my FFMPEG command, called from PHP:

        $cmd = 'ffmpeg -i ' . $sourcelocation.$filename.".".$fileext
             . ' -ab 96k -b 700k -ar 44100 -s ' . $target['width'] . 'x' . $target['height']
             . ' -ac 1 -acodec libfaac '
             . $destlocation.$filename.$ext_trans . ' 2>&1';

    and here is the output from my error log:

        FFmpeg version UNKNOWN, Copyright (c) 2000-2010 Fabrice Bellard, et al.
          built on Jan 22 2010 11:31:03 with gcc 4.1.2 20070925 (Red Hat 4.1.2-33)
          configuration: --prefix=/usr --enable-static --enable-shared --enable-gpl --enable-nonfree --enable-postproc --enable-avfilter --enable-avfilter-lavf --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libx264
          libavutil     50. 7. 0 / 50. 7. 0
          libavcodec    52.48. 0 / 52.48. 0
          libavformat   52.47. 0 / 52.47. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libavfilter    1.17. 0 /  1.17. 0
          libswscale     0. 9. 0 /  0. 9. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'uploads/video/60974_v1.mov':
          Metadata:
            major_brand      : qt
            minor_version    : 0
            compatible_brands: qt
            comment          : JVC GC-FM1
            comment-eng      : JVC GC-FM1
          Duration: 00:00:30.41, start: 0.000000, bitrate: 4158 kb/s
            Stream #0.0(eng): Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 4017 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
        Output #0, rawvideo, to 'uploads/video/60974_v1.jpg':
            Stream #0.0(eng): Video: mjpeg, yuvj420p, 320x240 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 90k tbn, 59.94 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        [h264 @ 0x8e67930]B picture before any references, skipping
        [h264 @ 0x8e67930]decode_slice_header error
        [h264 @ 0x8e67930]no frame!
        Error while decoding stream #0.0
        [h264 @ 0x8e67930]B picture before any references, skipping
        [h264 @ 0x8e67930]decode_slice_header error
        [h264 @ 0x8e67930]no frame!
        Error while decoding stream #0.0
        frame=    1 fps=  0 q=3.8 Lsize=      15kB time=0.02 bitrate=7271.4kbits/s dup=482 drop=0
        video:15kB audio:0kB global headers:0kB muxing overhead 0.000000%

    Which are the important errors here: "B picture before any references, skipping", "decode_slice_header error", "no frame!", or "Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001)"? Any advice welcome, thanks.
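    One detail worth noting from the log itself: the Output #0 shown is rawvideo/mjpeg going to 'uploads/video/60974_v1.jpg' with only the video stream mapped (Stream #0.0 -> #0.0), so this log captures a thumbnail-extraction pass rather than the FLV encode; the decode errors above should be checked against the FLV pass's own log. As a sketch (paths and frame size assumed, not a verified fix), an encode that keeps the audio using MP3 - traditionally the safest FLV audio codec - would look like:

        ffmpeg -i input.mov -acodec libmp3lame -ab 96k -ar 44100 -ac 1 -b 700k -s 640x480 output.flv

    The .flv extension selects the FLV muxer. On FFmpeg builds of this vintage, -acodec libfaac may also work if the muxer accepts AAC, and explicit old-style stream maps (-map 0.0 -map 0.1) can force both streams into the output if the audio is still being dropped.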


  • Splitting HDMI sound to 2 devices under Windows 7

    - by Jeramy
    Okay, this is a strange setup and it is frustrating me. I have an HDMI signal from my PC being split to my audio receiver and my HDTV. I need to split it to both so that I can choose to play audio either from the HDTV or from the surround sound speakers in the room. The problem I am having is that in Windows 7 the output is listed under "Playback Devices" and is auto-populated with the HDTV, which only offers stereo sound. If I unplug the HDTV from the splitter, it populates with my receiver information and lets me set 5.1 surround, but as soon as I plug the HDTV back in it reverts. I tried reversing the order of the HDMI cables in the splitter and this seemed to work for a short while; then Windows must have polled the devices again or something, because it reverted. It works as long as Windows identifies the receiver, thereby unlocking the 5.1 surround option; otherwise I am stuck with stereo, which it assumes is all the HDTV is capable of. Is there a way to manually override this and set my own options? Or any other solutions?


  • Diverting sound output of MCE to SPDIF

    - by Saxtus
    I have an ASUS SupremeFX II audio card (in fact an onboard audio riser card) with the default drivers that Windows 7 x64 pre-installs for it. I can manually switch between analog output and SPDIF output through Control Panel (or external utilities like STADS), a change that affects all applications. The problem is that doing this every time I am about to launch Windows Media Center is not very elegant, and it also routes every other Windows application's sound through SPDIF, bypassing analog output completely and blending with whatever I am watching in Windows Media Center. Is there a way to make SPDIF the default playback device for Windows Media Center only? I know other programs with a setting like that (foobar2000, for example) work like a charm, even letting me run different outputs at the same time (tested successfully with my current card). But Windows Media Center just uses whatever the system default playback device is. The only setting I know of is under Settings > General > Windows Media Center Setup > Set Up Your Speakers, and all it does is change the default playback device for the entire system. Please help!


  • How to pipe internet radio into a tuner?

    - by JW
    UPDATE: Thanks everyone for the ideas! This was an area I knew very little about, but now I can talk about it with a little more expertise. Much appreciated! Visited my dad this weekend, and he wants to pipe some internet radio he's found down to a tuner quite a distance away in the house. He uses computers for only very basic things: e-mail, getting the Post crossword, checking Yahoo!, checking recipes, etc. There's currently one computer in the house (no router). My initial suggestion (without any research whatsoever) was to get a wireless router and a netbook for downstairs near the tuner, but he initially wasn't too keen on having another computer down there. Anyway, is there any hardware that could magically pipe the audio output from the computer down to one set of (RCA) audio inputs on the tuner? Wireless isn't necessary, but it probably would be easier. Thanks for your suggestions! UPDATE: Thanks everyone! Voted up all of your suggestions now that I have 15 rep. Much appreciated.


  • How can I control which sound card Ubuntu uses for playback?

    - by GorillaSandwich
    I am dual-booting Ubuntu 9.04 and Windows XP but am new to Ubuntu. In Windows, I use an M-Audio Audiophile 2496 sound card for recording (because it has RCA input jacks for my mixer), but I don't use it for playback (because my speakers use a 1/8 inch jack); instead, I use the motherboard's built-in sound card. I tried to recreate this arrangement in Ubuntu, but despite selecting the built-in card for all playback under System > Preferences > Sound, I still have inconsistent results. Rhythmbox plays back through the integrated card, but Flash content in the browser and games in the OS send their audio to the Audiophile card. I have seen recommendations to use a program called "Jack" to control this, but I installed it and found it baffling. How can I control which card is used for playback, other than disabling one card (as I discovered how to do and explain below)? Also, is there a GUI for disabling hardware, or is it necessary to edit a configuration file?
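    Flash and many games in 9.04 talk to ALSA directly, bypassing the GNOME sound preference, so one common fix is to make the integrated card ALSA's own default. A minimal sketch of ~/.asoundrc, assuming the built-in card shows up as card 0 in the output of aplay -l (substitute the real index):

        # ~/.asoundrc - route ALSA's default playback to the integrated card
        # (card index 0 is an assumption; check "aplay -l" for the real one)
        defaults.pcm.card 0
        defaults.ctl.card 0

    As for disabling hardware: there is no stock GUI for it in 9.04; the usual configuration-file route is /etc/modprobe.d, either blacklisting the unwanted driver module or pinning card order with an "options snd-... index=" line.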


  • How to send audio data from Java Applet to Rails controller

    - by cooldude
    Hi, I have to send audio data (a byte array obtained by recording in a Java applet on the client side) to a Rails server, where a controller saves it. What encoding parameters should be used on the applet side, and in what form should the audio data be sent (String or byte array) so that Rails receives it correctly and I can save it to a file? Currently the audio file written by the Rails controller does not play; mplayer gives the following error:

        LAVF_header: av_open_input_stream() failed

    Here is the Java code (two bugs fixed with comments: raw audio bytes were being decoded through new String(...), which corrupts them, and no Content-Type was set for the form-encoded body):

        package networksocket;

        import java.awt.*;
        import java.awt.event.*;
        import java.io.*;
        import java.net.*;
        import java.util.Arrays;
        import java.util.Properties;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.swing.*;
        import sun.misc.BASE64Encoder; // non-public API; any Base64 encoder works

        /**
         * @author mukand
         */
        public class Urlconnection extends JApplet implements ActionListener {

            public String line;
            public JTextArea jTextArea;
            public Button refreshButton;
            public HttpURLConnection urlConn;
            public URL url;
            OutputStreamWriter wr;
            BufferedReader rd;

            @Override
            public void init() {
                JPanel p = new JPanel();
                jTextArea = new JTextArea(1500, 1500);
                p.setLayout(new GridLayout(1, 1, 1, 1));
                p.add(new JLabel("Server Details"));
                p.add(jTextArea);
                Container content = getContentPane();
                content.setLayout(new GridBagLayout()); // used to center the panel
                content.add(p);
                jTextArea.setLineWrap(true);

                refreshButton = new java.awt.Button("Refresh");
                refreshButton.reshape(287, 49, 71, 23);
                refreshButton.setFont(new Font("Dialog", Font.PLAIN, 12));
                refreshButton.addActionListener(this);
                add(refreshButton);

                Properties properties = System.getProperties();
                properties.put("http.proxyHost", "netmon.iitb.ac.in");
                properties.put("http.proxyPort", "80");
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                try {
                    url = new URL("http://localhost:3000/audio/audiorecieve");
                    urlConn = (HttpURLConnection) url.openConnection();
                    urlConn.setRequestMethod("POST");
                    // FIX: declare the body as form-encoded
                    urlConn.setRequestProperty("Content-Type",
                            "application/x-www-form-urlencoded");
                    urlConn.setDoOutput(true);
                    urlConn.setDoInput(true);

                    byte[] bread = new byte[2048];
                    int iread;
                    StringBuilder data = new StringBuilder(
                            URLEncoder.encode("key1", "UTF-8") + "=");
                    FileInputStream fileread =
                            new FileInputStream("/home/mukand/Hellion.ogg");
                    while ((iread = fileread.read(bread)) != -1) {
                        // FIX: never decode raw audio via new String(bread, iread);
                        // the charset round-trip corrupts binary data. Base64-encode
                        // the bytes and decode them again on the Rails side.
                        data.append(URLEncoder.encode(
                                new BASE64Encoder().encode(Arrays.copyOf(bread, iread)),
                                "UTF-8"));
                    }
                    fileread.close();

                    wr = new OutputStreamWriter(urlConn.getOutputStream());
                    wr.write(data.toString());
                    wr.flush();
                    jTextArea.append("Send");

                    // read the response
                    rd = new BufferedReader(
                            new InputStreamReader(urlConn.getInputStream()));
                    while ((line = rd.readLine()) != null) {
                        jTextArea.append(line);
                    }
                    wr.close();
                    rd.close();
                } catch (MalformedURLException ex) {
                    Logger.getLogger(Urlconnection.class.getName())
                          .log(Level.SEVERE, null, ex);
                } catch (IOException ex) {
                    Logger.getLogger(Urlconnection.class.getName())
                          .log(Level.SEVERE, null, ex);
                }
            }

            @Override public void start() { }
            @Override public void stop() { }
            @Override public void destroy() { }
        }

    And the Rails controller action for receiving, opening the file in binary mode and decoding the Base64 payload:

        def audiorecieve
          puts "////// RECEIVED //////"
          data = params[:key1]
          # FIX: open in 'wb' - text-mode 'w' can mangle binary data - and
          # decode the Base64 the applet now sends (require 'base64' if needed)
          File.open("audiodata.ogg", 'wb') do |file|
            file.write(Base64.decode64(data))
          end
          puts "////// DONE //////"
        end

    Please reply quickly.


  • Audio onprogress in Chrome not working

    - by user351709
    Hi, I am having a problem getting the onprogress event for the audio tag working in Chrome; it seems to work in Firefox. http://www.scottandrew.com/pub/html5audioplayer/ works in Chrome, but there is no progress bar update. When I copy the code, change the src to a .wav file, and run it in Firefox, it works perfectly.

        <style type="text/css">
          #content { clear: both; width: 60%; }
          .player_control { float: left; margin-right: 5px; height: 20px; }
          #player { height: 22px; }
          #duration { width: 400px; height: 15px; border: 2px solid #50b; }
          #duration_background { width: 400px; height: 15px; background-color: #ddd; }
          #duration_bar { width: 0px; height: 13px; background-color: #bbd; }
          #loader { width: 0px; height: 2px; }
          .style1 { height: 35px; }
        </style>

        <script type="text/javascript">
          var audio_duration;
          var audio_player;

          function pageLoaded() {
            audio_player = $("#aplayer").get(0);
            // get the duration
            audio_duration = audio_player.duration;
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
          }

          function update() {
            var dur = audio_player.duration;
            var time = audio_player.currentTime;
            var fraction = time / dur;
            var wrapper = document.getElementById("duration_background");
            var new_width = wrapper.offsetWidth * fraction;
            document.getElementById("duration_bar").style.width = new_width + "px";
            $('#currentTime').text(formatTimeSeconds(audio_player.currentTime));
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
          }

          function formatTimeSeconds(time) {
            var minutes = Math.floor(time / 60);
            var seconds = "0" + (Math.floor(time) - (minutes * 60)).toString();
            if (isNaN(minutes) || isNaN(seconds)) {
              return "0:00";
            }
            var Strseconds = seconds.substr(seconds.length - 2);
            return minutes + ":" + Strseconds;
          }

          function playClicked(element) {
            var newdisplay;
            if (audio_player.paused) {
              audio_player.play();
              newdisplay = "||";
            } else {
              audio_player.pause();
              newdisplay = ">";
            }
            $('#totalTime').text(formatTimeSeconds(audio_player.duration));
            element.value = newdisplay;
          }

          function trackEnded() {
            // FIX: the button's id is "playButton"; the original looked up
            // a non-existent "playControl" element here
            document.getElementById("playButton").value = ">";
          }

          function durationClicked(event) {
            var clickoffset = event.clientX - event.currentTarget.offsetLeft;
            var percent = clickoffset / event.currentTarget.offsetWidth;
            document.getElementById("aplayer").currentTime = percent * audio_duration;
          }

          function Progress(evt) {
            // FIX: Chrome does not populate evt.loaded / evt.total on media
            // progress events (that was pre-standard Firefox behaviour);
            // fall back to the element's buffered time ranges
            var fraction;
            if (evt.loaded && evt.total) {
              fraction = evt.loaded / evt.total;
            } else if (audio_player.buffered.length > 0 && audio_player.duration) {
              fraction = audio_player.buffered.end(audio_player.buffered.length - 1)
                         / audio_player.duration;
            } else {
              return;
            }
            $('#progress').val(Math.round(fraction * 100));
            var width = $('#duration_background').css('width');
            $('#loader').css('width', fraction * width.replace("px", ""));
          }

          function getPosition(name) {
            var obj = document.getElementById(name);
            var leftValue = 0;
            while (obj) {
              leftValue += obj.offsetLeft;
              obj = obj.offsetParent;
            }
            return leftValue;
          }

          function SetValues() {
            var xPos = xMousePos; // from MousePosition.js
            var divPos = getPosition("duration_background");
            var divWidth = xPos - divPos;
            var Totalwidth = $('#duration_background').css('width').replace("px", "");
            audio_player.currentTime = divWidth / Totalwidth * audio_duration;
            $('#duration_bar').css('width', divWidth);
          }
        </script>
        </head>

        <script type="text/javascript" src="js/MousePosition.js"></script>
        <body onLoad="pageLoaded();">
          <table>
            <tr>
              <td valign="bottom">
                <input id="playButton" type="button" onClick="playClicked(this);" value=">"/>
              </td>
              <td colspan="2" class="style1" valign="bottom">
                <div id="player">
                  <div id="duration" class="player_control">
                    <div id="duration_background" onClick="SetValues();">
                      <div id="loader" style="background-color: #00FF00; width: 0px;"></div>
                      <div id="duration_bar" class="duration_bar"></div>
                    </div>
                  </div>
                </div>
              </td>
            </tr>
            <tr>
              <td> </td>
              <td><span id="currentTime">0:00</span></td>
              <td align="right"><span id="totalTime">0:00</span></td>
            </tr>
          </table>
          <audio id="aplayer" src="<%=getDownloadLink() %>" type="audio/ogg; codecs=vorbis"
                 onProgress="Progress(event);" onTimeUpdate="update();" onEnded="trackEnded();">
            <b>Your browser does not support the <code>audio</code> element.</b>
          </audio>
        </body>


  • Android RTSP coding problem

    - by NetApex
    I have Googled my butt off trying to find whether there is a surefire way to make RTSP work. I have a radio station that I listen to that streams via RTSP; of course, by default Android doesn't want to play it. If I pop the URL into yourmuze.fm and create a station there, it lets me stream it to my phone. After checking how that works, I found that it streams to the phone via RTSP too! So obviously something is amiss. What makes one stream work and the other not? This is the stream I am attempting: rtsp://wms2.christiannetcast.com/yes-fm. It is an audio stream, so I would be thrilled to have most people's problem of "it only does audio and not video." When yourmuze.fm streams, DDMS states that it brings up MovieView to play the audio, if that helps at all.


  • Embedding wav files in AS3 Flash/Flex project?

    - by aaaidan
    The Flash IDE is capable of embedding many types of uncompressed sound files, including WAV, and offers optional compression when publishing. However, the [Embed] tag only seems to allow embedding of MP3 files. Is it truly impossible to embed an uncompressed WAV file, or am I missing some magic, undocumented mimeType? I was hoping for something like:

        [Embed(source="../../audio/wibble.wav", mimeType="audio/wav")]

    ...but I get "no transcoder registered for mimeType 'audio/wav'". It's possible to embed a WAV or other format as an octet-stream and parse it at runtime, but that's pretty heavy-handed, I think. I'm surprised that even though the Flash IDE can embed uncompressed sound data, [Embed] cannot, given that the SWF spec can contain uncompressed sound data. Any takers?


  • Streaming server required with JW Player?

    - by Aaron
    Currently, a site I developed plays MP3 files directly in JW Player, using the file attribute and public URLs to the MP3 files. This is now an issue for the client for legal reasons: they need the audio streamed so that users can't open their cache and nab the files directly after downloading. The JW Player site has a bunch of examples for streaming video, but nothing for audio. Is it possible to stream audio files with JW Player, and do we have to pay a lot of money for a streaming provider? Is it possible to do on the local PHP server?


  • How to do a sample rate conversion in Windows (and OSX)

    - by Paperflyer
    I am about to write an audio file converter for my side job at the university. As part of this I need sample rate conversion. However, my professor said it would be pretty hard to write a sample rate converter that is both good quality and fast. In my research on the subject I found some functions in the OSX CoreAudio framework that can do sample rate conversion (AudioConverter.h); after all, an OS has to have some facility to do that for its own audio stack. Do you know of a similar method for C/C++ and Windows that is either part of the OS or open source? I am pretty sure this functionality exists within DirectX Audio (XAudio2?), but I seem to be unable to find a reference to it in the MSDN library.
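    For illustration, a minimal linear-interpolation resampler in C - well below the quality of a proper windowed-sinc or polyphase design (libsamplerate is the usual open-source reference, on Windows and elsewhere), but it shows the core idea of stepping through the input at the ratio of the two rates:

        #include <stddef.h>

        /* Resample mono float samples from in_rate to out_rate by linear
         * interpolation. Returns the number of output samples written.
         * out must hold roughly n_in * out_rate / in_rate + 1 samples. */
        size_t resample_linear(const float *in, size_t n_in,
                               double in_rate, double out_rate, float *out)
        {
            double step = in_rate / out_rate; /* input samples per output sample */
            double pos = 0.0;
            size_t n_out = 0;

            while ((size_t)pos + 1 < n_in) {
                size_t i = (size_t)pos;
                double frac = pos - (double)i;
                out[n_out++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
                pos += step;
            }
            return n_out;
        }

    A production converter also low-pass filters before decimating to avoid aliasing; that filter is exactly the part that makes "good quality and fast" genuinely hard, which is presumably what the professor meant.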


  • Can anyone give me a sample DSP script in C/C++

    - by Andrew
    I'm working on an (audio) DSP project and am wondering if there are any sample (open source) DSP examples written in C or C++ for my MSP430 chip. I just want something as a guideline so I can write my own program using the ADC and DAC on my board for sampling. http://focus.ti.com/docs/toolsw/folders/print/msp-exp430f5438.html - that's my board, the MSP430F5438 Experimenter Board; from what I've heard it can run DSP code via the USB connection with the computer. I'm using CCS (Code Composer Studio, from TI) and Octave/MATLAB. Any example DSP code or sites that will help me create my own would be appreciated. What I'm trying to do: take a partial audio (sampled) track -- Nyquist-rate sampling -- over- and undersampling -- reconstruction of the audio track.
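    As a generic starting point (portable C, not TI example code - the real ADC12/DAC12 register setup is in the MSP430F5438 code examples that ship with CCS), here is the kind of small fixed-point kernel you would call per sample from the ADC interrupt on a 16-bit MCU:

        #include <stdint.h>

        #define TAPS 8  /* 8-tap moving average: a cheap low-pass filter */

        /* Process one ADC sample, return the filtered value. Integer-only,
         * so it runs fine without a hardware multiplier or FPU. */
        int16_t fir_moving_average(int16_t x)
        {
            static int16_t delay[TAPS];   /* circular delay line */
            static uint8_t idx = 0;
            static int32_t acc = 0;       /* running sum of the window */

            acc += x - delay[idx];        /* slide window: add new, drop oldest */
            delay[idx] = x;
            idx = (idx + 1) & (TAPS - 1); /* TAPS must be a power of two */
            return (int16_t)(acc / TAPS); /* a shift, for power-of-two TAPS */
        }

    Feed it from the ADC ISR and write the result to the DAC; the over-/undersampling experiments then mostly reduce to changing the timer that triggers conversions.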


  • Determining the magnitude of a certain frequency on the iPhone

    - by eagle
    I'm wondering what's the easiest/best way to determine the magnitude of a given frequency in a sound. It's my understanding that an FFT function will return the magnitudes of all frequencies in a signal. I'm wondering if there is any shortcut I could use if I'm only concerned about one specific frequency. I'll be using the iPhone mic to record the audio; my guess is that I'll use Audio Queue Services for recording, since I don't need to record the audio to a file. I'm using SDK 4.0, so I can use any of the functions defined in the Accelerate framework (e.g. the FFT functions) if needed. Update: I updated the question to be clearer, as per Conrad's suggestion.
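    There is a classic shortcut for exactly this case: the Goertzel algorithm, which computes the magnitude of a single frequency bin at a fraction of an FFT's cost. A minimal sketch in plain C (not Accelerate; block size and sample rate are whatever the recording callback delivers):

        #include <math.h>
        #include <stddef.h>

        /* Magnitude of one target frequency in a block of n samples,
         * via the Goertzel algorithm. fs = sample rate in Hz. */
        double goertzel_magnitude(const float *samples, size_t n,
                                  double target_hz, double fs)
        {
            double k = floor(0.5 + (n * target_hz) / fs); /* nearest bin */
            double omega = 2.0 * M_PI * k / (double)n;
            double coeff = 2.0 * cos(omega);
            double s_prev = 0.0, s_prev2 = 0.0;

            for (size_t i = 0; i < n; i++) {
                double s = samples[i] + coeff * s_prev - s_prev2;
                s_prev2 = s_prev;
                s_prev = s;
            }
            /* squared magnitude of the bin, then the square root */
            return sqrt(s_prev * s_prev + s_prev2 * s_prev2
                        - coeff * s_prev * s_prev2);
        }

    Audio Queue Services hands you buffers in its callback, which can be passed straight into a routine like this - no file required.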


  • feature extraction from acoustic signals

    - by Dolphin
    Hi everyone, it's been a while. I found APIs in Java for extracting features from acoustic audio files and from symbolic (MIDI) files separately. But now I have a problem mapping low-level WAV audio features to high-level MIDI features, i.e. I need to write the extracted WAV audio features out in MIDI form, and I cannot think of anything even close to it. Can someone please give me some insight into how to approach this? I greatly appreciate your responses. Thanks in advance.


  • How to stream a WAV file?

    - by jonasb
    I'm writing an app where I record audio and upload the audio file over the web. In order to speed up the upload I want to start uploading before I've finished recording. The file I'm creating is a WAV file. My plan was to use multiple data chunks. So instead of the normal encoding (RIFF, fmt , data) I’m using (RIFF, fmt , data, data, ..., data). The first issue is that the RIFF header wants the total length of the whole file, but that is of course not known when streaming the audio (I’m now using an arbitrary number). The other problem is that I'm not sure if it's valid since Audacity doesn't recognise the file, and Windows Media Player opens the file but plays only a very small part. I've been reading WAV specs but haven’t found an answer. Any suggestions?
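    The multiple-data-chunk plan is probably what Audacity and Windows Media Player are choking on: most WAV parsers expect a single data chunk and take its declared length literally. A common workaround - a sketch, not a guarantee for every player - is to keep one data chunk, write placeholder sizes up front, stream the PCM, and patch the two length fields when recording ends (or fix them up server-side once the final size is known):

        #include <stdio.h>
        #include <stdint.h>

        static void write_u32le(FILE *f, uint32_t v) {
            uint8_t b[4] = { v & 0xff, (v >> 8) & 0xff,
                             (v >> 16) & 0xff, (v >> 24) & 0xff };
            fwrite(b, 1, 4, f);
        }
        static void write_u16le(FILE *f, uint16_t v) {
            uint8_t b[2] = { v & 0xff, (v >> 8) & 0xff };
            fwrite(b, 1, 2, f);
        }

        /* Canonical 44-byte PCM WAV header with placeholder sizes;
         * stream samples after it, then call wav_patch_sizes() at the end. */
        void wav_write_header(FILE *f, uint32_t rate, uint16_t ch, uint16_t bits)
        {
            uint16_t block = ch * bits / 8;
            fwrite("RIFF", 1, 4, f);
            write_u32le(f, 0xFFFFFFFF);   /* placeholder: file size - 8 */
            fwrite("WAVEfmt ", 1, 8, f);
            write_u32le(f, 16);           /* fmt chunk size (PCM) */
            write_u16le(f, 1);            /* audio format: PCM */
            write_u16le(f, ch);
            write_u32le(f, rate);
            write_u32le(f, rate * block); /* byte rate */
            write_u16le(f, block);        /* block align */
            write_u16le(f, bits);
            fwrite("data", 1, 4, f);
            write_u32le(f, 0xFFFFFFFF);   /* placeholder: data byte count */
        }

        void wav_patch_sizes(FILE *f, uint32_t data_bytes)
        {
            fseek(f, 4, SEEK_SET);  write_u32le(f, 36 + data_bytes); /* RIFF size */
            fseek(f, 40, SEEK_SET); write_u32le(f, data_bytes);      /* data size */
        }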


  • How to use files/streams as source/sink in PulseAudio

    - by Nilesh
    I'm a PulseAudio noob, and I'm not sure I'm even using the correct terminology. I've seen that PulseAudio can perform echo cancellation, but it needs a source and a sink to filter from, plus a new source and sink. I can provide my mic and my audio-out as the source and sink, right? Now, here's my situation: I have two video streams, say RTMP streams, or consider two FLV files. At any given moment, stream X is the input stream coming from another computer's webcam+mic, and stream Y is the output stream that I'm sending (coming from my computer's webcam+mic). Question: back to the first paragraph - I don't want to use my mic and my audio-out; instead, I want to use these two "input" and "output" streams as my source and sink, so to speak (I'll use something like Xuggler to extract just the audio from X and Y). It may be a strange question, and I have my reasons for doing this strange thing - I need to experiment and verify the results to see.
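    A sketch of the usual PulseAudio mechanism for this (the modules are real, but the wiring below is untested guesswork for this particular setup): module-pipe-source and module-pipe-sink create a source and a sink backed by FIFOs on disk, which a decoder like Xuggler can write into and read from; module-echo-cancel is then pointed at them instead of at the real mic and speakers:

        # FIFO-backed source (far-end audio you feed in) and sink (audio you capture)
        pactl load-module module-pipe-source source_name=stream_x file=/tmp/x.fifo format=s16le rate=44100 channels=2
        pactl load-module module-pipe-sink   sink_name=stream_y   file=/tmp/y.fifo format=s16le rate=44100 channels=2

        # run echo cancellation between the two pipes
        pactl load-module module-echo-cancel source_master=stream_x sink_master=stream_y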


  • C# - .WAV Playback Randomly High Pitch

    - by Nate Shoffner
    For some reason, when a WAV file is played back using the snippet below, it randomly plays back screwy, like a high-pitched noise. It doesn't happen all the time, just randomly, and it seems to happen more often when the sound is played back more frequently. The WAV properties and the code snippet I am using are below.

    WAV properties:

        Bit rate          - 750 kbps
        Audio sample size - 16 bit
        Channels          - 1 (mono)
        Audio sample rate - 44 kHz
        Audio format      - PCM

    Snippet:

        System.Media.SoundPlayer myPlayer =
            new System.Media.SoundPlayer(Captcha.Properties.Resources.sound1);
        myPlayer.Play();

    Is this because of the way I am playing the file or the file itself? Thank you.


  • KDE: no sound from Phonon or most KDE apps, but mplayer, Skype and Firefox are OK

    - by zeonglow
    Can somebody tell me why I cannot get any sound from most of KDE 4? I'm running a Gentoo box and I'm in both the 'audio' and 'video' groups. I get sound from mplayer (but not smplayer), Firefox and Skype, but nothing else. I can't get the test sound to play from the settings window, yet Phonon does not complain about broken sound cards when I start up. I have checked with KMix: everything is unmuted (and some apps do produce sound).


  • Linux: how to use the jellyfish meter from JACK Meterbridge?

    - by klox
    Dear all, I have installed Meterbridge, but I only need the jellyfish meter from that package. I changed the Meterbridge launch command to:

        /usr/bin/meterbridge -t jf alsa_pcm:playback_1 alsa_pcm:playback_2

    My problem: I can open the jellyfish window, but it shows no signal from the input jack. What should I do? Have you ever tried this? Someone told me to set up the JACK Audio Connection Kit, but I don't understand how to do that, because I'm new to this.


  • Flash/Flex: play embedded AAC audio?

    - by aaaidan
    I'm pretty sure of the answer, but I just wanted to check with you all. Is it possible to play an embedded AAC file in Flash/Flex somehow? I know you can play back embedded MP3 files, but I hear that you can't do that with AAC. Anyone know any sneaky ways to get around this? By way of illustration, here's some code:

        [Embed(source='../../audio/music02.m4a', mimeType="audio/aac")]
        private static const __ExampleMp4File:Class;

        public var myMp4Sound:Sound = new __ExampleMp4File();

        public function EmbeddedAudioTest() {
            myMp4Sound.play();
        }


  • How do I "link narrations" in PowerPoint 2010 Beta?

    - by Zack Peterson
    I can record narration with PowerPoint 2010, but it seems that it will only embed it into the presentation file. Older versions of PowerPoint allowed the audio to be saved as external sound files. I'd like to perform noise-reduction and minor editing outside of PowerPoint. Has Microsoft removed the "link narrations" option from PowerPoint 2010?


  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and whether it can be done. I currently have the following equipment: a JVC VCR (quite old) with built-in surround sound (i.e. it has several speaker outputs, which I believe are 5.1, connected to speakers in every corner of the room); a computer with SPDIF optical output; and a new flat-screen TV (with built-in HDMI). I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video, always through all the speakers (5.1) and with the maximum sound quality. Currently the computer plays sound only through the front speaker (I connect one output to the onboard PC audio input) and the quality is really bad. As a side note, the computer's video runs over S-Video (old school), and the picture quality, as you would imagine, is really bad on the new big LCD screen.

    My main goals are: to upgrade the picture with a new video card that supports HDMI (my TV has HDMI), and to buy a SPDIF optical cable, connecting one end to the VCR's SPDIF input and the other end to the PC's output. This is what I've researched so far in theory, and I came up with several questions: in this case, with the SPDIF cable connected and everything configured in Windows to allow 5.1, will everything I play be "converted" to, or played through, all of my speakers? (I read this forum post.) I know that for this setup to play from all the speakers, the content/audio source has to be 5.1; my question is whether there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said "converted" there). I also know that HDMI cables carry digital sound. Is there a way I can use only the HDMI cord to the TV and still get sound through the VCR? (I'm not too sure about this; I would have to disable the TV's speakers and use the VCR surround as the default, but I have no clue whether this can be done.)

    Update: the ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers no matter what content I play? (Do I need a newer sound card, like a Creative Sound Blaster with said technology?) Thanks!


  • Can I prevent tv-out from disconnecting when monitor sleeps?

    - by Damon
    I have my TV connected to my computer (Windows 7) through an HDMI cable that also carries the sound. I like to put music on the TV, but the problem is that when my monitor back in my office goes to sleep, it switches off the HDMI connection completely and the audio reverts to my PC. I would like a way to stop this from happening without turning off sleep mode entirely. I have an ATI Radeon 5700.


  • using addListener with WordPress audio player

    - by Jacob
    Hi, I'm trying to add a listener for the stop event in the WordPress audio player, but usage seems to be undocumented. I'm hoping someone who knows a little Flash can look at the code and tell me how it works. In the code at http://tools.assembla.com/1pixelout/browser/audio-player/trunk/source/classes/Application.as I see a snippet with this:

        ExternalInterface.call("AudioPlayer.onStop", _options.playerID);

    I was hoping that would let me capture the event in JavaScript with ("player" is the ID of my player):

        AudioPlayer.addListener("player", "AudioPlayer.onStop", function() {
            alert('stopped');
        });

    But my JavaScript function never seems to get called.

