Search Results

Search found 6501 results on 261 pages for 'audio conversion'.

  • Enhanced Podcasts and MPMoviePlayerViewController

    - by Ben Robinson
    Hi, this is a bit of an odd/specific one - possibly a bug? I'm using MPMoviePlayerViewController to play a variety of files, including Enhanced Podcasts - these are audio files, but with a slideshow of images, often created using GarageBand. Until (I think) iOS 3.2 they weren't supported at all; now they are and play fine in the iPod app, but in my app the slideshow doesn't start: the full-screen movie player opens and the audio begins, but all I see is the QuickTime logo. If I scrub the track the pictures appear - and will continue to play correctly - but I see nothing if I don't scrub! Any ideas? On a related note, these files also include a small rectangular button containing an (i) icon on the right-hand side - does anybody know what it is or what it should do? It does nothing for me!

    Read the article

  • Why does use of H264 in sender/receiver pipelines introduce a HUGE delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get an enormous delay, up to 10 seconds, to transmit video from my machine to... my machine! This is unacceptable for my goals, and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex.

    This is the sender pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    The receiver pipeline:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    These pipelines, a combination of H263 and Speex, work well enough: I snap my fingers near the camera and microphone, and I see the movement and hear the sound at the same time.

    Then I changed the pipelines to use H264 along the video path. The sender becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04; the delays there were in the range of 2-3 seconds, AFAIR.

    Read the article

  • How do I stop Safari from caching my Servlet response?

    - by Cliff
    I'm having trouble testing a web app with Safari. My app returns wave audio data. The problem happens when I change the application and hit it again from Safari: Safari caches the original response, so no matter how many times I hit refresh it seems like I haven't updated anything. I can almost get around this using force refresh with Firefox, but because I'm having trouble generating the wave headers using the javax.sound API, Firefox only plays the first second of audio returned. A few weeks ago I tried setting the HTTP header in my servlet to prevent caching, but I don't think I was setting it correctly. (What is the header for browser cache control?) This is becoming a real pain and I'm looking for any ideas, comments, or alternative approaches. I'm getting ready to try again, but I figured I'd ask here in the interim to see if someone can provide help.
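
    For reference, the usual way to defeat browser caching from a servlet is to set the standard no-cache response headers; a minimal sketch (the class name and content type are illustrative, not from the question):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet showing the standard cache-defeating headers.
        public class WaveServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // HTTP/1.1: forbid caching and storing of the response.
                resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
                // HTTP/1.0 fallback for older browsers and proxies.
                resp.setHeader("Pragma", "no-cache");
                // Expire immediately; a date in the past also works.
                resp.setDateHeader("Expires", 0);
                resp.setContentType("audio/x-wav");
                // ... write the wave data to resp.getOutputStream() ...
            }
        }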

    Read the article

  • iSightAudio.plugin error when playing video using MediaPlayer

    - by Elisabeth
    I am working on creating a simple iPhone app that plays a movie via URL. When I Build & Run to test in the simulator, it works, but as soon as I start playing the movie I get the following messages in the console:

        [1757:4b03] Cannot find executable for CFBundle/CFPlugIn 0x820ffe0 </Library/Audio/Plug-Ins/HAL/iSightAudio.plugin> (not loaded)
        [1757:4b03] Cannot find function pointer iSightAudioNewPlugIn for factory 9BE7661E-8AEF-11D7-8692-000A959F49B0 in CFBundle/CFPlugIn 0x820ffe0 </Library/Audio/Plug-Ins/HAL/iSightAudio.plugin> (not loaded)

    I don't get this error in other programs, so I assume it has something to do with this specific program, which uses MediaPlayer.framework. Does anyone know what is causing this problem and how to fix it? Thank you

    Read the article

  • How do I get callgrind to dump source line information?

    - by Jeremybub
    I'm trying to profile a shared library on GNU/Linux which does real-time audio processing, so performance is important. I run another program which hooks it up to the audio input and output of my system, and profile that with callgrind. Looking at the results in KCacheGrind, I get great information about which functions are taking up most of my time. However, it won't let me look at the line-by-line information; instead it says I need to compile with debugging symbols and run the profiling again. The program I am profiling is not compiled with debug symbols, but the library is. And I know this because, interestingly, source code annotations for cachegrind work fine. When I run callgrind, it says the default is to dump source line information, but it just isn't doing that. Is there some way I can force it to, or figure out what's stopping it?

    Read the article

  • How to disable UI control based on domain object's state?

    - by Subb
    Here's my problem: I have a somewhat complex domain object which, depending on its state, responds to certain actions. I think the State pattern is pretty much the solution for that. However, I need to display which actions are possible at any moment in the UI. For example: the domain object is an audio player. Some songs can't be skipped (like ads), so I need to disable the "next" and "previous" buttons in the GUI so the user has some kind of feedback about which actions he can execute. I've looked at Swing's Action class (note: this is not a Java project), but I think I would need to keep every Action in my domain object class (the audio player) so it can enable or disable them depending on its own state (thus affecting the UI). Is that the way to do it?
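
    Even though this isn't a Java project, the usual alternative is easiest to show in Swing terms: keep the Actions out of the domain object and have them observe it instead. A minimal sketch, assuming hypothetical AudioPlayer and PlayerListener types (not from the question):

        import java.awt.event.ActionEvent;
        import javax.swing.AbstractAction;

        // Hypothetical domain-side interfaces: the player only reports what
        // it can do in its current state and knows nothing about the UI.
        interface PlayerListener { void stateChanged(); }

        interface AudioPlayer {
            boolean canSkip();
            void next();
            void addListener(PlayerListener l);
        }

        class SkipNextAction extends AbstractAction {
            private final AudioPlayer player;

            SkipNextAction(AudioPlayer player) {
                super("Next");
                this.player = player;
                // Re-query the domain state whenever it changes; any button
                // bound to this Action follows its enabled state automatically.
                player.addListener(() -> setEnabled(player.canSkip()));
                setEnabled(player.canSkip());
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                player.next();
            }
        }

    This keeps the state logic in the domain object (which only answers "can I skip right now?") while the UI-facing Actions live with the UI.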

    Read the article

  • Signal amplitude against time (java)

    - by wsr74ws84
    Hi everyone, I'm racking my brain trying to solve a knotty problem (at least for me). While playing an audio file (using Java) I want the signal amplitude to be displayed against time. I mean, I'd like to implement a small panel showing a sort of oscilloscope: the audio signal viewed in the time domain, where the vertical axis is amplitude and the horizontal axis is time. Does anyone know how to do it? Is there a good tutorial I can rely on? Since I know very little about Java, I wish someone could help me. Thanks in advance.
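
    A minimal sketch of the drawing side, assuming 16-bit little-endian mono PCM (a real implementation should inspect the stream's AudioFormat and convert where necessary):

        import java.awt.Graphics;
        import java.io.File;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.swing.JPanel;

        // Reads PCM samples and paints amplitude against time, scaled to
        // the panel: x maps to sample index, y to sample value.
        class ScopePanel extends JPanel {
            private int[] samples = new int[0];

            void load(File wav) throws Exception {
                AudioInputStream in = AudioSystem.getAudioInputStream(wav);
                byte[] raw = in.readAllBytes();
                samples = new int[raw.length / 2];
                for (int i = 0; i < samples.length; i++) {
                    // assemble little-endian 16-bit samples
                    samples[i] = (short) ((raw[2 * i + 1] << 8) | (raw[2 * i] & 0xff));
                }
                repaint();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int w = getWidth(), h = getHeight();
                if (samples.length == 0 || w < 2) return;
                for (int x = 1; x < w; x++) {
                    // map pixel columns to sample indices
                    int i0 = (x - 1) * samples.length / w;
                    int i1 = x * samples.length / w;
                    int y0 = h / 2 - samples[i0] * (h / 2) / 32768;
                    int y1 = h / 2 - samples[i1] * (h / 2) / 32768;
                    g.drawLine(x - 1, y0, x, y1);
                }
            }
        }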

    Read the article

  • getAssetFileDescriptor from ZipResourceFile merges all MP3s in MediaPlayer [SOLVED]

    - by Jordi
    I have a program with an expansion file that stores 4 MP3s in an .obb file (a zip without compression). I can retrieve the data, but instead of getting the audio file I asked for, it merged ALL the audio files into the same AssetFileDescriptor. SOLVED, with the fixes below.

    The support class:

        public AssetFileDescriptor getAudio() {
            ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(c, 21, 21);
            AssetFileDescriptor afd = null;
            if (take == 1) {
                afd = expansionFile.getAssetFileDescriptor("file01.mp3");
            } else if (take == 2) {
                afd = expansionFile.getAssetFileDescriptor("file02.mp3");
            }
            // more else if ...
            return afd;
        }

    In the MediaPlayer class:

        AssetFileDescriptor fd = Llistat.getInstance().getAudio();
        mPlayer.setDataSource(fd.getFileDescriptor(), fd.getStartOffset(), fd.getLength());
        mPlayer.prepare();
        fd.close();

    My problem was that I was directly returning and using a FileDescriptor, when I needed the AssetFileDescriptor so I could pass its start offset and length.

    Read the article

  • Onclick starts gif animation and .mp3, how to sync across browsers

    - by user2958163
    So I am using a text-based jPlayer (http://jplayer.org/latest/demo-04/) that I want to sync with a GIF animation. On click, the text link does two things: 1. feeds jPlayer an MP3, and 2. triggers an animation (via SwapImage). It is important for these two to start at the same time. Right now this works perfectly in Chrome/Firefox, but in IE and mobile browsers the audio lags considerably. I have tried with the audio preloaded (it is a small 40K MP3) and it makes no difference. I don't think it's a bandwidth problem, because the problem is the same on repeat clicks. Any pointers on how I can resolve this?

    Read the article

  • AVMutableComposition insertEmptyTimeRange

    - by smartfaceweb
    I have created an AVMutableComposition and tried to use insertEmptyTimeRange: to generate 1 minute of silence. This doesn't appear to be working. I have also tried creating an AVMutableCompositionTrack using addMutableTrackWithMediaType:preferredTrackID: and then calling insertEmptyTimeRange: on the track, still with no success. To give some background on my app: I allow users to add audio samples to a timeline and then play back or export, and this works really well using the AV classes. The problem is that I need to make sure that the audio is exactly 1 minute long (for example). Regardless of the info about my specific app above, is it possible to insert an empty time range into a composition or composition track?

    Read the article

  • html5 video player with simplest controls (only play and pause)

    - by mathiregister
    Hi guys, somehow there are really few tutorials out there for HTML5 video and audio playback. I simply want to embed video and audio files with customized controls. However, the controls should be fairly simple: I only need a play button; when clicked, play gets replaced by pause. That's all! However, I don't even know how to embed/display a video without the preload and controls attributes. If I only set the video element (without preload and controls), Firefox doesn't show anything at all, and Chrome shows a black window. I would love to be able to use jQuery to control the video's play and pause buttons. Maybe you have some little starting approach for me! Thank you very much!

    Read the article

  • Multimedia content in REST response (XML/JSON)

    - by Koushik
    In my thesis I need to test different architectures: a request to a REST web service developed using Apache CXF and Spring MVC, with MySQL as the back end serving references (a field in the database) to images, audio, and video files stored in the file system. In the response message, what is the best method to send the content to the client (another application using the service which I developed)? URI: http://www.filmservices.com/film/{id}. A client here is not the end user.

    1. Send the encoded hyperlinks (to where the content is stored in the file system) to the client, so that the client renders the response and displays it in the browser.
    2. Use Base64 to encode the message (image, audio, video) and send it to the client.

    My main concern is performance.
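
    A sketch of option 1, using hypothetical Spring MVC names (the controller, field names, and media URL are illustrative only; the Base64 option would instead inline the bytes as a string field in the same payload):

        import java.util.HashMap;
        import java.util.Map;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RestController;

        // Hypothetical controller: the response carries a URL to the media
        // file; the client dereferences it in a second request.
        @RestController
        class FilmController {
            @GetMapping("/film/{id}")
            Map<String, String> film(@PathVariable long id) {
                Map<String, String> body = new HashMap<>();
                body.put("title", "Example film");
                // Option 1: a hyperlink to the stored file, served statically.
                body.put("posterUrl", "http://www.filmservices.com/media/" + id + ".jpg");
                return body;
            }
        }

    The trade-off: links keep responses small and let the media be cached and streamed independently, while Base64 inflates the payload by roughly a third but saves the extra round trip.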

    Read the article

  • VLC helper protocol on Mac OS X

    - by Preben
    Hey everybody, I am trying to add a vlc:// helper protocol on Mac OS X. To register the protocol, I have unsuccessfully been playing around with the MoreInternet PrefPane. What I want to have in my browser is a vlc://someressource.com/audio.mp3 link, which should launch VLC and add http://someressource.com/audio.mp3 to the playlist (this works fine on Windows, and also on Linux if I remember correctly). Maybe even just have vlc://http:// so that HTTPS would also be supported. I have no idea how to achieve this. I tried making a bash script, which MoreInternet would not accept. Then I tried making an application through Automator with my bash script embedded. That did not work either, as the Automator application has no "creator code" - whatever that is! Can any of you guys point me in the right direction? Thanks in advance!

    Read the article

  • What does this error mean when using OpenAL in the iPhone Simulator?

    - by mystify
    I'm getting this in the console when creating my OpenAL sources and buffers:

        Cannot find executable for CFBundle/CFPlugIn 0xf530d0 </Library/Audio/Plug-Ins/HAL/Hear.plugin> (not loaded)
        2010-05-05 17:11:13.934 Testproj[43173:207] Cannot find function pointer HearCFPlugInFactory for factory 5268FAAB-0147-4272-93FD-4D60A2433C1C in CFBundle/CFPlugIn 0xf430d0 </Library/Audio/Plug-Ins/HAL/Hear.plugin> (not loaded)

    However, the sounds play nicely. I think HAL is not available on the iPhone; it's just on Mac OS X, right? Do you guys also get this error in the simulator when using OpenAL?

    Read the article

  • Controlling Windows Media Player through PHP / batch files

    - by Duroth
    I'm currently writing a tiny web app for my HTPC (actually, a PC serving as both a media player and a web/file server) that will allow me to remotely control the playing of audio, without having to turn on my TV just to switch songs. I'm using Windows Media Player as my audio player of choice, and I thought I could control it through PHP's COM class. Unfortunately, I've not been able to find any documentation or examples on controlling WMP through this interface. Can anyone point me in the right direction here? A second (and much less preferable) solution would be to use PHP's exec() call to start batch files that, in turn, control WMP.

    Read the article

  • Ripping a CD to mp3 in C# - third party component or api out there?

    - by Jonathan Williamson
    We're working on a project that requires ripping audio tracks from CDs to MP3s (ideally also retrieving the track information from CDDB or similar). Some more background information: various music labels send us CDs of music, which we then deliver to people via an online delivery system. We're looking at automating the process of converting those CDs into MP3s, with full track information where possible. We want to produce a simple desktop application that allows a member of editorial staff to set up the information about the new music we receive. To streamline the process we'd like to include ripping the audio and retrieving the track information.

    Read the article

  • Local Live Quicktime Video Broadcast, latency?

    - by Snowwire
    I'm looking into the feasibility of using a local server to distribute live video of a conference to delegates in the same room. They would still hear the live audio coming from the speaker, so only the video would be streamed. I was considering Darwin Streaming Server (there are a lot of iPhone users to support) and encoding with H.264. My main concern is latency across the network. Even with everything running locally, would there be lip-sync issues between the live audio and the 'live' video stream? It feels like there will be problems, given the encoding, broadcasting, and decoding to be completed, but I've never done anything like this before, so I thought I would check. Thanks

    Read the article

  • android settings (provider)?

    - by lorenzoff
    Hello to all, I'd like to substitute the default preferences settings activity (a vertical list of icon-title pairs) with a grid view that better fits a large landscape display. For, let's say, the audio preferences, I add an icon to a GridView and, on the item click event, I use this code:

        startActivityForResult(new Intent(android.provider.Settings.ACTION_SOUND_SETTINGS), 0);

    Is it possible to get the default settings icon? If I could use a hypothetical getDrawable(android.R.drawable.default_icon_for_audio_preferences_settings), I could keep the default icon in my preferences grid too. Does a preferences provider exist? Looking at my development device's preferences I can see Wireless, Call, Audio, Display, and so on. Do I have to add the same preferences to my grid because I know a priori of their existence, or is there a provider that can supply this array? Thanks in advance, L.

    Read the article

  • Button in App on iOS 4.2 won't work properly

    - by MatthiasC
    My app, written for the iPad, contains several UIButtons. One of them starts and stops an AVAudioPlayer: hitting the button once starts the player, hitting it again stops it; rinse and repeat. This all works nicely on iOS versions before 4.2. When installing the app on an iPad with iOS 4.2, the button turns the audio player on and off exactly once, then stops working properly: after hitting the button, it turns into its selected state, but it does not start the audio player (as it should), and hitting the button again doesn't return it to its default state, either. As previously said, it's all fine and dandy pre-4.2; the problem only arises on the most current OS version. Xcode 3.2.4, iPad with iOS 4.2.

    Read the article

  • how to use a wav file in eclipse

    - by AlphaAndOmega
    I've been trying to add audio to a project I've been working on. I found some code on here for HTML that is also supposed to work with a file, but it keeps saying:

        Exception in thread "main" javax.sound.sampled.UnsupportedAudioFileException:
            could not get audio input stream from input file
            at javax.sound.sampled.AudioSystem.getAudioInputStream(Unknown Source)
            at LoopSound.main(LoopSound.java:15)

    The code:

        import java.io.File;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;
        import javax.swing.JOptionPane;
        import javax.swing.SwingUtilities;

        public class LoopSound {
            public static void main(String[] args) throws Throwable {
                File file = new File("c:\\Users\\rabidbun\\Pictures\\10177-m-001.wav");
                Clip clip = AudioSystem.getClip();
                // getAudioInputStream() also accepts a File or InputStream
                AudioInputStream wav = AudioSystem.getAudioInputStream(file);
                clip.open(wav);
                // loop continuously
                clip.loop(Clip.LOOP_CONTINUOUSLY);
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        // A GUI element to prevent the Clip's daemon Thread
                        // from terminating at the end of main()
                        JOptionPane.showMessageDialog(null, "Close to exit!");
                    }
                });
            }
        }

    What is wrong with the code?
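
    For what it's worth, this exception usually means Java Sound can't decode the WAV's encoding (e.g. ADPCM or MP3-in-a-WAV container) rather than that the calling code is wrong; a quick way to inspect the file (path reused from the question):

        import java.io.File;
        import javax.sound.sampled.AudioFileFormat;
        import javax.sound.sampled.AudioSystem;

        // Prints the container's format description; if even this throws
        // UnsupportedAudioFileException, Java Sound cannot parse the file.
        public class CheckFormat {
            public static void main(String[] args) throws Exception {
                AudioFileFormat fmt = AudioSystem.getAudioFileFormat(
                        new File("c:\\Users\\rabidbun\\Pictures\\10177-m-001.wav"));
                System.out.println(fmt.getFormat());
            }
        }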

    Read the article

  • iPhone/iPad fatal error in C++ code produces no output in the log

    - by morgancodes
    I'm trying to move away from Objective-C to C++ for audio in my iPad programming, due to a few reports I've heard of Objective-C selectors sometimes causing audio glitches, so I'm starting to use pure C++ files. When a fatal error happens in one of the C++ files, I get no output in the log; the app just crashes. For example, if I do this in my C++ file:

        env = new ADSR();
        cout << "setting env to null\n";
        env = NULL;
        env->setSustainLevel(1);
        cout << "called function on non-initialized env\n";

    I get the following output:

        setting env to null

    After that, a method is called on NULL, which apparently kills the app, but absolutely nothing to that effect is reported. What do I need to do to have useful information logged when there's an error in my C++ code?

    Read the article

  • How to store matrix information in MySQL?

    - by dedalo
    Hi, I'm working on an application that analyzes music similarity. In order to do that I process audio data and store the results in txt files. For each audio file I create two files: one containing 16 values (each value can look like 2.7000023942731723) and the other containing 16 rows, with each row containing 16 values like the one previously shown. I'd like to store the contents of these two files in a table of my MySQL database. My table looks like:

        Name   varchar(100)
        Author varchar(100)

    In order to add the content of those two files, I think I need to use the BLOB data type:

        file1 blob
        file2 blob

    My question is how I should store this info in the database. I'm working with Java, where I have a double array containing the 16 values (for file1) and a matrix containing the file2 info. Should I process the values as strings and add them to the columns in my database? Thanks
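
    If you go the BLOB route, a minimal JDBC sketch (the table name "songs" is an assumption; the 16x16 matrix can be flattened row-major into 256 doubles and stored the same way in file2):

        import java.nio.ByteBuffer;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        // Packs a double[] into bytes and stores it in the file1 BLOB column.
        class FeatureDao {
            void save(Connection conn, String name, String author,
                      double[] vector) throws Exception {
                ByteBuffer buf = ByteBuffer.allocate(vector.length * Double.BYTES);
                for (double v : vector) {
                    buf.putDouble(v);
                }
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO songs (Name, Author, file1) VALUES (?, ?, ?)")) {
                    ps.setString(1, name);
                    ps.setString(2, author);
                    ps.setBytes(3, buf.array()); // stored as-is in the BLOB
                    ps.executeUpdate();
                }
            }
        }

    Reading it back is the mirror image: wrap the BLOB's bytes in a ByteBuffer and call getDouble() 16 (or 256) times. This avoids the precision loss and parsing overhead of storing the numbers as strings.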

    Read the article

  • Read data from an Android USB attachment

    - by Mark
    Is there any way to read data from an attachment through the USB port on an Android device? In particular, an EKG. Most of the work can be done by the device's hardware to simplify the output to a single number, a voltage reading. If it's not possible, what about modifying an accessory that can already communicate with an Android device? Thinking of devices that attach to Android phones: what about sending the data as an audio signal to be read as the microphone input from a headset, and then analyzing the audio signal to convert it to a number that can be used to display a value? Any ideas on how to make this work?
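
    For the audio-jack idea, a minimal Android sketch of sampling the microphone and reducing a buffer to one number (requires the RECORD_AUDIO permission; the class name and the mapping from amplitude to voltage are assumptions that depend on your hardware):

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        // Reads one buffer of 16-bit PCM from the mic input and returns
        // its mean absolute amplitude. Error handling omitted for brevity.
        class HeadsetSampler {
            double readAmplitude() {
                int rate = 44100;
                int bufSize = AudioRecord.getMinBufferSize(rate,
                        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
                AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        rate, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, bufSize);
                short[] buf = new short[bufSize / 2];
                rec.startRecording();
                int n = rec.read(buf, 0, buf.length);
                rec.stop();
                rec.release();
                long sum = 0;
                for (int i = 0; i < n; i++) {
                    sum += Math.abs(buf[i]);
                }
                return n > 0 ? (double) sum / n : 0;
            }
        }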

    Read the article

  • [c#] SoundPlayer.PlaySync stopping prematurely

    - by JeffE
    I want to play a wav file synchronously on the GUI thread, but my call to PlaySync is returning early (and prematurely stopping playback). The wav file is 2-3 minutes long. Here's what my code looks like:

        // in GUI code (event handler)
        // play first audio file
        JE_SP.playSound("example1.wav");

        // do a few other statements
        doSomethingUnrelated();

        // play another audio file
        JE_SP.playSound("example2.wav");

        // library method written by me, called in GUI code,
        // but located in another assembly
        public static int playSound(string wavFile, bool synchronous = true,
            bool debug = true, string logFile = "",
            int loadTimeout = FIVE_MINUTES_IN_MS)
        {
            SoundPlayer sp = new SoundPlayer();
            sp.LoadTimeout = loadTimeout;
            sp.SoundLocation = wavFile;
            sp.Load();
            switch (synchronous)
            {
                case true:
                    sp.PlaySync();
                    break;
                case false:
                    sp.Play();
                    break;
            }
            if (debug)
            {
                string writeMe = "JE_SP: \r\n\tSoundLocation = " + sp.SoundLocation +
                    "\r\n\t" + "Synchronous = " + synchronous.ToString();
                JE_Log.logMessage(writeMe);
            }
            sp.Dispose();
            sp = null;
            return 0;
        }

    Some things I've thought of are the load timeout, and playing the audio on another thread and then manually 'freezing' the GUI by forcing the GUI thread to wait for the duration of the sound file. I tried lengthening the load timeout, but that did nothing. I'm not quite sure of the best way to get the duration of a wav file without using code written by somebody who isn't me/Microsoft. I suppose it can be calculated, since I know the file size and all of the encoding properties (bit rate, sample rate, sample size, etc.) are consistent across all the files I intend to play. Can somebody elaborate on how to calculate the duration of a wav file using this info? That is, if nobody has an idea about why PlaySync is returning early.

    Of note: I encountered a similar problem in VB 6 a while ago, but that was caused by a timeout, which I don't suspect to be a problem here. Shorter (< 1 min) files seem to play fine, so I might decide to manually edit the longer files down and then play them separately with multiple calls.
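
    For the duration question: for uncompressed PCM it follows directly from the format fields. A sketch of the arithmetic (shown in Java here; it translates one-to-one to C#):

        // Duration of uncompressed PCM audio from the WAV header fields.
        // dataBytes is the size of the "data" chunk, not the whole file.
        static double wavDurationSeconds(long dataBytes, int sampleRate,
                                         int channels, int bitsPerSample) {
            int bytesPerSecond = sampleRate * channels * (bitsPerSample / 8);
            return (double) dataBytes / bytesPerSecond;
        }

        // Example: 31,457,280 data bytes at 44100 Hz, 16-bit stereo:
        // 44100 * 2 * 2 = 176,400 bytes/s  ->  31457280 / 176400 ≈ 178.3 s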

    Read the article
