Search Results

Search found 5304 results on 213 pages for 'audio streaming'.


  • Streaming a non-PCM WAV file to a Silverlight application

    - by Satumba
    Hi, I would like to allow users to play recorded WAV files stored on a server back to a Silverlight application as a client. I saw that there is a way to play a WAV file in Silverlight (here), but when I tried to implement it, I got an error playing the file because it is not in PCM format but encoded. The files I'm trying to play are encoded with a special encoder, so I thought the only way is to decode the WAV file on the server and stream it back to the client. The limitation is that the decoding has to happen in real time, because it is not reasonable to convert all the WAV files that exist up front. Is it possible to do this? Which streamer can I use? (Can Windows Media Services help here?) Does anybody have experience with such a scenario? Appreciate your help.

    Read the article

  • Wireshark doesn't recognise RTMP streams

    - by Andrew
    Hello! I found a few samples on the web on tracking RTMP (Real Time Messaging Protocol) with Wireshark, but it doesn't work for me. All RTMPT packets are rendered as basic TCP packets, like this:

        149 14.324999 85.115.xxx.xxx 192.168.1.20 TCP macromedia-fcs > 54557 [ACK] Seq=1 Ack=1452 Win=69 Len=0

    I'm using Wireshark 1.2.8 with all protocols installed, on Windows Vista. What can I do to fix it? Thx!

    Read the article

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer, then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end, then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
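    A minimal sketch of the fixed-buffer idea in Java's javax.sound.sampled (the question's code is C#; Java is used here only for illustration): each tone or silence element is rendered as exactly the right number of samples and written to the line, so element length is set by the sample count rather than by Thread.Sleep() and CPU load.

        import javax.sound.sampled.*;

        public class MorseTone {
            static final float RATE = 44100f;

            // Writes exactly 'ms' milliseconds of sine (silence when freq == 0).
            // Duration is fixed by the sample count, not by wall-clock timers.
            static void element(SourceDataLine line, double freq, int ms) {
                int n = (int) (RATE * ms / 1000);
                byte[] buf = new byte[2 * n]; // 16-bit mono, little-endian
                for (int i = 0; i < n; i++) {
                    short s = freq == 0 ? 0
                            : (short) (Math.sin(2 * Math.PI * freq * i / RATE) * 30000);
                    buf[2 * i] = (byte) s;
                    buf[2 * i + 1] = (byte) (s >> 8);
                }
                line.write(buf, 0, buf.length); // blocks while the line drains
            }

            public static void main(String[] args) throws LineUnavailableException {
                AudioFormat fmt = new AudioFormat(RATE, 16, 1, true, false);
                SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
                line.open(fmt);
                line.start();
                element(line, 700, 40);  // dit
                element(line, 0, 40);    // intra-character space
                element(line, 700, 120); // dah
                line.drain();
                line.close();
            }
        }

    Because silence is written into the stream as zero samples instead of being timed with Sleep(), the gaps come out as exact as the tones.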

    Read the article

  • Playing an arbitrary tone with Android.

    - by fiXedd
    Is there any way to make Android emit a sound of arbitrary frequency (meaning, I don't want to have pre-recorded sound files)? I've looked around and ToneGenerator was the only thing I was able to find that was even close, but it seems to only be capable of outputting the standard DTMF tones. Any ideas?
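    One common approach, sketched below under the assumption that android.media.AudioTrack is available (not tested against any particular Android version): synthesize the PCM samples for the frequency you want and hand them to an AudioTrack in static mode, so no pre-recorded file is needed.

        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;

        // Sketch: play 'ms' milliseconds of a sine tone at 'freqHz'.
        static void playTone(double freqHz, int ms) {
            int rate = 8000;
            int count = rate * ms / 1000;
            short[] samples = new short[count];
            for (int i = 0; i < count; i++)
                samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * i / rate)
                        * Short.MAX_VALUE);

            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    count * 2, AudioTrack.MODE_STATIC);
            track.write(samples, 0, count); // MODE_STATIC: load before play()
            track.play();
        }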

    Read the article

  • Creating a customized video using Flash and XML

    - by Aaron Ladage
    The problem: I have to create a Flash video (in CS3) that will query a MySQL database and display that data at certain points in the video. The bigger problem: I'm not a Flash/ActionScript developer, so this is all very foreign to me! I've divided this project into two parts: a.) dynamically generate an XML feed from the data using PHP (using an ID number passed in the URL's query string), and b.) be able to work with it in Flash. I've got the first part working, but am pretty lost in Flash. I can parse the XML, but I'm not sure how to set the data up as variables and attach it to a video's cue points. Can anyone point me in the direction of a good tutorial or offer some advice?

    Read the article

  • Stream (.NET) handling best practices

    - by Jader Dias
    The question has the word "Stream" in its title because the question below is a concrete example of a more general question I have about streams. I have a problem that admits two solutions, and I want to know which one is better: (1) I download a file, save it to disk (2 min), then read it and write the contents to the DB (another 2 min). (2) I download the file and write the contents directly to the DB (3 min). If the write to the DB fails, I'll have to download the file again in the second case, but not in the first. Which is best? Which would you use?
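    For what it's worth, a sketch of the first approach (Java for illustration; writeToDb() is a hypothetical loader): buffering to a temporary file means a failed database write costs only a cheap local re-read, not a second download.

        import java.io.InputStream;
        import java.net.URL;
        import java.nio.file.*;

        static void downloadThenLoad(String fileUrl) throws Exception {
            Path tmp = Files.createTempFile("download", ".bin");
            try (InputStream in = new URL(fileUrl).openStream()) {
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            }
            try (InputStream in = Files.newInputStream(tmp)) {
                writeToDb(in); // hypothetical; a retry re-reads the local copy only
            } finally {
                Files.deleteIfExists(tmp);
            }
        }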

    Read the article

  • Capturing video from an IP camera

    - by Ruby
    I am trying to capture video from an IP camera in my application, and it's throwing this exception:

        com.sun.image.codec.jpeg.ImageFormatException: Not a JPEG file: starts with 0x0d 0x0a
            at sun.awt.image.codec.JPEGImageDecoderImpl.readJPEGStream(Native Method)
            at sun.awt.image.codec.JPEGImageDecoderImpl.decodeAsBufferedImage(Unknown Source)
            at test.AxisCamera1.readJPG(AxisCamera1.java:130)
            at test.AxisCamera1.readMJPGStream(AxisCamera1.java:121)
            at test.AxisCamera1.readStream(AxisCamera1.java:100)
            at test.AxisCamera1.run(AxisCamera1.java:171)
            at java.lang.Thread.run(Unknown Source)

    The exception is thrown at image = decoder.decodeAsBufferedImage(); Here is the code I am trying:

        private static final long serialVersionUID = 1L;
        public boolean useMJPGStream = true;
        public String jpgURL = "http://ip here/video.cgi/jpg/image.cgi?resolution=640x480";
        public String mjpgURL = "http://ip here/video.cgi/mjpg/video.cgi?resolution=640x480";
        DataInputStream dis;
        private BufferedImage image = null;
        public Dimension imageSize = null;
        public boolean connected = false;
        private boolean initCompleted = false;
        HttpURLConnection huc = null;
        Component parent;

        /** Creates a new instance of AxisCamera */
        public AxisCamera1(Component parent_) {
            parent = parent_;
        }

        public void connect() {
            try {
                URL u = new URL(useMJPGStream ? mjpgURL : jpgURL);
                huc = (HttpURLConnection) u.openConnection();
                // System.out.println(huc.getContentType());
                InputStream is = huc.getInputStream();
                connected = true;
                BufferedInputStream bis = new BufferedInputStream(is);
                dis = new DataInputStream(bis);
                if (!initCompleted)
                    initDisplay();
            } catch (IOException e) {
                // in case no connection exists, wait and try again,
                // instead of printing the error
                try {
                    huc.disconnect();
                    Thread.sleep(60);
                } catch (InterruptedException ie) {
                    huc.disconnect();
                    connect();
                }
                connect();
            } catch (Exception e) {
                ;
            }
        }

        public void initDisplay() { // setup the display
            if (useMJPGStream)
                readMJPGStream();
            else {
                readJPG();
                disconnect();
            }
            imageSize = new Dimension(image.getWidth(this), image.getHeight(this));
            setPreferredSize(imageSize);
            parent.setSize(imageSize);
            parent.validate();
            initCompleted = true;
        }

        public void disconnect() {
            try {
                if (connected) {
                    dis.close();
                    connected = false;
                }
            } catch (Exception e) {
                ;
            }
        }

        public void paint(Graphics g) { // used to set the image on the panel
            if (image != null)
                g.drawImage(image, 0, 0, this);
        }

        public void readStream() { // the basic method to continuously read the stream
            try {
                if (useMJPGStream) {
                    while (true) {
                        readMJPGStream();
                        parent.repaint();
                    }
                } else {
                    while (true) {
                        connect();
                        readJPG();
                        parent.repaint();
                        disconnect();
                    }
                }
            } catch (Exception e) {
                ;
            }
        }

        public void readMJPGStream() { // strip the mjpg encapsulation around each frame
            readLine(3, dis); // discard the first 3 lines
            readJPG();
            readLine(2, dis); // discard the last two lines
        }

        public void readJPG() { // read the embedded jpeg image
            try {
                JPEGImageDecoder decoder = JPEGCodec.createJPEGDecoder(dis);
                image = decoder.decodeAsBufferedImage();
            } catch (Exception e) {
                e.printStackTrace();
                disconnect();
            }
        }

        public void readLine(int n, DataInputStream dis) { // used to strip out the header lines
            for (int i = 0; i < n; i++) {
                readLine(dis);
            }
        }

        public void readLine(DataInputStream dis) {
            try {
                boolean end = false;
                String lineEnd = "\n"; // assumes the end of a line is marked with this
                byte[] lineEndBytes = lineEnd.getBytes();
                byte[] byteBuf = new byte[lineEndBytes.length];
                while (!end) {
                    dis.read(byteBuf, 0, lineEndBytes.length);
                    String t = new String(byteBuf);
                    System.out.print(t); // uncomment if you want to see what the lines actually look like
                    if (t.equals(lineEnd))
                        end = true;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void run() {
            System.out.println("in Run...................");
            connect();
            readStream();
        }

        @SuppressWarnings("deprecation")
        public static void main(String[] args) {
            JFrame jframe = new JFrame();
            jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            AxisCamera1 axPanel = new AxisCamera1(jframe);
            new Thread(axPanel).start();
            jframe.getContentPane().add(axPanel);
            jframe.pack();
            jframe.show();
        }
    }

    Any suggestions what I am doing wrong here?

    Read the article

  • No mic activity with setLoopBack set to false - AS3

    - by Franky
    Trying to figure out why setLoopBack needs to be set to true for microphone activity to be detected. The problem is the echo feedback when using a MacBook with a built-in mic. If anyone has some ideas about this, let me know. Right now I'm experimenting with toggling gain depending on activity to simulate echo reduction. Not optimal, though. @lessfame

    Read the article

  • XMLStreamReader and a real stream

    - by Yuri Ushakov
    Update: there is no ready XML parser in the Java community that can do both NIO and XML parsing. This is the closest I found, and it's incomplete: http://wiki.fasterxml.com/AaltoHome I have the following code:

        InputStream input = ...;
        XMLInputFactory xmlInputFactory = XMLInputFactory.newInstance();
        XMLStreamReader streamReader = xmlInputFactory.createXMLStreamReader(input, "UTF-8");

    The question is, why does the method #createXMLStreamReader() expect an entire XML document in the input stream? Why is it called a "stream reader" if it can't process a portion of XML data? For example, if I feed <root> <child> to it, it tells me I'm missing the closing tags, even before I begin iterating the stream reader itself. I suspect that I just don't know how to use an XMLStreamReader properly. I should be able to supply it with data in pieces, right? I need it because I'm processing an XML stream coming in from a network socket, and I don't want to load the whole source text into memory. Thank you for help, Yuri.
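    One way to keep a pull parser fed from a socket without buffering the whole document is to hand it a blocking stream: the reader then waits for more bytes instead of treating the current end of data as the end of the document. A sketch using piped streams (the feeder thread and the 'socket' parameter are assumptions for illustration):

        import java.io.*;
        import java.net.Socket;
        import javax.xml.stream.*;

        static void parseFromSocket(final Socket socket) throws Exception {
            PipedOutputStream fromNetwork = new PipedOutputStream();
            PipedInputStream toParser = new PipedInputStream(fromNetwork, 64 * 1024);

            // Feeder thread: copy bytes from the socket into the pipe as they arrive.
            new Thread(() -> {
                try (InputStream net = socket.getInputStream()) {
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = net.read(buf)) != -1)
                        fromNetwork.write(buf, 0, n);
                    fromNetwork.close();
                } catch (IOException ignored) {
                }
            }).start();

            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(toParser, "UTF-8");
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT)
                    System.out.println(reader.getLocalName()); // handle each element
            }
        }

    The parser blocks inside hasNext()/next() until the feeder supplies more bytes, so elements can be processed as they arrive.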

    Read the article

  • Pros and cons of MPMoviePlayerController versus launching UIWebView to stream movie

    - by Nosredna
    I have a client who has video content for the web in Flash format. My task is to help them show the videos in an iPhone app. I realize that step one is to get these videos into the appropriate Quicktime format for the iPhone. Then I'm going to have to help the client figure out how or where to host these files. If that's tricky I assume they can be hosted at YouTube. My chief concern, though, is which approach to take to stream the video. What are the pros and cons of MPMoviePlayerController versus launching UIWebView with the URL of the stream? Is there any difference? Is one of them more or less forgiving? Is one of them a better user experience? Any gotchas I might expect to run into? I'm assuming playing video is pretty easy on the iPhone. Is it reasonable to try both and have one available as a fallback, or would that be a waste of time? I'm trying to schedule this out a bit, so I'd love to hear real-world experiences from anyone who's done this.

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory; this is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?

    Read the article

  • What is the best way to merge mp3 files?

    - by Dan Williams
    I've got many, many MP3 files that I would like to merge into a single file. I've used the command-line method copy /b 1.mp3+2.mp3 3.mp3, but it's a pain when there are a lot of them and their names are inconsistent. The duration of the merged file never seems to come out right either.

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing) For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse) So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
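    A sketch of the cache-and-flush idea (all names hypothetical): each heartbeat extends an in-memory span for its session, a large jump in position closes the current span and starts a new one, and a periodic sweep persists sessions that have gone quiet, so the database sees time spans rather than individual heartbeats.

        import java.util.concurrent.ConcurrentHashMap;

        class ViewSpans {
            static final long IDLE_MS = 30_000; // flush after 30 s of silence
            static final int JUMP_S = 10;       // a jump this big starts a new span

            static final class Span {
                int startSec, endSec;
                long lastSeenMs;
            }

            private final ConcurrentHashMap<String, Span> live = new ConcurrentHashMap<>();

            // Called for each heartbeat: (session id, playhead position in seconds).
            void heartbeat(String session, int posSec) {
                Span s = live.computeIfAbsent(session, k -> {
                    Span fresh = new Span();
                    fresh.startSec = posSec;
                    fresh.endSec = posSec;
                    return fresh;
                });
                synchronized (s) {
                    if (Math.abs(posSec - s.endSec) > JUMP_S) { // viewer scrubbed
                        persist(session, s.startSec, s.endSec);
                        s.startSec = posSec;
                    }
                    s.endSec = posSec;
                    s.lastSeenMs = System.currentTimeMillis();
                }
            }

            // Run on a timer: write out spans whose sessions have gone quiet.
            void flushIdle() {
                long now = System.currentTimeMillis();
                live.forEach((session, s) -> {
                    synchronized (s) {
                        if (now - s.lastSeenMs > IDLE_MS) {
                            persist(session, s.startSec, s.endSec);
                            live.remove(session);
                        }
                    }
                });
            }

            void persist(String session, int fromSec, int toSec) {
                // hypothetical: INSERT INTO view_spans (session, from_sec, to_sec) ...
            }
        }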

    Read the article

  • Pitch detection and change in Java

    - by omegas27
    Hello, I'm French, so I'm sorry if some of my sentences are hard to understand. Anyway, I saw in some topics that the pitch can be detected using the Fourier transform, but I didn't really understand how to implement it. Moreover, I didn't find how to change the pitch of a WAV file and, if possible, an MP3 file. I am listening to the music using JavaSound for WAV and JLayer for MP3. Thanks
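    For the detection half, the usual first step is: take a short frame of samples, transform it to the frequency domain, and read the pitch off the strongest bin. A naive DFT sketch to show the idea (real code would use an FFT library, and the strongest bin can be a harmonic rather than the fundamental):

        // Rough pitch estimate in Hz for one frame of mono samples.
        static double pitchHz(short[] frame, float sampleRate) {
            int n = frame.length;
            double bestMag = 0;
            int bestBin = 0;
            for (int bin = 1; bin < n / 2; bin++) { // skip DC, stop at Nyquist
                double re = 0, im = 0;
                for (int t = 0; t < n; t++) {
                    double angle = 2 * Math.PI * bin * t / n;
                    re += frame[t] * Math.cos(angle);
                    im -= frame[t] * Math.sin(angle);
                }
                double mag = re * re + im * im;
                if (mag > bestMag) {
                    bestMag = mag;
                    bestBin = bin;
                }
            }
            return bestBin * sampleRate / n; // bin index -> frequency
        }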

    Read the article

  • Extracting note onset from MIDI

    - by Dolphin
    Hi I need to extract musical features (note details: pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (having two scores, for treble and bass - the bass may also have chords). I'm using the jMusic API to extract these details from the MIDI file. My approach is to go through each score, into parts, then phrases and finally notes, and extract the details. With my approach, it reads all the treble notes first and then the bass notes - but chords are not captured (i.e. only a single note of the chord is taken), and I cannot identify from which point onwards the notes are bass notes. So what I tried was to get the note onsets (i.e. the start time of the note being played) - since the start times of the treble and bass notes at the beginning of the piece should be the same - but I cannot extract the note onset using the jMusic API; each time it shows 0.0. Is there any way I can identify the voice (treble or bass) of a note? And also all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance
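    In the raw MIDI data, each note carries exactly the pieces being asked about: the onset is the tick of its Note On event, the voice is the track and channel the event lives on, and a chord is simply several Note On events at the same tick. A sketch that reads the events directly with javax.sound.midi, bypassing jMusic:

        import java.io.File;
        import javax.sound.midi.*;

        public class NoteOnsets {
            public static void main(String[] args) throws Exception {
                Sequence seq = MidiSystem.getSequence(new File("piece.mid"));
                Track[] tracks = seq.getTracks();
                for (int tr = 0; tr < tracks.length; tr++) {
                    for (int i = 0; i < tracks[tr].size(); i++) {
                        MidiEvent ev = tracks[tr].get(i);
                        if (!(ev.getMessage() instanceof ShortMessage))
                            continue;
                        ShortMessage sm = (ShortMessage) ev.getMessage();
                        // Note On with velocity > 0; velocity 0 is really a Note Off.
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0)
                            System.out.printf("track=%d ch=%d tick=%d pitch=%d vel=%d%n",
                                    tr, sm.getChannel(), ev.getTick(),
                                    sm.getData1(), sm.getData2());
                    }
                }
            }
        }

    Whether treble and bass land on separate tracks or separate channels depends on how each file was authored, which is why it can differ from file to file.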

    Read the article

  • JWPlayer plays videos at 3x speed on the first run, then works fine in subsequent runs on the same page

    - by Josiah Kiehl
    Go here: http://nano.materials.drexel.edu/research/videolibrary For some bizarre reason, the videos will play at 3x speed on the first run through, but then will play at normal speed each subsequent play. This doesn't happen all the time, and it's not always the same video(s) that do it. I'm utterly baffled. I've reconverted the videos from m4p to flv (using BitComet's converter) several times, double checking the settings each time through with no change to the behavior. Anyone have a clue what's going on?

    Read the article

  • Is there any live video stream editing open source project with API for my needs?

    - by Ole Jak
    I need an open source project with an API capable of: reading a live video stream (the stream codec can be anything the API can read - I can provide practically any live-streamable format); giving me the last image's data for some processing (like brightness/contrast or more exotic filtering); accepting the data I've changed; and streaming that data on to some http://localhost:port/ in some format. I need it to be easily accessible from C# (even better, written in C#).

    Read the article

  • Playing Multiple sounds at the same time in Android

    - by Wrapper
    I am unable to use the following code to play multiple sounds/beeps simultaneously. In my OnClickListener I have added:

        public void onClick(View v) {
            mSoundManager.playSound(1);
            mSoundManager.playSound(2);
        }

    But this plays only one sound at a time: the sound with index 1, followed by the sound with index 2. How can I play at least two sounds simultaneously using this code whenever there is an onClick() event?

        public class SoundManager {
            private SoundPool mSoundPool;
            private HashMap<Integer, Integer> mSoundPoolMap;
            private AudioManager mAudioManager;
            private Context mContext;

            public SoundManager() {
            }

            public void initSounds(Context theContext) {
                mContext = theContext;
                mSoundPool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
                mSoundPoolMap = new HashMap<Integer, Integer>();
                mAudioManager = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
            }

            public void addSound(int Index, int SoundID) {
                mSoundPoolMap.put(1, mSoundPool.load(mContext, SoundID, 1));
            }

            public void playSound(int index) {
                int streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
                mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 1, 0, 1f);
            }

            public void playLoopedSound(int index) {
                int streamVolume = mAudioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
                mSoundPool.play(mSoundPoolMap.get(index), streamVolume, streamVolume, 1, -1, 1f);
            }
        }
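    One detail that stands out in the code above: addSound() stores every sample under the hard-coded key 1, so mSoundPoolMap.get(2) has nothing to return. A sketch of the probable fix:

        public void addSound(int index, int soundId) {
            // Store each sample under its own index, not under the constant 1.
            mSoundPoolMap.put(index, mSoundPool.load(mContext, soundId, 1));
        }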

    Read the article

  • Android PCM Bytes

    - by Pintac
    Hi I am using the AudioRecord class to analyse raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM samples into decibels. I have a formula that converts sound pressure in Pa into dB: dB = 20 * log10(p / p_ref) So the question is: the bytes I am getting from AudioRecord's buffer - what are they? Amplitude? Sound pressure in pascals? Or something else? I tried putting the values into the formula, but it comes back with a very high dB, so I do not think that's right. Thanks
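    The samples are linear amplitude on an uncalibrated scale, so on their own they can only give decibels relative to full scale (dBFS), not absolute sound pressure; mapping to real dB SPL would need the microphone's sensitivity as a reference. A sketch for 16-bit PCM:

        // dB relative to full scale for one buffer of 16-bit PCM samples.
        // Result is <= 0; 0 dBFS means a full-scale signal.
        static double dbFullScale(short[] pcm, int count) {
            double sumSquares = 0;
            for (int i = 0; i < count; i++)
                sumSquares += (double) pcm[i] * pcm[i];
            double rms = Math.sqrt(sumSquares / count);
            return 20.0 * Math.log10(rms / 32768.0);
        }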

    Read the article

  • AVAudioPlayer currentTime problem

    - by StrAbZ
    Hi, I'm trying to use the audio player with a slider in order to seek into a track (nothing complicated), but I get weird behavior: for some values of currentTime (between 0 and the track duration), the player stops playing the track and goes into audioPlayerDidFinishPlaying:successfully: with successfully set to NO, yet it never goes into audioPlayerDecodeErrorDidOccur:error:. It's like it can't read the time I'm giving it. For example, the duration of the track is 295.784424 seconds and I set currentTime to 55.0 s (i.e. 54.963878, 54.963900 or 54.987755, etc., when printed as %f). The "crashes" always happen when the currentTime is 54.987755... and I really don't understand why... So if you have any idea... ^^

    Read the article

  • Getting following exception javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz

    - by angelina
    Dear All, I tried to play and get the duration of a WAV file using the code below, but got the following exception. Please advise; I am using a file in WAV format.

        URL url = new URL("foo.wav");
        Clip clip = AudioSystem.getClip();
        AudioInputStream ais = AudioSystem.getAudioInputStream(url);
        clip.open(ais);
        System.out.println(clip.getMicrosecondLength());

        javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, not supported.
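    A common workaround, sketched under the assumption that the default mixer simply has no ULAW line: let AudioSystem transcode the stream to linear PCM before opening the Clip.

        AudioInputStream ulaw = AudioSystem.getAudioInputStream(url); // url from the snippet above
        AudioFormat src = ulaw.getFormat();
        AudioFormat pcm = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                src.getSampleRate(), 16, src.getChannels(),
                src.getChannels() * 2, src.getSampleRate(), false);
        AudioInputStream converted = AudioSystem.getAudioInputStream(pcm, ulaw);

        Clip clip = AudioSystem.getClip();
        clip.open(converted);
        System.out.println(clip.getMicrosecondLength());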

    Read the article

  • Android Stream Data Over Wifi?

    - by Neb
    I'm trying to make an app for Android that will stream the accelerometer data, to be used as a game controller on my PC, over a local wifi connection. Is it possible to make some kind of wifi stream of the accelerometer values in the Android app and then make the PC somehow 'read' this stream? Or would it be better for the PC to make endless calls to the phone, getting the newest accelerometer values from a local Android server? The phone would also have to send commands such as 'button1 pressed', 'button1 released'.
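    A sketch of the push approach (the PC's address, the port, and the listener registration are assumptions for illustration): send each accelerometer reading as a small UDP datagram; on the PC, a loop around DatagramSocket.receive() turns the phone into a controller. Button events such as 'button1 pressed' can ride the same channel as extra fields in the message.

        import java.io.IOException;
        import java.net.*;
        import android.hardware.*;

        // Build a listener that pushes each reading to the PC; register it with
        // SensorManager for TYPE_ACCELEROMETER elsewhere. On recent Android
        // versions the send itself should happen off the UI thread.
        static SensorEventListener makeSender() throws IOException {
            final DatagramSocket socket = new DatagramSocket();
            final InetAddress pc = InetAddress.getByName("192.168.1.2"); // assumed PC address
            return new SensorEventListener() {
                public void onSensorChanged(SensorEvent e) {
                    byte[] msg = (e.values[0] + "," + e.values[1] + "," + e.values[2]).getBytes();
                    try {
                        socket.send(new DatagramPacket(msg, msg.length, pc, 5555));
                    } catch (IOException ignored) {
                    }
                }

                public void onAccuracyChanged(Sensor s, int accuracy) {
                }
            };
        }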

    Read the article
