Search Results

Search found 7178 results on 288 pages for 'audio playing'.

Page 110/288 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • Creating a music catalog and extracting the first 30 seconds as soon as the first words are sung

    - by Rad
    I already read the question "Separation of singing voice from music". I don't need that kind of complex audio processing; I only need a detection mechanism that can tell that a voice/vocal is present while the music is playing (or not playing), so that I can extract the first 30 seconds from the point where a vocalist starts singing along with the full band (see question 1 below).

    I want to create a music catalog using ASP.NET MVC 2, Silverlight clients and C#/.NET 4.0 as the storefront. On the back end I would also like to create a desktop WPF/Windows application for building the catalog from already existing music files, most of which carry metadata: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly like to create a web service that lets catalog contributors upload a zipped album and triggers extraction of the metadata and of the music segments described below. Let's say I have thousands of songs in MP3 (or other formats) grouped in subfolders by some classification (genre, artist, album, composer or other groupings). I want to create database tables that organize the songs so they can be searched by different criteria (year, length, the classifications above, song title, description, etc.), like the iTunes Store allows its customers. I want to extract metadata from the various formats (I will try to get songs in MP3 format, but there may be other popular formats) and allow a catalog manager to add missing data from either the desktop or the web application; he or other contributors can upload zipped music via HTML, Silverlight or WPF. Can anybody suggest open source libraries, articles or code snippets that can do this automatically using .NET and possibly a SQL Server DB?

    My main questions are about an audio processing challenge: I want to extract two kinds of segments. 1. How do I extract a music segment that starts 1-2 seconds before a vocal starts singing and runs up to 30 seconds from that point in time? I would be happy to achieve just this one. 2. Much more challenging: how do I find repeating segments? One usually recognizes songs by their refrains, and songs are known by them. Also, how would I go about creating a list of songs that go well together, like iTunes Genius does? Are there any characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate a melody and words with a title/album, so they can make informed decisions such as buying a song or building a list of similar-mood songs.
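
    A note on question 1: no stock tagging library will find a vocal onset. A crude, language-agnostic starting point (sketched here in Java rather than the asker's C#; every name in it is hypothetical) is to scan the decoded PCM for the first sustained jump in short-window RMS energy above the level of the instrumental intro. This is only a rough proxy for "vocals begin" (real vocal detection needs spectral features), but it shows the shape of the computation:

        // Hypothetical sketch: find the first sustained rise in short-window
        // RMS energy over a baseline taken from the opening of the clip.
        // A crude proxy for vocal onset; real detection needs spectral features.
        static int findEnergyJump(short[] pcm, int sampleRate, double factor) {
            int win = sampleRate / 20;                    // 50 ms analysis window
            int nWin = pcm.length / win;
            int baseWins = Math.min(nWin, 2 * sampleRate / win); // first ~2 s
            if (baseWins == 0) return -1;                 // clip too short
            double[] rms = new double[nWin];
            for (int w = 0; w < nWin; w++) {
                double sum = 0;
                for (int i = w * win; i < (w + 1) * win; i++) {
                    sum += (double) pcm[i] * pcm[i];
                }
                rms[w] = Math.sqrt(sum / win);
            }
            double[] base = java.util.Arrays.copyOf(rms, baseWins);
            java.util.Arrays.sort(base);
            double baseline = base[baseWins / 2];         // median RMS of the intro
            for (int w = baseWins; w < nWin; w++) {
                if (rms[w] > baseline * factor) {
                    return w * win;                       // sample index of the jump
                }
            }
            return -1;                                    // no jump found
        }

    From the returned sample index, backing up 1-2 seconds and copying 30 seconds' worth of frames yields the segment the asker describes.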

    Read the article

  • How do you limit a page with multiple flash mp3 players to play one at a time?

    - by Andrew.S
    I am working with the open source flash player at http://flash-mp3-player.net/ and I am trying to figure out how to limit playback to one sound file at a time. I know this has been done on a number of sites, but I am unsure how to approach it. Scenario: a page has five different instances of the flash player. The user is listening to one song but clicks on another to listen to it. Goal: the first audio file automatically stops while the second starts playing, instead of both playing at the same time. Do I need some sort of JavaScript handler that interacts with the SWFs?

    Read the article

  • Properly trimming PCM data from a ByteArray

    - by Lowgain
    I have a situation where I need to trim a small amount of audio from the beginning of a recorded clip (generally somewhere between 110-150 ms; the amount is inconsistent). I'm recording at 44100 Hz, 16-bit. This is the code I'm using:

        public function get trimmedData():ByteArray {
            var ba:ByteArray = new ByteArray();
            var bitPosition:uint = 44100 * 16 * (recordGap / 1000);
            bitPosition -= int(bitPosition % 16); // should keep it snapped to the nearest sample, I hope
            ba.writeBytes(_rawData, (bitPosition / 8));
            return ba;
        }

    This seems to work time-wise, but all the recorded audio gets staticky and gross. Is something off about my rounding? This is the first time I've needed to alter raw PCM data, so I'm not sure about the finer details. Thanks!
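
    For what it's worth, the arithmetic being attempted (milliseconds to a byte offset snapped to a whole frame) can be written so the alignment is correct by construction. A hedged sketch in Java rather than ActionScript, with hypothetical names; the key is the frame size, since Flash microphone data is commonly 32-bit float samples rather than 16-bit integers, and a cut that lands mid-sample sounds exactly like static:

        // Hypothetical helper: byte offset for trimming gapMs milliseconds,
        // snapped down to a whole frame so the cut never splits a sample.
        // bytesPerFrame = channels * bytesPerSample:
        //   mono 16-bit int   -> 2
        //   mono 32-bit float -> 4  (typical for Flash microphone data)
        static long trimOffsetBytes(double gapMs, int sampleRate, int bytesPerFrame) {
            long frames = Math.round(sampleRate * gapMs / 1000.0); // whole frames to skip
            return frames * bytesPerFrame;                         // aligned by construction
        }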

    Read the article

  • Enhanced Podcasts and MPMoviePlayerViewController

    - by Ben Robinson
    Hi. This is a bit of an odd/specific one, possibly a bug. I'm using MPMoviePlayerViewController to play a variety of files, including enhanced podcasts: audio files with a slideshow of images, often created using GarageBand. Until (I think) iOS 3.2 they weren't supported at all; now they play fine in the iPod app, but in my app the slideshow doesn't start. The full-screen movie player opens and the audio begins, but all I see is the QuickTime logo. If I scrub the track the pictures appear, and will continue to play correctly, but I see nothing if I don't scrub. Any ideas? On a related note, these files also include a small rectangular button containing an (i) on the right-hand side. Does anybody know what it is or should do? It does nothing for me!

    Read the article

  • Why does the use of H264 in sender/receiver pipelines introduce such a HUGE delay?

    - by Serguey Zefirov
    When I try to create a pipeline that uses H264 to transmit video, I get an enormous delay, up to 10 seconds, to transmit video from my machine to... my machine! This is unacceptable for my goals and I'd like to consult StackOverflow about what I (or someone else) am doing wrong. I took the pipelines from the gstrtpbin documentation page and slightly modified them to use Speex. This is the sender pipeline:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! ffenc_h263 ! rtph263ppay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    The receiver pipeline:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H263-1998" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    Those pipelines, a combination of H263 and Speex, work well enough: I snap my fingers near the camera and microphone and I see the movement and hear the sound at the same time. Then I changed the pipelines to use H264 along the video path. The sender becomes:

        #!/bin/sh
        gst-launch -v gstrtpbin name=rtpbin \
            v4l2src ! ffmpegcolorspace ! x264enc bitrate=300 ! rtph264pay ! rtpbin.send_rtp_sink_0 \
            rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
            rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5001 sync=false async=false \
            udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
            pulsesrc ! audioconvert ! audioresample ! audio/x-raw-int,rate=16000 ! \
            speexenc bitrate=16000 ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
            rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5003 sync=false async=false \
            udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1

    And the receiver becomes:

        #!/bin/sh
        gst-launch -v \
            gstrtpbin name=rtpbin \
            udpsrc caps="application/x-rtp,media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
                port=5000 ! rtpbin.recv_rtp_sink_0 \
            rtpbin. ! rtph264depay ! ffdec_h264 ! xvimagesink \
            udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
            rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false \
            udpsrc caps="application/x-rtp,media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
                port=5002 ! rtpbin.recv_rtp_sink_1 \
            rtpbin. ! rtpspeexdepay ! speexdec ! audioresample ! audioconvert ! alsasink \
            udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
            rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false

    This is what happens under Ubuntu 10.04. I didn't notice such huge delays on Ubuntu 9.04; the delays there were in the 2-3 second range, AFAIR.

    Read the article

  • Best way to play MIDI sounds using C#

    - by jerhinesmith
    I'm trying to rebuild an old metronome application, originally written in C++ with MFC, in .NET using C#. One of the issues I'm running into is playing the MIDI files that are used to represent the metronome's "clicks". I've found a few articles online about playing MIDI in .NET, but most of them rely on custom libraries that someone has cobbled together and made available. I'm not averse to using these, but I'd rather understand for myself how this is done, since it seems like it should be a mostly trivial exercise. So, am I missing something? Or is it just difficult to play MIDI inside a .NET application?
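
    Part of the answer is that the .NET base class library (as of 4.0) ships no managed MIDI API, which is why the articles the asker found all wrap winmm.dll via P/Invoke; that is the missing piece, not something the asker overlooked. For comparison, here is how small the same job is on a platform with a built-in MIDI stack, a self-contained sketch using Java's javax.sound.midi (illustrative only, not a .NET answer):

        import javax.sound.midi.*;

        // Minimal metronome click through the default software synthesizer.
        public class MidiClick {
            public static void main(String[] args) throws Exception {
                Synthesizer synth = MidiSystem.getSynthesizer();
                synth.open();
                MidiChannel percussion = synth.getChannels()[9]; // GM channel 10
                for (int beat = 0; beat < 8; beat++) {
                    percussion.noteOn(76, 100);  // note 76 = high wood block
                    Thread.sleep(500);           // 120 BPM; sleep-based timing drifts
                    percussion.noteOff(76);
                }
                synth.close();
            }
        }

    A real metronome would schedule events on a Sequencer rather than sleeping, since Thread.sleep jitter is audible at tempo.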

    Read the article

  • How do I stop Safari from caching my Servlet response?

    - by Cliff
    I'm having trouble testing a web app with Safari. My app returns WAVE audio data. The problem happens when I change the application and hit it again from Safari: Safari caches the original response, so no matter how many times I hit refresh it seems like I haven't updated anything. I can almost get around this using force-refresh in Firefox, but because I'm having trouble generating the WAVE headers using the javax.sound API, Firefox only plays the first second of the returned audio. A few weeks ago I tried setting an HTTP header in my servlet to prevent caching, but I don't think I was setting it correctly. (What is the header for browser cache control?) This is becoming a real pain and I'm looking for any ideas, comments, or alternative approaches. I'm getting ready to try again, but I figured I'd ask here in the interim to see if someone can help.
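
    On the cache-control question specifically: a response can be marked uncacheable with three standard headers, set before any of the body is written. A minimal sketch, assuming a plain servlet (the class name and WAVE-writing part are placeholders, not the asker's code):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class WaveServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP/1.1
                resp.setHeader("Pragma", "no-cache");   // HTTP/1.0 compatibility
                resp.setDateHeader("Expires", 0);       // for proxies
                resp.setContentType("audio/wav");
                // ... write the WAVE header and PCM data to resp.getOutputStream() ...
            }
        }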

    Read the article

  • How do I get callgrind to dump source line information?

    - by Jeremybub
    I'm trying to profile a shared library on GNU/Linux that does real-time audio processing, so performance is important. I run another program which hooks the library up to my system's audio input and output, and profile that with callgrind. Looking at the results in KCachegrind, I get great information about which functions are taking up most of my time. However, it won't let me look at the line-by-line information; instead it says I need to compile with debugging symbols and run the profiling again. The program I am profiling is not compiled with debug symbols, but the library is, and I know this because, interestingly, source-code annotations for cachegrind work fine. When I run callgrind, it says the default is to dump source-line information, but it just isn't doing that. Is there some way I can force it to, or figure out what's stopping it?

    Read the article

  • How to disable UI control based on domain object's state?

    - by Subb
    Here's my problem: I have a somewhat complex domain object which, depending on its state, responds to certain actions. I think the State pattern is pretty much the solution for that. However, I also need to display in the UI which actions are possible at any moment. For example, the domain object is an audio player. Some songs can't be skipped (like ads), so I need to disable the "next" and "previous" buttons in the GUI so the user gets some feedback about which actions he can execute. I've looked at Swing's Action class (note: this is not a Java project), but I think I would need to keep every Action in my domain object class (the audio player) so it can enable or disable them depending on its own state (thus affecting the UI). Is that the way to do it?
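
    Although the asker's project isn't Java, the Swing Action idea they mention translates directly and is easy to sketch: the domain object owns its actions and flips their enabled state whenever its own state changes, and any button built from an action greys out automatically. A hedged illustration with hypothetical names:

        import java.awt.event.ActionEvent;
        import javax.swing.*;

        class AudioPlayer {
            final Action nextAction = new AbstractAction("Next") {
                @Override public void actionPerformed(ActionEvent e) {
                    /* skip to the next track */
                }
            };

            // Called by the player whenever its internal state changes.
            void onTrackChanged(boolean adPlaying) {
                nextAction.setEnabled(!adPlaying); // ads can't be skipped
            }
        }

        // Usage: new JButton(player.nextAction) enables and disables
        // itself automatically as the player's state changes.

    If keeping UI actions inside the domain object feels wrong, the domain object can instead expose only an observable canSkip() state, with the actions living in the UI layer as listeners of it.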

    Read the article

  • Onclick starts gif animation and .mp3, how to sync across browsers

    - by user2958163
    So I am using a text-based jPlayer (http://jplayer.org/latest/demo-04/) that I want to sync with a gif animation. On click, the text link does two things: 1. it feeds jPlayer an MP3, and 2. it triggers an animation (via SwapImage). It is important for these two to start at the same time. Right now this works perfectly in Chrome/Firefox, but in IE and mobile browsers the audio lags considerably. I have tried preloading the audio (it is a small 40K MP3) and it makes no difference. I don't think it's a bandwidth problem, because the problem is the same on repeat clicks. Any pointers on how I can resolve this?

    Read the article

  • getAssetFileDescriptor from ZipResourceFile merges all mp3s in MediaPlayer (SOLVED)

    - by Jordi
    I have a program with an expansion file that stores 4 MP3s in an .obb file (a zip without compression). I can retrieve the data, but instead of giving me the audio file I asked for, it merged ALL the audio files into the same AssetFileDescriptor. ---SOLVED--- with the fixes below.

    The support class:

        public AssetFileDescriptor getAudio() {
            ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(c, 21, 21);
            AssetFileDescriptor afd = null;
            if (take == 1) {
                afd = expansionFile.getAssetFileDescriptor("file01.mp3");
            } else if (take == 2) {
                afd = expansionFile.getAssetFileDescriptor("file02.mp3");
            } // more else if ...
            return afd;
        }

    In the MediaPlayer class:

        AssetFileDescriptor fd = Llistat.getInstance().getAudio();
        mPlayer.setDataSource(fd.getFileDescriptor(), fd.getStartOffset(), fd.getLength());
        mPlayer.prepare();
        fd.close();

    My problem was that I was directly returning and using a FileDescriptor, when I needed the AssetFileDescriptor so I could pass its start offset and length to setDataSource; without them, the descriptor spans the whole zip and the player runs through every file.

    Read the article

  • Signal amplitude against time (Java)

    - by wsr74ws84
    Hi everyone. I'm racking my brain trying to solve a knotty problem (at least for me). While playing an audio file (using Java) I want the signal amplitude to be displayed against time: I'd like to implement a small panel showing a sort of oscilloscope, with the audio signal viewed in the time domain (the vertical axis is amplitude, the horizontal axis is time). Does anyone know how to do this? Is there a good tutorial I can rely on? Since I know very little about Java, I'd be grateful for any help. Thanks in advance.
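
    A minimal sketch of such a panel, assuming 16-bit little-endian mono PCM. For simplicity it captures from the default input line; the same paintComponent works if you instead copy each buffer you write to your playback SourceDataLine. All class and buffer names are illustrative:

        import java.awt.*;
        import javax.sound.sampled.*;
        import javax.swing.*;

        // Hedged sketch: paint the most recent PCM buffer as a waveform.
        public class ScopePanel extends JPanel {
            private final short[] samples = new short[2048]; // latest window

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int w = getWidth(), h = getHeight(), mid = h / 2;
                g.setColor(Color.GREEN);
                for (int x = 1; x < w; x++) {
                    int i0 = (x - 1) * samples.length / w;
                    int i1 = x * samples.length / w;
                    int y0 = mid - samples[i0] * mid / 32768; // amplitude -> pixels
                    int y1 = mid - samples[i1] * mid / 32768;
                    g.drawLine(x - 1, y0, x, y1);
                }
            }

            void pump() throws LineUnavailableException {
                AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
                TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
                line.open(fmt);
                line.start();
                byte[] buf = new byte[samples.length * 2];
                while (true) {
                    line.read(buf, 0, buf.length);
                    for (int i = 0; i < samples.length; i++) {  // bytes -> shorts
                        samples[i] = (short) ((buf[2 * i + 1] << 8) | (buf[2 * i] & 0xff));
                    }
                    repaint(); // unsynchronized, but fine for a visual sketch
                }
            }

            public static void main(String[] args) throws Exception {
                ScopePanel p = new ScopePanel();
                JFrame f = new JFrame("Scope");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(p);
                f.setSize(640, 240);
                f.setVisible(true);
                p.pump(); // blocks; a real app would run this off the UI thread
            }
        }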

    Read the article

  • Games that are still winnable against the computer?

    - by roygbiv
    There's a game on my laptop called 'Chess Titans' which I've been playing, one game a day, for almost 90 days. With the difficulty on the hardest setting I have not been able to win a single game, though I have come close. What's the fun in playing a chess game if the computer can search all the moves and win? Has anyone beaten (or can anyone beat) a modern computer chess AI? In which games can a computer not gain an advantage, i.e. which games would still be 'fun' to play?

    Read the article

  • flvplayer.swf player to play the video?

    - by Surya sasidhar
    Hi, I am using the flvplayer.swf player. It plays the videos, but before playing a video the player shows a black screen and a play button; only when I click the play button does the video play. Is it possible to show a screenshot of the video with the play button on top, instead of the blank screen? This is the code of my player:

        <embed id="fl" src="flvplayer.swf" bgcolor="#FFFFFF" align="left"
            type="application/x-shockwave-flash"
            pluginspage="http://www.macromedia.com/go/getflashplayer"
            flashvars="file=hospitaldemo.swf&autostart=false&frontcolor=0xCCCCCC&backcolor=0x000000&lightcolor=0x996600&showdownload=false&showeq=false&repeat=false&volume=100&useaudio=false&usecaptions=false&usefullscreen=true&usekeys=true"
            style="width: 400px; height: 350px">
        </embed>

    Read the article

  • AVMutableComposition insertEmptyTimeRange

    - by smartfaceweb
    I have created an AVMutableComposition and tried to use insertEmptyTimeRange to generate 1 minute of silence. This doesn't appear to be working. I have also tried creating an AVMutableCompositionTrack using addMutableTrackWithMediaType:preferredTrackID: and then calling insertEmptyTimeRange on the track, still without success. For background on my app: I let users add audio samples to a timeline and then play back or export, and this works really well using the AV classes. The problem is that I need to make sure the audio is exactly 1 minute long (for example). Regardless of the details of my specific app, is it possible to insert an empty time range into a composition or composition track?

    Read the article

  • HTML5 video player with the simplest controls (only play and pause)

    - by mathiregister
    Hi guys, somehow there are really few tutorials out there for HTML5 video and audio playback. I simply want to embed video and audio files with customized controls, and the controls should be fairly simple: I only need a play button, and when it's clicked, play is replaced by pause. That's all! However, I don't even know how to embed/display a video without the preload/controls attributes: if I set up the video tag without them, Firefox doesn't show anything at all and Chrome shows a black window. I would love to be able to use jQuery to control the video's play and pause buttons. Maybe you have a little starting point for me? Thank you very much!

    Read the article

  • The right way to delete a file to the trash in Snow Leopard using Cocoa?

    - by Irwan
    By "the right way" I mean it must allow "Put Back" in Finder and must not play a sound. Here are the methods I have tried so far:

    1. NSWorkspace:

        NSString *name = @"test.zip";
        NSArray *files = [NSArray arrayWithObject:name];
        NSWorkspace *ws = [NSWorkspace sharedWorkspace];
        [ws performFileOperation:NSWorkspaceRecycleOperation
                          source:@"/Users/"
                     destination:@""
                           files:files
                             tag:0];

    Downside: can't "Put Back" in Finder.

    2. FSPathMoveObjectToTrashSync:

        OSStatus status = FSPathMoveObjectToTrashSync(
            "/Users/test.zip", NULL, kFSFileOperationDefaultOptions);

    Downside: can't "Put Back" in Finder.

    3. AppleScript via the Finder:

        tell application "Finder"
            set deletedfile to alias "Snow Leopard:Users:test.zip"
            delete deletedfile
        end tell

    Downside: plays a sound, which is annoying if I execute it repeatedly.

    Read the article

  • Multimedia content in a REST response (XML/JSON)

    - by Koushik
    In my thesis I need to test different architectures. A request goes to a REST web service developed using Apache CXF and Spring MVC, with MySQL as the back end serving references (a field in the database) to images, audio and video files stored in the file system. URI: http://www.filmservices.com/film/{id}. In the response message, what is the best method to send the content to the client (another application using the service I developed; a client here is not the end user)? Option 1: send encoded hyperlinks to where the content is stored in the file system, so that the client renders the response and displays it in the browser. Option 2: use Base64 to encode the message (image, audio, video) and send it to the client. The main concern is performance.
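
    The trade-off can be made concrete. A hedged sketch of the two response shapes in Java, with hand-rolled JSON and hypothetical names and paths (a real CXF service would use a JAXB/Jackson binding rather than string concatenation):

        import java.nio.file.*;
        import java.util.Base64;

        public class FilmPayloads {
            // Option 1: reference. Small, cacheable, and the media itself
            // can be streamed or range-requested by the client on demand.
            static String asLink(int id, String mediaPath) {
                return "{ \"id\": " + id + ", \"media\": "
                        + "\"http://www.filmservices.com/media/" + mediaPath + "\" }";
            }

            // Option 2: embedded. Self-contained, but Base64 inflates the
            // payload by ~33% and the whole file must pass through memory.
            static String asBase64(int id, Path mediaFile) throws Exception {
                byte[] bytes = Files.readAllBytes(mediaFile);
                return "{ \"id\": " + id + ", \"media\": \""
                        + Base64.getEncoder().encodeToString(bytes) + "\" }";
            }
        }

    For audio and video of any size, option 1 almost always performs better: embedding defeats range requests and separate caching of the media, on top of the size and memory overhead.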

    Read the article

  • VLC helper protocol on Mac OS X

    - by Preben
    Hey everybody, I am trying to add a vlc:// helper protocol on Mac OS X. To register the protocol I have been playing around, unsuccessfully, with the MoreInternet PrefPane. What I want is for a vlc://someressource.com/audio.mp3 link in my browser to launch VLC and add http://someressource.com/audio.mp3 to the playlist (this works fine on Windows, and also on Linux if I remember correctly). Maybe even just vlc://http:// so that HTTPS would also be supported. I have no idea how to achieve this. I tried making a bash script, which MoreInternet would not accept. Then I tried making an application through Automator with my bash script embedded. That did not work either, as the Automator application has no "creator code", whatever that is. Can any of you guys point me in the right direction? Thanks in advance!

    Read the article

  • What happens to an object in PHP once it's been instantiated?

    - by Caylem
    Hi, I'm just playing around with some PHP and was wondering what happens when an object is created within a PHP script. I assume that once the script has run and finished, there is no way of going back and 'playing' around with the object from another script? The idea is that I'm trying to create a kind of deck of cards using a card class; each card object holds the data that makes it unique (suit, value, etc.). Once the deck is created I need to be able to go back to specific cards to use them. In Java I'd have an ArrayList of card objects; I'm not sure how to approach the same thing in PHP. Thanks.

    Read the article
