Search Results

Search found 4010 results on 161 pages for 'audio fingerprinting'.

Page 56 of 161

  • Creating a music catalog in C# and extracting the first 30 seconds as soon as the first words are sung

    - by Rad
    I have already read the question Separation of singing voice from music, but I don't need audio processing that complex. I only need a detection mechanism that can tell that a voice/vocal is present while the music is playing (or not playing), so that I can extract the first 30 seconds from the point where a vocalist starts singing along with the full band (see question 2 below).

    I want to create a music catalog front store using ASP.NET MVC 2, Silverlight clients and C# 4.0. On the backend I would also like a desktop WPF/Windows application for building the catalog from existing music files, most of which carry metadata: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly also create a web service that lets catalog contributors upload a zipped album and triggers extraction of the metadata and of the music segments described below. I would be happy just to achieve no. 1 below.

    Say I have thousands of songs in MP3 (or other formats) grouped in subfolders by some classification (genre, artist, album, composer or other groupings). I want to create database tables that organize the songs so they can be searched on different criteria (year, length, the classifications above, song title, description, etc.), much as the iTunes Store allows its customers. I want to extract metadata from the various formats (I will try to get songs as MP3, but other popular formats may appear) and let a catalog manager add missing data from either the desktop or the web application; he or other contributors can upload zipped music via HTML, Silverlight or WPF. Can anybody suggest open-source libraries, articles or code snippets that can do this automatically using .NET, and possibly SQL Server for the database?

    My main questions are the audio-processing ones; I want to extract two kinds of segments: 1. How do I extract a segment starting 1-2 seconds before a vocal starts singing and running up to 30 seconds from that point? 2. Much more challenging: how do I find repeating segments? (One usually recognizes songs by these refrains, and songs are often named after them.) Also, how would I go about creating a list of songs that go great together, like iTunes Genius does? Are there characteristics of music that can be used to match songs? The goal is for people to quickly scan and recognize songs, i.e. associate a melody and words with a title/album, so they can make intelligent decisions like buying a song or building similar-mood playlists. Thanks, Rad
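
    For question 1, a crude but workable first pass is energy-based onset detection rather than true vocal detection: find the first sustained jump in short-window RMS energy after the intro settles, and cut from a couple of seconds before that point. Below is a minimal sketch assuming the NAudio library (its AudioFileReader decodes MP3/WAV to float samples); the 100 ms window and the jump factor are illustrative values to tune, not a tested detector, and reliably telling "vocals enter" apart from "drums enter" needs spectral features on top of this.

        using System;
        using NAudio.Wave;

        class OnsetFinder
        {
            // Returns the time in seconds of the first sustained RMS jump, or -1 if none.
            static double FindFirstOnset(string path, double jumpFactor = 2.0)
            {
                using (var reader = new AudioFileReader(path))   // decodes to floats
                {
                    int window = reader.WaveFormat.SampleRate / 10;          // 100 ms
                    var buf = new float[window * reader.WaveFormat.Channels];
                    double prevRms = 0;
                    long windowIndex = 0;
                    int read;
                    while ((read = reader.Read(buf, 0, buf.Length)) > 0)
                    {
                        double sum = 0;
                        for (int i = 0; i < read; i++) sum += buf[i] * buf[i];
                        double rms = Math.Sqrt(sum / read);
                        // a loud window after an already-audible stretch = candidate onset
                        if (windowIndex > 0 && prevRms > 0.01 && rms > prevRms * jumpFactor)
                            return windowIndex * 0.1;
                        prevRms = rms;
                        windowIndex++;
                    }
                }
                return -1;
            }
        }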


  • I write bad wave files using Java

    - by Cliff
    I'm writing out wave files in Java using:

        AudioInputStream output = new AudioInputStream(
                new ByteArrayInputStream(rawPCMSamples),
                new AudioFormat(22000, 16, 1, true, false),
                rawPCMSamples.length);
        AudioSystem.write(output, AudioFileFormat.Type.WAVE, new FileOutputStream("somefile.wav"));

    and I get what appear to be corrupt wave files on OS X: they won't play from Finder. However, the same code behind a servlet, writing directly to the response stream with the Content-Type set to audio/wave, seems to play fine in QuickTime. What gives?
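
    A hedged guess at the cause: AudioInputStream's third constructor argument is the stream length in sample frames, not bytes. With 16-bit mono PCM a frame is 2 bytes, so passing rawPCMSamples.length makes the RIFF header claim twice the data actually present, which could explain files that strict players reject while more forgiving ones play. A sketch of the corrected write (a File target also lets AudioSystem patch the header afterwards):

        import javax.sound.sampled.*;
        import java.io.*;

        public class WaveWriter {
            public static void write(byte[] rawPCMSamples, File out) throws IOException {
                AudioFormat fmt = new AudioFormat(22000, 16, 1, true, false);
                long frames = rawPCMSamples.length / fmt.getFrameSize(); // bytes -> frames
                try (AudioInputStream ais = new AudioInputStream(
                        new ByteArrayInputStream(rawPCMSamples), fmt, frames)) {
                    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
                }
            }
        }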


  • HOW-TO Make computer sing

    - by Ofir
    Hi, I'm trying to develop an online application where the user writes some text and the software sings it back. I can currently generate an audio file of the words spoken by the computer using espeak, but I have no idea how to make it sound like a song or how to add rhythm to it. I'm able to change the pitch and tempo using rubberband, but that's as far as I've gotten. Does anyone have a clue how to make this happen?
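
    One low-tech approach with the tools already mentioned: segment the espeak output into syllables, then use rubberband to stretch each slice to its target note's duration and shift it to the note's pitch, and concatenate the results. A hedged sketch using sox for the slicing and joining; the file names, timings and intervals are made up for illustration:

        # cut one syllable out of the spoken audio (first 0.4 s, made-up timing)
        sox spoken.wav syl1.wav trim 0.00 0.40
        # stretch it to 1.5x its length and raise it 4 semitones
        rubberband -t 1.5 -p 4 syl1.wav syl1_sung.wav
        # ...repeat per syllable with each note's duration and pitch, then join:
        sox syl1_sung.wav syl2_sung.wav syl3_sung.wav sung.wav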


  • calculating time duration of a file

    - by RV
    Dupe of: calculate playing time of a .mp3 file. I'm reading an audio file (e.g. WAV, MP3) and get a long value as its duration. Now I want to convert that long value into a properly formatted duration string (like 00:05:32).
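
    Assuming the long value is a duration in milliseconds (decoders differ; some report microseconds, in which case divide by 1000 first), the conversion is plain integer arithmetic. A sketch in Java:

        long totalSeconds = durationMillis / 1000;     // drop the sub-second part
        String hhmmss = String.format("%02d:%02d:%02d",
                totalSeconds / 3600,                   // hours
                (totalSeconds % 3600) / 60,            // minutes
                totalSeconds % 60);                    // seconds -> e.g. "00:05:32"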


  • How to programmatically generate an MP3 podcast file with chapters and text track?

    - by adib
    Hi, does anybody know how to programmatically generate MP3 files with bookmarks that can be used in iTunes / iPod / iPhone / iPod touch? Specifically, text bookmarks (bookmarks with titles) that let the listener skip to a specific point in time in the audio file. Also, how do I add the text transcription of the podcast's content? Even better if you have example Cocoa code or a library that writes the MP3 file. Thanks.
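
    For MP3, chapter bookmarks of this kind are normally carried in ID3v2 CHAP frames plus a CTOC (table of contents) frame, each chapter holding start/end times and a TIT2 sub-frame for its title; a transcription can go in a USLT (lyrics/text) frame. Not Cocoa, but as an illustration of the structure the format expects, here is a sketch using Python's mutagen, assuming its CHAP/CTOC support; times are in milliseconds:

        from mutagen.id3 import ID3, CHAP, CTOC, CTOCFlags, TIT2

        tags = ID3("podcast.mp3")
        # one chapter from 0:00 to 1:30, titled via a TIT2 sub-frame
        tags.add(CHAP(element_id="chp1", start_time=0, end_time=90000,
                      sub_frames=[TIT2(text=["Introduction"])]))
        # the table of contents players read to discover the chapters
        tags.add(CTOC(element_id="toc", flags=CTOCFlags.TOP_LEVEL | CTOCFlags.ORDERED,
                      child_element_ids=["chp1"],
                      sub_frames=[TIT2(text=["Chapters"])]))
        tags.save()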


  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi code gurus, I have a problem with the function AudioConverterConvertBuffer. Basically I want to convert from this format:

        _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        _streamFormat.mBitsPerChannel = 16;
        _streamFormat.mChannelsPerFrame = 2;
        _streamFormat.mBytesPerPacket = 4;
        _streamFormat.mBytesPerFrame = 4;
        _streamFormat.mFramesPerPacket = 1;
        _streamFormat.mSampleRate = 44100;
        _streamFormat.mReserved = 0;

    to this format:

        _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
        _streamFormatOutput.mBitsPerChannel = 16;
        _streamFormatOutput.mChannelsPerFrame = 1;
        _streamFormatOutput.mBytesPerPacket = 2;
        _streamFormatOutput.mBytesPerFrame = 2;
        _streamFormatOutput.mFramesPerPacket = 1;
        _streamFormatOutput.mSampleRate = 44100;
        _streamFormatOutput.mReserved = 0;

    What I want to do is extract one audio channel (the left or the right) from an LPCM buffer in the input format, making it mono in the output format. Some of the conversion logic follows. This sets the channel map for the PCM output file:

        SInt32 channelMap[1] = {0};
        status = AudioConverterSetProperty(converter, kAudioConverterChannelMap,
                                           sizeof(channelMap), channelMap);

    and this converts the buffer in a while loop:

        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL,
                                                                0, &blockBuffer);
        for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
            AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
            //frames = audioBuffer.mData;
            NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
            NSLog(@"the buffer size is %d", audioBuffer.mDataByteSize);
            numBytesIO = audioBuffer.mDataByteSize;
            convertedBuf = malloc(sizeof(char) * numBytesIO);
            status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize,
                                                 audioBuffer.mData, &numBytesIO, convertedBuf);
            NSLog(@"status audio converter convert %d", status);
            if (status != 0) {
                NSLog(@"Fail conversion");
                assert(0);
            }
            NSLog(@"Bytes converted %d", numBytesIO);
            status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
            NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
            free(convertedBuf);
            if (numBytesIO != audioBuffer.mDataByteSize) {
                NSLog(@"Something wrong in writing");
                assert(0);
            }
            countByteBuf = countByteBuf + numBytesIO;
        }

    But the insz error is always there, so it can't convert. I would appreciate any input. Thanks in advance.
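
    For what it's worth, 'insz' appears to be kAudioConverterErr_InvalidInputSize: AudioConverterConvertBuffer wants the input byte count to be a whole number of input packets, and the output buffer sized for the packets that come out. For 16-bit stereo in and 16-bit mono out, the output is half the input, not equal to it as the loop above assumes (note its own numBytesIO != mDataByteSize assert would then fire). A hedged sketch of the size bookkeeping, reusing the names from the question:

        UInt32 inBytes = audioBuffer.mDataByteSize;
        inBytes -= inBytes % _streamFormat.mBytesPerPacket;        // whole input packets only
        UInt32 outBytes = (inBytes / _streamFormat.mBytesPerPacket)
                          * _streamFormatOutput.mBytesPerPacket;   // one mono frame per stereo frame
        void *outBuf = malloc(outBytes);
        OSStatus st = AudioConverterConvertBuffer(converter, inBytes, audioBuffer.mData,
                                                  &outBytes, outBuf);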


  • Simple wave generator with SDL in C++

    - by Vlad Popescu
    I am having problems understanding how the audio part of the SDL library works. I know that when you initialize it, you have to specify the sample frequency and a callback function which, as I understand it, is then called automatically whenever the device needs more samples. Can anyone who has worked with the SDL library write a simple example that uses SDL audio to generate a 440 Hz square wave (since it is the simplest waveform) at a sampling frequency of 44000 Hz? Thanks in advance.
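
    A minimal sketch using SDL2 (SDL 1.2 is almost identical apart from initialization); the callback fills whatever buffer the device asks for, flipping the output sign every half period to make the square wave:

        #include <SDL.h>

        static int phase = 0;                        // sample position within one wave period

        static void fillAudio(void*, Uint8* stream, int len) {
            const int period = 44000 / 440;          // samples per 440 Hz cycle at 44 kHz
            Sint16* out = reinterpret_cast<Sint16*>(stream);
            for (int i = 0; i < len / 2; ++i) {      // len is bytes; samples are 2 bytes
                out[i] = (phase < period / 2) ? 8000 : -8000;  // high half, low half
                phase = (phase + 1) % period;
            }
        }

        int main(int argc, char* argv[]) {
            SDL_Init(SDL_INIT_AUDIO);
            SDL_AudioSpec want{};                    // zero-initialize
            want.freq = 44000;
            want.format = AUDIO_S16SYS;              // signed 16-bit, native byte order
            want.channels = 1;
            want.samples = 1024;                     // buffer size in sample frames
            want.callback = fillAudio;
            SDL_OpenAudio(&want, nullptr);           // nullptr: SDL converts if needed
            SDL_PauseAudio(0);                       // start playback
            SDL_Delay(2000);                         // play for two seconds
            SDL_CloseAudio();
            SDL_Quit();
            return 0;
        }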


  • Best way to play wav files in the browser?

    - by Splatzone
    I have no choice but to play WAV files directly in the browser (server-side encoding to MP3 isn't an option, unfortunately). What's the best way to do this? I'd really like to take advantage of the HTML5 audio tag, but my target audience includes many, many teens using IE6. As far as I'm aware Flash isn't an option, but speedy playback really is critical. Thanks.


  • playing two streams on android

    - by Yanush
    Hello, I'm looking for a way (or at least to be pointed in the right direction) to play two streams of audio on Android simultaneously, but each on a different channel (e.g. one through the speaker and one through the headphones). I'm not even sure it's possible hardware-wise. Any thoughts or clues? Y


  • how to get the css keys and values for any html tag

    - by artsince
    I would like to dump all CSS key/value pairs for an HTML tag. In particular, I would like to learn the CSS properties of the <audio> tag so I can try to customize its look. document.getElementById('myaudio').style returns a CSSStyleDeclaration object, but its length is 0 and I cannot figure out how to iterate over the key/value pairs. Thank you.
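
    The element's .style object only reflects inline styles, which is why its length is 0; the resolved values live in window.getComputedStyle. A short sketch:

        var el = document.getElementById('myaudio');
        var computed = window.getComputedStyle(el);    // computed, not just inline, values
        for (var i = 0; i < computed.length; i++) {    // CSSStyleDeclaration is array-like
            var key = computed.item(i);                // e.g. "background-color"
            console.log(key + ': ' + computed.getPropertyValue(key));
        }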


  • Mixing two wav music files of different sizes

    - by iphoneDev
    Hi, I want to mix audio files of different sizes into one single .wav file. There is a sample showing how to mix files of the same size (http://www.modejong.com/iOS/#ex4, Example 4). I modified that code to produce the mixed output as a .wav file, but I cannot work out how to modify it for files of unequal size. If someone can help me out with a code snippet, I'll be really thankful.
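
    Independent of that sample, the usual approach is to mix sample-by-sample over the length of the longer file, treating the shorter one as silence once it runs out, and to clamp the sum so it can't wrap around. A sketch in C over 16-bit PCM buffers (file I/O and WAV headers left out):

        #include <stddef.h>
        #include <stdint.h>

        /* Mix two 16-bit PCM buffers of unequal length into out,
           which must hold max(lenA, lenB) samples. */
        void mix_pcm16(const int16_t *a, size_t lenA,
                       const int16_t *b, size_t lenB,
                       int16_t *out)
        {
            size_t len = lenA > lenB ? lenA : lenB;
            for (size_t i = 0; i < len; i++) {
                int32_t sum = (i < lenA ? a[i] : 0)   /* shorter file contributes  */
                            + (i < lenB ? b[i] : 0);  /* silence past its own end  */
                if (sum > 32767) sum = 32767;         /* clamp instead of wrapping */
                if (sum < -32768) sum = -32768;
                out[i] = (int16_t)sum;
            }
        }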


  • MP3 and OGG tags in PHP

    - by Quamis
    Apart from http://us3.php.net/manual/en/book.ktaglib.php and http://getid3.sourceforge.net/, does anyone know of another way to work with tags on audio files from PHP? I need to read and write them; KTagLib seems like too much for the job (and I don't really get its documentation), and getID3 seems to only write ID3v1 tags.


  • How do I tell if the master volume is muted?

    - by John_Sheares
    I am using the following to mute/unmute the master audio on my computer. Now I am looking for a way to determine the mute state. Is there a just-as-easy way to do this in C#?

        private const int APPCOMMAND_VOLUME_MUTE = 0x80000;
        private const int WM_APPCOMMAND = 0x319;

        [DllImport("user32.dll")]
        public static extern IntPtr SendMessageW(IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam);


  • Play an AudioBufferSourceNode twice?

    - by alltom
    Should I be able to use the same AudioBufferSourceNode to play a sound multiple times? For some reason, calling noteGrainOn a second time doesn't play audio, even with an intervening noteOff. This code only plays the sound once:

        var node = audioContext.createBufferSource();
        node.buffer = audioBuffer;
        node.connect(audioContext.destination);

        var now = audioContext.currentTime;
        node.noteGrainOn(now, 0, 2);
        node.noteOff(now + 2);
        node.noteGrainOn(now + 3, 0, 2);
        node.noteOff(now + 5);
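
    As far as the spec goes, an AudioBufferSourceNode is one-shot: once it has been stopped it cannot be restarted, so the second noteGrainOn is ignored. The decoded AudioBuffer is what's reusable; create a fresh source node per playback. A sketch (start() is the current name for noteGrainOn in newer implementations):

        function playClip(ctx, buffer, when, offset, duration) {
            var node = ctx.createBufferSource();       // cheap to create per playback
            node.buffer = buffer;                      // buffers can be shared between nodes
            node.connect(ctx.destination);
            node.noteGrainOn(when, offset, duration);  // start(when, offset, duration) in newer builds
            return node;
        }

        var now = audioContext.currentTime;
        playClip(audioContext, audioBuffer, now, 0, 2);
        playClip(audioContext, audioBuffer, now + 3, 0, 2);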


  • How to save an audio-recorded file to another location? I'm trying, but I get an exception

    - by rakesh-bhatt99
        NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent:(NSString *)inRecordFile];
        NSArray *docPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *docDir = [[docPaths objectAtIndex:0] stringByAppendingPathComponent:(NSString *)inRecordFile];
        url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)docPaths, NULL);

        // create the audio file
        XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat,
                                             kAudioFileFlags_EraseFile, &mRecordFile),
                      "AudioFileCreateWithURL failed");
        CFRelease(url);
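
    Two things in that snippet look like likely culprits: the URL is built from docPaths (the whole array) rather than the docDir string, and CFURLCreateWithString expects a URL string ("file://..."), not a filesystem path. A hedged sketch of the usual pattern:

        CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                     (CFStringRef)docDir,   // the path string, not the array
                                                     kCFURLPOSIXPathStyle,
                                                     false);                // not a directory
        XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat,
                                             kAudioFileFlags_EraseFile, &mRecordFile),
                      "AudioFileCreateWithURL failed");
        CFRelease(url);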


  • Access to iTunes Sound Check Results on iPhone

    - by Baldoph
    I would like to propose to the user some songs whose volume doesn't exceed a certain level. Is there any way to access the results of iTunes' 'Sound Check' option from the iPhone? If not, do you know if I can calculate the equivalent with the audio tools in the iPhone SDK? Thanks a lot.


  • Does mobile Opera have background sound support?

    - by Mark
    I make browser/HTML/JS games. One of my biggest pains is the lack of background sound support in Mobile Safari; this lack makes high-value games pretty much impossible. Does anyone know if Opera Mini supports HTML5 audio, or any other mobile browser for that matter? If not, what are some alternative methods?


  • AVFoundation: Video to OpenGL texture working - How to play and sync audio?

    - by j00hi
    I've managed to load the video track of a movie frame by frame into an OpenGL texture with AVFoundation. I followed the steps described in the answer here: iOS4: how do I use video file as an OpenGL texture? and took some code from the GLVideoFrame sample from WWDC2010, which can be downloaded here: http://bit.ly/cEf0rM. How do I play the audio track of the movie synchronously with the video? I think it would not be a good idea to play it in a separate player, but to use the audio track of the same AVAsset:

        AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    I retrieve a video frame and its timestamp in the CADisplayLink callback via:

        CMSampleBufferRef sampleBuffer = [self.readerOutput copyNextSampleBuffer];
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    where readerOutput is of type AVAssetReaderTrackOutput*. How do I get the corresponding audio samples, and how do I play them?

    Edit: I've looked around a bit and I think the best approach would be to use an AudioQueue from AudioToolbox.framework, following the approach described here: AVAssetReader and Audio Queue streaming problem. There is also an audio player in AVFoundation, AVAudioPlayer, but I don't know exactly how I should pass data to its initWithData: initializer, which expects NSData. Furthermore, I don't think it's the best choice for my case because, as I understand it, a new AVAudioPlayer instance would have to be created for every new chunk of audio samples. Any other suggestions? What's the best way to play the raw audio samples I get from the AVAssetReaderTrackOutput?


  • issue getting dynamic Config parameter in Grails taglib

    - by Mick Knutson
    I have a dynamic config parameter I want to read:

        String srcProperty = "${attrs['src']}.audio" + ((attrs['locale']) ? "_${attrs['locale']}" : '')
        assert srcProperty == "prompt.welcomeMessageOverrideGreeting.audio"

    where my config has:

        prompt {
            welcomeMessageOverrideGreeting {
                audio = "/en/someFileName.wav"
                txt = "Text alternative for /en/someFileName.wav"
                audio_es = "/es/promptFileName.wav"
                txt_es = "Texto alternativo para /es/someFileName.wav"
            }
        }

    While this works fine:

        String audio = "${config.prompt.welcomeMessageOverrideGreeting.audio}"
        assert "${config.prompt.welcomeMessageOverrideGreeting.audio}" == "/en/someFileName.wav"

    I cannot get this to work:

        String audio = config.getProperty("prompt.welcomeMessageOverrideGreeting.audio")
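
    A likely explanation: a ConfigObject is a nested map, so getProperty only resolves a single level and a dotted string isn't traversed. Groovy's ConfigObject.flatten() collapses the nesting into a map keyed by full dotted paths, which suits the dynamic lookup here; a sketch:

        // flatten() yields ["prompt.welcomeMessageOverrideGreeting.audio": "/en/someFileName.wav", ...]
        String audio = config.flatten()[srcProperty]
        assert audio == "/en/someFileName.wav"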


  • The fastest way to encode image+audio for Youtube from command line?

    - by Pavel Vlasov
    I have an MP3 and an image, and I want to make a simple clip to upload to Youtube. Is there a fast solution? If video formats are badly suited to this, maybe it is possible to use a prerendered video-only clip? This works well, except it takes as much time as the audio lasts:

        ffmpeg -loop_input -r ntsc -i "%IMAGE%" -i "%AUDIO%" -r 1 -acodec copy -shortest -re -force_fps "%VIDEO%"

    This takes a second, but results in a black-screen video that a desktop video player plays fine and Youtube does not accept:

        ffmpeg -i "%IMAGE%" -i "%AUDIO%" -acodec copy "%VIDEO%"

    Windows 7. Preserving audio quality is preferred over video quality.
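
    For newer ffmpeg builds, where -loop 1 replaced -loop_input, the usual still-image-plus-audio recipe looks like the line below: -shortest stops encoding at the end of the audio, the low frame rate keeps the encode fast, and -pix_fmt yuv420p keeps players happy. This assumes a build with libx264; a sketch rather than a tested command:

        ffmpeg -loop 1 -r 2 -i "%IMAGE%" -i "%AUDIO%" -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a copy -shortest "%VIDEO%"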


  • How do I merge MP4 files without audio going out of sync?

    - by djangofan
    Is there a tool I can use to merge MP4 files without throwing the audio out of sync? I generated some MP4 files from a DVD using AVIDemux, but whatever tool I try always ends up throwing the audio out of sync with the video, and the further you get into the video, the further off-sync the audio is. By themselves, the MP4/AAC videos have perfect audio-video sync. Later tonight I might try http://www.headbands.com/gspot/ to examine the file before and after, to see if anything changed in the media format.


  • How do I add another audio stream to an MP4 file?

    - by RandomEngy
    I've got an MP4 video file and I want to add another AAC audio track to it. I've tried YAMB and MeGUI (frontends for MP4Box); the result plays correctly in Zoom Player, but WMP picks the wrong track and QuickTime plays both at once. I think this might have to do with designating the default audio track somehow. Does anyone know how to specify the default audio track with YAMB/MeGUI, or another way of adding a track to an MP4 file?
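
    With MP4Box itself (the tool behind YAMB and MeGUI), per-track import options can mark the new track disabled so players fall back to the original (first line below); the second form flips an already-imported track off by its track number. Both switches assume a reasonably recent MP4Box build:

        MP4Box -add "extra_audio.aac:disable" video.mp4
        MP4Box -disable 3 video.mp4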

