Search Results

Search found 5220 results on 209 pages for 'eric audio'.

  • What's the difference between Pygame's Sound and Music classes?

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situation would one use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there specifically issues with channel limitations, and do one or both have the potential to be dropped from their channel unreliably? What are the risks of playing music as a Sound?
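
    For reference, a minimal sketch of how the two APIs are used (file names are placeholders), reflecting the documented split: pygame.mixer.Sound decodes a whole file into memory and plays it on a mixer channel, while pygame.mixer.music streams one long track from disk.

        import pygame

        pygame.mixer.init()

        # Sound: fully decoded into memory, played on a mixer channel.
        # Good for short effects; several can play at once, channel count permitting.
        effect = pygame.mixer.Sound("jump.wav")
        channel = effect.play()            # returns the Channel it was assigned to

        # music: streamed from disk, only one track at a time.
        # Good for long background music; uses little memory.
        pygame.mixer.music.load("theme.ogg")
        pygame.mixer.music.play(loops=-1)  # -1 loops indefinitely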

    Read the article

  • Routing audio from GSM module to a Bluetooth HandsFree device

    - by Shaihi
    I have a system with the following setup:

    - Windows CE 6 R3
    - Microsoft's Bluetooth stack, including all profiles
    - Motorola H500 headset
    - The Audio Gateway service is up and running (checked through the services list in cmd)
    - The GSM module is functional - I am able to place outgoing calls and answer calls
    - Bluetooth is functional - the A2DP profile plays music to Motorola headphones (can't remember the model right now)

    I want to hold a conversation using a headset device. I have included all Bluetooth components in the catalog and I pair the device using the Control Panel applet. When I press the button on the Motorola device to answer a call, the Audio Gateway prints:

        BTAGSVC: ConnectionEvent.
        BTAGSVC: SCOListenThread_Int - Connection Event.
        BTAGSVC: ConnectionEvent.
        BTAGSVC: SCOListenThread_Int - Connection Event.
        BTAGSVC: ConnectionEvent.
        BTAGSVC: A Bluetooth peer device has connected to the Audio Gateway.
        BTAGSVC: Could not open registry key for BT Addr: 2.
        BTAGSVC: The peer device was not accepted since the user has never confirmed it as a device to be used.

    So my questions are as follows: What do I need to do to pair the device with the Audio Gateway? Once my device is paired, do I need to set anything else up (except for the GSM module, of course)?

    Read the article

  • Method for launching audio player on Android from web page for streaming media

    - by Brad
    To link to SHOUTcast/HTTP internet radio streams, traditionally you would link to a playlist file, such as an M3U or PLS. From there, the browser would launch the audio player registered to handle the playlist. This works great on any PC, Palm, Blackberry, and iPhone. This method does not work on Android without installing extra software. Sure, Just Playlists or StreamFurious can handle it just fine, but I am assuming there has to be a way to invoke the audio or video player commonly installed by default on Android. By default, no audio player is capable of handling M3U or PLS: the player seems to open it, but says "Unsupported Media Type". To make this more annoying, the browser is capable of streaming MP3 audio over HTTP simply by opening a link to an MP3 file. I have tried linking directly to the MP3 stream hosted by SHOUTcast, which should yield the same result, but SHOUTcast detects "Mozilla" in the user-agent string, and instead of sending the stream, it sends the information page for the station. How should I link to a SHOUTcast stream on Android, from a normal mobile site, without using extra applications?
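
    The user-agent behaviour described above is easy to check from a script. A minimal Python sketch (the stream URL is a placeholder) that requests a SHOUTcast mount with a non-browser User-Agent and confirms that audio data, not the HTML status page, comes back:

        import urllib.request

        STREAM_URL = "http://example-shoutcast-host:8000/"  # hypothetical station URL

        # SHOUTcast returns its HTML status page when "Mozilla" appears in the
        # User-Agent; other user agents get the raw MP3 stream.
        req = urllib.request.Request(STREAM_URL, headers={"User-Agent": "StreamCheck/1.0"})
        with urllib.request.urlopen(req) as resp:
            print(resp.headers.get("Content-Type"))  # expect audio/mpeg for the stream
            chunk = resp.read(4096)                  # first few KB of MP3 frames
            print(len(chunk), "bytes of audio received")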

    Read the article

  • Playing video and audio in iPhone not working...

    - by Scott
    So we have buttons linked up to display images/videos/audio on click, depending on a check we do earlier. That part works fine: it knows which one to play. However, when we click the buttons for video and audio, nothing happens. The image one works fine. The video and audio are taken from a URL online, they are not local, but everywhere said this was still possible. Here is a little snippet of the code where we play the two files:

        if ([fName hasSuffix:@".png"]) {
            NSLog(@"PICTURE");
            NSURL *url = [NSURL URLWithString:fName];
            UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfURL:url]];
            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            // self.view.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"MainBG.jpg"]];
            [self.view addSubview:[[UIImageView alloc] initWithImage:image]];
        }
        if ([fName hasSuffix:@".mp4"]) {
            NSLog(@"VIDEO");
            //NSString *path = [[NSBundle mainBundle] pathForResource:fName ofType:@"mp4"];
            //NSLog(path);
            NSURL *url = [NSURL fileURLWithPath:fName];
            MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
            [player play];
        }
        if ([fName hasSuffix:@".mp3"]) {
            NSLog(@"AUDIO");
            NSURL *url = [NSURL fileURLWithPath:fName];
            NSData *soundData = [NSData dataWithContentsOfURL:url];
            AVAudioPlayer *avPlayer = [[AVAudioPlayer alloc] initWithData:soundData error:nil];
            [avPlayer play];
        }

    See anything wrong? By the way, it compiles and runs, but nothing happens when we hit the button that executes that code.

    Read the article

  • Visualizing volume of PCM samples

    - by genevincent
    I have several chunks of PCM audio (G.711) in my C++ application. I would like to visualize the audio volume of each of these chunks. My first attempt was to calculate the average of the sample values for each chunk and use that as a volume indicator, but this doesn't work well. I do get 0 for chunks with silence and differing values for chunks with audio, but the values only differ slightly and don't seem to reflect the actual volume. What would be a better algorithm to calculate the volume? I hear G.711 audio is logarithmic PCM. How should I take that into account?
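
    Averaging raw sample values tends toward zero because positive and negative excursions cancel, and G.711 samples are companded (µ-law or A-law), so their numeric values are not proportional to amplitude. The usual approach is to expand each chunk to linear PCM first and then take the RMS, often converted to decibels. The question is about C++, but the arithmetic is the same in any language; a small Python sketch using the standard library's audioop module (deprecated in recent Python versions, but convenient for illustration):

        import audioop
        import math

        def chunk_volume_db(g711_bytes, a_law=False):
            """RMS level of one G.711 chunk, in dBFS (0 dB = full scale)."""
            # Expand companded 8-bit G.711 to 16-bit linear PCM.
            if a_law:
                linear = audioop.alaw2lin(g711_bytes, 2)
            else:
                linear = audioop.ulaw2lin(g711_bytes, 2)
            rms = audioop.rms(linear, 2)          # root mean square of the 16-bit samples
            if rms == 0:
                return float("-inf")              # digital silence
            return 20.0 * math.log10(rms / 32768.0)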

    Read the article

  • General question about DirectShow.NET, DirectShow and Windows Media Format

    - by Paul Andrews
    I searched and googled for an answer but couldn't find one. Basically I'm developing a webcam/audio streaming application which should capture audio and video from a PC (USB webcam/microphone) and send them to a receiving server. What the server will do with that is another story, and phase two (which I'm skipping for now). I wrote some code using DirectShow and Windows Media Format and it worked great for capturing audio/video and sending them to another client, but there's a major problem: latency. Everywhere on the internet everyone gave me the same answer: "sorry dude, but Media Format isn't for video conferencing, its codecs have too high latency". I thought I could get around the .wmv problems, but it seems that's not possible... this road ends here then. So I saw a few examples with DirectShow.NET which were faster for both audio and video. My question is: how come DirectShow.NET is faster and better for video/audio conferencing? Shouldn't it just be a .NET port of C++'s DirectShow? Am I missing something? I'm a bit confused at this point.

    Read the article

  • AudioFileWriteBytes fails with error code -40

    - by alexbw
    I'm trying to write raw audio bytes to a file using AudioFileWriteBytes(). Here's what I'm doing:

        void writeSingleChannelRingBufferDataToFileAsSInt16(AudioFileID audioFileID, AudioConverterRef audioConverter, ringBuffer *rb, SInt16 *holdingBuffer) {
            // First, figure out which bits of audio we'll be
            // writing to file from the ring buffer
            UInt32 lastFreshSample = rb->lastWrittenIndex;
            OSStatus status;
            int numSamplesToWrite;
            UInt32 numBytesToWrite;

            if (lastFreshSample < rb->lastReadIndex) {
                numSamplesToWrite = kNumPointsInWave + lastFreshSample - rb->lastReadIndex - 1;
            } else {
                numSamplesToWrite = lastFreshSample - rb->lastReadIndex;
            }
            numBytesToWrite = numSamplesToWrite * sizeof(SInt16);

    Then we copy the audio data (stored as floats) to a holding buffer (SInt16) that will be written directly to the file. The copying looks funky because it's from a ring buffer:

            UInt32 buffLen = rb->sizeOfBuffer - 1;
            for (int i = 0; i < numSamplesToWrite; ++i) {
                holdingBuffer[i] = rb->data[(i + rb->lastReadIndex) & buffLen];
            }

    Okay, now we actually try to write the audio from the SInt16 buffer "holdingBuffer" to the audio file. The NSLog will spit out an error -40, but also claims that it's writing bytes. No data is written to the file.

            status = AudioFileWriteBytes(audioFileID, NO, 0, &numBytesToWrite, &holdingBuffer);
            rb->lastReadIndex = lastFreshSample;
            NSLog(@"Error = %d, wrote %d bytes", status, numBytesToWrite);
            return;
        }
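
    The ring-buffer bookkeeping above (a read index chasing a write index, with a power-of-two mask handling wraparound) is independent of Core Audio. For comparison, a small Python sketch of the same pattern, draining the buffer and appending the 16-bit samples to a WAV file with the standard wave module (the names and sizes are illustrative, not the poster's):

        import wave

        BUF_SIZE = 1 << 14                # power of two so (index & (BUF_SIZE - 1)) wraps
        ring = [0] * BUF_SIZE             # 16-bit samples
        last_written = 12000              # example write index
        last_read = 11000                 # example read index

        # Drain everything between the read and write indices, handling wraparound.
        if last_written < last_read:
            n = BUF_SIZE + last_written - last_read
        else:
            n = last_written - last_read
        samples = [ring[(last_read + i) & (BUF_SIZE - 1)] for i in range(n)]

        # Write the drained samples as a mono 16-bit, 44.1 kHz WAV file.
        with wave.open("out.wav", "wb") as wf:
            wf.setnchannels(1)
            wf.setsampwidth(2)
            wf.setframerate(44100)
            wf.writeframes(b"".join(s.to_bytes(2, "little", signed=True) for s in samples))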

    Read the article

  • 2 AudioQueue questions

    - by iter
    I am learning to use AudioQueue. I wish to generate an audio stream programmatically. I have 2 issues that I cannot account for:

    1. I am getting audio when I run in the simulator, but not on an iPhone (other apps do produce sound on the phone).
    2. I get about 20 ms-long gaps of silence between buffers.

    In my testing, I generate an audio buffer on startup and repeatedly enqueue it without modification. I don't spend any processing time on filling audio buffers at runtime, not even copying them. Ari.

    Read the article

  • Extract wav file from video file

    - by Nikos Steiakakis
    I am developing an application in which I need to extract the audio from a video. The audio needs to be extracted in .wav format, but I do not have a problem with the video format; any format will do, as long as I can extract the audio to a wav file. Currently I am using the Windows Media Player COM control in a Windows Forms app to play the videos, but any other embedded player will do as well. Any suggestions on how to do this? Thanks
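
    One common approach, whatever the host language, is to shell out to a command-line tool such as ffmpeg, which can decode the audio track of most container formats straight to a WAV file. A Python sketch of that idea (it assumes ffmpeg is installed and on the PATH; from C# the equivalent would be launching the same command with Process.Start):

        import subprocess

        def extract_wav(video_path, wav_path):
            """Decode the audio track of video_path into a 16-bit PCM WAV file."""
            subprocess.run(
                [
                    "ffmpeg",
                    "-i", video_path,        # input video (any container ffmpeg understands)
                    "-vn",                   # drop the video stream
                    "-acodec", "pcm_s16le",  # 16-bit little-endian PCM
                    "-ar", "44100",          # sample rate
                    "-ac", "2",              # stereo
                    wav_path,
                ],
                check=True,
            )

        extract_wav("input.mp4", "audio.wav")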

    Read the article

  • Dynamically calculate frequency value.

    - by MS Nathan
    Hi. In my app, I want to find/calculate the audio frequency dynamically while I am recording audio, without needing to save or play it back first. Right now I am trying to do that with the help of the aurioTouch sample code. In that sample, inside the FFTBufferManager class methods such as GrabAudioData and ComputeFFT, I am not able to find where the frequency value is calculated dynamically from the incoming audio, and I have spent more than 5 days on this. Please help me.
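
    As far as the sample goes, aurioTouch's FFTBufferManager computes FFT magnitudes for display and does not appear to reduce them to a single frequency value, which is why that calculation is hard to find. The usual way to get a dominant-frequency estimate from a block of samples is to take the FFT, find the bin with the largest magnitude, and convert that bin index to Hz using the sample rate. A Python/NumPy sketch of that step (a real-time app would run the same arithmetic on each audio callback's buffer):

        import numpy as np

        def dominant_frequency(samples, sample_rate):
            """Estimate the strongest frequency (Hz) in one block of mono samples."""
            windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
            spectrum = np.abs(np.fft.rfft(windowed))        # magnitude spectrum
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
            return freqs[np.argmax(spectrum)]

        # Example: a 440 Hz tone sampled at 44.1 kHz.
        sr = 44100
        t = np.arange(2048) / sr
        block = np.sin(2 * np.pi * 440.0 * t)
        print(dominant_frequency(block, sr))   # ~440, quantized to the bin width (~21.5 Hz here)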

    Read the article

  • How do you control the playback levels (decibels?) using the iPhone AVAudioPlayer? Or do I need to u

    - by Joshua
    My audio clips sound perfect when I upload them to the iPhone via iTunes, and I am pretty sure it is because the iPod has a maximum playback level, so the audio doesn't sound overdriven. In my app, I include the same audio files, and when I play them with [myAudio play]; the levels are so high that the audio becomes indiscernible. I found in the AVAudioPlayer class reference http://developer.apple.com/iphone/library/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/Reference/Reference.html#//apple_ref/doc/uid/TP40008067-CH1-SW2 that you can "Control relative playback level for each sound you are playing", but I've been researching this issue for hours and I haven't gotten anywhere. Any help would be wonderful!

    Read the article

  • Sound sample recognition library/code

    - by Daniel Mošmondor
    I don't want sound-to-text software. What I need is the following:

    - I'll record multiple (say 50+) audio streams (recordings of radio stations)
    - From those recordings, I'll mark interesting audio clips - their length ranges from 2 to 60 seconds - and there will be a few thousand such clips
    - The library should be able to find other instances of the same audio clips in the recorded sound streams
    - A confidence factor should be reported to the user, and additional input provided so the recognition can perform better next time

    Do you know of such a software library? LGPL would be most valuable to me, but I can go for a commercial license as well.
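
    Libraries aside, the core matching step described above - locating occurrences of a known clip inside a longer recording - can be prototyped with plain cross-correlation. A NumPy sketch of that idea (a toy illustration only: it assumes roughly zero-mean mono float arrays, and production systems use robust fingerprints such as spectral landmarks rather than raw-sample correlation):

        import numpy as np

        def find_clip(stream, clip, threshold=0.8):
            """Return sample offsets where `clip` likely occurs in `stream`."""
            clip = (clip - clip.mean()) / (clip.std() + 1e-12)
            # Slide the clip over the stream; a peak in the correlation means a likely match.
            corr = np.correlate(stream, clip, mode="valid") / len(clip)
            # Approximate normalization by the stream's local energy under each window.
            window_sq = np.convolve(stream ** 2, np.ones(len(clip)), mode="valid")
            local_std = np.sqrt(window_sq / len(clip) + 1e-12)
            score = corr / local_std
            return np.flatnonzero(score > threshold)

    In practice, consecutive offsets above the threshold would be collapsed into one detection, and the score itself plays the role of the confidence factor asked for above.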

    Read the article

  • Video/audio streaming does not stop even if UIWebView is closed - iPad

    - by lostInTransit
    Hi. I see this issue only on the iPad; the same thing works as expected on the iPhone. I am opening the URL from my application in a UIWebView. If the URL is a normal web page, it works fine as expected. But if the URL is that of a remote video/audio file, the UIWebView opens the default player, which is again good. Now when I dismiss the UIWebView (by clicking the Done button on the player), the streaming doesn't stop and the audio/video keeps playing in the background (I cannot see it, but I can hear it still playing). The UIViewController in which the webview was created is also deallocated (I put a log statement in the dealloc method), but the streaming doesn't stop. Can someone please help me out on why this could be happening? And how can I stop the audio/video streaming when the UIWebView is closed? Thanks.

    Read the article

  • How to create an Audio CD using C# or Java

    - by Elie
    I'm looking for an API that would allow me to create an audio CD from within a C# application. The CDs are to be created and closed in the same session (no rewrite required). Basically, my application locates files on behalf of a user, and, if a blank CD is present in the drive, creates an audio CD for the user. If no CD is present, it checks to see if there's a USB drive attached and copies the files there (this part I already know how to do). I would prefer to write this application in either C# or Java, as I'm most comfortable with those, but I don't know how hard it would be to create CDs using either language. There are several other questions here that deal with regular CDs, but I didn't see any discussing audio CDs.

    Read the article

  • Recording Audio through RTMP/Rails

    - by Lowgain
    I am in the process of building a Rails/Flex application which requires audio to be recorded and then stored in our Amazon S3 account. I have found no alternative to using some form of RTMP server for recording audio through Flash, but our hosting environment will not allow us to install anything like FMS, Red5, etc. Is there any existing Ruby/Rails RTMP solution that will allow audio recording? If not, is it possible for Rails to at least intercept the RTMP stream, so I could then turn to Red5 or something similar for parsing the data (long shot, I know)? The other alternative I can think of is hosting a Red5 server on another host and communicating with our Rails app once the saving/uploading is done, which is not preferred. Am I going to have any luck here?

    Read the article

  • Quicktime Audio Extraction for Compressed Movies

    - by Noorul
    Hi all, I am trying to extract audio from a QuickTime movie. I followed the steps in http://developer.apple.com/quicktime/audioextraction.html and it works fine. But when I try to extract from any movie that is compressed (audio compressed as AAC), it gives a first-chance exception; in the call stack it shows CoreAudioToolbox.dll. If I continue, it renders the audio without any issues. On the Mac, this works without any issues. Is this really anything to worry about? I am a QT beginner. Please help me. My QT version is 7.6.7 (1675) and I am using Windows 7.

    Read the article

  • Audio input via HTML5?

    - by tibbon
    We have a VoIP application, imVOX, and we are looking at various ways of expanding our reach. Part of that is writing an HTML5 application, but it requires the use of audio input from the browser (and also push-to-talk buttons in the browser, even if another app is focused). On the audio side, is there any way with HTML5 to take audio input from the browser, compress it, and send it to our servers? I know this is possible with Flash, but we're trying to avoid Flash for mobile compatibility and are generally looking toward the future.

    Read the article

  • iPhone audio Filter Question

    - by Joe
    Okay, I am going to try to make this totally not a "plz send teh codez kthxbai" question. I am considering an app which takes sound (eventually an audio track) and applies an audio filter to it. I can play sounds with AudioServicesPlaySystemSound via the AudioToolbox framework just fine. What I need is a very simple example of how I might take a sound and apply, for instance, a midrange boost. Actually, the kind of alteration is irrelevant -- if I can get my head around how the alteration is done, I can figure out the rest. I am just finding both docs and examples of altering audio in code to be very scarce. Thanks for any help!
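
    The core of any such alteration is getting at the raw samples, transforming them, and playing or writing the result; on iOS that means Audio Units (or processing buffers yourself offline), but the arithmetic is easier to see in a small offline Python sketch. This one applies a crude midrange boost by scaling a band of FFT bins (a real filter would more likely be a time-domain biquad, but the principle - samples in, modified samples out - is the same):

        import numpy as np

        def boost_band(samples, sample_rate, low_hz=250.0, high_hz=4000.0, gain_db=6.0):
            """Offline midrange boost: scale the FFT bins between low_hz and high_hz."""
            spectrum = np.fft.rfft(samples)
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
            band = (freqs >= low_hz) & (freqs <= high_hz)
            spectrum[band] *= 10.0 ** (gain_db / 20.0)   # +6 dB is roughly x2 amplitude
            return np.fft.irfft(spectrum, n=len(samples))

        # Example: boost the mids of one second of quiet white noise at 44.1 kHz.
        sr = 44100
        noise = np.random.randn(sr) * 0.1
        louder_mids = boost_band(noise, sr)

    Processing a whole clip in one FFT only makes sense offline; a streaming implementation would apply the equivalent filter block by block.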

    Read the article

  • Legality of ripping DVD audio

    - by Smashery
    I want to buy several live music DVDs, but I also want to be able to put the music in my playlists. It seems ridiculous that I should also have to buy the Live CD version. Is it legal (particularly under Australian law) to rip the audio?

    Read the article

  • Install PyMedia and Python Audio Tools

    - by aaron
    I noticed a pattern of errors while trying to install PyMedia and Python Audio Tools. For both modules I run the following:

        $ python setup.py install

    Then I get a series of compilation errors, and then this:

        lipo: can't figure out the architecture type of: /var/folders/Kx/Kxxj4868HGi6VMhZLPyZN++++TI/-Tmp-//cch1y9AO.out
        error: command '/usr/bin/gcc-4.2' failed with exit status 1

    I'm running Mac OS X 10.5, and this happens whether I'm using gcc-4.0 or gcc-4.2, Mac-Python 2.5 or 2.6, or MacPorts Python 2.6. What's going on?

    Read the article

  • Dell Vostro 1000 audio and wireless NIC drivers

    - by Ssvarc
    I have a Dell Vostro 1000 laptop with a fresh XP Pro install. I'm trying to track down the correct audio and wireless NIC drivers. I've used this Dell support driver page, but the drivers it gives for the service tag and XP OS don't work. Anyone have links to the correct, working drivers?

    Read the article

  • Alternatives to MusicBrainz Picard - audio-fingerprinting tagging services [closed]

    - by Journeyman Geek
    Possible Duplicate: Auto-tagging MP3s

    I'm currently using MusicBrainz Picard to tag files that don't have any usable tagging information. For some reason, currently none of the files I am trying to ID seem to be identified, despite some of them being by artists who are almost certainly on MusicBrainz. So I'm looking for something that uses an alternate database and does the same thing - identify unknown music files from an audio fingerprint. Windows is preferred, but I would be fine with any of the big 3 OSes (Windows/Linux/OS X).

    Read the article
