Search Results

Search found 4165 results on 167 pages for 'pulse audio'.

Page 24 of 167

  • I just don't get AudioFileReadPackets

    - by Eric Christensen
    I've tried to write the smallest chunk of code that narrows down a problem. It's now just a few lines and it still doesn't work, which makes it pretty clear that I have a fundamental misunderstanding of how to use AudioFileReadPackets. I've read the docs and other examples online, and apparently I'm just not getting it. Could you explain it to me? Here's what this block should do: I've previously opened a file. I want to read just one packet - the first one of the file - and then print it. But it crashes on the AudioFileReadPackets line:

        AudioFileID mAudioFile2;
        AudioFileOpenURL(audioFileURL, 0x01, 0, &mAudioFile2);
        UInt32 *audioData2 = (UInt32 *)malloc(sizeof(UInt32) * 1);
        AudioFileReadPackets(mAudioFile2, false, NULL, NULL, 0, (UInt32*)1, audioData2);
        NSLog(@"first packet:%i", audioData2[0]);

    (For clarity, I've stripped out all error handling.) It's the AFRP line that crashes. (I understand that the third and fourth arguments are useful, and in my "real" code I use them, but they're not required, right? So NULL should work here, right?) So then what's going on? Any guidance would be much appreciated. Thanks.
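
    The sixth argument of AudioFileReadPackets is an in/out pointer to the packet count, so the cast literal (UInt32*)1 makes the call dereference address 1, which is a plausible cause of the crash. A minimal corrected sketch, assuming a constant-bit-rate file (the 4096-byte buffer size is an assumption; query kAudioFilePropertyPacketSizeUpperBound for a safe value):

        UInt32 numPackets = 1;   // in: packets wanted, out: packets actually read
        UInt32 numBytes   = 0;   // out: bytes actually read
        void *buf = malloc(4096);
        OSStatus err = AudioFileReadPackets(mAudioFile2,
                                            false,        // don't use the cache
                                            &numBytes,    // a real pointer, not NULL
                                            NULL,         // packet descriptions (OK for CBR)
                                            0,            // start at packet 0
                                            &numPackets,  // a pointer, not a cast literal
                                            buf);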

    Read the article

  • record output sound in python

    - by aaronstacy
    I want to programmatically record the sound coming out of my laptop in Python. I found PyAudio and came up with the following program, which accomplishes the task:

        import pyaudio, wave, sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = sys.argv[1]

        p = pyaudio.PyAudio()
        channel_map = (0, 1)
        stream_info = pyaudio.PaMacCoreStreamInfo(
            flags=pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
            channel_map=channel_map)
        stream = p.open(format=FORMAT,
                        rate=RATE,
                        input=True,
                        input_host_api_specific_stream_info=stream_info,
                        channels=CHANNELS)

        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        stream.close()
        p.terminate()

        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    The problem is that I have to connect the headphone jack to the microphone jack. I tried replacing these lines:

        input=True,
        input_host_api_specific_stream_info=stream_info,

    with these:

        output=True,
        output_host_api_specific_stream_info=stream_info,

    but then I get this error:

        Traceback (most recent call last):
          File "./test.py", line 25, in <module>
            data = stream.read(chunk)
          File "/Library/Python/2.5/site-packages/pyaudio.py", line 562, in read
            paCanNotReadFromAnOutputOnlyStream)
        IOError: [Errno Not input stream] -9975

    Is there a way to instantiate the PyAudio stream so that it reads from the computer's output and I don't have to connect the headphone jack to the microphone? Is there a better way to go about this? I'd prefer to stick with a Python app and avoid Cocoa.
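
    For what it's worth, a hedged sketch of the usual Mac approach to capturing system output without a physical cable: install a loopback audio driver (Soundflower is one), route the system output to it, and open it as an ordinary PyAudio input by device index. The device-listing calls below are standard PyAudio API; the loopback driver itself is an external assumption:

        import pyaudio

        p = pyaudio.PyAudio()
        # List devices so you can find the loopback driver's input index.
        for i in range(p.get_device_count()):
            info = p.get_device_info_by_index(i)
            print i, info['name'], info['maxInputChannels']
        # Then open the loopback device for recording:
        # stream = p.open(format=FORMAT, rate=RATE, channels=CHANNELS,
        #                 input=True, input_device_index=<loopback index>)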

    Read the article

  • getAudioInputStream cannot convert [stereo, 4 bytes/frame] stream to [mono, 2 bytes/frame]

    - by brian_d
    Hello. I am using javasound and have an AudioInputStream with this format:

        PCM_SIGNED 8000.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian

    Calling AudioSystem.getAudioInputStream(target_format, original_stream) produces an "IllegalArgumentException: Unsupported conversion" when target_format is:

        PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian

    Is it possible to convert this stream manually after every read() call? And if so, how? In general, how can you compare two formats and tell whether a conversion is possible?
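
    On the two sub-questions, a sketch assuming the 16-bit little-endian signed PCM formats above: javax.sound.sampled.AudioSystem.isConversionSupported tells you up front whether getAudioInputStream can perform a conversion, and a stereo buffer can be folded to mono by hand after each read() by averaging the two channel samples of every frame:

        // import javax.sound.sampled.AudioSystem;
        boolean ok = AudioSystem.isConversionSupported(targetFormat, sourceFormat);

        // Manual stereo -> mono downmix of one read() buffer.
        // Assumes 16-bit signed little-endian PCM; one stereo frame = 4 bytes.
        static byte[] toMono(byte[] in) {
            byte[] out = new byte[in.length / 2];
            for (int i = 0, o = 0; i < in.length; i += 4, o += 2) {
                int left  = (short) ((in[i + 1] << 8) | (in[i]     & 0xff));
                int right = (short) ((in[i + 3] << 8) | (in[i + 2] & 0xff));
                int mono  = (left + right) / 2;           // average the channels
                out[o]     = (byte) (mono & 0xff);        // low byte (little-endian)
                out[o + 1] = (byte) ((mono >> 8) & 0xff); // high byte
            }
            return out;
        }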

    Read the article

  • GStreamer record iradio-mode artifacts

    - by Kanzeon
    I'm trying to record internet radio while listening to it. I use the following pipeline, but I've noticed that when I set iradio-mode to true, noise appears in the recorded file (not in the live playback). Without iradio-mode everything is fine, but my app needs this mode to get the stream title metadata.

        gst-launch souphttpsrc location="<radio channel>" iradio-mode=true ! tee name=t \
            ! queue ! decodebin2 ! audioconvert ! audioresample ! osxaudiosink \
            t. ! queue ! filesink location=rectest.mp3
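
    With iradio-mode=true the server interleaves ICY metadata blocks into the stream; the playback branch discards them during decoding, but the raw filesink branch writes them straight into the .mp3, which is one plausible source of the periodic noise. A hedged sketch that strips the metadata in the record branch with GStreamer's icydemux element before writing the file:

        gst-launch souphttpsrc location="<radio channel>" iradio-mode=true ! tee name=t \
            ! queue ! decodebin2 ! audioconvert ! audioresample ! osxaudiosink \
            t. ! queue ! icydemux ! filesink location=rectest.mp3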

    Read the article

  • AudioTrack skipping after pause and resume

    - by Markus Drösser
    Hi, here is the problem: I play a WAV file that I recorded earlier without problems, but when I call audioTrack.pause() and then audioTrack.play() again after some waiting, it skips some frames of the file. Why is that? Here is my playback listener:

        // Start playback
        audioTrack.setPlaybackPositionUpdateListener(new OnPlaybackPositionUpdateListener() {
            @Override
            public void onPeriodicNotification(AudioTrack track) {
                try {
                    if (ramfile != null && ramfile.read(buffer) == -1) {
                        // End of file: tear down playback.
                        audioTrack.release();
                        audioTrack = null;
                        ramfile.close();
                        playing = false;
                    } else {
                        audioTrack.write(buffer, 0, buffer.length);
                    }
                } catch (IOException e) {
                    try {
                        ramfile.close();
                        playing = false;
                    } catch (IOException e1) {
                    }
                }
            }

            @Override
            public void onMarkerReached(AudioTrack track) {
                playing = false;
                track.release();
            }
        });

    Read the article

  • A group of developers releases Flac.js, a JavaScript decoder for playing audio content in the browser without relying on codecs

    A group of developers has released Flac.js, a JavaScript audio decoder for playing audio content in the browser without requiring codecs. HTML5, the future standard of the Web, introduces the audio tag, which makes it possible to build applications that process and synthesize audio in the browser. Recent browsers such as Chrome and Firefox already ship JavaScript libraries that provide methods and properties for manipulating the audio element. However, HTML5 applications that manipulate audio content and work normally in a browser on a given operating system might not work correctly when...

    Read the article

  • Audio doesn't work on Windows XP guest (WS 7.0)

    - by Mads
    Hi, I can't get audio to work on a Windows XP guest running on VMware Workstation 7.0 with an Ubuntu 9.10 host. Windows fails to produce any audio output, and the Windows Device Manager says the Multimedia Audio Controller is not working properly. Audio works fine in the host OS. When I open the Multimedia Audio Controller properties it says:

        Device status: The drivers for this device are not installed (Code 28)

    If I try to reinstall the driver I get the following error message:

        Cannot Install this Hardware
        There was a problem installing this hardware: Multimedia Audio Controller
        An error occurred during the installation of the device
        Driver is not intended for this platform

    Has anyone else experienced this problem?
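
    One thing worth checking (an assumption, not a confirmed fix): the virtual sound device selected in the VM's .vmx file. Windows XP has no in-box driver for some emulated audio devices, but it does ship one for the Ensoniq ES1371 AudioPCI, which VMware can emulate. A sketch of the relevant .vmx settings (edit with the VM powered off):

        sound.present = "TRUE"
        sound.autodetect = "TRUE"
        # "es1371" selects the emulated Ensoniq AudioPCI that XP has a driver for.
        sound.virtualDev = "es1371"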

    Read the article

  • Bsplayer - load audio tracks from external files

    - by torran
    I have a movie file with the following MediaInfo:

        Video
        ID                        : 1
        Format                    : AVC
        Format/Info               : Advanced Video Codec
        Format profile            : [email protected]
        Format settings, CABAC    : Yes
        Format settings, ReFrames : 5 frames
        Muxing mode               : Container [email protected]
        Codec ID                  : V_MPEG4/ISO/AVC
        Duration                  : 54mn 13s
        Bit rate                  : 3 380 Kbps
        Nominal bit rate          : 3 459 Kbps
        Width                     : 1 280 pixels
        Height                    : 720 pixels
        Display aspect ratio      : 16:9
        Frame rate                : 23.976 fps
        Resolution                : 8 bits
        Colorimetry               : 4:2:0
        Scan type                 : Progressive
        Bits/(Pixel*Frame)        : 0.153
        Stream size               : 1.28 GiB (88%)
        Writing library           : x264 core 88 r1471 1144615

        Audio
        ID                        : 2
        Format                    : AC-3
        Format/Info               : Audio Coding 3
        Codec ID                  : A_AC3
        Duration                  : 54mn 16s
        Bit rate mode             : Constant
        Bit rate                  : 384 Kbps
        Channel(s)                : 6 channels
        Channel positions         : Front: L C R, Side: L R, LFE
        Sampling rate             : 48.0 KHz
        Stream size               : 149 MiB (10%)

    There are additional audio files (.mp3 and .ac3) in the same folder. How can I load them with BSPlayer? Right click > Audio > Audio streams is empty. If I open the movie with Media Player Classic I can switch audio files.

    Read the article

  • How do you setup the Audio plugin for Flowplayer?

    - by codeninja
    I'm having a bit of trouble getting the audio player to work. Basically I want to initiate an MP3 player by doing something like this:

        <a href="path-to-my-audio.mp3" id="player"></a>

    and then use the $f() call to initiate the player. I've followed the instructions here (http://flowplayer.org/plugins/streaming/audio.html). This doesn't seem to work, and I'm not sure what's wrong, because I'm able to play videos this way. Thanks for your help!
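
    For reference, a sketch of the classic Flowplayer 3 setup for audio-only clips: the clip has to be routed through the audio plugin via its provider property. The .swf file names and version numbers below are assumptions; match them to the actual install:

        $f("player", "flowplayer-3.2.7.swf", {
            clip: {
                url: "path-to-my-audio.mp3",
                provider: "audio"   // route the clip through the audio plugin
            },
            plugins: {
                audio: { url: "flowplayer.audio-3.2.2.swf" }
            }
        });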

    Read the article

  • Core Audio on iPhone - any way to change the microphone gain (either for speakerphone mic or headphone mic)?

    - by Halle
    After much searching the answer seems to be no, but I thought I'd ask here before giving up. For a project I'm working on that includes recording sound, the input levels sound a little quiet, both when the route is external mic + speaker and when it's headphone mic + headphones. Does anyone know definitively whether it is possible to programmatically change mic gain levels on the iPhone in any part of Core Audio? If not, is it possible that I'm not really in "speakerphone" mode (with the external mic at least) but only think I am? Here is my audio session init code:

        OSStatus error = AudioSessionInitialize(NULL, NULL, audioQueueHelperInterruptionListener, r);
        [...some error checking of the OSStatus...]

        // Need to play out the speaker at full volume too, so it is
        // necessary to change the default route below.
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
        if (error) printf("couldn't set audio category!");

        UInt32 doChangeDefaultRoute = 1;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
        if (error) printf("couldn't change default route!");

        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, audioQueueHelperPropListener, r);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

        UInt32 inputAvailable = 0;
        UInt32 size = sizeof(inputAvailable);
        error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
        if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", (int)error);

        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, audioQueueHelperPropListener, r);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

        error = AudioSessionSetActive(true);
        if (error) printf("AudioSessionSetActive (true) failed");

    Thanks very much for any pointers.
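
    For what it's worth, later iOS releases (iOS 5 and up) added an input gain scalar to the audio session API, which postdates this question. A hedged sketch, assuming the current route actually supports gain changes:

        // Check availability, then set input gain (0.0 - 1.0).
        UInt32 gainAvailable = 0;
        UInt32 propSize = sizeof(gainAvailable);
        AudioSessionGetProperty(kAudioSessionProperty_InputGainAvailable,
                                &propSize, &gainAvailable);
        if (gainAvailable) {
            Float32 gain = 1.0f;   // maximum input gain
            AudioSessionSetProperty(kAudioSessionProperty_InputGainScalar,
                                    sizeof(gain), &gain);
        }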

    Read the article

  • What is the best service/tool to put short audio clips on a website so users can click and listen immediately?

    - by Edward Tanguay
    I'm making a foreign-language flashcard website on which I want to have hundreds of short 3-10 second audio files available for users to click and listen to. So I am looking for a tool/service, something like YouTube or Screenr.com but for audio, which e.g.:

        - allows me to easily upload multiple kinds of audio files: mp3, wav, etc.
        - makes it easy to manage them online (delete, replace)
        - has a simple, small player (e.g. Flash) that integrates nicely into any site

    Read the article

  • HTML5 Audio plays only once in my JavaScript code

    - by Poul
    I have a dashboard web app that I want to play an alert sound if it's having problems connecting. The site's AJAX code polls for data and throttles down its refresh rate if it can't connect. Once the server comes back up, the site continues working. In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is that code, which works:

        var error_audio = new Audio("audio/" + settings.refresh.error_audio);
        error_audio.load();

        // This gets called when there is a connection error.
        function onConnectionError() {
            error_audio.play();
        }

    However, the second time through the function the audio doesn't play. Digging around in Chrome's debugger, the 'played' attribute on the audio element gets set to true; setting it to false has no effect. Any ideas?
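
    One plausible fix (an assumption based on how older Chrome builds handle a media element that has run to its end): rewind the element before replaying it, or create a fresh element per alert. A sketch:

        function onConnectionError() {
            error_audio.currentTime = 0;  // rewind a finished element before replaying
            error_audio.play();
            // Alternative: a fresh element each time, so overlapping alerts
            // don't cut each other off:
            // new Audio("audio/" + settings.refresh.error_audio).play();
        }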

    Read the article

  • How accurately (in terms of time) does Windows play audio?

    - by MusiGenesis
    Let's say I play a stereo WAV file with 317,520,000 samples, which is theoretically 1 hour long. Assuming no interruptions of the playback, will the file finish playing in exactly one hour, or is there some occasional tiny variation in the playback speed such that it would be slightly more or slightly less (by some number of milliseconds) than one hour?

    I am trying to synchronize animation with audio, and I am using a System.Diagnostics.Stopwatch to keep the frames matching the audio. But if the playback speed of WAV audio in Windows can vary slightly over time, then the audio will drift out of sync with the Stopwatch-driven animation.

    Which leads to a second question: it appears that a Stopwatch - while highly granular and accurate for short durations - runs slightly fast. On my laptop, a Stopwatch run for exactly 24 hours (as measured by the computer's system time and a real stopwatch) shows an elapsed time of 24 hours plus about 5 seconds (not milliseconds). Is this a known problem with Stopwatch? (A related question would be "am I crazy?", but you can try it for yourself.) Given its usage as a diagnostics tool, I can see where a discrepancy like this would only show up when measuring long durations, for which most people would use something other than a Stopwatch.

    If I'm really lucky, then both Stopwatch and audio playback are driven by the same underlying mechanism, and thus will stay in sync with each other for days on end. Any chance this is true?
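
    A quick way to quantify the Stopwatch drift described above (a sketch; the sampling interval and run length are arbitrary choices) is to compare Stopwatch.Elapsed against the wall clock over a long run:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class StopwatchDrift
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                DateTime wallStart = DateTime.UtcNow;
                while (true)
                {
                    Thread.Sleep(TimeSpan.FromMinutes(10));
                    TimeSpan wall = DateTime.UtcNow - wallStart;
                    // Positive drift means the Stopwatch runs fast
                    // relative to the system clock.
                    TimeSpan drift = sw.Elapsed - wall;
                    Console.WriteLine("wall={0}  drift={1}", wall, drift);
                }
            }
        }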

    Read the article

  • How to programmatically generate an audio podcast file with chapters and text track?

    - by adib
    Hi. Does anybody know how to programmatically generate audio podcast files with bookmarks that can be used in iTunes / iPod / iPhone / iPod touch? Specifically, text bookmarks (bookmarks with titles) that let the listener skip to a specific point in time in the audio file. Also, how do you add a text transcription of the podcast's content? Even better if you have example Cocoa code or a library for writing the audio file. Thanks.
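
    Outside Cocoa, one hedged route (file names and chapter times here are placeholders) is ffmpeg's FFMETADATA text format, which can carry titled chapter marks that iTunes and iOS devices show as a chapter list once muxed into an .m4a/.m4b:

        ;FFMETADATA1
        title=Episode 1

        [CHAPTER]
        TIMEBASE=1/1000
        START=0
        END=180000
        title=Introduction

        [CHAPTER]
        TIMEBASE=1/1000
        START=180000
        END=600000
        title=Main topic

    Muxed in with something like: ffmpeg -i episode.m4a -i chapters.txt -map_metadata 1 -codec copy episode.m4b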

    Read the article

  • How do I merge two audio files and one video file into a video file using C#?

    - by wingdings
    I wrote a program in C# using DirectShow that captures audio from all devices and video from a single device (webcam or external camera). Now my requirement is to merge the selected audio files with one video file, and I cannot get it done in C#. So I need a program or library that merges one (or several) audio files and one video file and saves the result as an AVI video file; both the audio files and the video file are in AVI format.
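
    If shelling out to an external tool is acceptable, one hedged option is to let ffmpeg do the muxing (the file names are placeholders) and launch it from C# via System.Diagnostics.Process.Start:

        ffmpeg -i video.avi -i audio1.avi -i audio2.avi \
               -map 0:v -map 1:a -map 2:a -c copy output.avi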

    Read the article

  • How to speed up drawing of scaled image? Audio playback chokes during window resize.

    - by Paperflyer
    I am writing an audio player for OS X. One view is a custom view that displays a waveform. The waveform is stored as an instance variable of type NSImage with an NSBitmapImageRep. The view also displays a progress indicator (a thick red line), and is therefore updated/redrawn every 30 milliseconds. Since it takes a rather long time to recalculate the image, I do that in a background thread after every window resize and update the displayed image once the new image is ready. In the meantime, the original image is scaled to fit the view like this:

        // The drawing rectangle is slightly smaller than the view,
        // defined by the two margins.
        NSRect drawingRect;
        drawingRect.origin = NSMakePoint(sideEdgeMarginWidth, topEdgeMarginHeight);
        drawingRect.size = NSMakeSize([self bounds].size.width - 2*sideEdgeMarginWidth,
                                      [self bounds].size.height - 2*topEdgeMarginHeight);
        [waveform drawInRect:drawingRect
                    fromRect:NSZeroRect
                   operation:NSCompositeSourceOver
                    fraction:1];

    The view makes up the biggest part of the window. During live resize, audio starts choking. Selecting the "big" graphics card on my MacBook Pro makes it less bad, but not by much. CPU utilization is somewhere around 20-40% during live resizes. Instruments suggests that rescaling/redrawing of the image is the problem. Once I stop resizing the window, CPU utilization goes down and the audio stops glitching. I already tried disabling image interpolation to speed up the drawing, like this:

        [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];

    That helps, but audio still chokes during live resizes. Do you have an idea how to improve this? The main thing is to prevent the audio from choking.
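
    A hedged sketch of one common mitigation (the helper method name is an assumption): draw a deliberately cheap version while a live resize is in progress, using NSView's inLiveResize, and kick off the expensive background recalculation only once, in viewDidEndLiveResize:

        // In the custom waveform view (sketch):
        - (void)viewDidEndLiveResize {
            [super viewDidEndLiveResize];
            [self recalcWaveformInBackground];   // hypothetical helper
            [self setNeedsDisplay:YES];
        }

        - (void)drawRect:(NSRect)dirtyRect {
            if ([self inLiveResize]) {
                // Cheap placeholder during the resize: no interpolation, or
                // just a flat background plus the progress line.
                [[NSGraphicsContext currentContext]
                    setImageInterpolation:NSImageInterpolationNone];
            }
            // ... existing scaled drawInRect: goes here ...
        }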

    Read the article

  • Linux-based audio production tutorials

    - by thelinuxer
    I have been searching for a while for Linux-based audio production tutorials. All I can find are tool-based tutorials; for example, I found tutorials on how to use JACK, Ardour, LMMS, etc. What I need are tutorials that teach professional audio production with open-source/free tools, like those already available for Pro Tools and the like. If anyone can point me to any videos/articles, it would be highly appreciated. Thanks.

    Read the article

  • Firefox for Android introduces "guest browsing" and support for the Web Audio API

    Firefox for Android introduces "guest browsing" and support for the Web Audio API. Following the release of Firefox 25, Mozilla has also published an update of its browser for owners of Android devices. Firefox for Android inherits several features from the desktop version, notably support for the Web Audio API, a W3C specification for advanced audio effects on top of HTML5. This new API will allow, for example, engineers...

    Read the article
