Search Results

Search found 4165 results on 167 pages for 'pulse audio'.

Page 11 of 167

  • Audio problems with Windows 7 running through Bootcamp

    - by cust0s
    I've been using (and enjoying) Windows 7 through Boot Camp on my Mac for some time now. The only problem I've been having is with the audio: I get a low hissing noise that is more noticeable when wearing headphones. While it isn't stopping me from using Windows 7, it's something I'd like fixed. I've tried downloading the latest drivers from Realtek's website, but to no avail. Any suggestions?

    Read the article

  • Sony Vegas: minimal audio glitches

    - by Fuxi
    When doing video editing with Sony Vegas, I noticed that it very often produces small audio glitches. By glitch I mean a repeated fragment of roughly half a second, as if it had been copied and pasted back in. I'm assuming it has something to do with buffering. Is this a known problem? I have a fairly fast Windows 7 system with fast SATA drives.

    Read the article

  • ASRock N68-S UCC front panel audio

    - by user264522
    I have an ASRock N68-S UCC motherboard, and I need a hand identifying which pins on the board to connect my front panel audio header connectors to.

    On the front panel connectors I have:
    - double connector: GND and MIC IN
    - double connector: LINE OUT FL and LINE OUT RL
    - double connector: LINE OUT FR and LINE OUT RR
    - single connector: MIC POWER

    On the motherboard header I have: GND, PRESENCE#, MIC_RET, OUT_RET, MIC2_L, MIC2_R, OUT2_R, J_SENSE, OUT2_L. Manual of the motherboard and my front panel. Thanks so much to all.

    Read the article

  • Dell 4600 Driver for Audio Card Windows 7

    - by Littlet-ENG
    I just installed Windows 7 on a Dell 4600 desktop. The audio card is a Creative Labs Sound Blaster Audigy 2. I've looked all over and can't find a driver for Windows 7; I've also looked at Vista drivers. It seems Dell is not supporting Windows 7 on this computer. I've got a workaround, but would like a proper driver if possible. Any help would be appreciated.

    Read the article

  • Split audio into tracks?

    - by Mark
    I've recorded some music from an internet radio stream. I want to split it into separate audio tracks, preferably automatically (wherever there's a pause, or using some other clever algorithm). Does anyone know of free software that can do this?
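
    As an aside, a minimal sketch of the silence-based splitting idea in Python with the pydub library (which drives ffmpeg); the filename, threshold, and pause length are assumptions to tune, not taken from the question:

      # Hedged sketch: cut a long recording wherever the level stays below a
      # threshold for a while, using pydub's silence detection (needs ffmpeg).
      from pydub import AudioSegment
      from pydub.silence import split_on_silence

      recording = AudioSegment.from_file("stream.mp3")   # hypothetical input file
      tracks = split_on_silence(
          recording,
          min_silence_len=1500,                # >= 1.5 s of quiet ends a track
          silence_thresh=recording.dBFS - 16,  # "quiet" relative to the average level
          keep_silence=300,                    # keep a little padding per track
      )
      for i, track in enumerate(tracks, start=1):
          track.export(f"track_{i:02d}.mp3", format="mp3")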

    Read the article

  • Windows: live playback of left and right audio channels

    - by user1254761
    I have a multichannel (4x stereo) audio card (M-Audio Delta 1010LT) and want to play back (play through) some of the channels live. But I am only able to play back the left channel of each stereo input (CH1, CH3, CH5, CH7). For CH2, CH4, CH6 and CH8 I see the Windows volume indicator moving up and down in the Windows recording settings, but I don't hear any playback. Is there a way to play back all input channels?

    Read the article

  • Audio organizing via CLI

    - by Radek Šimko
    I'm looking for software for my openSUSE system with which I can organize my audio files. I've found one that might be good, but it can't run without an X server (i.e., not from the CLI): http://musicbrainz.org/doc/MusicBrainz_Picard I'm not looking for ID3 renamers; there are probably hundreds of those. I'm looking for software that has its own database, or that can talk to a database such as CDDB, Gracenote or Last.fm.
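
    For illustration only, a minimal sketch of the kind of database lookup being asked for, using the musicbrainzngs Python binding for the MusicBrainz web service; the application name, search terms, and exact result keys are assumptions, not an endorsement of any particular tool:

      # Hedged sketch: query MusicBrainz for a recording from the command line.
      import musicbrainzngs

      musicbrainzngs.set_useragent("audio-organizer-sketch", "0.1")  # hypothetical app id
      result = musicbrainzngs.search_recordings(recording="Paranoid", artist="Black Sabbath", limit=3)
      for rec in result.get("recording-list", []):
          print(rec.get("title"), "-", rec.get("artist-credit-phrase", "unknown artist"))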

    Read the article

  • Linux: grab audio from a video clip

    - by liori
    Hello, I'd like to extract the audio track from a video clip in an FLV container and save it as something playable on portable music players. Are there any easy-to-use tools for that? I know how to do it with console tools (mplayer + lame/oggenc), but I'd like something clickable, preferably for GNOME. Thanks!
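
    Not the GNOME front end being asked for, but for reference, a sketch of the console conversion the poster mentions, wrapped in Python; it assumes ffmpeg with libmp3lame is installed, and the filenames are placeholders:

      # Hedged sketch: pull the audio out of an FLV and re-encode it as MP3.
      import subprocess

      subprocess.run(
          ["ffmpeg", "-i", "clip.flv",   # hypothetical input clip
           "-vn",                        # drop the video stream
           "-acodec", "libmp3lame",      # encode the audio as MP3
           "-q:a", "2",                  # VBR quality setting
           "clip.mp3"],
          check=True,                    # raise if ffmpeg fails
      )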

    Read the article

  • Serving an upload- and streaming-intensive video and audio site

    - by Pollux Khafra
    I'm about to launch a new site that lets users upload and stream both audio and video, and I don't know anything about the server side of things. My original plan was to just use a dedicated server through HostGator, but from what I'm reading, cloud hosting or a load-balanced cluster is the best way to go for what I'm trying to do. All the articles seem to have an agenda to sell you on an affiliate web host, so what do I really need here?

    Read the article

  • MPlayer refuses to generate mono wav file

    - by JCCyC
    I want to downsample an existing audio file to 8 kHz mono. This command line downsamples it, but the result is still stereo:

      mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1 -ao pcm:waveheader:file="/tmp/blah1.wav" ~/from_my_cellphone.3ga

    It generates a file that the file utility identifies as stereo:

      $ file /tmp/blah1.wav
      /tmp/blah1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 8000 Hz

    Now, if I read the documentation correctly, I should add pan=1:0.5:0.5 to get a file that's half the size:

      mplayer -quiet -vo null -vc dummy -af volume=0,resample=8000:0:1:pan=1:0.5:0.5 -ao pcm:waveheader:file="/tmp/blah2.wav" ~/from_my_cellphone.3ga

    But it doesn't! blah2.wav is identical to blah1.wav. What am I doing wrong?
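
    Not an answer to the mplayer filter syntax itself, but for comparison, a minimal sketch of the same 8 kHz mono conversion in Python with pydub (which delegates to ffmpeg); the input name comes from the question, everything else is assumed:

      # Hedged sketch: downmix to one channel and resample to 8 kHz via pydub/ffmpeg.
      from pydub import AudioSegment

      clip = AudioSegment.from_file("from_my_cellphone.3ga")   # ffmpeg must support the 3ga codec
      mono_8k = clip.set_channels(1).set_frame_rate(8000)      # 1 channel, 8000 samples per second
      mono_8k.export("/tmp/blah_mono.wav", format="wav")       # write a PCM WAV file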

    Read the article

  • Reading audio with Extended Audio File Services (ExtAudioFileRead)

    - by Paperflyer
    I am working on understanding Core Audio, or rather Extended Audio File Services. I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong, probably the audioBufferList. I cannot use one huge buffer because then dataByteSize would overflow its UInt32 range with huge files. Here is the code that creates the audioBufferList:

      AudioBufferList *audioBufferList;
      audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers - 1) * sizeof(AudioBuffer));
      audioBufferList->mNumberBuffers = numBuffers;
      for (int bufferIdx = 0; bufferIdx < numBuffers; bufferIdx++) {
          audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
          audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
          audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
      }

      UInt32 numFrames = fileLengthInFrames;
      error = ExtAudioFileRead(extAudioFileRef, &numFrames, audioBufferList);

    Do you know what I am doing wrong here?

    Read the article

  • iPhone Audio Queue Service sample units

    - by pion
    I am looking at the Audio Queue Services documentation, specifically at the following code:

      // Writing an audio queue buffer to disk
      AudioFileWritePackets(                  // 1
          pAqData->mAudioFile,                // 2
          false,                              // 3
          inBuffer->mAudioDataByteSize,       // 4
          inPacketDesc,                       // 5
          pAqData->mCurrentPacket,            // 6
          &inNumPackets,                      // 7
          inBuffer->mAudioData                // 8
      );

    inBuffer->mAudioDataByteSize is the number of bytes of audio data being written, and inBuffer->mAudioData is the new audio data to write to the audio file. Assume the sample rate is 44100:

      AudioStreamBasicDescription mDataFormat;
      mDataFormat.mSampleRate = 44100.0f;
      mDataFormat.mBitsPerChannel = 16;
      ...

      NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
      SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;

    I use core-plot to plot the above, where the x axis is the sample number [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples]. I can see the chart in "real time", with the y axis going up and down depending on the loudness of my voice. Beginner questions: What does audioSample represent? What am I looking at here? What is the unit of audioSample? And what do I need to do if I just want to plot the range between 50 and 100 Hz? Thanks in advance for your help.
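
    For what it's worth, the SInt16 values are raw PCM amplitudes on a dimensionless -32768..32767 scale, and isolating 50-100 Hz means moving to the frequency domain first. A minimal Python/numpy sketch of that step, with the block size and stand-in samples assumed purely for illustration:

      # Hedged sketch: treat a block of 16-bit PCM samples as amplitudes and
      # keep only the 50-100 Hz portion of its spectrum.
      import numpy as np

      sample_rate = 44100
      samples = np.zeros(4096, dtype=np.int16)                 # stand-in for audioSample[]

      amplitudes = samples.astype(np.float64) / 32768.0        # normalize to roughly -1.0 .. 1.0
      spectrum = np.fft.rfft(amplitudes)                       # frequency-domain view of the block
      freqs = np.fft.rfftfreq(len(amplitudes), d=1.0 / sample_rate)

      band = (freqs >= 50.0) & (freqs <= 100.0)                # select the 50-100 Hz bins
      print(freqs[band], np.abs(spectrum[band]))               # those frequencies and their magnitudes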

    Read the article

  • Progressive download using Matt Gallagher's audio streamer

    - by Fernando Valente
    I'm a complete n00b when it comes to audio. I'm using Matt Gallagher's audio streamer in my radio app. How can I add progressive download? Also, ExtAudioFile is a good idea too :)

    Edit: I used this:

      length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
      if (!data)
          data = [[NSMutableData alloc] initWithLength:0];
      [data appendData:[NSData dataWithBytes:bytes length:kAQDefaultBufSize]];

    Now I can save the audio data using NSData's writeToFile:atomically: method, but the audio won't play. Also, if I try to load it into an AVAudioPlayer, I get an error.

    Read the article

  • Audio processing on the iPhone

    - by Janaka
    I am writing an iPhone application that applies filters to audio input and outputs the result in real time. I am new to audio processing; is using an Audio Unit the correct approach? I have figured out how to output data using an Audio Unit, but I couldn't figure out how to capture input audio. Is there a sample application showing how to connect input and output using Audio Units?

    Read the article

  • options for producing audio with GWT

    - by Kaffeine Coma
    What options are there for producing audio in a GWT app? I'm thinking of making a simple game, but I'm disappointed to see that there's still not much progress on audio support directly in GWT (yes, I realize that's largely due to lack of underlying browser support; looking forward to HTML5!). This blog post says that "audio support in GWT is rapidly evolving", yet I don't see updates in over a year, at least not at that site. These seem to be the available options: GWT Voices, GWT SoundManager, GWT Sound, and the GWT Incubator. I believe most of these (all of them?) rely on Flash to produce audio. I'm most inclined to go with the GWT Incubator, as that's where features slated for inclusion in GWT get started, but I have no real recommendations to go on. I would appreciate hearing about your experiences with any of these libraries, thanks.

    Read the article

  • HTML5 audio player: jQuery toggle click play/pause?

    - by mathiregister
    Hello guys, I wonder what I'm doing wrong?

      $('.player_audio').click(function() {
          if ($('.player_audio').paused == false) {
              $('.player_audio').pause();
              alert('music paused');
          } else {
              $('.player_audio').play();
              alert('music playing');
          }
      });

    I can't seem to start the audio track when I hit the "player_audio" tag:

      <div class='thumb audio'><audio class='player_audio' src='$path/$value'></audio></div>

    Any idea what I'm doing wrong, or what I have to do to get it working?

    Read the article

  • help me pick the right iPhone audio class - MPMoviePlayer vs AVAudioPlayer vs MPMusicPlayer

    - by huevos de oro
    Does anyone know of a good tutorial on the differences between MPMoviePlayer, AVAudioPlayer and MPMusicPlayer? I want to play audio from an mp3 file available at an external URL, ideally in an iPod-like audio view. I toyed with MPMoviePlayer, but it appears to be more suitable for video: when audio starts, a "movie playing" message displays, the controls disappear, and a white QuickTime splash page appears. I would like the standard iPod audio controls to display all the time, and to customize the image behind them.

    Read the article

  • Fake "user initiated" <audio> tag on iPad

    - by Alex Ford
    I know that Apple's docs say that an mp3 within an <audio> tag on iPhone OS can't be played without user intervention (they cite bandwidth concerns, which is totally reasonable). However, has anyone succeeded in faking a user action to play the audio? Perhaps by faking events to off-screen native audio controls with JavaScript? I'm using jPlayer right now, which works great in desktop Safari but is silent on my iPad. I'm prototyping a touch interface using WebKit on the iPad, and audio is an integral part of the experience, so yes, I do have a good reason to want to override this convention. I'd appreciate any help. Thanks.

    Read the article

  • SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded to a specific audio format.

      System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
          new System.Speech.AudioFormat.SpeechAudioFormatInfo(
              System.Speech.AudioFormat.EncodingFormat.Pcm,
              8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code there is nothing written to my MemoryStream instance; however when I change from 8000 samples per second up to 11025 the audio data is written successfully.

      SpeechSynthesizer synthesizer = new SpeechSynthesizer();
      waveStream = new MemoryStream();

      PromptBuilder pbuilder = new PromptBuilder();
      PromptStyle pStyle = new PromptStyle();
      pStyle.Emphasis = PromptEmphasis.None;
      pStyle.Rate = PromptRate.Fast;
      pStyle.Volume = PromptVolume.ExtraLoud;

      pbuilder.StartStyle(pStyle);
      pbuilder.StartParagraph();
      pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
      pbuilder.StartSentence();
      pbuilder.AppendText("This is some text.");
      pbuilder.EndSentence();
      pbuilder.EndVoice();
      pbuilder.EndParagraph();
      pbuilder.EndStyle();

      synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
      synthesizer.Speak(pbuilder);
      synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000 and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file...

    Update: Recently discovered that this audio format succeeds for certain installed voices, but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.

    Read the article

  • Audio processing in C# or C++

    - by melculetz
    Hi, I would like to create an application that uses AI techniques and allows the user to record a part of a song and then tries to find that song in a database of wav files. I would have liked to use some already existing libraries for the audio processing part. So, could you recommend any libraries in C# which can read a wav file, get input from microphone, have some audio filters (low pass, high pass, FFT etc) and maybe have the ability to plot the audio signal as well. I would prefer to develop in C#, but if there aren't good libraries for audio processing, I guess I could work in C++ as well. As far as I know, Mathlab already has the above mentioned functionalities, but I can't use it in my application.

    Read the article

  • How to play video with external audio in Silverlight?

    - by Fury
    Hi all, is there any proper way to play video and external audio in sync, other than simply starting two MediaElements (one for the video source and one for the audio) at the same time? I need to play video with different soundtracks, but I believe that two separate MediaElements will drift out of sync at some point. Maybe there is some way to add an audio source to the existing MediaElement with the video? Platform: SL3, but SL4 would be fine as well. Thanks in advance.

    Read the article

  • How to initialize audio with Vala/SDL

    - by ioev
    I've been trying to figure this out for a few hours now. In order to start up the audio, I need to create an SDL.AudioSpec object and pass it to SDL.Audio.Open. The problem is that AudioSpec is a class with a private constructor, so when I try to create one I get:

      sdl.vala:18.25-18.43: error: `SDL.AudioSpec' does not have a default constructor
      AudioSpec audiospec = new SDL.AudioSpec();
                            ^^^^^^^^^^^^^^^^^^^

    And if I try to just assign values to its member vars as if it were a struct (it is a struct in plain SDL) I get:

      sdl.vala:20.3-20.25: error: use of possibly unassigned local variable `audiospec'
      audiospec.freq = 22050;
      ^^^^^^^^^^^^^^^^^^^^^^^

    I found the valadoc page here: http://valadoc.org/sdl/SDL.AudioSpec.html but it isn't much help at all. The offending code block looks like this:

      // set up the audio configuration
      AudioSpec audiospec;
      AudioSpec specback;
      audiospec.freq = 22050;
      audiospec.format = SDL.AudioFormat.S16LSB;
      audiospec.channels = 2;
      audiospec.samples = 512;

      // try to initialize sound with these values
      if (SDL.Audio.open(audiospec, specback) < 0) {
          stdout.printf("ERROR! Check audio settings!\n");
          return 1;
      }

    Any help would be greatly appreciated!

    Read the article
