Search Results

Search found 21392 results on 856 pages for 'audio output'.


  • Can output from OutputDebugString be viewed in Visual Studio's Output window?

    - by wageoghe
    I am using C# and VS2010. When I use OutputDebugString to write debug information, should it show up in the Output window? I can see the output from OutputDebugString in DebugView, but I thought I would also see it in Visual Studio's Output window.

    I have looked under Tools > Options > Debugging > General, and the output is NOT being redirected to the Immediate window. I have also looked under Tools > Options > Debugging > Output Window, and all General Output Settings are set to "On". Finally, I have used the drop-down list in the Output window to specify that Debug messages should appear. If I change Tools > Options > Debugging > General to redirect the output to the Immediate window, the OutputDebugString messages do not appear there either. Here is my entire test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;
        using System.Diagnostics;

        namespace OutputDebugString
        {
            class Program
            {
                [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
                public static extern void OutputDebugString(string message);

                static void Main(string[] args)
                {
                    Console.WriteLine("Main - Enter - Console.WriteLine");
                    Debug.WriteLine("Main - Enter - Debug.WriteLine");
                    OutputDebugString("Main - Enter - OutputDebugString");
                    OutputDebugString("Main - Exit - OutputDebugString");
                    Debug.WriteLine("Main - Exit - Debug.WriteLine");
                    Console.WriteLine("Main - Exit - Console.WriteLine");
                }
            }
        }

    If I run within the debugger, the Debug.WriteLine output does show up in the Output window, but the OutputDebugString output does not. If I run from a console window, both Debug.WriteLine and OutputDebugString show up in DebugView. Why doesn't the OutputDebugString output ever show up in the Output window?

    Ultimately, my intent is not to write a lot of debug output with OutputDebugString; rather, I will use System.Diagnostics or NLog or something similar. I am just trying to find out whether, if I configure a logging platform to write to OutputDebugString, the output will be visible from within the debugger.

    Edit: I went back to my original program (not the simple test above), which uses TraceSources and TraceListeners configured via the app.config file. If I configure the trace sources to write to System.Diagnostics.DefaultTraceListener (which is documented as writing to OutputDebugString), then the trace source output DOES go to the debug window. However, lines that write directly with OutputDebugString (as in my simple example) DO NOT go to the debug window. Also, if I use a different TraceListener that writes to OutputDebugString (I got one from Ukadc.Diagnostics on CodePlex), that output DOES NOT go to the debug window. Note that I have seen these questions but they did not provide a working solution: here and here
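    One plausible reading of the edit above: DefaultTraceListener is documented to hand its output to an attached debugger (falling back to OutputDebugString otherwise), which would explain why trace output reaches the Output window while raw OutputDebugString calls from a managed process do not. A minimal sketch of routing messages through Trace instead, assuming the TRACE symbol is defined (it is by default in Visual Studio builds):

        using System.Diagnostics;

        class TraceToOutputWindow
        {
            static void Main()
            {
                // DefaultTraceListener is normally present already; adding it
                // explicitly just makes the routing obvious in this sketch.
                Trace.Listeners.Clear();
                Trace.Listeners.Add(new DefaultTraceListener());

                // Appears in the VS Output window when a managed debugger is
                // attached, and in DebugView when running outside the debugger.
                Trace.WriteLine("Main - Enter - Trace.WriteLine");
                Trace.WriteLine("Main - Exit - Trace.WriteLine");
            }
        }

    Alternatively, enabling unmanaged (mixed-mode) debugging in the project's Debug settings is generally reported to make the Visual Studio debugger display raw OutputDebugString text as well.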


  • HTML5 <audio> in Safari: "Live Broadcast" vs. normal playback

    - by Peter Parente
    I'm attempting to embed an HTML5 audio element pointing to MP3 or OGG data served by a PHP file. When I view the page in Safari, the controls appear, but the UI says "Live Broadcast." When I click play, the audio starts as expected. Once it ends, however, I can't start it playing again by clicking play. Even using the JS API on the audio element and setting currentTime to 0 fails with an index error exception.

    I suspected the headers from the PHP script were the problem, particularly a missing content length. But that's not the case: the response headers include a proper Content-Length to indicate the audio has finite size. Furthermore, everything works as expected in Firefox 3.5+. I can click play on the audio element multiple times to hear the sound replay. If I remove the PHP script from the equation and serve up a static copy of the MP3 file, everything works fine in Safari.

    Does this mean Safari is treating audio src URLs with query parameters differently than URLs that don't have them? Anyone have any luck getting this to work? My simple example page is:

        <!DOCTYPE html>
        <html>
        <head></head>
        <body>
            <audio controls autobuffer>
                <source src="say.php?text=this%20is%20a%20test&format=.ogg" />
                <source src="say.php?text=this%20is%20a%20test&format=.mp3" />
            </audio>
        </body>
        </html>

    HTTP headers from the PHP script:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 15:39:34 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.10
        Content-Length: 8993
        Keep-Alive: timeout=2, max=98
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    HTTP headers from direct file access:

        HTTP/1.x 200 OK
        Date: Sun, 03 Jan 2010 20:06:59 GMT
        Server: Apache
        Last-Modified: Sun, 03 Jan 2010 03:20:02 GMT
        Etag: "a404b-c3f-47c3a14937c80"
        Accept-Ranges: bytes
        Content-Length: 8993
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: audio/mpeg

    I tried hard-coding the Accept-Ranges header into the script too, but no luck.


  • DVI to VGA adapter only working in one output

    - by Tom Jenkinson
    I have an AMD Radeon HD 6800 Series graphics card which has two DVI outputs. I tried to set it up with two monitors, both of which have VGA connectors on the end, using two DVI-to-VGA adapters like these: http://www.tvcables.co.uk/images/items/vga-to-dvi.jpg For some reason, nothing would reach the monitor plugged into the second output; it would just remain in standby. I tried all the different combinations of cables, adapters and the two monitors, but whichever monitor was plugged into the second output wouldn't work. I then decided to plug a different monitor into the second output, one with a DVI connector on the end so there was no need for an adapter, and it worked! Does anyone know why the second output on the graphics card won't work with a DVI-to-VGA adapter (while the first output will)? I'm really confused! Thanks


  • Windows Audio Issue

    - by Nikki
    This one is driving me nuts. Hoping someone can shed some light. I'm running Windows 7 using onboard audio. It's been fine for over two years, but lately there's a problem every time I play audio: I hear a small, soft burst of static and the volume turns itself down from 50% to 23%. Once at 23%, it plays fine. No related events are logged in Event Viewer, and no problems are reported for the device. Different headphones, same problem. I played around with the audio settings for hours but the problem persists.

    EDIT: OK, more info.

        Motherboard: ECS G31T-M LGA775

    System Information displays this:

        Name: High Definition Audio Device
        Manufacturer: Microsoft
        Status: OK
        PNP Device ID: HDAUDIO\FUNC_01&VEN_1106&DEV_E721&SUBSYS_10192683&REV_1001\4&3D4E739&0&0001
        Driver: c:\windows\system32\drivers\hdaudio.sys (6.1.7600.16385, 297.00 KB (304,128 bytes), 14/07/2009 9:51 AM)

    I'll keep adding info as I find it. The question I want resolved is: is it faulty hardware? If so, I can buy a sound card. I can't imagine software is responsible, since I haven't installed anything new for weeks and virus scans are clear. The static burst is irritating, to say the least. I've tried two different headphones and separate speakers; same problem. I know it's not an easy problem, but I was hoping someone had encountered the same thing.


  • Simulating audio playback on a headless Linux server

    - by afro
    Hi people, we have a headless Linux server (Debian 5) we use for running integration tests of our web-page code. Among these tests are ones implemented using Selenium, which practically simulates a user browsing our pages and clicking on things. One of these tests is failing now, because it involves starting a Flash-based audio player and checking whether the progress bar gets displayed properly. The reason this test fails is that there is no way to play the audio, and no sound card on the machine, which has simple webserver hardware.

    So, my question would be: is there a simple way of giving a program the impression that its audio output is being processed and playback is taking place? I don't have to record the playback, or redirect it, or anything like that; I just need a dummy sound card, like the dummy X server we are using, which does not actually need to display anything.

    I have tried using JACK, but it's too complicated, and the documentation does not even answer this very simple question. I also installed ALSA on the server; it 'pretends' to run, but when a program tries to play audio, it just spews error and debug information having to do with the non-existence of a sound card. It would be really awesome if one of you has a simple answer to this question. Cheers, Ulas


  • Configure audio on HP ENVY 4 ultrabook

    - by phodu_insaan
    I want to configure audio for Ubuntu 12.04 on my laptop. Currently the audio just does not play. If I plug in headphones, then somewhere midway to being fully plugged in the audio plays on the headphones; I plug them in further and the sound disappears. How do I get this to work?

        lspci | grep audio
        Audio device: Intel Corporation Panther Point High Definition Audio Controller

    My laptop is one of the Beats edition HP laptops, and the driver for Win7 was an IDT HD audio driver.

    --EDIT-- The output of cat /proc/asound/card0/codec* | grep Codec is:

        Codec: IDT 92HD91BXX
        Codec: Intel PantherPoint HDMI

    I need to get both the IDT codec working and the HDMI output working with my TV. --EDIT--

    --EDIT 2-- I have added blacklist snd-usb-audio to the end of the file /etc/modprobe.d/alsa-base.conf. Now the sound plays from my laptop speakers only when I plug in headphones/external speakers; otherwise, no sound. :( Please help me get everything working as it should. --EDIT 2--

    Thanks


  • AMR audio file playback issue on different devices

    - by user352537
    I have got a quite strange problem here. I am developing IM software and need to play audio files recorded by another client on Android. The audio file I've got can be played with AVAudioPlayer on a 3GS device (iOS 4.2.1) and in the 4.2 simulator. But when I tried to play it on an iPhone 4 (iOS 4.3.3), the "play" function always returns NO. I also tried with two iPhone devices; the audio files recorded by the iPhone client can be played on both the 3GS and the iPhone 4.

    So I asked the Android developers about the recording parameters they've used. They said that the AudioEncoder they used was DEFAULT. There are also some other parameters, as follows:

        private AudioEncoder() {}
        public static final int DEFAULT = 0;
        /** AMR (Narrowband) audio codec */
        public static final int AMR_NB = 1;
        /** @hide AMR (Wideband) audio codec */
        public static final int AMR_WB = 2;
        /** @hide AAC audio codec */
        public static final int AAC = 3;
        /** @hide enhanced AAC audio codec */
        public static final int AAC_PLUS = 4;
        /** @hide enhanced AAC plus audio codec */
        public static final int EAAC_PLUS = 5;

    Does anybody know what's the matter?


  • Writing an audio player in C#

    - by Malki
    Hi, I have a pretty cool idea for a very special media player. I like to think of this project as a mini-startup, since I don't yet know if my idea is practical. Anyway, before implementing my idea, I first need to be able to implement a simple audio player. My preferred language for this project is C#, simply because it's so easy to use, but any other object-oriented language would be fine too, I guess. I started out with no knowledge whatsoever about audio. My main goals right now are:

    1. Being able to play audio files, in as many formats as possible (sort of a VLC-type player, but only audio for now).

    2. Being able to analyze audio files: reading frequency, amplitude, volume, and other information about the audio. I think maybe a good idea here is to be able to analyze one format (PCM?), and then temporarily convert any file I want to analyze to that format. This is in order to later implement a mechanism that compares songs and identifies similar songs to recommend to the user (this feature isn't part of my idea, but I figured that since it exists in many players nowadays, I need to have it too if I want to be able to compete with them). BTW, I currently don't have any knowledge about audio/wavelengths/frequencies and such, so I'd appreciate it if someone could point me in the right direction about this analysis feature.

    Maybe in the future I'd expand to playing video files as well, but for now I'm concentrating on audio. After searching the Internet for a while, I've come across LAME. Problem is, it's not C#, and I'm not sure how to use it. I know there is something called "interoperability" that is supposed to let me work with native DLL files through C#. Any information about that would be helpful as well. Any help would be much appreciated. Thanks, Malki :)
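    A minimal playback sketch for the first goal, assuming the open-source NAudio library (the file path is a placeholder and error handling is omitted); a managed library like this also avoids having to P/Invoke into native DLLs such as LAME directly:

        using System;
        using NAudio.Wave;

        class MiniPlayer
        {
            static void Main()
            {
                // AudioFileReader decodes the file to floating-point PCM samples;
                // WaveOutEvent plays them on the default output device.
                using (var reader = new AudioFileReader(@"C:\music\song.mp3"))
                using (var output = new WaveOutEvent())
                {
                    output.Init(reader);
                    output.Play();
                    Console.WriteLine("Playing... press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }

    AudioFileReader exposes the decoded audio as floating-point PCM samples, which is the kind of raw data the analysis ideas above (frequency, amplitude, similarity) would operate on, typically via an FFT over short windows.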


  • iPhone SDK Smaller CAF files: lower recording quality with Audio Queues?

    - by Nick
    In my iPhone app I have voice recording functionality that utilizes the Audio Queue recording functions of the SDK. I'm saving directly to CAF format and using the following setting for the AudioStreamBasicDescription reference:

        audioFormat.mFormatID = kAudioFormatLinearPCM;

    I can see that there are other format IDs I could use, like:

        kAudioFormatLinearPCM
        kAudioFormatAppleLossless
        kAudioFormatAppleIMA4
        kAudioFormatiLBC
        kAudioFormatULaw
        kAudioFormatALaw

    My knowledge of sound formats is very limited, so my question is: which of these should I use to create the smallest compressed audio recording files? Also, are there other settings I should apply to lower the quality and file size even further?


  • How to program a real-time accurate audio sequencer on the iphone?

    - by Walchy
    Hi... I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried all possible audio techniques on the iPhone, starting from AudioServicesPlaySystemSound and AVAudioPlayer and OpenAL to Audio Queues. In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows sounds to be loaded into multiple buffers and then played whenever needed. Here is the basic code.

    Init:

        int channelGroups[1];
        channelGroups[0] = 8;
        soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

        int i = 0;
        for (NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil]) {
            [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
            i++;
        }

        [NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];

    In the initialisation I create the sound engine, load some sounds into different buffers, and then establish the sequencer loop with NSTimer.

    Audio loop:

        - (void)drumLoop:(NSTimer *)timer
        {
            for (int track = 0; track < 4; track++) {
                unsigned char note = pattern[track][step];
                if (note)
                    [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
            }
            if (++step >= 16)
                step = 0;
        }

    That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync. As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source), so the problem may be with NSTimer? Now there are dozens of sound sequencer apps in the App Store and they have accurate timing. E.g. "idrum" has a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution. Does anybody have any idea? Thanks for any help in advance! Best regards, Walchy
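    The approach usually suggested for this kind of jitter is to derive step timing from the audio stream's sample position instead of an NSTimer, so main-thread work such as drawing cannot shift the beat. Below is a language-agnostic sketch of that idea, written in C# only for consistency with the other examples on this page; the helper names are made up, and on iOS the equivalent place for this logic would be a RemoteIO/Audio Unit render callback:

        class SampleClockSequencer
        {
            const int SampleRate = 44100;
            const double Bpm = 180.0;
            // 16th notes: samples per step = samples per beat / 4.
            static readonly int SamplesPerStep = (int)(SampleRate * 60.0 / Bpm / 4.0);

            long samplePosition = 0;
            int step = 0;

            // Called by the audio system to fill each output buffer.
            public void Render(float[] buffer)
            {
                for (int i = 0; i < buffer.Length; i++)
                {
                    if (samplePosition % SamplesPerStep == 0)
                    {
                        TriggerStep(step);          // start this step's voices at this exact sample
                        step = (step + 1) % 16;
                    }
                    buffer[i] = MixActiveVoices();  // sum whatever voices are currently sounding
                    samplePosition++;
                }
            }

            // Stubs standing in for the real voice/mixer code.
            void TriggerStep(int s) { }
            float MixActiveVoices() { return 0f; }
        }

    Because each step boundary falls on an exact sample offset, the pattern stays locked to the audio clock even at 180 bpm while the UI redraws.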


  • Best practice for C++ audio capture API under Linux?

    - by braddock
    I need to create a C++ application with simple microphone recording functionality. I can't say that there aren't enough audio APIs to do this! PulseAudio, ALSA, /dev/dsp, OpenAL, etc. My question is: what is the current "best practice" API? PulseAudio seems to be supported by most modern distros, but seems almost devoid of documentation. Will OpenAL be supported across different distros, or is it too obscure? Have I missed any? Is there no simple answer? Thanks!


  • Why does my audio keep reverting to stereo?

    - by GraemeF
    I have an ASUS P7P55D LE motherboard with onboard sound, I am running Windows 7 64-bit, and I am using the SPDIF output from my motherboard to my receiver. On the receiver I can see which audio channels are in use (i.e. stereo or 5.1). In the audio interface properties I can test the DTS Audio and Dolby Digital outputs and both work fine, but when I try to play a game with 5.1 sound (I've tried Left 4 Dead and Dragon Age: Origins) it reverts to stereo. I was getting this behaviour with the default Microsoft drivers, so I installed the latest from the ASUS website, but there is no difference. I notice that on the Advanced tab the Default Format only allows me to choose from various 2-channel formats, so maybe that has something to do with it? How can I get 5.1 output all of the time?


  • Windows 7 audio problem

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 Professional 64-bit on it. The audio device works properly while listening to or watching audio/video content (from disk or the internet). Using the VoIP program Ventrilo, audio from other people sounds good and I hear their voices clearly. Using VoIP programs such as TeamSpeak 2/3, MSN, or Skype, I hear a really distorted voice and it's impossible to understand anything. I can say that in the first week after I bought the laptop and installed Win7, Skype worked fine; this problem appeared after I installed Ventrilo. I tried removing Ventrilo, but nothing changed. :( Please help me :D Thanks, Marcx


  • Integrated Graphics and Audio as Media Center?

    - by Will
    I'm considering setting up a PC as a media center, mainly to watch movies (ideally in HD quality) and listen to music, but also to perform tasks like e-mail, web browsing, ... I quite like the looks and the price of this barebone: http://www.asus.de/Barebone_PC/S_Series_7L/S2P8H61E However, it comes with integrated graphics and audio and only has one free PCI Express slot, which would mean that in the worst case, where both the integrated graphics and audio turn out to be insufficient, I could only upgrade one. So are integrated graphics and audio sufficient for a media center solution? Cheers, Will


  • Audio broken on clean install of ThinkPad T60

    - by Ben Alpert
    I just reinstalled a clean copy of XP SP3 on a ThinkPad T60. After downloading all of the missing drivers from the Lenovo site, everything seems to work except for the audio. I installed the audio driver from http://www-307.ibm.com/pc/support/site.wss/MIGR-62928.html (version 5.10.1.4326), so SoundMAX shows up in the Control Panel and everything in the Sound preferences looks okay, but no sound comes out of the speakers when I try to play music or hear the interface sound effects. There's also no sound if I plug in headphones and listen. Some forums suggested installing hotfix Q888111, but it's already included in SP3 so I can't install it. I also tried reinstalling the audio driver mentioned above in case something went wrong the first time, but it still doesn't work. Does anyone know how to make the sound work?


  • Enable multiple audio output on Windows 7

    - by patrick
    For Windows 7, 64-bit: I have a digital SPDIF output to my stereo, which drives speakers in other rooms. I also have a set of speakers connected to the regular audio jack on the computer. This allows me to send music to the kitchen while my child plays games on the computer. Works great, except when I'm playing games and still want to listen to music. ;-D I know I can manually switch WMP to play through the speakers instead of SPDIF, but I was wondering if there's any way to enable simultaneous audio output in Windows 7? Virtual Audio Cable is a non-starter because I'm running 64-bit and the VAC driver isn't signed.


  • HDMI Audio drops out when display enters powersave

    - by Jared Tritsch
    I have a Windows 8 machine with an AMD APU attached to my home theater system through HDMI (the HDMI routes through a home theater amp, then into the TV). Here's my problem: whenever the display is interrupted, usually by the TV being turned off or going into power-save mode, the audio device is listed as "Disconnected" in Windows audio devices and I can't get it to re-recognize that the HDMI audio is, in fact, plugged in. The only solution I have found so far is to restart the machine, which will then recognize the device without any problems, until the next time the TV turns off and the problem resurfaces again. Has anyone else seen this phenomenon? I have no idea if it's the GPU, the HDMI interface, the amp, or even the TV itself, as there really isn't much of a way to tell...


  • Logitech bluetooth audio receiver quality

    - by lietus
    I bought a Logitech Bluetooth audio receiver and have run into a problem: when playing audio, it contains short hisses/clicks on higher tones or on vocal plus heavy instrumental music (the sound quality appears to be unclean). I'm not sure what the reason could be, but this only happens when I play from certain devices. My laptops (a Dell XPS and an Asus Eee Seashell, both Win7) and my phone (Samsung Player 5) all have the problem. However, when I tried using a Samsung S2 phone, the audio was crystal clear. It seems this could be something to do with the transmitting Bluetooth device. Has anyone encountered a similar problem?


  • Ubuntu 11.04: No Audio Output

    - by Jason George
    I installed a fresh copy of Ubuntu 11.04 earlier this week and I'm having trouble getting my audio working. It was working fine in 10.04, and all the resources I can find on troubleshooting seem to be fairly dated, so I'm not sure if they apply. The device is:

        CMI8788 [Oxygen HD Audio] Analog Stereo Duplex

    Playing a WMA file shows 0.00 dB output when I mouse over the sound controller in the status bar, and obviously there is no output from my speakers. I tried adjusting the profile, thinking I might have the wrong one. That seems to have made things worse: where the mouse-over originally said something along the lines of "Oxygen HD Audio," it now reads "Dummy Output." Selecting "Test Speakers" in Sound Preferences crashes the dialog. Any pointers would be great.


  • FFmpeg add multiple audio files to video at specific points

    - by Arran
    I have two audio files, each about 3 minutes long. I want to take the first 10 seconds of each file and add them to a video file at specific points: 0 seconds and 10 seconds. So the resulting video should be 20 seconds long. I've got this far:

        ffmpeg -i video.mov -ss 0 -t 20 -itsoffset 0 -i audio1.mp3 -itsoffset 10 -i audio2.mp3 -acodec copy -vcodec copy out.mov

    ...but the resulting video has 20 seconds of the first audio file only; the second audio file doesn't start at 10 seconds like it should. Any help would be appreciated, thanks!


  • Setting Up a Virtual Sound Device To Stream Output via TCP/IP

    - by Martindale
    I'm interested in installing a custom driver/device that will open a socket (or listen on one) and stream audio via TCP/IP, on both Windows and Linux. I would like to be able to specify this device as the "output" for specific applications so that I can route my audio through a completely different machine (for example, in complex Synergy setups where my headphones might be connected to one machine, but the audio is being generated by another). On Linux I expect this to be fairly easy, but I anticipate having to install a custom driver on Windows.
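    On the Windows side there is also a user-space fallback that needs no custom driver, sketched below under clear assumptions: capture whatever the default device is rendering via a WASAPI loopback and push the raw bytes over TCP. This assumes the NAudio library; the host, port, and format handling are placeholders, and a real setup would also need to transmit the capture's WaveFormat so the receiver knows how to play the stream.

        using System;
        using System.Net.Sockets;
        using NAudio.Wave;

        class LoopbackToTcp
        {
            static void Main()
            {
                // Placeholder endpoint for the machine that should play the audio.
                using (var client = new TcpClient("192.168.1.50", 9000))
                using (var stream = client.GetStream())
                using (var capture = new WasapiLoopbackCapture())
                {
                    // Raw PCM in the device's mix format, sent as-is.
                    capture.DataAvailable += (s, e) =>
                        stream.Write(e.Buffer, 0, e.BytesRecorded);

                    capture.StartRecording();
                    Console.WriteLine("Streaming system audio... press Enter to stop.");
                    Console.ReadLine();
                    capture.StopRecording();
                }
            }
        }

    This mirrors the whole default output rather than appearing as a selectable per-application device, which is why the virtual-device route described above remains the cleaner fit for the original requirement.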


  • Windows 7 HDMI audio

    - by YDG
    Hello, I am using Windows 7 and here is my question: is there a way for Windows to switch my audio output to HDMI automatically when an HDMI cable is plugged in? Right now, to switch the audio output, I need to set my default output to HDMI every time I plug the cable in, and set it back to the speakers once I unplug it. Cheers!


  • Synchronizing audio over a network

    - by sharkin
    I'm at the start of designing a client/server audio system which can stream audio arbitrarily over a network. One central server pumps out an audio stream, and some number of clients receive the audio data and play it. So far no magic is needed, and I have even got this scenario to work with VLC media player out of the box. However, the tricky part seems to be synchronizing the audio playback so that all clients are in audible sync (actual latency can be allowed as long as playback is perceived to be in sync by a human listener). My question is whether there's any known method or algorithm to use for these types of synchronization problems (video is probably solved the same way). My own initial thoughts center around synchronizing clocks between the physical machines, thereby creating a virtual "main timer", and somehow aligning audio data packets against it. Some products already solving the problem: http://www.sonos.com and http://netchorus.com/ Any pointers are most welcome. Thanks. PS: This related question seems to have died long ago.
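    A minimal sketch of the "virtual main timer" idea, under clear assumptions: the machines' clocks are already disciplined to a common reference (e.g. NTP), the server stamps every chunk with a presentation time that includes a fixed latency budget larger than the worst-case network jitter, and each client holds a chunk until that time arrives on its own clock. The names and numbers are illustrative, not a real protocol.

        using System;
        using System.Threading;

        class ScheduledChunk
        {
            public DateTime PlayAtUtc;   // shared-clock time at which every client should start this chunk
            public byte[] Pcm;           // audio payload for one chunk
        }

        static class SyncScheduling
        {
            // Fixed budget added by the server; must exceed worst-case network delay plus jitter.
            static readonly TimeSpan LatencyBudget = TimeSpan.FromMilliseconds(200);

            // Server side: stamp a chunk as it is sent out.
            public static ScheduledChunk Stamp(byte[] pcm) =>
                new ScheduledChunk { Pcm = pcm, PlayAtUtc = DateTime.UtcNow + LatencyBudget };

            // Client side: wait until the shared deadline, then hand the chunk to the audio output.
            public static void PlayWhenDue(ScheduledChunk chunk, Action<byte[]> playPcm)
            {
                var wait = chunk.PlayAtUtc - DateTime.UtcNow;   // assumes clocks are already in sync
                if (wait > TimeSpan.Zero)
                    Thread.Sleep(wait);
                playPcm(chunk.Pcm);
            }
        }

    In practice the sleep would be replaced by scheduling against the sound card's own sample clock (plus slight resampling to correct drift), but a shared timestamp plus a fixed playback delay is the core of the approach.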


  • Audio Detection in Matlab

    - by insane-36
    I am writing Matlab code that would be able to read an audio file and then compare it to another audio file and recognize whether the two are the voice of the same person. Both audio files contain the same word utterance, and each is about 1 minute long. I have come to know that the approach of sliding windows using a Hamming window would work best for this, but I have very little idea how to do it. The simple code to read an audio file and then display a 10 s portion is below:

        [x, fs, nbits] = wavread('01-AudioTrack 01.wav');
        subplot(211)
        plot(x)
        title('Entire Wave')
        smallRange = 1:100000;
        subplot(212)
        plot(smallRange, x(smallRange))

    How do I make Hamming windows of 10 ms each in this case, and what approaches should I take to deal with this problem?
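    As a hedged illustration of the sliding-window idea (written in C# only to stay consistent with the other examples on this page; in Matlab the window itself would typically come from the Signal Processing Toolbox's hamming function): cut the signal into short frames, multiply each frame by the Hamming window w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1)), and compute the speaker-comparison features per frame. The frame and hop lengths below are common speech-processing defaults, not values taken from the question.

        using System;

        class HammingFrames
        {
            static double[] HammingWindow(int n)
            {
                var w = new double[n];
                for (int i = 0; i < n; i++)
                    w[i] = 0.54 - 0.46 * Math.Cos(2.0 * Math.PI * i / (n - 1));
                return w;
            }

            // Returns windowed frames of frameMs milliseconds, advancing by hopMs.
            static double[][] Frame(double[] x, int fs, int frameMs = 10, int hopMs = 5)
            {
                int frameLen = fs * frameMs / 1000;
                int hop = fs * hopMs / 1000;
                var w = HammingWindow(frameLen);
                int count = x.Length >= frameLen ? (x.Length - frameLen) / hop + 1 : 0;
                var frames = new double[count][];
                for (int f = 0; f < count; f++)
                {
                    frames[f] = new double[frameLen];
                    for (int i = 0; i < frameLen; i++)
                        frames[f][i] = x[f * hop + i] * w[i];   // apply the window sample by sample
                }
                return frames;
            }
        }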


  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language-learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio streamed to all students), let students talk to each other (= P2P audio) in groups of two or more, and let students watch a video coming from a DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over, or block the desktop of the students. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as small as possible. Optionally, a student sitting at home should be supported, but it's not a high priority.

    I am now finished with the classroom control part of the application (login, monitor, block, ...) and want to start the audio and video part. I've been evaluating several options, like DirectX, GStreamer and SIP, but now I have to make a decision.

    DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video; the encoding/decoding/network part I would have to do myself.

    GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (OSSBuild) and it does seem to involve a lot of trial and error to make something work:

        - microphone capture (via directsoundsrc) produces crackling noises on some computers
        - the rtpL16 payloader didn't work well
        - streaming raw audio over the network only works at a sampling rate of 8000, no higher
        - there are a lot of errors when receiving MPEG-4 video (bad I-frames), on some computers worse than others

    It is my impression that GStreamer is primarily targeted at Linux platforms; development and support for the Windows platform seem to be a little behind. Nevertheless, it's a powerful framework that could save me months and years of work.

    SIP seems to be able to do everything I want, but it is targeted towards telephony and IM. I don't know how flexible SIP is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like OpalVoip and FreeSWITCH are the actual audio/video capture, the encoding, and the transmission.

    Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough? Does anyone have experience with all or one of these technologies? Maybe there are even other options I can look at? Many thanks for your advice.

