Search Results

Search found 9074 results on 363 pages for 'audio encoding'.


  • What do (Clip) and DataLine.Info represent?

    - by user528050
    I got this code from one of my friends:

        import java.io.*;
        import javax.sound.sampled.*;

        public class xx {
            public static void main(String args[]) {
                try {
                    File f = new File("mm.wav");
                    AudioInputStream a = AudioSystem.getAudioInputStream(f);
                    AudioFormat au = a.getFormat();
                    DataLine.Info di = new DataLine.Info(Clip.class, au);
                    Clip c = (Clip) AudioSystem.getLine(di);
                    c.open(a);
                    c.start();
                } catch (Exception e) {
                    System.out.println("Exception caught");
                }
            }
        }

    But I didn't understand what this line means: Clip c = (Clip) AudioSystem.getLine(di); What does the (Clip) represent? And my second question: what is DataLine? Is it an interface, and what is the meaning of the expression DataLine.Info?
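
    For reference: Clip and DataLine are both interfaces in javax.sound.sampled, and DataLine.Info is a nested class that describes the kind of line being requested (its type and audio format). AudioSystem.getLine() returns a plain Line, which is why the result is cast down to Clip. A minimal sketch that also checks support before opening (a variation on the code above, not a fix to it):

        import java.io.File;
        import javax.sound.sampled.*;

        public class ClipInfoDemo {
            public static void main(String[] args) throws Exception {
                AudioInputStream in = AudioSystem.getAudioInputStream(new File("mm.wav"));
                AudioFormat format = in.getFormat();
                // DataLine.Info describes the line we want: a Clip that can play this format.
                DataLine.Info info = new DataLine.Info(Clip.class, format);
                if (AudioSystem.isLineSupported(info)) {
                    // getLine() returns a Line; the cast narrows it to the Clip subinterface.
                    Clip clip = (Clip) AudioSystem.getLine(info);
                    clip.open(in);
                    clip.start();
                    clip.drain();  // block until playback finishes
                }
            }
        }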

    Read the article

  • UITableView with Shake and play.

    - by avural79
    Hi all. I am trying to make a musical app for the iPhone. The app is simple: there are a couple of musical note sample (caf) files. When the user taps predefined positions on a UIView (like strings), the app plays the note sample and adds a string value describing the note to an NSMutableArray. The played notes are displayed in a table. Now I want to add a shake-and-play mode to the app: when the user shakes the iPhone, the recorded notes play back from the first record to the last and loop again. Also, if the user shakes the iPhone harder, the notes should play faster. How can I do that? Any idea? Thanks
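
    One common approach for the shake part (a hedged sketch, not tested against this app): make the view controller first responder and handle the shake motion event. Scaling playback speed by shake intensity would need the accelerometer, which this sketch omits; playRecordedNotes is a hypothetical method that would walk the NSMutableArray.

        // Minimal sketch: detect a shake gesture in a UIViewController (iOS 3.0+).
        - (BOOL)canBecomeFirstResponder {
            return YES;
        }

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];
            [self becomeFirstResponder];  // required to receive motion events
        }

        - (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
            if (motion == UIEventSubtypeMotionShake) {
                [self playRecordedNotes];  // hypothetical: loop over the recorded note array
            }
        }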

    Read the article

  • Android Loading & Playing Sound Based on String

    - by Chance
    I'm currently working on a simple Android app, and right now I am trying to get it to load in and play sounds. The problem I am faced with is that I want the sound it uses to be based on a string (With the same name as the sound file). The reason for this is simplicity in both the code and adding on to it. Now unfortunately I can't just slap a string in place of referencing the actual sound, but is there some way for me to compare a string to the entire raw folder to find the matching sound, or some other alternative short of defining every sound manually? Thank you for your time.
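
    One way to avoid defining every sound manually (a sketch, assuming the string matches the raw resource name exactly): resolve the resource id at runtime with Resources.getIdentifier(). Requires android.media.MediaPlayer and android.util.Log imports; "this" is assumed to be an Activity or other Context.

        // Sketch: resolve a raw resource by name and play it.
        private void playSoundByName(String name) {
            int resId = getResources().getIdentifier(name, "raw", getPackageName());
            if (resId == 0) {  // getIdentifier returns 0 when nothing matches
                Log.w("Sound", "No raw resource named " + name);
                return;
            }
            MediaPlayer player = MediaPlayer.create(this, resId);
            player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
                @Override
                public void onCompletion(MediaPlayer mp) {
                    mp.release();  // free the player once the sound finishes
                }
            });
            player.start();
        }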

    Read the article

  • Preinitialize BackgroundAudioPlayer in WP7?

    - by kgrevehagen
    When I am using the BackgroundAudioPlayer in my Windows Phone 7 application, it takes a long time to load the first time I want to play a song. Is there any way of preinitializing the BackgroundAudioPlayer before playing the first track, so that when I start playing, it starts right away? I have googled it, but no luck. I am just using BackgroundAudioPlayer.Instance when I want to, e.g., play, pause, or stop an audio track. Is there something else I could do to fix this? Thanks in advance!

    Read the article

  • Drawing a waveform in C#

    - by user488792
    Hi! I want to be able to display a waveform in C#, along with some simple features such as zooming and selection. I already have the data as a short[] of amplitude values. However, I am an amateur when it comes to hand-coding GUIs. I have already found a possible helper class, WaveFormClass, that may help me achieve this, but as a backup I want to learn how to do it manually. So may I ask for some methods, and possibly some links, that will help? Thanks!
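
    As a starting point, a minimal sketch (WinForms assumed; "samples" is the short[] of amplitudes): downsample to one min/max pair per horizontal pixel and draw a vertical line for each, which is how most waveform views stay fast at any zoom level.

        using System;
        using System.Drawing;

        public static class WaveformRenderer
        {
            // Sketch: call from a Panel's Paint handler, e.g.
            //   panel.Paint += (s, e) => WaveformRenderer.Draw(e.Graphics, samples, panel.Width, panel.Height);
            public static void Draw(Graphics g, short[] samples, int width, int height)
            {
                int mid = height / 2;
                float samplesPerPixel = (float)samples.Length / width;
                using (var pen = new Pen(Color.SteelBlue))
                {
                    for (int x = 0; x < width; x++)
                    {
                        int start = (int)(x * samplesPerPixel);
                        int end = Math.Min((int)((x + 1) * samplesPerPixel) + 1, samples.Length);
                        if (end <= start) continue;  // fewer samples than pixels
                        short min = short.MaxValue, max = short.MinValue;
                        for (int i = start; i < end; i++)  // min/max over this pixel's slice
                        {
                            if (samples[i] < min) min = samples[i];
                            if (samples[i] > max) max = samples[i];
                        }
                        int y1 = mid - (int)((float)max / short.MaxValue * mid);
                        int y2 = mid - (int)((float)min / short.MaxValue * mid);
                        g.DrawLine(pen, x, y1, x, y2);
                    }
                }
            }
        }

    Zooming then amounts to drawing a sub-range of the array, and selection to tracking mouse-down/mouse-up pixel positions mapped back to sample indices.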

    Read the article

  • Getting Frequency Components with FFT

    - by ruhig brauner
    So I was able to solve my last problem, but I stumbled upon the next one already. I want to make a simple spectrogram, but in order to do so I want to understand how FFT libraries work and what they actually calculate and return. (FFT and signal processing is the number one topic I will get into as soon as I have time, but right now I only have time for some programming exercises in the evening. ;) ) Here I have summarized the most important parts:

        int framesPerSecond;
        int samplesPerSecond;
        int samplesPerCycle; // right now I want to refresh the spectrogram every
        DoubleFFT_1D fft;
        WAVReader audioIn;
        double audioL[], audioR[];
        double fftL[], fftR[];
        .....
        framesPerSecond = 30;
        audioIn = new WAVReader("Strobe.wav");
        int samplesPerSecond = (int) audioIn.GetSampleRate();
        samplesPerCycle = (int) (audioIn.GetSampleRate() / framesPerSecond);
        audioL = new double[samplesPerCycle * 2];
        audioR = new double[samplesPerCycle * 2];
        fftL = new double[samplesPerCycle];
        fftR = new double[samplesPerCycle];
        for (int i = 0; i < samplesPerCycle; i++) { // don't even know why,...
            fftL[i] = 0;
            fftR[i] = 0;
        }
        fft = new DoubleFFT_1D(samplesPerCycle);
        .....
        for (int i = 0; i < samplesPerCycle; i++) {
            audioIn.GetStereoSamples(temp);
            audioL[i] = temp[0];
            audioR[i] = temp[1];
        }
        fft.realForwardFull(audioL); // still stereo
        fft.realForwardFull(audioR);
        System.out.println("Check");
        for (int i = 0; i < samplesPerCycle; i++) {
            // storing the magnitude in the fftL/R arrays
            fftL[i] = Math.sqrt(audioL[2*i]*audioL[2*i] + audioL[2*i+1]*audioL[2*i+1]);
            fftR[i] = Math.sqrt(audioR[2*i]*audioR[2*i] + audioR[2*i+1]*audioR[2*i+1]);
        }

    So the question is: if I want to know which frequencies are in the sampled signal, how do I calculate them? (When I print the fftL/fftR arrays, I get some exponential forms at both ends of the array.) Thx :)
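
    To answer the bin-to-frequency question directly: after a size-N FFT of real input, bin k corresponds to k * sampleRate / N Hz, and only bins 0..N/2 carry unique information; the upper half is the mirror image of the lower half, which is likely why the magnitude plot rises at both ends. A small sketch using the variables above:

        // Sketch: map FFT bins to frequencies (names match the snippet above).
        int n = samplesPerCycle;                   // FFT size
        double binWidthHz = (double) samplesPerSecond / n;
        for (int k = 0; k <= n / 2; k++) {         // bins above n/2 mirror these
            double freqHz = k * binWidthHz;
            System.out.printf("bin %d -> %.1f Hz, magnitude %.3f%n", k, freqHz, fftL[k]);
        }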

    Read the article

  • Sound not playing when I press the button, and how to fix overlapping sounds

    - by alfredjunco
    The code is giving me a warning, "Unused variable 'path'", and when I press the button no sound plays. How do I fix this? playOnce: is declared in the .h file.

        - (void)playOnce:(NSString *)aSound;

        - (IBAction)beatButton50 {
            [self playOnce:@"racecars"];
        }

        - (void)playOnce:(NSString *)aSound {
            NSString *path = [[NSBundle mainBundle] pathForResource:aSound ofType:@"caf"];
            if ([theAudio isPlaying]) {
                [theAudio stop];
            }
        }

        - (void)playLooped:(NSString *)aSound {
            NSString *path = [[NSBundle mainBundle] pathForResource:aSound ofType:@"caf"];
            if (!theAudio) {
                theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
            }
            [theAudio setDelegate:self];
            // loop indefinitely
            [theAudio setNumberOfLoops:-1];
            [theAudio setVolume:1.0];
            [theAudio play];
        }

        - (void)stopAudio {
            [theAudio stop];
            [theAudio setCurrentTime:0];
        }
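
    The "unused variable" warning points at the bug: playOnce: builds the path but never creates or starts a player, so nothing can sound. A hedged fix (assuming theAudio is the AVAudioPlayer ivar used in playLooped:, and manual reference counting as in the original):

        - (void)playOnce:(NSString *)aSound {
            NSString *path = [[NSBundle mainBundle] pathForResource:aSound ofType:@"caf"];
            if ([theAudio isPlaying]) {
                [theAudio stop];   // stop the previous sound so they don't overlap
            }
            [theAudio release];    // drop the old player before replacing it
            theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                              error:NULL];
            [theAudio setDelegate:self];
            [theAudio setNumberOfLoops:0];  // play once, no looping
            [theAudio play];
        }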

    Read the article

  • Uninterrupted mp3 play on a website?

    - by Kevin
    A client is requesting a single track to be heard across the website. Generally I advise against it, but they insist. So, what is the most straightforward way of embedding a Flash player in a site so that when a user goes to another page there isn't a gap or interruption? I am thinking an iframe is required. I am using a Flash player that has auto-resume, but that only solves picking up where you last left off in the song before going to another page. I tried searching SO for an answer..

    Read the article

  • CMake: Mac OS X: ld: unknown option: -soname

    - by Alex Ivasyuv
    When I try to build my app with CMake on Mac OS X, I get the following error:

        Linking CXX shared library libsml.so
        ld: unknown option: -soname
        collect2: ld returned 1 exit status
        make[2]: *** [libsml.so] Error 1
        make[1]: *** [CMakeFiles/sml.dir/all] Error 2
        make: *** [all] Error 2

    This is strange, as the Mac uses the .dylib extension instead of .so. Here is my CMakeLists.txt:

        cmake_minimum_required(VERSION 2.6)
        PROJECT (SilentMedia)

        SET(SourcePath src/libsml)

        IF (DEFINED OSS)
          SET(OSS_src
            ${SourcePath}/Media/Audio/SoundSystem/OSS/DSP/DSP.cpp
            ${SourcePath}/Media/Audio/SoundSystem/OSS/Mixer/Mixer.cpp
          )
        ENDIF(DEFINED OSS)

        IF (DEFINED ALSA)
          SET(ALSA_src
            ${SourcePath}/Media/Audio/SoundSystem/ALSA/DSP/DSP.cpp
            ${SourcePath}/Media/Audio/SoundSystem/ALSA/Mixer/Mixer.cpp
          )
        ENDIF(DEFINED ALSA)

        SET(SilentMedia_src
          ${SourcePath}/Utils/Base64/Base64.cpp
          ${SourcePath}/Utils/String/String.cpp
          ${SourcePath}/Utils/Random/Random.cpp
          ${SourcePath}/Media/Container/FileLoader.cpp
          ${SourcePath}/Media/Container/OGG/OGG.cpp
          ${SourcePath}/Media/PlayList/XSPF/XSPF.cpp
          ${SourcePath}/Media/PlayList/XSPF/libXSPF.cpp
          ${SourcePath}/Media/PlayList/PlayList.cpp
          ${OSS_src}
          ${ALSA_src}
          ${SourcePath}/Media/Audio/Audio.cpp
          ${SourcePath}/Media/Audio/AudioInfo.cpp
          ${SourcePath}/Media/Audio/AudioProxy.cpp
          ${SourcePath}/Media/Audio/SoundSystem/SoundSystem.cpp
          ${SourcePath}/Media/Audio/SoundSystem/libao/AO.cpp
          ${SourcePath}/Media/Audio/Codec/WAV/WAV.cpp
          ${SourcePath}/Media/Audio/Codec/Vorbis/Vorbis.cpp
          ${SourcePath}/Media/Audio/Codec/WavPack/WavPack.cpp
          ${SourcePath}/Media/Audio/Codec/FLAC/FLAC.cpp
        )

        SET(SilentMedia_LINKED_LIBRARY
          sml
          vorbisfile
          FLAC++
          wavpack
          ao
          #asound
          boost_thread-mt
          boost_filesystem-mt
          xspf
          gtest
        )

        INCLUDE_DIRECTORIES(
          /usr/include
          /usr/local/include
          /usr/include/c++/4.4
          /Users/alex/Downloads/boost_1_45_0
          ${SilentMedia_SOURCE_DIR}/src
          ${SilentMedia_SOURCE_DIR}/${SourcePath}
        )

        #link_directories(
        #  /usr/lib
        #  /usr/local/lib
        #  /Users/alex/Downloads/boost_1_45_0/stage/lib
        #)

        IF(LibraryType STREQUAL "static")
          ADD_LIBRARY(sml-static STATIC ${SilentMedia_src})
          # rename library from libsml-static.a => libsml.a
          SET_TARGET_PROPERTIES(sml-static PROPERTIES OUTPUT_NAME "sml")
          SET_TARGET_PROPERTIES(sml-static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
        ELSEIF(LibraryType STREQUAL "shared")
          ADD_LIBRARY(sml SHARED ${SilentMedia_src})
          # change compile optimization/debug flags
          # -Werror -pedantic
          IF(BuildType STREQUAL "Debug")
            SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -ggdb")
          ELSEIF(BuildType STREQUAL "Release")
            SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -O3 -fomit-frame-pointer")
          ENDIF()
          SET_TARGET_PROPERTIES(sml PROPERTIES CLEAN_DIRECT_OUTPUT 1)
        ENDIF()

        ### TEST ###
        IF(Test STREQUAL "true")
          ADD_EXECUTABLE (bin/TestXSPF ${SourcePath}/Test/Media/PlayLists/XSPF/TestXSPF.cpp)
          TARGET_LINK_LIBRARIES (bin/TestXSPF ${SilentMedia_LINKED_LIBRARY})

          ADD_EXECUTABLE (bin/test1 ${SourcePath}/Test/test.cpp)
          TARGET_LINK_LIBRARIES (bin/test1 ${SilentMedia_LINKED_LIBRARY})

          ADD_EXECUTABLE (bin/TestFileLoader ${SourcePath}/Test/Media/Container/FileLoader/TestFileLoader.cpp)
          TARGET_LINK_LIBRARIES (bin/TestFileLoader ${SilentMedia_LINKED_LIBRARY})

          ADD_EXECUTABLE (bin/testMixer ${SourcePath}/Test/testMixer.cpp)
          TARGET_LINK_LIBRARIES (bin/testMixer ${SilentMedia_LINKED_LIBRARY})
        ENDIF (Test STREQUAL "true")
        ### TEST ###

        ADD_CUSTOM_TARGET(doc COMMAND doxygen ${SilentMedia_SOURCE_DIR}/doc/Doxyfile)

    There was no error on Linux. Build process:

        cmake -D BuildType=Debug -D LibraryType=shared .
        make

    I found that an incorrect link command is generated in CMakeFiles/sml.dir/link.txt. But why, when the whole point of CMake is cross-platform builds? How do I fix it?
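
    For what it's worth: -soname is a GNU ld option; the Apple linker's counterpart is -install_name, and CMake normally picks the right one from its platform detection. A .so target plus -soname on a Mac usually suggests stale platform results cached from the earlier Linux build (e.g. a CMakeCache.txt carried over in the source tree). A hedged first step, assuming an in-source build as above:

        # Clear cached platform checks, then reconfigure from scratch.
        rm -rf CMakeCache.txt CMakeFiles/
        cmake -D BuildType=Debug -D LibraryType=shared .
        make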

    Read the article

  • Optimal video resolution and encoding for recording games for YouTube?

    - by Rookie
    I want to record video from games, therefore I cannot use a very large video resolution, but I still want the large video view to look as sharp as the original encoded video before upload. I tried to use YouTube's recommended 854x640 resolution, but it wasn't possible with h264: the encoding software I used (HandBrake) converted the width to the nearest multiple of 4, which I think is a limitation of the h264 format. The video I encoded was sharp and of fine quality, but when I uploaded it to YouTube, it lost a lot of quality, and the preferred large video view looks almost as bad as a 320p video. I waited a few days in case YouTube hadn't finished processing it, but it never got sharper. So, which resolution and encoding options should I use if I want the large video player to show the sharpest possible video, retaining the original video quality as well as possible? I noticed that when recording at 640x480 the video was sharper than at 1280x720, so I'm not sure what I'm doing wrong here; both were h264. Is it at all possible to prevent YouTube from re-encoding the videos? I just wonder how people make such sharp videos, while mine are all blurry after upload even though they looked fine before. I also tried YouTube's suggested bitrates with h264, but it didn't work any better.

    Read the article

  • How to sync audio files with Logitech Media Server on Mac OS?

    - by Abhishek
    I want to customize the Logitech Media Server (web interface on localhost) so that N different audio files start playing at the same time on N Wi-Fi receivers, each file on a different receiver. Currently, the server will only sync one track to N receivers. Logitech Media Server is open source, so is this possible? How can I do it? Can you show me sample code?

    Read the article

  • Ask HTG: Dealing with Windows 8 CP Expiry, Nintendo DS Save Backups, Jumbled Audio Tracks in Windows Media Player

    - by Jason Fitzpatrick
    Once a week we round up some great reader questions and share the answers with everyone. This week we’re looking at what to do when Windows 8 Consumer Preview expires, backing up your Nintendo DS saves, and how to sort out jumbled audio tracks in Windows Media Player movies.

    Read the article

  • How to record both audio sources, when music is playing and my microphone is in use?

    - by YumYumYum
    I have music playing, and I have the microphone open; the microphone is already in use by another application. In that case, how can I record both the music and the microphone audio to a file (if possible, from the command line)? Follow-up:

        $ rec new-file.wav

        Input File     : 'default' (alsa)
        Channels       : 2
        Sample Rate    : 48000
        Precision      : 16-bit
        Sample Encoding: 16-bit Signed Integer PCM

        In:0.00% 00:00:25.94 [00:00:00.00] Out:1.24M [      |      ] Clip:0
        ^C
        $ sox -d new-file.wav
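
    One possible route (a sketch, PulseAudio assumed, and only if your ffmpeg build includes libpulse): capture the playback monitor and the microphone as two inputs and mix them with the amix filter. The source names below are hypothetical; check "pactl list sources short" for yours. PulseAudio generally allows several applications to capture the same microphone at once.

        # List sources; pick the "<sink>.monitor" entry for what's playing.
        pactl list sources short

        # Substitute your own source names for the two inputs.
        ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
               -f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo \
               -filter_complex amix=inputs=2 mixed.wav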

    Read the article

  • Join MP4 files in Linux

    - by Jose Armando
    I want to join two MP4 files into a single one. The video streams are encoded in h264 and the audio in AAC. I cannot re-encode the videos to another format for computational reasons. Also, I cannot use any GUI programs; all processing must be performed with Linux command-line utilities. FFmpeg cannot do this for MPEG-4 files, so instead I used MP4Box, e.g.:

        MP4Box -add video1.mp4 -cat video2.mp4 newvideo.mp4

    Unfortunately the audio gets all mixed up. I thought the problem was that the audio was in AAC, so I transcoded it to MP3 and used MP4Box again. In this case the audio is fine for the first half of newvideo.mp4 (corresponding to video1.mp4), but then there is no audio and I cannot seek in the video either. My next thought was that the audio and video streams had some small discrepancies in their lengths that I should fix. So for each input video I split out the video and audio streams and then joined them with the -shortest option in ffmpeg. Thus for the first video I ran:

        avconv -y -i video1.mp4 -c copy -map 0:0 videostream1.mp4
        avconv -y -i video1.mp4 -c copy -map 0:1 audiostream1.m4a
        avconv -y -i videostream1.mp4 -i audiostream1.m4a -c copy -shortest video1_aligned.mp4

    I did the same for the second video and then used MP4Box as before. Unfortunately that didn't work either. The only success I had was when I joined the video streams separately (i.e. videostream1.mp4 and videostream2.mp4) and the audio streams separately (i.e. audiostream1.m4a and audiostream2.m4a), and then joined the resulting video and audio in a final file. However, the synchronization is lost for the second half of the video. Concretely, there is a 1 second delay between audio and video. Any suggestions are really welcome.
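
    For what it's worth, more recent FFmpeg builds can concatenate MP4s without re-encoding via the concat demuxer (a sketch; both files must share codecs and stream parameters, and older ffmpeg/avconv versions may lack this demuxer):

        # list.txt names the inputs in playback order
        printf "file 'video1.mp4'\nfile 'video2.mp4'\n" > list.txt
        ffmpeg -f concat -i list.txt -c copy newvideo.mp4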

    Read the article

  • How to upload binary (audio) data from a Flash AS3 client to .NET server (WCF/REST/HTTP/?)?

    - by Bobby
    Simply stated: I'm trying to record audio in a browser and get that data back up to the server. I originally tried to capture, encode, and upload the audio using Silverlight, but because of the lack of suitable client-side encoding options, I'm now giving Flash a shot (Flash has baked-in support for encoding to Speex). I think I've figured out how to capture and encode the audio... but now what was easy in Silverlight is the challenge in Flash. My server side is .NET: MVC 2. I'm open to receiving the audio in whatever manner is best: REST, WCF... So that's my question: how can one upload binary data from Flash to a .NET server-side endpoint? If the answer is WCF, then how would one set up the client-side proxies to communicate with the service? If the answer is REST or HTTP POST, then how would one construct the HTTP request and pass along the data? I've been reading up on AS3, but am new to Flash dev... Thanks for any help!
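
    For the plain HTTP POST route, a minimal AS3 sketch (the endpoint URL is hypothetical): set a ByteArray as the request body and let the server read it from the request stream. On the MVC side an action could read Request.InputStream.

        // Sketch: POST a ByteArray of encoded audio to an HTTP endpoint.
        import flash.net.URLRequest;
        import flash.net.URLRequestMethod;
        import flash.net.URLLoader;
        import flash.utils.ByteArray;

        function uploadAudio(speexBytes:ByteArray):void {
            var req:URLRequest = new URLRequest("http://example.com/audio/upload"); // hypothetical URL
            req.method = URLRequestMethod.POST;
            req.contentType = "application/octet-stream";
            req.data = speexBytes;              // raw bytes become the POST body
            var loader:URLLoader = new URLLoader();
            loader.load(req);
        }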

    Read the article

  • MPlayer does not work

    - by Soham Pal
    Using the Xubuntu desktop, on Ubuntu Raring upgraded from Quantal. MPlayer never really worked: no video, no audio, nothing. I really can't be any more helpful, so here's the log:

        petey@home-pc:~$ mplayer "/home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/[HorribleSubs] Polar Bear Cafe - 01 [480p].mkv"
        MPlayer SVN-r35984-4.7 (C) 2000-2013 MPlayer Team

        Playing /home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/[HorribleSubs] Polar Bear Cafe - 01 [480p].mkv.
        libavformat version 55.0.100 (internal)
        libavformat file format detected.
        [lavf] stream 0: video (h264), -vid 0
        [lavf] stream 1: audio (aac), -aid 0
        [lavf] stream 2: subtitle (ass), -sid 0
        VIDEO: [H264] 848x480 0bpp 23.810 fps 0.0 kbps ( 0.0 kbyte/s)
        Clip info:
          creation_time: 2012-04-05 21:36:10
        Load subtitles in /home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/
        Can't open /dev/fb0: Permission denied
        [fbdev2] Can't open /dev/fb0: Permission denied
        VO: [v4l2] No such file or directory
        vo_cvidix: No vidix driver name provided, probing available ones (-v option for details)!
        [cyberblade] Error occurred during pci scan: Operation not permitted
        [mach64] Error occurred during pci scan: Operation not permitted
        [mga] Error occurred during pci scan: Operation not permitted
        [mga] Error occurred during pci scan: Operation not permitted
        [nvidia_vid] Error occurred during pci scan: Operation not permitted
        [pm3] Error occurred during pci scan: Operation not permitted
        [radeon] Error occurred during pci scan: Operation not permitted
        [rage128] Error occurred during pci scan: Operation not permitted
        [s3_vid] Error occurred during pci scan: Operation not permitted
        [SiS] Error occurred during pci scan: Operation not permitted
        [unichrome] Error occurred during pci scan: Operation not permitted
        [VO_SUB_VIDIX] Couldn't find working VIDIX driver.
        ==========================================================================
        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        libavcodec version 55.0.100 (internal)
        Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
        ==========================================================================
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        AUDIO: 44100 Hz, 2 ch, floatle, 0.0 kbit/0.00% (ratio: 0->352800)
        Selected audio codec: [ffaac] afm: ffmpeg (FFmpeg AAC (MPEG-2/MPEG-4 Audio))
        ==========================================================================
        [AO OSS] audio_setup: Can't open audio device /dev/dsp: No such file or directory
        DVB card number must be between 1 and 4
        AO: [null] 44100Hz 2ch floatle (4 bytes per sample)
        Starting playback...
        Movie-Aspect is 1.78:1 - prescaling to correct movie aspect.
        VO: [null] 848x480 => 854x480 Planar YV12
        A:   4.7 V:   4.7 A-V:  0.002 ct:  0.083   0/  0 22%  0%  0.5% 0 0
        MPlayer interrupted by signal 2 in module: sleep_timer
        A:   4.7 V:   4.7 A-V:  0.001 ct:  0.083   0/  0 21%  0%  0.5% 0 0
        Exiting... (Quit)
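
    Reading the log: MPlayer falls through video outputs it can't open (/dev/fb0, v4l2, VIDIX) and the OSS audio device (/dev/dsp), ending up at "VO: [null]" and "AO: [null]", i.e. it decodes but discards everything. A hedged first step, assuming this MPlayer build includes the x11 and pulse outputs, is to force outputs that exist on a desktop Ubuntu session:

        mplayer -vo x11 -ao pulse "[HorribleSubs] Polar Bear Cafe - 01 [480p].mkv"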

    Read the article

  • alsa - sound issues on ubuntu 12.04

    - by tam_ubuuser
    I have a Sony E series laptop with an HDMI port. At this stage I have tested my sound card, which provides audio out on my laptop, i.e. I can hear songs. My laptop has two sound cards: an AMD 5450 and an Intel HDA (alsamixer shows it as S/PDIF). I decided to connect the HDMI output to my new HD TV, but I got only video on the TV, no audio output (the HDMI cable works fine with Windows 7). I couldn't switch the output to the other card (I don't know how to do that). I decided to update ALSA, and ran the following in a terminal:

        sudo apt-add-repository ppa:ubuntu-audio-dev/alsa-daily
        sudo apt-get update
        sudo apt-get install alsa-hda-dkms

    Then, strangely, there was no login sound and no audio output on my laptop at all. So I started the sound troubleshooting procedure from the official Ubuntu site at step 1. Then my speaker icon disappeared from the taskbar, and obviously "aplay -l" reported no soundcards detected. So I ran step 4 from that guide, which lists all the audio hardware in my laptop:

        *-multimedia UNCLAIMED
             description: Audio device
             product: Cedar HDMI Audio [Radeon HD 5400/6300 Series]
             vendor: Hynix Semiconductor (Hyundai Electronics)
             physical id: 0.1
             bus info: pci@0000:01:00.1
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm pciexpress msi bus_master cap_list
             configuration: latency=0
             resources: memory:f0040000-f0043fff
        *-multimedia UNCLAIMED
             description: Audio device
             product: 5 Series/3400 Series Chipset High Definition Audio
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 05
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f5e00000-f5e03fff

    That command displayed the names of the two cards, but I still get nothing positive from "aplay -l", so I think ALSA cannot detect my sound cards. Is there a solution to this problem? It would be even better if ALSA could channel output through multiple sound cards. How should I install and configure ALSA so that it detects the HDMI output as soon as I connect the cable to my HD TV? And is it possible for ALSA and PulseAudio 2.0 to coexist, and if so, how?

    Read the article

  • Text Decoding Problem

    - by Jason Miesionczek
    So given this input string:

        =?ISO-8859-1?Q?TEST=2C_This_Is_A_Test_of_Some_Encoding=AE?=

    and this function:

        private string DecodeSubject(string input)
        {
            StringBuilder sb = new StringBuilder();
            MatchCollection matches = Regex.Matches(inputText.Text,
                @"=\?(?<encoding>[\S]+)\?.\?(?<data>[\S]+[=]*)\?=");
            foreach (Match m in matches)
            {
                string encoding = m.Groups["encoding"].Value;
                string data = m.Groups["data"].Value;
                Encoding enc = Encoding.GetEncoding(encoding.ToLower());
                if (enc == Encoding.UTF8)
                {
                    byte[] d = Convert.FromBase64String(data);
                    sb.Append(Encoding.ASCII.GetString(d));
                }
                else
                {
                    byte[] bytes = Encoding.Default.GetBytes(data);
                    string decoded = enc.GetString(bytes);
                    sb.Append(decoded);
                }
            }
            return sb.ToString();
        }

    the result is the same as the data extracted from the input string. What am I doing wrong that this text is not getting decoded properly?
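
    One observation (hedged, as a sketch rather than a full RFC 2047 decoder): the "?Q?" in the encoded-word marks Q-encoding (quoted-printable), so the =2C and =AE escapes have to be expanded into bytes before the charset decode; re-encoding the ASCII text with Encoding.Default leaves it unchanged. The B variant ("?B?") is the Base64 case, independent of whether the charset is UTF-8.

        using System;
        using System.Collections.Generic;
        using System.Text;

        // Sketch: decode the Q-encoded payload of an RFC 2047 encoded-word,
        // e.g. DecodeQEncoded(data, Encoding.GetEncoding("ISO-8859-1")).
        static string DecodeQEncoded(string data, Encoding charset)
        {
            var bytes = new List<byte>();
            for (int i = 0; i < data.Length; i++)
            {
                if (data[i] == '=' && i + 2 < data.Length)
                {
                    // "=XX" is a hex-encoded byte in the target charset
                    bytes.Add(Convert.ToByte(data.Substring(i + 1, 2), 16));
                    i += 2;
                }
                else if (data[i] == '_')
                {
                    bytes.Add(0x20);  // underscore encodes a space in Q-encoding
                }
                else
                {
                    bytes.Add((byte)data[i]);
                }
            }
            return charset.GetString(bytes.ToArray());
        }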

    Read the article

  • Why do some MP3s return application/octet-stream from mime_content_type?

    - by robertdd
    Why does mime_content_type($mp3_file_path) return application/octet-stream for some MP3 files? I have this:

        if (!empty($_FILES)) {
            $tempFile = $_FILES['Filedata']['tmp_name'];
            $image = getimagesize($tempFile);
            $mp3_mimes = array('audio/mpeg', 'audio/x-mpeg', 'audio/mp3',
                'audio/x-mp3', 'audio/mpeg3', 'audio/x-mpeg3', 'audio/mpg',
                'audio/x-mpg', 'audio/x-mpegaudio');
            if (in_array(mime_content_type($tempFile), $mp3_mimes)) {
                echo json_encode("mp3");
            } elseif ($image['mime'] == 'image/jpeg') {
                echo json_encode("jpg");
            } else {
                echo json_encode("error");
            }
        }
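
    A likely cause (hedged): MP3s that begin with a bare MPEG audio frame rather than an ID3 tag often don't match the magic database, so mime_content_type() falls back to application/octet-stream. One pragmatic workaround is to accept that generic type only when the file also looks like an MP3 by signature:

        // Sketch: signature check for files reported as application/octet-stream, e.g.
        //   if ($mime === 'application/octet-stream' && looks_like_mp3($tempFile)) { ... }
        function looks_like_mp3($path) {
            $head = file_get_contents($path, false, null, 0, 3);
            // "ID3" tag, or an MPEG frame sync (0xFF followed by top 3 bits set)
            return $head === 'ID3'
                || (ord($head[0]) === 0xFF && (ord($head[1]) & 0xE0) === 0xE0);
        }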

    Read the article

  • Where can I buy a Stereo audio to 3.5mm adapter?

    - by iftrue
    I need a stereo (6.35mm) to PC audio (3.5mm) adapter, and I'd like it to have an inch or two of cable so that yanking the cord doesn't break the audio port the 3.5mm plug is inserted into. I used to own one of these, but I lost the adapter. Where can I buy something like this online? I can only find solid adapters or 25-foot cables.

    Read the article

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding:

        Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to
        perform chunked transfer encoding on data sent to the server. When the
        Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips
        the chunking information before giving the data to the application.

    Can anyone decipher this?

    Q1: First, this text is on the page for WinHttpReadData, which is used to... read data within an HTTP client application, specifically the response data. So what does it mean when it says "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server"? The WinHttpReadData function isn't used with data being sent to the server; it is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability.

    Q2: Supposing that I figure out just what the newish chunked transfer support amounts to, how do I get that support? It says that it is new on Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and it encounters a chunked response, or WinHttpWriteData and it wants to send a chunked request? Between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003? In other words, is it the case that an app doing chunked transfer using this library must run on the OS specified?

    This might read like a rant, but it's not. I truly want to know.
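
    On the read side, the quoted sentence about stripping chunking can at least be illustrated. A hedged sketch of the usual response-read loop (assuming hRequest is an HINTERNET on which WinHttpReceiveResponse has succeeded, body is a std::string, and <windows.h>, <winhttp.h>, <vector>, <string> are included with winhttp.lib linked): with a chunked response, WinHTTP removes the chunk-size lines itself, so the buffer only ever holds entity bytes.

        DWORD avail = 0, read = 0;
        do {
            if (!WinHttpQueryDataAvailable(hRequest, &avail)) break;
            if (avail == 0) break;                        // end of response
            std::vector<char> buf(avail);
            if (!WinHttpReadData(hRequest, buf.data(), avail, &read)) break;
            body.append(buf.data(), read);                // plain de-chunked data
        } while (read > 0);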

    Read the article
