Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM Websphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50-750MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the output is being written into the heap, which is then causing an OutOfMemoryError and bringing down the entire server. These files can only be served out to authenticated users over https, which is why I am serving them through a Servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this): resp.setHeader("Content-length", "" + fileLength); resp.setContentType("application/vnd.ms-excel"); resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\""); FileInputStream inputStream = null; try { inputStream = new FileInputStream(path); byte[] buffer = new byte[1024]; int bytesRead = 0; do { bytesRead = inputStream.read(buffer, offset, buffer.length); resp.getOutputStream().write(buffer, 0, bytesRead); } while (bytesRead == buffer.length); resp.getOutputStream().flush(); } finally { if(inputStream != null) inputStream.close(); } The FileInputStream doesn't seem to be causing a problem, since if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem. What I am thinking is that the output of resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream(), causing my memory issues and crashing! I have tried buffering these streams and also tried using Channels from java.nio, none of which seems to make any difference to my memory issues. I have also flushed the outputstream once per iteration of the loop and after the loop, which didn't help.
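
    A minimal sketch of a loop that never holds more than one chunk in memory, with a termination bug fixed along the way (read() returns -1 at EOF, while comparing against buffer.length ends the loop early on any short read); the small setBufferSize() is a guess at encouraging the container to stream rather than accumulate, and path/fileLength are the question's own variables:

        // a sketch, not WebSphere-specific advice: Servlet 2.3 / Java 1.4 era APIs
        resp.setBufferSize(8 * 1024);             // keep the container's response buffer small
        resp.setContentLength((int) fileLength);  // a known length lets the container stream
        ServletOutputStream out = resp.getOutputStream();
        FileInputStream in = new FileInputStream(path);
        try {
            byte[] buffer = new byte[8 * 1024];
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                out.write(buffer, 0, bytesRead);  // write exactly what was read
            }
            out.flush();
        } finally {
            in.close();
        }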

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Each url corresponds to a unique asset. In the RetrieveBlob.aspx a Cache Profile is set. In Web.Config the profile looks like this (under the system.web tag): <caching> <outputCache enableOutputCache="true" /> <outputCacheSettings> <outputCacheProfiles> <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" /> </outputCacheProfiles> </outputCacheSettings> </caching> Ok, this works fine. When I put a breakpoint in the code behind of RetrieveBlob.aspx, it gets triggered the first time and not on subsequent requests. Now, I throw away the Cache Profile and instead I have this in my Web.Config under system.webServer: <caching> <profiles> <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> </profiles> </caching> Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer? I already tried adding something like this: <add extension="RetrieveBlob.aspx" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:00:30" varyByQueryString="assetId, assetFileId" /> But it doesn't work. An example of a url is: http://{server}/{application}/trunk/RetrieveBlob.aspx?assetId=31809&assetFileId=11829
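
    For reference, a hedged sketch of what might be intended here: in the system.webServer caching section the extension attribute matches a file extension such as .aspx, not a file name, and kernel-mode caching cannot vary by query string, so it is switched off below; treat the attribute values as assumptions to check against the IIS7 documentation:

        <caching>
          <profiles>
            <!-- match by extension; user-mode cache varies on the two ids, kernel cache is off -->
            <add extension=".aspx" policy="CacheForTimePeriod"
                 kernelCachePolicy="DontCache" duration="00:08:00"
                 varyByQueryString="assetId,assetFileId" />
          </profiles>
        </caching>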

    Read the article

  • Help needed with AES between Java and Objective-C (iPhone)....

    - by Simon Lee
    I am encrypting a string in Objective-C and also encrypting the same string in Java using AES and am seeing some strange issues. The first part of the result matches up to a certain point but then it is different, hence when I go to decode the result from Java on the iPhone it can't decrypt it. I am using a source string of "Now then and what is this nonsense all about. Do you know?" Using a key of "1234567890123456" The Objective-C code to encrypt is the following: NOTE: it is an NSData category so assume that the method is called on an NSData object so 'self' contains the byte data to encrypt. - (NSData *)AESEncryptWithKey:(NSString *)key { char keyPtr[kCCKeySizeAES128+1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [self length]; //See the doc: For block ciphers, the output size will always be less than or //equal to the input size plus the size of one block. //That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void *buffer = malloc(bufferSize); size_t numBytesEncrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES128, NULL /* initialization vector (optional) */, [self bytes], dataLength, /* input */ buffer, bufferSize, /* output */ &numBytesEncrypted); if (cryptStatus == kCCSuccess) { //the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted]; } free(buffer); //free the buffer; return nil; } And the Java encryption code is... public byte[] encryptData(byte[] data, String key) { byte[] encrypted = null; Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); byte[] keyBytes = key.getBytes(); SecretKeySpec keySpec = new SecretKeySpec(keyBytes, "AES"); try { Cipher cipher = Cipher.getInstance("AES/ECB/PKCS7Padding", "BC"); cipher.init(Cipher.ENCRYPT_MODE, keySpec); encrypted = new byte[cipher.getOutputSize(data.length)]; int ctLength = cipher.update(data, 0, data.length, encrypted, 0); ctLength += cipher.doFinal(encrypted, ctLength); } catch (Exception e) { logger.log(Level.SEVERE, e.getMessage()); } finally { return encrypted; } } The hex output of the Objective-C code is - 7a68ea36 8288c73d f7c45d8d 22432577 9693920a 4fae38b2 2e4bdcef 9aeb8afe 69394f3e 1eb62fa7 74da2b5c 8d7b3c89 a295d306 f1f90349 6899ac34 63a6efa0 and the Java output is - 7a68ea36 8288c73d f7c45d8d 22432577 e66b32f9 772b6679 d7c0cb69 037b8740 883f8211 748229f4 723984beb 50b5aea1 f17594c9 fad2d05e e0926805 572156d As you can see everything is fine up to - 7a68ea36 8288c73d f7c45d8d 22432577 I am guessing I have some of the settings different but can't work out which; I tried switching between ECB and CBC on the Java side and it had no effect. Can anyone help!? Please....
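
    A plausible explanation, offered as a guess rather than a verified diagnosis: when CCCrypt is given a NULL IV and the kCCOptionECBMode flag is not set, CommonCrypto runs AES in CBC mode with an all-zero IV, while the Java code runs ECB; the two modes agree on the first 16-byte block only, which matches the output above. A sketch of a Java cipher aligned with the iPhone side:

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        // a sketch: CBC with 16 zero bytes mirrors CommonCrypto's NULL-IV default
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS7Padding", "BC");
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, new IvParameterSpec(new byte[16]));

    The other direction should also work: passing kCCOptionECBMode | kCCOptionPKCS7Padding to CCCrypt would make the Objective-C side match the existing Java ECB code.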

    Read the article

  • reading a string with spaces with sscanf

    - by SDLFunTimes
    For a project I'm trying to read an int and a string from a string. The only problem is that sscanf appears to stop reading a %s when it sees a space. Is there any way to get around this limitation? Here's an example of what I'm trying to do: #include<stdio.h> #include<stdlib.h> int main(int argc, char** argv) { int age; char* buffer; buffer = malloc(200 * sizeof(char)); sscanf("19 cool kid", "%d %s", &age, buffer); printf("%s is %d years old\n", buffer, age); return 0; } What it prints is: "cool is 19 years old" where I need "cool kid is 19 years old". Does anyone know how to fix this?
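
    One way around it, sketched below: %s stops at whitespace by design, but a scanset such as %199[^\n] consumes everything up to the end of the line (the width limit guards the 200-byte buffer):

        /* a sketch: scan the int, then the rest of the line, spaces included */
        #include <stdio.h>

        int main(void) {
            int age;
            char buffer[200];
            if (sscanf("19 cool kid", "%d %199[^\n]", &age, buffer) == 2)
                printf("%s is %d years old\n", buffer, age);  /* cool kid is 19 years old */
            return 0;
        }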

    Read the article

  • Why is Swing Parser's handleText not handling nested tags?

    - by Jim P
    I need to transform some HTML text that has nested tags to decorate 'matches' with a css attribute to highlight it (like firefox search). I can't just do a simple replace (think if user searched for "img" for example), so I'm trying to just do the replace within the body text (not on tag attributes). I have a pretty straightforward HTML parser that I think should do this: final Pattern pat = Pattern.compile(srch, Pattern.CASE_INSENSITIVE); Matcher m = pat.matcher(output); if (m.find()) { final StringBuffer ret = new StringBuffer(output.length()+100); lastPos=0; try { new ParserDelegator().parse(new StringReader(output.toString()), new HTMLEditorKit.ParserCallback () { public void handleText(char[] data, int pos) { ret.append(output.subSequence(lastPos, pos)); Matcher m = pat.matcher(new String(data)); ret.append(m.replaceAll("<span class=\"search\">$0</span>")); lastPos=pos+data.length; } }, false); ret.append(output.subSequence(lastPos, output.length())); return ret; } catch (Exception e) { return output; } } return output; My problem is, when I debug this, the handleText is getting called with text that includes tags! It's like it's only going one level deep. Anyone know why? Is there some simple thing I need to do to HTMLParser (haven't used it much) to enable 'proper' behavior of nested tags? PS - I figured it out myself - see answer below. Short answer is, it works fine if you pass it HTML, not pre-escaped HTML. Doh! Hope this helps someone else. <span>example with <a href="#">nested</a> <p>more nesting</p> </span> <!-- all this gets thrown together -->
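
    For anyone hitting the same symptom, a hedged illustration of the asker's own finding: if the "HTML" handed to ParserDelegator is entity-escaped (&lt;span&gt; rather than <span>), the parser sees one flat run of text, so handleText appears to receive tags. A crude unescape for illustration only (a real entity decoder is preferable):

        // a sketch: tags must be real tags, not entities, before parsing
        String html = escaped.replace("&lt;", "<").replace("&gt;", ">").replace("&amp;", "&");
        new ParserDelegator().parse(new StringReader(html), callback, false);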

    Read the article

  • Sending multipart response for downloads in Zend Framework

    - by takeshin
    I'm sending files for download in an action helper (in parts if needed) like this: ... $response->sendHeaders(); $chunksize = 1 * (1024 * 1024); $bytesSent = 0; if ($httpRange) { fseek($file, $range); } while(!feof($file) && (!connection_aborted() && ($bytesSent < $newLength)) ) { $buffer = fread($file, $chunksize); // $response->appendBody($buffer); // this would be better print($buffer); flush(); $bytesSent += strlen($buffer); } fclose($file); I suspect that a better way would be to make use of the $response object instead of print. What is the recommended way to send big response objects using Zend Framework?
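
    A hedged observation rather than an authoritative answer: Zend_Controller_Response keeps everything passed to appendBody() in memory until the response is rendered, so for multi-hundred-megabyte downloads the print()/flush() loop is arguably the more appropriate tool; the one refinement worth sketching is draining PHP's own output buffers so each chunk actually leaves the process:

        // a sketch: make sure no PHP output buffer sits between print() and the client
        while (ob_get_level() > 0) {
            ob_end_flush();
        }
        // ... then run the existing fread()/print()/flush() loop unchanged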

    Read the article

  • how to send file via http with python

    - by ep45
    Hello, I have a problem. I use Apache with mod_wsgi and webpy, and when I send a file over http, a lot of packets are lost. This is my code: web.header('Content-Type','video/x-flv') web.header('Content-length',sizeFile) f = file(FILE_PATH, 'rb') while True: buffer = f.read(4*1024) if buffer : yield buffer else : break f.close() What is wrong in my code? Thanks.
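
    A guess at a sturdier route, sketched under the assumption that mod_wsgi provides the optional wsgi.file_wrapper extension (web.py exposes the WSGI environ as web.ctx.env): handing the open file to the server avoids the hand-rolled chunk loop entirely.

        # a sketch: let the WSGI server stream (and close) the file itself
        f = file(FILE_PATH, 'rb')
        wrapper = web.ctx.env.get('wsgi.file_wrapper')
        if wrapper:
            return wrapper(f, 4 * 1024)
        # otherwise fall back to the manual read/yield loop above

    It is also worth checking that sizeFile matches the file's true byte count; a Content-length larger than what is actually sent looks exactly like lost packets to the client.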

    Read the article

  • jQuery performance

    - by jAndy
    Hi Folks, imagine you have to do DOM manipulation like a lot (in my case, it's kind of a dynamic list). Look at this example: var $buffer = $('<ul/>', { 'class': '.custom-example', 'css': { 'position': 'absolute', 'top': '500px' } }); $.each(pages[pindex], function(i, v){ $buffer.append(v); }); $buffer.insertAfter($root); "pages" is an array which holds LI elements as jQuery objects. "$root" is a UL element. What happens after this code is, both UL's are animated (scrolling) and finally, within the callback of animate this code is executed: $root.detach(); $root = $buffer; $root.css('top', '0px'); $buffer = null; This works very well; the only thing I'm pi**ed off about is the performance. I do cache all DOM elements I'm laying a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with $('<div/>'), it is only stored in memory at this point, isn't it?
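
    One guess at the hot spot, with a sketch: calling append() once per LI re-enters jQuery's insertion machinery for every element, whereas a single append() call with an array of nodes inserts them in one pass (jQuery batches multiple nodes through a document fragment internally); and yes, an element built with $('<div/>') exists only in memory until it is inserted.

        // a sketch: collect plain DOM nodes, then insert them with one call
        var nodes = [];
        $.each(pages[pindex], function (i, v) {
            nodes.push(v.jquery ? v[0] : v);  // unwrap jQuery objects
        });
        $buffer.append(nodes);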

    Read the article

  • Put message after long idle time does not work

    - by Sydney
    I wrote a simple Java client using MQ v7 libraries (No JMS). I try to put a message in a queue using the following pattern: Put a message Wait for x minutes Put a message again It works, but if the idle time is too long (between 5-7 minutes), I get the following error: MQJE001: An MQException occurred: Completion Code 2, Reason 2195 MQJE007: IO error reading message data Error occured during API call - reason code0 MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: An MQException occurred: Completion Code 2, Reason 2009 MQJE003: IO error transmitting message buffer MQJE001: Completion Code 2, Reason 2009 An MQSeries error occurred : Completion code 2 Reason code 2009 com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2009 at com.ibm.mq.MQQueue.put(MQQueue.java:1511) After reading several threads on the subject, this error usually creates FDC dumps, but there is nothing in the system and queue manager logs. The channel is a SVRCONN channel.
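
    Reason 2009 (MQRC_CONNECTION_BROKEN) after a 5-7 minute idle window is the classic signature of a firewall or router silently dropping the idle TCP connection; the usual remedies are a lower HBINT on the SVRCONN channel or TCP keepalive, both configured on the MQ side. On the client side, a hedged sketch of defensive put logic:

        // a sketch: retry once when the connection was broken while idle;
        // reconnect() is a hypothetical helper that reopens qmgr and queue
        try {
            queue.put(message, pmo);
        } catch (MQException e) {
            if (e.reasonCode == MQException.MQRC_CONNECTION_BROKEN) { // 2009
                reconnect();
                queue.put(message, pmo);
            } else {
                throw e;
            }
        }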

    Read the article

  • How to understand the output of gcc -S *.cpp?

    - by user198729
    A sample one as follows: .file "hw.cpp" .section .rdata,"dr" LC0: .ascii "Oh shit really bad~!\15\12\0" .text .align 2 .globl __Z3badv .def __Z3badv; .scl 2; .type 32; .endef __Z3badv: pushl %ebp movl %esp, %ebp subl $8, %esp movl $LC0, (%esp) call _printf leave ret .section .rdata,"dr" LC1: .ascii "WOW\0" .text .align 2 .globl __Z3foov .def __Z3foov; .scl 2; .type 32; .endef __Z3foov: pushl %ebp movl %esp, %ebp subl $4, %esp movl LC1, %eax movl %eax, -4(%ebp) movl $__Z3badv, 4(%ebp) leave ret .def ___main; .scl 2; .type 32; .endef .align 2 .globl _main .def _main; .scl 2; .type 32; .endef _main: pushl %ebp movl %esp, %ebp subl $8, %esp andl $-16, %esp movl $0, %eax addl $15, %eax addl $15, %eax shrl $4, %eax sall $4, %eax movl %eax, -4(%ebp) movl -4(%ebp), %eax call __alloca call ___main call __Z3foov movl $0, %eax leave ret .def _printf; .scl 2; .type 32; .endef Has anyone tried to understand it?
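
    As a starting point, a guess-annotated excerpt of the prologue idiom in _main that trips up most readers; the four-instruction arithmetic rounds a byte count up to a multiple of 16 for the __alloca call:

        _main:
            pushl %ebp          # save the caller's frame pointer
            movl  %esp, %ebp    # establish this function's frame
            subl  $8, %esp      # reserve 8 bytes for locals
            andl  $-16, %esp    # align the stack to a 16-byte boundary
            movl  $0, %eax      # dynamic stack space requested (here: none)
            addl  $15, %eax     # gcc at -O0 adds 15 twice; the net effect is
            addl  $15, %eax     #   to round up to a 16-byte multiple
            shrl  $4, %eax      #   (shift right then left by 4
            sall  $4, %eax      #   clears the low four bits)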

    Read the article

  • F# working with DataReader

    - by mamu
    let getBytesData x = let len = reader.GetBytes(1, int64 0, null, 0, 0); // Create a buffer to hold the bytes, and then // read the bytes from the DataTableReader. let buffer : byte array = [||] reader.GetBytes(1, int64 0, buffer, 0, int32 len) |> ignore buffer let retVal = List [ while reader.Read() do yield (reader.GetString(0), getBytesData reader, reader.GetDateTime(2)) ] I have the above code to read a byte[] from a DataReader. The getBytesData function takes the reader and returns a byte[] from it. Everything works fine, but the getBytesData function is written in a very non-functional way. Is there any way I can optimize it in F#? Sorry for this kind of question, but I have started a new project in F# to squeeze all the juice out of it, so I am trying to write each line in the most optimal way.
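
    A sketch of a tighter version, with one bug fixed along the way: the original passes a zero-length buffer ([||]) to GetBytes, so nothing can ever be copied into it; asking GetBytes for the length first (a null buffer returns the field's total length) and then allocating with Array.zeroCreate sizes it correctly:

        // a sketch: length probe first, then one properly sized buffer
        let getBytesData (reader : System.Data.IDataReader) =
            let len = reader.GetBytes(1, 0L, null, 0, 0)
            let buffer : byte[] = Array.zeroCreate (int len)
            reader.GetBytes(1, 0L, buffer, 0, int len) |> ignore
            buffer

        let retVal =
            [ while reader.Read() do
                yield reader.GetString(0), getBytesData reader, reader.GetDateTime(2) ]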

    Read the article

  • How do I minimize the number of changes between revisions with new doxygen output?

    - by Dirk Eddelbuettel
    A subversion repository contains the html, latex and man directories that doxygen generates from the source code. Even for small source code changes, new files are being generated with random names, which makes for large changes in the version control system. Is there a way around this? How can I minimize the changesets between revisions while still including doxygen-generated documentation? Alternatively, how could I find which of the doxygen-generated files are no longer being used and should be removed?
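
    A guess at the usual suspect, to verify against the project's Doxyfile: SHORT_NAMES is the option documented to produce short, non-reproducible file names, so names (and therefore diffs) churn on every run when it is enabled. A sketch of the settings worth checking:

        # the option names are real; their effect on churn is the assumption to verify
        SHORT_NAMES    = NO    # long names are derived from entities and stay stable
        CREATE_SUBDIRS = NO    # keep a flat, predictable output layout

    For the stale-file question, the blunt but reliable route is to empty the output directories, regenerate, and let svn status show which files no longer come back.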

    Read the article

  • Read lines and change the lines that do not contain certain words and do not end with a dot

    - by igo
    I want to read some text files in a folder line by line. For example, one txt: Fast and Effective Text Mining Using Linear-time Document Clustering Bjornar Larsen WORD2 Chinatsu Aone SRA International AK, Inc. 4300 Fair Lakes Cow-l Fairfax, VA 22033 {bjornar-larsen, WORD1 I want to remove every line that does not contain one of the words word1, word2, word3 and does not end with a dot. So, from the example, the result will be: Bjornar Larsen WORD2 Chinatsu Aone SRA International, Inc. {bjornar-larsen, WORD1 I am confused: how do I remove the line? Is that possible? Or can we replace them with a space? Here's the code: $url = glob($savePath.'*.txt'); foreach ($url as $file => $files) { $handle = fopen($files, "r") or die ('can not open file'); $ori_content= file_get_contents($files); foreach(preg_split("/((\r?\n)|(\r\n?))/", $ori_content) as $buffer){ $pos1 = stripos($buffer, $word1); $pos2 = stripos($buffer, $word2); $pos3 = stripos($buffer, $word3); $last = $str[strlen($buffer)-1];//read the last character if (true !== $pos1 OR true !== $pos2 OR true !==$pos3 && $last != '.'){ //how to remove } } } Please help me, thank you so much :)
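
    A corrected sketch of the filter: stripos() returns a position or false, so it must be compared with !== false, never with true (and $str in the last-character line looks like it was meant to be $buffer); a line is kept when it contains one of the words or ends with a dot:

        // a sketch: collect the lines to keep, then write them back out
        $kept = array();
        foreach (preg_split("/((\r?\n)|(\r\n?))/", $ori_content) as $buffer) {
            $hasWord = stripos($buffer, $word1) !== false
                    || stripos($buffer, $word2) !== false
                    || stripos($buffer, $word3) !== false;
            $endsWithDot = substr(rtrim($buffer), -1) === '.';
            if ($hasWord || $endsWithDot) {
                $kept[] = $buffer;   // keep this line; all others are dropped
            }
        }
        file_put_contents($files, implode("\n", $kept));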

    Read the article

  • JSP Document/JSPX: what determines how space/linebreaks are removed in the output?

    - by NoozNooz42
    I've got a "JSP Document" ("JSP in XML") nicely formatted and when the webpage is generated and sent to the user, some linebreaks are removed. Now the really weird part: apparently the "main" .jsp always gets all its linebreak removed but for any subsequent .jsp included from the main .jsp, linebreaks seems to be randomly removed (some are there, others aren't). For example, if I'm looking at the webpage served from Firefox and ask to "view source", I get to see what is generated. So, what determines when/how linebreaks are kept/removed? This is just an example I made up... Can you force a .jsp to serve this: <body><div id="page"><div id="header"><div class="title">... or this: <body> <div id="page"> <div id="header"> <div class="title">... ? I take it that linebreaks are removed to save on bandwidth, but what if I want to keep them? And what if I want to keep the same XML indentation as in my .jsp file? Is this doable?

    Read the article

  • How to write mmap input memory to O_DIRECT output file?

    - by Friedrich
    Why doesn't the following pseudo-code work (O_DIRECT results in EFAULT)? in_fd = open("/dev/mem"); in_mmap = mmap(in_fd); out_fd = open("/tmp/file", O_DIRECT); write(out_fd, in_mmap, PAGE_SIZE); while the following does (no O_DIRECT): in_fd = open("/dev/mem"); in_mmap = mmap(in_fd); out_fd = open("/tmp/file"); write(out_fd, in_mmap, PAGE_SIZE); I guess it's something with virtual kernel pages to virtual user pages, which cannot be translated in the write call? Best regards, Friedrich
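
    A guess consistent with the EFAULT: direct I/O makes the kernel pin the source pages of the write (get_user_pages) for DMA, which fails for special mappings such as /dev/mem, whereas a buffered write merely copies from them; misalignment, by contrast, tends to produce EINVAL. A sketch of the common workaround, bouncing through an ordinary aligned buffer:

        /* a sketch: copy the mmap'ed page into memory the kernel can pin */
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        void *bounce;
        if (posix_memalign(&bounce, 4096, PAGE_SIZE) != 0)
            /* handle allocation failure */;
        memcpy(bounce, in_mmap, PAGE_SIZE);   /* plain anonymous memory */
        write(out_fd, bounce, PAGE_SIZE);     /* buffer, length, offset all aligned */
        free(bounce);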

    Read the article

  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file. I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using re module matches, but as I have a huge text corpus to search through, this is taking a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases in the dictionary, and increment the count in the dictionary if the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the maximum number of words down to the lowest, then compare each word start position in the string buffer against the list of words. If one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?
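
    A sketch of the word-at-a-time walk described above, longest phrases first; single_word_terms and phrase_terms are hypothetical stand-ins for the question's dictionary, split by phrase length:

        # a sketch: tokenize once, then test phrases of decreasing length at each position
        import re

        words = re.findall(r"\w+", text.lower())
        counts = {}
        for i in range(len(words)):
            for n in (3, 2, 1):                     # longest first, as in the question
                term = " ".join(words[i:i + n])
                terms = phrase_terms if n > 1 else single_word_terms
                if term in terms:
                    counts[term] = counts.get(term, 0) + 1
                    break                           # matched the longest; skip shorter ones

    Dictionary membership tests are O(1), so the cost is roughly the number of words times the number of phrase lengths, independent of dictionary size.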

    Read the article

  • Reading a string in TASM x86 assembly

    - by I_S_W
    Hi all, I am trying to read a string from the user in TASM assembly. I know I need a buffer to hold the input, the max. length and the actual length, but I seem to have forgotten how exactly we declare the buffer. My attempt was something like this: Buffer db 80 ;max length db ? ;actual length db 80 dup(0) ;I think here is my problem but can't remember the right format Thanks in advance
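
    For reference, a sketch of the classic three-part layout, assuming the string is read with DOS int 21h function 0Ah (buffered input), which expects exactly this structure at DS:DX:

        Buffer  db 80          ; max characters DOS may accept
                db 0           ; actual count, filled in by DOS
                db 80 dup(0)   ; room for the characters themselves

                mov  ah, 0Ah           ; DOS buffered input
                mov  dx, offset Buffer
                int  21h

    The attempt above is nearly right; initializing the count byte to 0 rather than ? keeps the structure well-defined before the call.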

    Read the article

  • How to make php-closure compiler output to a predefined file name? (Updated)

    - by Mohammad
    Php-closure compiler (linked to source code) currently writes the compiled code to a md5 encoded filename. How can I make it so it write the compiled code to a predefined name like compiled_code.js? . . I think it has to do with the write() function on Line 164. It gets the filename via the _getCacheFileName() function (Line 272). function _getCacheFileName() { return $this->_cache_dir . $this->_getHash() . ".js"; } I've tried altering it to this: function _getCacheFileName() { //return $this->_cache_dir . $this->_getHash() . ".js"; return 'copiled_code.js'; } without any results. Thanks to all of you in advance!
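
    One thing worth ruling out first, as a guess: the altered function returns 'copiled_code.js' (note the missing 'm'), so a file may well be written, just not under the expected name, and without the cache directory prefix it lands in a different place too. With both fixed, the override itself looks plausible:

        function _getCacheFileName() {
            // 'compiled', not 'copiled'; keep the cache dir prefix so the
            // library's own existence checks look in the same place
            return $this->_cache_dir . 'compiled_code.js';
        }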

    Read the article

  • Why am I getting unexpected output trying to write a hash structure to a file?

    - by Harm De Weirdt
    I have a hash in which I store the products a customer buys (%orders). It uses the product code as key and has a reference to an array with the other info as value. At the end of the program, I have to rewrite the inventory to the updated version (i.e. subtract the quantity of the bought items). This is how I rewrite the inventory: sub rewriteInventory{ open(FILE,'>inv.txt'); foreach $key(%inventory){ print FILE "$key\|$inventory{$key}[0]\|$inventory{$key}[1]\|$inventory{$key}[2]\n" } close(FILE); } where $inventory{$key}[x] is 0 → Title, 1 → price, 2 → quantity. The problem here is that when I look at inv.txt afterwards, I see things like this: CD-911|Lady Gaga - The Fame|15.99|21 ARRAY(0x145030c)||| BOOK-1453|The Da Vinci Code - Dan Brown|14.75|12 ARRAY(0x145bee4)||| Where do these ARRAY(0x145030c)||| entries come from? Or more importantly, how do I get rid of them?
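
    The stray entries have a simple source: in list context %inventory flattens into key, value, key, value, ..., so the loop visits the array references as if they were keys, and each one stringifies to ARRAY(0x...). Iterating over keys %inventory fixes it; a sketch:

        # a sketch: iterate keys only, not the interleaved key/value list
        sub rewriteInventory {
            open(FILE, '>', 'inv.txt') or die "cannot write inv.txt: $!";
            foreach my $key (keys %inventory) {
                print FILE join('|', $key, @{ $inventory{$key} }[0 .. 2]), "\n";
            }
            close(FILE);
        }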

    Read the article

  • Serial port : Read data problem, not reading complete data

    - by Anuj Mehta
    Hi I have an application where I am sending data via serial port from PC1 (Java App) and reading that data in PC2 (C++ App). The problem that I am facing is that my PC2 (C++ App) is not able to read complete data sent by PC1 i.e. from my PC1 I am sending 190 bytes but PC2 is able to read close to 140 bytes though I am trying to read in a loop. Below is code snippet of my C++ App Open the connection to serial port serialfd = open( serialPortName.c_str(), O_RDWR | O_NOCTTY | O_NDELAY); if (serialfd == -1) { /* * Could not open the port. */ TRACE << "Unable to open port: " << serialPortName << endl; } else { TRACE << "Connected to serial port: " << serialPortName << endl; fcntl(serialfd, F_SETFL, 0); } Configure the Serial Port parameters struct termios options; /* * Get the current options for the port... */ tcgetattr(serialfd, &options); /* * Set the baud rates to 9600... */ cfsetispeed(&options, B38400); cfsetospeed(&options, B38400); /* * 8N1 * Data bits - 8 * Parity - None * Stop bits - 1 */ options.c_cflag &= ~PARENB; options.c_cflag &= ~CSTOPB; options.c_cflag &= ~CSIZE; options.c_cflag |= CS8; /* * Enable hardware flow control */ options.c_cflag |= CRTSCTS; /* * Enable the receiver and set local mode... */ options.c_cflag |= (CLOCAL | CREAD); // Flush the earlier data tcflush(serialfd, TCIFLUSH); /* * Set the new options for the port... */ tcsetattr(serialfd, TCSANOW, &options); Now I am reading data const int MAXDATASIZE = 512; std::vector<char> m_vRequestBuf; char buffer[MAXDATASIZE]; int totalBytes = 0; fcntl(serialfd, F_SETFL, FNDELAY); while(1) { bytesRead = read(serialfd, &buffer, MAXDATASIZE); if(bytesRead == -1) { //Sleep for some time and read again usleep(900000); } else { totalBytes += bytesRead; //Add data read to vector for(int i =0; i < bytesRead; i++) { m_vRequestBuf.push_back(buffer[i]); } int newBytesRead = 0; //Now keep trying to read more data while(newBytesRead != -1) { //clear contents of buffer memset((void*)&buffer, 0, sizeof(char) * MAXDATASIZE); newBytesRead = read(serialfd, &buffer, MAXDATASIZE); totalBytes += newBytesRead; for(int j = 0; j < newBytesRead; j++) { m_vRequestBuf.push_back(buffer[j]); } }//inner while break; } //while
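
    A hedged sketch of the usual fix: the fcntl(serialfd, F_SETFL, FNDELAY) call makes the descriptor non-blocking, so read() returns whatever happens to have arrived (often mid-message); dropping FNDELAY and letting termios pace blocking reads via VMIN/VTIME, then looping until the expected byte count arrives, is more robust than sleeping and retrying:

        // a sketch: blocking reads with an inter-byte timeout, loop to completion
        options.c_cc[VMIN]  = 0;     // return when data stops flowing ...
        options.c_cc[VTIME] = 10;    // ... after 1 second of silence
        tcsetattr(serialfd, TCSANOW, &options);
        // (and do not set FNDELAY afterwards)

        int total = 0;
        const int expected = 190;    // known message size, per the question
        while (total < expected) {
            int n = read(serialfd, buffer, MAXDATASIZE);
            if (n <= 0) break;       // timeout or error
            m_vRequestBuf.insert(m_vRequestBuf.end(), buffer, buffer + n);
            total += n;
        }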

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent, and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme: int send_all(int sock, unsigned char *buffer, int len) { int nsent; while(len > 0) { nsent = send(sock, buffer, len, 0); if(nsent == -1) // error return -1; buffer += nsent; len -= nsent; } return 0; // ok, all data sent } Even the BSD manpage mentions that "...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...", which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming: not in the beginning chapters, but the more advanced examples use his own writen (write all data) function instead of calling write. Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return when the whole send request has been accepted. I mean, in the code example above, if send returns with less data sent, it will simply be called right away again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we will end up with an inefficient loop where we are trying to send data on a socket that cannot accept data and keep trying, won't we? So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether and non-blocking sockets together with select should be used instead.
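
    One concrete case that justifies the loop, sketched below: on Linux at least, a signal arriving after part of the data has been transferred makes a blocking send() return the short count rather than -1/EINTR (man 7 signal documents this "partial transfer" behavior), and SO_SNDTIMEO timeouts behave the same way, so the caller must resume from where the short send stopped:

        /* a sketch: the two interruption outcomes of a blocking send() */
        #include <errno.h>

        ssize_t n = send(sock, buffer, len, 0);
        if (n == -1 && errno == EINTR) {
            /* interrupted before any byte left: retry the same send */
        } else if (n >= 0 && n < len) {
            /* interrupted (or timed out) after a partial transfer:
               advance buffer by n and send the remainder */
        }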

    Read the article
