Search Results

Search found 4304 results on 173 pages for 'bytes'.

Page 150/173 | < Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by gnu tar and gnu-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink . My question is: where is the format of the next block described? The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry. If the filename is longer than will fit in the space allocated in the header block, gnu tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it. When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
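    A note on the format, as far as I can tell from the GNU tar manual and sources: for an 'L' entry the header's size field gives the length of the long name, the name itself fills the following 512-byte blocks (padded to a block boundary), and the header after that describes the actual file. A minimal C sketch of reading it (error handling mostly omitted, field offsets per the standard tar header layout):

    /* Minimal sketch (not a full tar reader): when a header block has typeflag 'L',
     * its size field (octal ASCII at offset 124) gives the length of the long name,
     * and the name fills the following 512-byte blocks, padded to a block boundary.
     * The header after that describes the actual file. */
    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK 512

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        unsigned char hdr[BLOCK];
        while (fread(hdr, 1, BLOCK, f) == BLOCK && hdr[0] != '\0') {
            unsigned long size   = strtoul((char *)&hdr[124], NULL, 8); /* entry size, octal */
            unsigned long padded = (size + BLOCK - 1) / BLOCK * BLOCK;  /* rounded to blocks */

            if (hdr[156] == 'L') {                      /* GNU long-name entry */
                char *longname = malloc(size + 1);
                fread(longname, 1, size, f);
                longname[size] = '\0';
                printf("long name: %s\n", longname);
                free(longname);
                fseek(f, padded - size, SEEK_CUR);      /* skip padding; the next header
                                                           is the real file's header */
            } else {
                fseek(f, padded, SEEK_CUR);             /* skip this entry's data */
            }
        }
        fclose(f);
        return 0;
    }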

    Read the article

  • Problem in using a second call to send() in C

    - by Paulo Victor
    Hello. Right now I'm working on a simple server that receives from the client a code referring to a certain operation. The server receives this data and sends back a signal that it's waiting for the proper data. /*Server Side*/ if (codigoOperacao == 0) { printf("A escolha foi 0\n"); int bytesSent = SOCKET_ERROR; char sendBuff[1080] = "0"; /*Here "send" returns an error message while trying to send back the signal*/ bytesSent = send(socketEscuta, sendBuff, 1080, 0); if (bytesSent == SOCKET_ERROR) { printf("Erro ao enviar"); return 0; } else { printf("Bytes enviados : %d\n", bytesSent); char structDesmontada[1080] = ""; bytesRecv = recebeMensagem(socketEscuta, structDesmontada); printf("structDesmontada : %s", structDesmontada); } } Here is the client code responsible for sending the operation code and receiving the signal: char sendMsg[1080] = "0"; char recvMsg[1080] = ""; bytesSent = send(socketCliente, sendMsg, sizeof(sendMsg), 0); printf("Enviei o codigo (%d)\n", bytesSent); /*Here the program blocks in an infinite loop since the server never sends anything*/ while (bytesRecv == SOCKET_ERROR) { bytesRecv = recv(socketCliente, recvMsg, 1080, 0); if (bytesRecv > 0) { printf("Recebeu\n"); } Why is this happening only on the second attempt to send data? The first call to send() works fine. Hope someone can help!! Thanks
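    Whatever the underlying cause turns out to be, it helps to report why send() failed instead of just printing "Erro ao enviar". A minimal sketch of a send-all helper, assuming POSIX sockets (with Winsock you would check WSAGetLastError() instead of errno):

    /* Minimal sketch of a "send all" helper with error reporting. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    ssize_t send_all(int sock, const char *buf, size_t len)
    {
        size_t total = 0;
        while (total < len) {
            ssize_t n = send(sock, buf + total, len - total, 0);
            if (n < 0) {
                fprintf(stderr, "send failed: %s\n", strerror(errno)); /* say why */
                return -1;
            }
            total += (size_t)n;
        }
        return (ssize_t)total;                    /* everything was sent */
    }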

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload exceeds the MTU, then routers usually fragment the IP packet. Finally, all the fragments are reassembled at the destination using the IP-ID, IP fragment offset and fragmentation flag fields. The maximum length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload of 64K. If the L2 protocol is Ethernet, which is often the case, then the MTU will be about 1500 bytes. Hence the IP packet will be fragmented at the source host itself. However, a quick search about the IP implementation in Linux tells me that in recent kernels, L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU. Considering these two facts, I am wondering how frequently the IP packet gets fragmented at the source host itself. Does it occur sometimes/rarely/never? Does anyone know if there are exceptions to the rule of fragmentation in the Linux kernel (i.e. are there situations where L4 protocols are not fragment-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?
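    On Linux, one way to see the MTU the source host will use toward a given destination (and therefore the size a fragment-friendly L4 tries to stay under) is the IP_MTU socket option on a connected socket. A minimal, Linux-specific sketch (the destination address here is only illustrative):

    /* Minimal Linux-specific sketch: ask the kernel for the path MTU it has
     * cached toward a destination. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family      = AF_INET;
        dst.sin_port        = htons(9);                 /* discard port          */
        dst.sin_addr.s_addr = htonl(0x08080808u);       /* 8.8.8.8, example only */

        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) == 0) {
            int mtu = 0;
            socklen_t len = sizeof mtu;
            if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
                printf("path MTU toward destination: %d bytes\n", mtu);
        }
        close(fd);
        return 0;
    }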

    Read the article

  • Protocol parsing in C

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving the problem is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP. Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT...some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing. I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers? So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
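    On the "how much should I read at a time" question, a common pattern is to read as much as the socket will give you into one buffer and then parse packets out of that buffer, rather than issuing one read per field. A minimal C sketch of that dispatch loop (the two packet types, their sizes and the parse_foo/parse_bar handlers are invented for illustration):

    /* Minimal sketch: buffer one large read, then walk the buffer and dispatch on
     * the 8-bit type field.  A parser returns 0 when its packet is incomplete. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define BUF_SZ 65536

    static size_t parse_foo(const uint8_t *p, size_t avail)   /* type 0x01: 4 bytes */
    {
        if (avail < 4) return 0;
        /* ...decode p[1..3] and hand the result to the state machine... */
        return 4;
    }

    static size_t parse_bar(const uint8_t *p, size_t avail)   /* type 0x02: 8 bytes */
    {
        if (avail < 8) return 0;
        return 8;
    }

    void decode_stream(int fd)
    {
        uint8_t buf[BUF_SZ];
        size_t  have = 0;

        for (;;) {
            ssize_t n = read(fd, buf + have, sizeof buf - have);
            if (n <= 0) break;                         /* EOF or error              */
            have += (size_t)n;

            size_t off = 0;
            while (off < have) {
                size_t used;
                switch (buf[off]) {                    /* first byte = packet type  */
                case 0x01: used = parse_foo(buf + off, have - off); break;
                case 0x02: used = parse_bar(buf + off, have - off); break;
                default:   used = 1;                   /* unknown type: skip a byte */
                }
                if (used == 0) break;                  /* incomplete: need more data */
                off += used;
            }
            have -= off;
            if (have) memmove(buf, buf + off, have);   /* keep the unparsed tail    */
        }
    }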

    Read the article

  • Is there a fundamental difference between malloc and HeapAlloc (aside from the portability)?

    - by Lambert
    Hi, I have code that, for various reasons, I'm trying to port from the C runtime to one that uses the Windows Heap API. I've encountered a problem: If I redirect the malloc/calloc/realloc/free calls to HeapAlloc/HeapReAlloc/HeapFree (with GetProcessHeap for the handle), the memory seems to be allocated correctly (no bad pointer returned, and no exceptions thrown), but the library I'm porting says "failed to allocate memory" for some reason. I've tried this both with the Microsoft CRT (which uses the Heap API underneath) and with another company's run-time library (which uses the Global Memory API underneath); the malloc for both of those works well with the library, but for some reason, using the Heap API directly doesn't work. I've checked that the allocations aren't too big (<= 0x7FFF8 bytes), and they're not. The only problem I can think of is memory alignment; is that the case? Or other than that, is there a fundamental difference between the Heap API and the CRT memory API that I'm not aware of? If so, what is it? And if not, then why does the static Microsoft CRT (included with Visual Studio) take some extra steps in malloc/calloc before calling HeapAlloc? I suspect there's a difference but I can't think of what it might be. Thank you!
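    For what it's worth, the behavioral gaps I know of between the CRT allocator and the raw Heap API are edge cases rather than alignment: realloc(NULL, n) must behave like malloc(n) but HeapReAlloc does not accept a NULL pointer, realloc(p, 0) frees, and free(NULL) is a no-op. A sketch of wrappers that cover those cases (an illustration, not a drop-in CRT replacement):

    /* Sketch of malloc/realloc/free wrappers over the Win32 Heap API, covering
     * the edge cases where the two allocators differ. */
    #include <windows.h>
    #include <stddef.h>

    void *my_malloc(size_t n)
    {
        return HeapAlloc(GetProcessHeap(), 0, n ? n : 1);  /* malloc(0) -> unique ptr */
    }

    void *my_realloc(void *p, size_t n)
    {
        if (p == NULL)                      /* realloc(NULL, n) == malloc(n);     */
            return my_malloc(n);            /* HeapReAlloc rejects a NULL pointer */
        if (n == 0) {                       /* realloc(p, 0) frees in the CRT     */
            HeapFree(GetProcessHeap(), 0, p);
            return NULL;
        }
        return HeapReAlloc(GetProcessHeap(), 0, p, n);
    }

    void my_free(void *p)
    {
        if (p)                              /* free(NULL) must be a no-op         */
            HeapFree(GetProcessHeap(), 0, p);
    }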

    Read the article

  • Strange results from OdbcDataReader reading Sqlite DB

    - by stout
    This method returns some strange results, and I was wondering if someone could explain why this is happening, and possibly suggest a solution to get my desired results. Results: FileName = what I'd expect FileSize = what I'd expect Buffer = all bytes = 0 BytesRead = 0 BlobString = string of binary data FieldType = BLOB (what I'd expect) ColumnType = System.String Furthermore, if the file is greater than a few KB, the reader throws an exception stating the StringBuilder capacity argument must be greater than zero (presumably because the size is greater than Int32.MaxValue). I guess my question is how does one properly read large BLOBs from an OdbcDataReader? public static String SaveBinaryFile(String Key) { try { Connect(); OdbcCommand Command = new OdbcCommand("SELECT [_filename_],[_filesize_],[_content_] FROM [_sys_content] WHERE [_key_] = '" + Key + "';", Connection); OdbcDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess); if (Reader.HasRows == false) return null; String FileName = Reader.GetString(0); int FileSize = int.Parse(Reader.GetString(1)); byte[] Buffer = new byte[FileSize]; long BytesRead = Reader.GetBytes(2, 0, Buffer, 0, FileSize); String BlobString = (String)Reader["_content_"]; String FieldType = Reader.GetDataTypeName(2); Type ColumnType = Reader.GetFieldType(2); return null; } catch (Exception ex) { Tools.ErrorHandler.Catch(ex); return null; } }

    Read the article

  • C++ casted realloc causing memory leak

    - by wyatt
    I'm using a function I found here to save a webpage to memory with cURL: struct WebpageData { char *pageData; size_t size; }; size_t storePage(void *input, size_t size, size_t nmemb, void *output) { size_t realsize = size * nmemb; struct WebpageData *page = (struct WebpageData *)output; page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1); if(page->pageData) { memcpy(&(page->pageData[page->size]), input, realsize); page->size += realsize; page->pageData[page->size] = 0; } return realsize; } and find the line: page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1); is causing a memory leak of a few hundred bytes per call. The only real change I've made from the original source is casting the line in question to a (char *), which my compiler (gcc, g++ specifically if it's a c/c++ issue, but gcc also wouldn't compile with the uncast statement) insisted upon, but I assume this is the source of the leak. Can anyone elucidate? Thanks
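    The cast itself does not leak; two usual suspects are losing the original block when realloc fails (the old pointer is overwritten with NULL) and never freeing pageData once the transfer is finished. A minimal sketch of the usual pattern:

    /* Minimal sketch of the usual realloc pattern: keep the old pointer until the
     * call succeeds, and make sure whoever owns the struct eventually frees it. */
    #include <stdlib.h>
    #include <string.h>

    struct WebpageData {
        char  *pageData;
        size_t size;
    };

    static int append_chunk(struct WebpageData *page, const void *input, size_t realsize)
    {
        char *grown = (char *)realloc(page->pageData, page->size + realsize + 1);
        if (grown == NULL)
            return -1;                         /* old block is still valid, still owned */
        page->pageData = grown;
        memcpy(page->pageData + page->size, input, realsize);
        page->size += realsize;
        page->pageData[page->size] = '\0';
        return 0;
    }

    /* After the transfer, the owner must do:  free(page.pageData);  otherwise the
     * whole accumulated buffer appears "leaked" every time a page is fetched. */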

    Read the article

  • Orbited exception Data must not be unicode.

    - by Sid
    I am working with orbited and once I switch on orbited in production mode it throws the following error on my screen -- <exception caught here> --- File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 150, in process self.render(resrc) File "/usr/lib/python2.6/dist-packages/twisted/web/server.py", line 157, in render body = resrc.render(self) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 21, in render self.conn.transportOpened(self) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/cometsession.py", line 322, in transportOpened self.cometTransport.flush() File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/base.py", line 45, in flush self.write(self.packets) File "/usr/local/lib/python2.6/dist-packages/orbited-0.7.10-py2.6.egg/orbited/transports/htmlfile.py", line 42, in write self.request.write(payload); File "/usr/lib/python2.6/dist-packages/twisted/web/http.py", line 862, in write self.transport.write(data) File "/usr/lib/python2.6/dist-packages/twisted/internet/tcp.py", line 420, in write abstract.FileDescriptor.write(self, bytes) File "/usr/lib/python2.6/dist-packages/twisted/internet/abstract.py", line 170, in write raise TypeError("Data must not be unicode") exceptions.TypeError: Data must not be unicode I have absolutely no clue as to what could be the problem. Could anyone point me in the right direction.

    Read the article

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all ! What should be the buffer offset value for U & V in YUV444 format type? Like for an example if i am using YV12 format the value is as follows: ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4; ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth; iInputHeight = 160 & iInputWidth = 112 ppdata is an object for the following structure: typedef struct ppConfigDataStruct { //--------------------------------------------------------------- // General controls //--------------------------------------------------------------- UINT8 IntType; // FIRSTMODULE_INTERRUPT: the interrupt will be // rised once the first sub-module finished its job. // FRAME_INTERRUPT: the interrput will be rised // after all sub-modules finished their jobs. //--------------------------------------------------------------- // Format controls //--------------------------------------------------------------- // For input idmaChannel inputIDMAChannel; BOOL bCombineEnable; idmaChannel inputcombIDMAChannel; UINT8 inputcombAlpha; UINT32 inputcombColorkey; icAlphaType alphaType; // For output idmaChannel outputIDMAChannel; CSCEQUATION CSCEquation; // Selects R2Y or Y2R CSC Equation icCSCCoeffs CSCCoeffs; // Selects R2Y or Y2R CSC Equation icFlipRot FlipRot; // Flip/Rotate controls for VF BOOL allowNopPP; // flag to indicate we need a NOP PP processing }*pPpConfigData, ppConfigData; and idmaChannel structure is as follows: typedef struct idmaChannelStruct { icFormat FrameFormat; // YUV or RGB icFrameSize FrameSize; // frame size UINT32 LineStride;// stride in bytes icPixelFormat PixelFormat;// Input frame RGB format, set NULL // to use standard settings. icDataWidth DataWidth;// Bits per pixel for RGB format UINT32 UBufOffset;// offset of U buffer from Y buffer start address // ignored if non-planar image format UINT32 VBufOffset;// offset of U buffer from Y buffer start address // ignored if non-planar image format } idmaChannel, *pIdmaChannel; I want the formulae for ppData.inputIDMAChannel.UBufOffset & ppData.inputIDMAChannel.VBufOffset for YUV444 Thanks in advance
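    Assuming a planar Y-then-U-then-V 4:4:4 layout (each chroma plane the same size as the Y plane), the offsets are simply one and two full planes. A small sketch (swap the two values if your 4:4:4 variant stores V before U):

    /* Sketch for a planar Y-then-U-then-V 4:4:4 frame: each chroma plane is the
     * same size as the Y plane, so U starts one plane in and V two planes in. */
    #include <stdint.h>

    void yuv444_offsets(uint32_t width, uint32_t height,
                        uint32_t *uBufOffset, uint32_t *vBufOffset)
    {
        uint32_t planeSize = width * height;   /* bytes per plane at 8 bits/sample */
        *uBufOffset = planeSize;               /* U plane follows Y                */
        *vBufOffset = 2 * planeSize;           /* V plane follows U                */
    }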

    Read the article

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project): My EC2 project: git push prod master Counting objects: 9, done. Delta compression using up to 2 threads. Compressing objects: 100% (5/5), done. Writing objects: 100% (5/5), 456 bytes, done. Total 5 (delta 3), reused 0 (delta 0) remote: remote: Receiving push remote: Deploying updated files (by resetting HEAD) remote: HEAD is now at bf17da8 test commit remote: Running bundler to install gem dependencies remote: Fetching source index for http://rubygems.org/ remote: Installing rake (0.8.7) remote: Installing abstract (1.0.0) ... remote: Installing railties (3.0.0) remote: Installing rails (3.0.0) remote: Your bundle is complete! It was installed into ./.bundle/gems remote: Launching (by restarting Passenger)... done remote: To ssh://[email protected]/~/apps/app_name e8bd06f..bf17da8 master -> master Heroku: $> git push heroku master Counting objects: 179, done. Delta compression using up to 2 threads. Compressing objects: 100% (89/89), done. Writing objects: 100% (105/105), 42.70 KiB, done. Total 105 (delta 53), reused 0 (delta 0) -----> Heroku receiving push -----> Rails app detected -----> Gemfile detected, running Bundler version 1.0.3 Unresolved dependencies detected; Installing... Using --without development:test Fetching source index for http://rubygems.org/ Installing rake (0.8.7) Installing abstract (1.0.0) ... Installing railties (3.0.0) Installing rails (3.0.0) Your bundle is complete! It was installed into ./.bundle/gems Compiled slug size is 4.8MB -----> Launching... done http://your_app_name.heroku.com deployed to Heroku To [email protected]:your_app_name.git 3bf6e8d..642f01a master -> master

    Read the article

  • jQuery ajax post of jpg image to .net webservice. Image results corrupted

    - by sosergio
    I have a PhoneGap jQuery app that opens the camera and takes a picture. I then POST this picture to a .NET web service, which I've coded. I can't use PhoneGap FileTransfer because it isn't supported by Bada OS, which is a requirement. I believe I've successfully loaded the image from the PhoneGap FileSystem API, attached it to an .ajax type:post, and even received it on the .NET side, but when .NET saves the image on the server, the image ends up corrupted. It seems to me that the two sides of the communication use different data types. Does anyone have experience with this? Any help will be appreciated. This is my code: //PHONEGAP CAMERA ACCESS (summed up) navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail, { quality: 50, destinationType:Camera.DestinationType.FILE_URI }); window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError); fileEntry.file(gotFileSuccess, gotFileError); new FileReader().readAsDataURL(file); //UPLOAD FILE function onDataReadSuccess(evt) { var image_data = evt.target.result; var filename = unique_id(); var filext = "jpg"; $.ajax({ type : 'POST', url : SERVICE_BASE_URL+"/fotos/"+filename+"?ext="+filext, cache: false, timeout: 100000, processData: false, data: image_data, contentType: 'image/jpeg', success : function(data) { console.log("Data Uploaded with success. Message: "+ data); $.mobile.hidePageLoadingMsg(); $.mobile.changePage("ok.html"); } }); } On my .NET web service this is the method that gets invoked: public string FotoSave(string filename, string extension, Stream fileContent) { string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename; FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write); int Length = 256; Byte[] buffer = new Byte[Length]; int bytesRead = readStream.Read(buffer, 0, Length); // write the required bytes while (bytesRead > 0) { writeStream.Write(buffer, 0, bytesRead); bytesRead = readStream.Read(buffer, 0, Length); } readStream.Close(); writeStream.Close(); }

    Read the article

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this: FileStream.Position := Offset; MemoryStream.CopyFrom(FileStream, Size); This works fine but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 kbytes in size. The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guid. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality. It has about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from 'int' to 'Guid', would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the 'NEWSEQUENTIALID()' function. Can someone tell me, from their experience, whether Guids will be problematic in tables with many foreign keys? I'd much appreciate your thoughts on it...

    Read the article

  • Can't read XML via PHP

    - by jasmine
    I can't find the reason; I only see the following error message. Input is not proper UTF-8, indicate encoding ! Bytes: 0x00 0x5D 0x5D 0x3E The following is my PHP code: $reader2 = new XMLReader(); $reader2->XML($xmlstring); $user_data=""; while ($reader2->read()) { if ($reader2->name == "user_id" && $reader2->nodeType == XMLReader::ELEMENT) { $reader2->read(); $user_data .=$reader2->value; } } $reader2->close(); The following is the XML data: <?xml version="1.0" encoding="UTF-8" ?> <SOAP:Envelope xmlns:SOAP="http://www.w3.org/2003/05/soap-envelope" > <SOAP:Body > <user_id>1234567890</user_id> <greeting_name><![CDATA[ABCDEF ..yl/?]]></greeting_name> </SOAP:Body> </SOAP:Envelope> I have tried a lot of things, but still can't find the solution. The greeting tag value may be Chinese or English words.

    Read the article

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247 I’m up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag. Right before it starts talking about problems with caching. Safari works as intended – When the server is running the page is served. From the server logs I can see that Safari is making a single request to the server every time for the items page. When I turn off the server the page displays as well, even after shutting down the browser and restarting. It appears to be pulling from the application.manifest (cache manifest). Firefox does not work as intended – When accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, I allow. After clicking on allow, Firefox makes 5 requests to the server for the page (from the server log). The hash is different in every request. Is it is possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn’t caching the application.manifest. Firefox also gives you a way to see what sites are storing stuff locally by going to Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on Apple). I see localhost there but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files

    Read the article

  • PDF parsing & extracting only the images in an iPhone application

    - by sagar
    Hello everyone. My query: I want to extract only the images from an entire PDF document (using Objective-C, for an iPhone application). My efforts: I have gone through this link, which has details regarding the different operators of a PDF document: ( http://mail-archives.apache.org/mod_mbox/pdfbox-commits/200912.mbox/%[email protected]%3E ) I also studied this document: ( http://developer.apple.com/mac/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_pdf_scan/dq_pdf_scan.html#//apple_ref/doc/uid/TP30001066-CH220-TPXREF101 ) I have also gone through the entire PDFReference.pdf document (from the original Adobe site). PDFReference.pdf (the Adobe document) says that for images there are the following operators: q Q BI EI. I have set up the following table to get the image: myTable = CGPDFOperatorTableCreate(); CGPDFOperatorTableSetCallback(myTable, "q", arrayCallback2); CGPDFOperatorTableSetCallback(myTable, "TJ", arrayCallback); CGPDFOperatorTableSetCallback(myTable, "Tj", stringCallback); I have the following arrayCallback2 method for getting the image: void arrayCallback2(CGPDFScannerRef inScanner, void *userInfo) { // how to extract image from this code // means I have tried too many different ways. following is incorrect way & not giving image // CGPDFStreamRef stream; // represents a sequence of bytes // if (CGPDFDictionaryGetStream (d, "BI", &stream)){ // CGPDFDataFormat t=CGPDFDataFormatJPEG2000; // CFDataRef data = CGPDFStreamCopyData (stream, &t); // } } The arrayCallback2 method above is called for operator "q", but I don't know how to extract the image from it. In short: what should be the solution for extracting the images from PDF documents? Thanks in advance for your kind help. Sagar Kothari.
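    Inline images (BI ... EI) are fairly rare; most images in a PDF are image XObjects painted with the "Do" operator, so a more fruitful route is usually to register a callback for "Do", look the operand name up in the page's XObject resources, and copy the stream data. A sketch using the CGPDF C API (written from the documented functions, not tested here):

    /* Sketch: extract image XObjects by scanning for the "Do" operator. */
    #include <CoreGraphics/CoreGraphics.h>
    #include <string.h>

    static void handleDo(CGPDFScannerRef scanner, void *info)
    {
        const char *name = NULL;
        if (!CGPDFScannerPopName(scanner, &name))
            return;

        CGPDFContentStreamRef cs = CGPDFScannerGetContentStream(scanner);
        CGPDFObjectRef object = CGPDFContentStreamGetResource(cs, "XObject", name);
        CGPDFStreamRef stream = NULL;
        if (object == NULL || !CGPDFObjectGetValue(object, kCGPDFObjectTypeStream, &stream))
            return;

        const char *subtype = NULL;
        CGPDFDictionaryRef dict = CGPDFStreamGetDictionary(stream);
        if (CGPDFDictionaryGetName(dict, "Subtype", &subtype) && strcmp(subtype, "Image") == 0) {
            CGPDFDataFormat format;
            CFDataRef data = CGPDFStreamCopyData(stream, &format);
            /* format says whether the bytes are raw samples, JPEG or JPEG2000;
               hand them to whatever collects the images (e.g. via the info pointer). */
            if (data)
                CFRelease(data);
        }
    }

    void extractImagesFromPage(CGPDFPageRef page)
    {
        CGPDFOperatorTableRef table = CGPDFOperatorTableCreate();
        CGPDFOperatorTableSetCallback(table, "Do", &handleDo);

        CGPDFContentStreamRef cs = CGPDFContentStreamCreateWithPage(page);
        CGPDFScannerRef scanner = CGPDFScannerCreate(cs, table, NULL);
        CGPDFScannerScan(scanner);

        CGPDFScannerRelease(scanner);
        CGPDFContentStreamRelease(cs);
        CGPDFOperatorTableRelease(table);
    }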

    Read the article

  • Bluetooth txt file byte count increases

    - by cheesebunz
    Hi everyone, I am able to transfer files from one mobile device to another. When the sender sends a text file of 8 bytes, the receiving end gets a 256-byte txt file, and when I open the contents of the txt file, there is my data plus a lot of square boxes. Here is my code from the sender: string fileName = @"SendTest.txt"; System.Uri uri = new Uri("obex://" + selectedAddr + "/" + System.IO.Path.GetFileName(fileName)); ObexWebRequest request = new ObexWebRequest(uri); Stream requestStream = request.GetRequestStream(); FileStream fs = File.OpenRead(fileName); byte[] buffer = new byte[1024]; int readBytes = 1; while (readBytes != 0) { readBytes = fs.Read(buffer,0, buffer.Length); requestStream.Write(buffer,0, readBytes); } requestStream.Close(); ObexWebResponse response = (ObexWebResponse)request.GetResponse(); MessageBox.Show(response.StatusCode.ToString()); response.Close(); Does anyone know how I can solve this?

    Read the article

  • How to properly set JavaMail timeout setting

    - by user286149
    I am using JavaMail to connect to a POP3 server. Further, I set the following properties, so that JavaMail won't wait to long if an email server doesn't respond: props.setProperty("mail.pop3.connectionpooltimeout", "3000"); props.setProperty("mail.pop3.connectiontimeout", "3000"); props.setProperty("mail.pop3.timeout", "3000"); However, in some cases the timeout works properly but sometimes JavaMail freezes for minutes(!) with the following debug message: DEBUG POP3: connecting to host "pop3.yahoo.com", port 110, isSSL false Changing ports or protocols (SSL, TLS..) has no effect. I assume that the host simply doesn't exist. For example, if I poll pop3.yahoo.com instead of pop.mail.yahoo.com (which would be the right host name), I have to wait very long til a timeout exception occurs. After several minutes, I get the following exception and the application continues to run: java.net.ConnectException: Operation timed out pop3.yahoo.com seems to exist but won't respond: localhost:~ me$ ping pop3.yahoo.com PING pop3.yahoo.com (206.190.46.10): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 Request timeout for icmp_seq 3 ^C You might be asking why I use pop3.yahoo.com instead of pop.mail.yahoo.com. Well, I simply wanted to test what happens if the user of my application inserts a wrong host name. I believe that this issue is related to this report http://www.opensubscriber.com/message/[email protected]/180946.html where the poster claims that the problem occurs if the email server closes the connection. JavaMail then seems to wait very long (don't know why). Since the issue wasn't resolved in the link I posted: Does somebody know how to fix or at least debug this? Any help would be really appreciated!

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello, I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use one TCP socket (for both send and recv) or to use two unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 unidirectional connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847 3847/* 3848 * TCP receive function for the ESTABLISHED state. 3849 * 3850 * It is split into a fast path and a slow path. The fast path is 3851 * disabled when: ... 3859 * - Data is sent in both directions. Fast path only supports pure senders 3860 * or pure receivers (this means either the sequence number or the ack 3861 * value must stay constant) ... 3863 * 3864 * When these conditions are not satisfied it drops into a standard 3865 * receive procedure patterned after RFC793 to handle all cases. 3866 * The first three cases are guaranteed by proper pred_flags setting, 3867 * the rest is checked inline. Fast processing is turned on in 3868 * tcp_data_queue when everything is OK. All other conditions for disabling the fast path are false, and only a non-unidirectional socket keeps the kernel off the fast path on receive.

    Read the article

  • Pointers in C and the C program

    - by sandy101
    Hello, I am studying pointers and I came across this program... #include <stdio.h> void swap(int *,int *); int main() { int a=10; int b=20; swap(&a,&b); printf("the value is %d and %d",a,b); return 0; } void swap(int *a,int*b) { int t; t=*a; *a=*b; *b=t; printf("%d and%d\n",*a,*b); } Can anyone tell me why the main function prints the values reversed? What I have understood till now is that a function call in C does not affect the main function and its values. I also want to know how much space a pointer variable occupies (the way an integer occupies 2 bytes) and the various applications, uses and advantages of pointers. Please, can anyone help?
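    A small illustration of why the swap is visible in main: the function receives copies of the two addresses, but writing through *a and *b modifies the caller's variables. The sketch also prints sizeof(int *), which depends on the platform (typically 4 bytes on 32-bit systems, 8 on 64-bit) rather than being a fixed 2 bytes:

    /* By-value swap gets copies and cannot affect main's variables; the pointer
     * version writes through the addresses it was given. */
    #include <stdio.h>

    void swap_by_value(int a, int b)      { int t = a;  a  = b;  b  = t; }  /* copies only      */
    void swap_by_pointer(int *a, int *b)  { int t = *a; *a = *b; *b = t; }  /* caller's memory  */

    int main(void)
    {
        int a = 10, b = 20;

        swap_by_value(a, b);
        printf("after swap_by_value:   a=%d b=%d\n", a, b);   /* still 10 and 20 */

        swap_by_pointer(&a, &b);
        printf("after swap_by_pointer: a=%d b=%d\n", a, b);   /* now 20 and 10   */

        printf("sizeof(int *) on this machine: %zu bytes\n", sizeof(int *));
        return 0;
    }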

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decryptable blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing has left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched google for answers and looked at the code of several crc32 implementations without finding any clue to why this is so.
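    The short answer, as far as I understand it: binascii.crc32 implements the zlib/PKZIP CRC-32, which starts from 0xFFFFFFFF and XORs the result with 0xFFFFFFFF at the end (using the reflected polynomial 0xEDB88320), so crc32("\x00") comes out as 0xD202EF8D rather than 0; the plain textbook division with an all-zero start would indeed give 0. A minimal bitwise sketch in C showing the two inversions:

    /* Bitwise sketch of the CRC-32 that zlib/binascii compute: reflected
     * polynomial 0xEDB88320, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    uint32_t crc32_bitwise(const unsigned char *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;          /* not the all-zeros start of the textbook version */
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;            /* final inversion */
    }

    int main(void)
    {
        unsigned char zero = 0x00;
        printf("crc32 of one zero byte: 0x%08X\n", crc32_bitwise(&zero, 1)); /* 0xD202EF8D */
        return 0;
    }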

    Read the article

  • Integer array or struct array - which is better?

    - by MusiGenesis
    In my app, I'm storing Bitmap data in a two-dimensional integer array (int[,]). To access the R, G and B values I use something like this: // read: int i = _data[x, y]; byte B = (byte)(i >> 0); byte G = (byte)(i >> 8); byte R = (byte)(i >> 16); // write: _data[x, y] = BitConverter.ToInt32(new byte[] { B, G, R, 0 }, 0); I'm using integer arrays instead of an actual System.Drawing.Bitmap because my app runs on Windows Mobile devices where the memory available for creating bitmaps is severely limited. I'm wondering, though, if it would make more sense to declare a structure like this: public struct RGB { public byte R; public byte G; public byte B; } ... and then use an array of RGB instead of an array of int. This way I could easily read and write the separate R, G and B values without having to do bit-shifting and BitConverter-ing. I vaguely remember something from days of yore about byte variables being block-aligned on 32-bit systems, so that a byte actually takes up 4 bytes of memory instead of just 1 (but maybe this was just a Visual Basic thing). Would using an array of structs (like the RGB example above) be faster than using an array of ints, and would it use 3/4 the memory or 3 times the memory of ints?

    Read the article

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big-endian values to little-endian: uint32_t ntoh32(uint32_t v) { return (v << 24) | ((v & 0x0000ff00) << 8) | ((v & 0x00ff0000) >> 8) | (v >> 24); } It works like a charm. I read 4 bytes from a big-endian file into char v[4] and pass them into the above function as ntoh32 (* reinterpret_cast<uint32_t *> (v)). That doesn't work - because my compiler (VS 2005) automatically converts the big-endian char[4] into a little-endian uint32_t when I do the cast. AFAIK, this automatic conversion will not be portable, so I use uint32_t ntoh_4b(char v[]) { uint32_t a = 0; a |= (unsigned char)v[0]; a <<= 8; a |= (unsigned char)v[1]; a <<= 8; a |= (unsigned char)v[2]; a <<= 8; a |= (unsigned char)v[3]; return a; } Yes, the (unsigned char) is necessary. Yes, it is dog slow. There must be a better way. Anyone?
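    For what it's worth, reinterpret_cast does not convert anything; it just reads the four bytes in whatever order they sit in memory, and the real portability problems with that cast are alignment and strict aliasing. The usual recommendation is to assemble the value from the bytes, which is endian-independent and which optimising compilers often reduce to a single load plus a byte-swap instruction, so it need not be slow. A sketch:

    /* Assemble the value from the bytes: endian-independent, no #ifdef, no cast
     * of the buffer pointer needed. */
    #include <stdint.h>

    uint32_t load_be32(const char v[4])
    {
        const unsigned char *p = (const unsigned char *)v;
        return ((uint32_t)p[0] << 24)
             | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] <<  8)
             |  (uint32_t)p[3];
    }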

    Read the article

  • How to avoid the following purify detected memory leak in C++?

    - by Abhijeet
    Hi, I am getting the following memory leak.Its being probably caused by std::string. how can i avoid it? PLK: 23 bytes potentially leaked at 0xeb68278 * Suppressed in /vobs/ubtssw_brrm/test/testcases/.purify [line 3] * This memory was allocated from: malloc [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] operator new(unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] operator new(unsigned) [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_S_create(unsigned, unsigned, std::allocator<char> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/ x86/586/target/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_M_clone(std::allocator<char> const&, unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/tar get/usr/lib/libstdc++.so.6] std::string<char, std::char_traits<char>, std::allocator<char>>::string<char, std::char_traits<char>, std::allocator<char>>(std::string<char, std::char_traits<char>, std::alloc ator<char>> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6] uec_UEDir::getEntryToUpdateAfterInsertion(rcapi_ImsiGsmMap const&, rcapi_ImsiGsmMap&, std::_Rb_tree_iterator<std::pair<std::string<char, std::char_traits<char>, std::allocator< char>> const, UEDirData >>&) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:2278] uec_UEDir::addUpdate(rcapi_ImsiGsmMap const&, LocalUEDirInfo&, rcapi_ImsiGsmMap&, int, unsigned char) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:282] ucx_UEDirHandler::addUpdateUEDir(rcapi_ImsiGsmMap, UEDirUpdateType, acap_PresenceEvent) [/vobs/ubtssw_brrm/ucx/linux-x86/../src/ucx_UEDirHandler.cc:374]

    Read the article

  • Struts 2 discarding cache headers

    - by Dewfy
    I have strange discarding behavior in Struts 2 while setting cache options for my image. I'm trying to get an image from the DB cached on the client side. To render the image I use ( http://struts.apache.org/2.x/docs/how-can-we-display-dynamic-or-static-images-that-can-be-provided-as-an-array-of-bytes.html ), where a special result type renders as follows: public void execute(ActionInvocation invocation) throws Exception { ...//some preparation HttpServletResponse response = ServletActionContext.getResponse(); HttpServletRequest request = ServletActionContext.getRequest(); ServletOutputStream os = response.getOutputStream(); try { byte[] imageBytes = action.getImage(); response.setContentType("image/gif"); response.setContentLength(imageBytes.length); //I want cache up to 10 min Date future = new Date(((new Date()).getTime() + 1000 * 10*60l)); ; response.addDateHeader("Expires", future.getTime()); response.setHeader("Cache-Control", "max-age=" + 10*60 + ""); response.addHeader("cache-Control", "public"); response.setHeader("ETag", request.getRequestURI()); os.write(imageBytes); } catch(Exception e) { response.sendError(HttpServletResponse.SC_NOT_FOUND); } os.flush(); os.close(); } But when the image is embedded in the page it is always reloaded (Firebug shows code 200), and neither Expires nor max-age is present in the header: Host localhost:9090 Accept image/png,image/*;q=0.8,*/*;q=0.5 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 300 Connection keep-alive Referer http://localhost:9090/web/result?matchId=1 Cookie JSESSIONID=4156BEED69CAB0B84D950932AB9EA1AC; If-None-Match /web/_srv/teamcolor Cache-Control max-age=0 I have no idea why they disappeared; maybe the problem is in the URL? It is formed with a parameter: http://localhost:9090/web/_srv/teamcolor?loginId=3

    Read the article

< Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >