Search Results

Search found 4304 results on 173 pages for 'bytes'.


  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project):

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote: To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8  master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)
        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku
        To [email protected]:your_app_name.git
           3bf6e8d..642f01a  master -> master

    Read the article

  • How to properly set JavaMail timeout setting

    - by user286149
    I am using JavaMail to connect to a POP3 server. I set the following properties so that JavaMail won't wait too long if an email server doesn't respond:

        props.setProperty("mail.pop3.connectionpooltimeout", "3000");
        props.setProperty("mail.pop3.connectiontimeout", "3000");
        props.setProperty("mail.pop3.timeout", "3000");

    In some cases the timeout works properly, but sometimes JavaMail freezes for minutes(!) with the following debug message:

        DEBUG POP3: connecting to host "pop3.yahoo.com", port 110, isSSL false

    Changing ports or protocols (SSL, TLS, ...) has no effect. I assume that the host simply doesn't exist. For example, if I poll pop3.yahoo.com instead of pop.mail.yahoo.com (which would be the right host name), I have to wait very long until a timeout exception occurs. After several minutes I get the following exception and the application continues to run:

        java.net.ConnectException: Operation timed out

    pop3.yahoo.com seems to exist but won't respond:

        localhost:~ me$ ping pop3.yahoo.com
        PING pop3.yahoo.com (206.190.46.10): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2
        Request timeout for icmp_seq 3
        ^C

    You might be asking why I use pop3.yahoo.com instead of pop.mail.yahoo.com. I simply wanted to test what happens if the user of my application enters a wrong host name. I believe this issue is related to this report, http://www.opensubscriber.com/message/[email protected]/180946.html, where the poster claims the problem occurs when the email server closes the connection; JavaMail then seems to wait very long (I don't know why). Since the issue wasn't resolved in that link: does somebody know how to fix, or at least debug, this? Any help would be really appreciated!

    Read the article

  • Boost ASIO async_write "Vector iterator not dereferencable"

    - by xeross
    Hey, I've been working on an async Boost server program, and so far I've got it to connect. However, I'm now getting a "Vector iterator not dereferencable" error. I suspect the vector gets destroyed or deallocated before the packet gets sent, thus causing the error.

        void start()
        {
            Packet packet;
            packet.setOpcode(SMSG_PING);
            send(packet);
        }

        void send(Packet packet)
        {
            cout << "DEBUG> Transferring packet with opcode " << packet.GetOpcode() << endl;
            async_write(m_socket, buffer(packet.write()),
                boost::bind(&Session::writeHandler, shared_from_this(),
                            placeholders::error, placeholders::bytes_transferred));
        }

        void writeHandler(const boost::system::error_code& errorCode, size_t bytesTransferred)
        {
            cout << "DEBUG> Transfered " << bytesTransferred << " bytes to "
                 << m_socket.remote_endpoint().address().to_string() << endl;
        }

    start() gets called once a connection is made, and packet.write() returns a uint8_t vector. Also, would it matter if I changed void send(Packet packet) to void send(Packet& packet)? Not in relation to this problem, but performance-wise.
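
    For what it's worth, the classic cause of this error with async_write is exactly the suspected one: buffer() does not copy its argument, so the temporary vector returned by packet.write() dies before the asynchronous write completes. A common fix, shown here as a minimal sketch assuming the question's Packet and Session types, is to hold the bytes in a shared_ptr bound into the completion handler, so they live until the handler runs:

        // Sketch: the shared_ptr bound into the handler keeps the serialized
        // bytes alive until the asynchronous write has finished.
        void send(const Packet& packet)
        {
            boost::shared_ptr<std::vector<uint8_t> > data(
                new std::vector<uint8_t>(packet.write()));
            async_write(m_socket, buffer(*data),
                boost::bind(&Session::writeHandler, shared_from_this(), data,
                            placeholders::error, placeholders::bytes_transferred));
        }

        void writeHandler(boost::shared_ptr<std::vector<uint8_t> > /* keep-alive */,
                          const boost::system::error_code& errorCode,
                          size_t bytesTransferred)
        {
            if (!errorCode)
                cout << "DEBUG> Transferred " << bytesTransferred << " bytes" << endl;
        }

    Passing Packet by (const) reference, the side question, is a separate and safe micro-optimization: it merely avoids one copy at the call site and has no bearing on this crash.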

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning depending on which operator you apply and what the current types are of the values passed to the operator. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True.

    So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules?

    Rules:

    - Variants that hash unequally should compare as unequal, both in sorting and direct equality
    - Variants that compare as equal for both sorting and direct equality should hash as equal

    It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit.

    The way I'm currently working is I'm normalizing the variants to strings (if they fit), and treating them as strings, otherwise I'm working with the variant data as if it was an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
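
    One way to satisfy rule 2 under weak typing is to canonicalize values to a single representation before hashing, so that anything that compares equal reaches the hash function as the same bytes - essentially a systematic version of the normalize-to-string approach above. A toy sketch, deliberately not OLE-specific (the normalization, not the variant machinery, is the point):

        // Toy sketch: the integer 1 and the string "1" normalize to the same
        // canonical spelling, so they hash identically (FNV-1a here).
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        static unsigned fnv1a(const std::string& s)
        {
            unsigned h = 2166136261u;
            for (std::string::size_type i = 0; i < s.size(); ++i) {
                h ^= (unsigned char)s[i];
                h *= 16777619u;
            }
            return h;
        }

        static std::string canonical(double v)   // one spelling per numeric value
        {
            char buf[32];
            std::sprintf(buf, "%.17g", v);
            return buf;
        }

        int main()
        {
            std::string fromInt = canonical(1);                    // variant held integer 1
            std::string fromStr = canonical(std::strtod("1", 0));  // variant held string "1"
            std::printf("%08x %08x\n", fnv1a(fromInt), fnv1a(fromStr)); // same hash
            return 0;
        }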

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I do not see for all these principles. I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong; I know the genius minds of the people who put PCs together knew a reason to do so. What exactly, you ask? I'll tell you right away, using C as an example:

    1: Stack local-scope memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before reaching the actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them just by mov 0x00000000, #10? You won't actually access memory blocks 0x00000000 and 0x00000004, but those your OS set the paging tables to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity. When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory. So there are 2 entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not, when declaring a variable, just count at which memory block it would be, and then, when it is used, just insert this memory location?

    Read the article

  • protocol parsing in c

    - by nomad.alien
    I have been playing around with implementing some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving it is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP.

    Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits, and using a switch/case I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing.

    I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers?

    So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements, and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there: is this the way to go, or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
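
    One common shape for this kind of decoder, offered as a sketch only (the type and length fields below are invented for illustration): read from the socket in large chunks into a growing buffer, and let a pure parsing function consume complete packets out of that buffer, recursing for nested ones. That decouples "how much to read" (as much as the kernel will give you) from "how much to parse" (exactly one whole packet), and the switch/case stays but never touches the socket:

        /* Sketch only: a hypothetical 1-byte type + 1-byte body-length
           framing. recv() fills a buffer elsewhere; this function consumes
           whole packets from it and recurses for nested containers.
           Returns bytes consumed, or 0 if the buffer does not yet hold a
           complete packet (caller should read more and retry). */
        #include <stdint.h>
        #include <stddef.h>

        size_t parse_packet(const uint8_t *buf, size_t len)
        {
            if (len < 2) return 0;             /* need type + length */
            uint8_t type = buf[0];
            uint8_t body = buf[1];             /* hypothetical length field */
            if (len < (size_t)2 + body) return 0;
            switch (type) {
            case 0x01:                         /* leaf packet: handle body here */
                break;
            case 0x02: {                       /* container: parse nested packets */
                size_t off = 2;
                while (off < (size_t)2 + body) {
                    size_t used = parse_packet(buf + off, 2 + body - off);
                    if (used == 0) return 0;   /* truncated nested packet */
                    off += used;
                }
                break;
            }
            default:                           /* unknown type: skip its body */
                break;
            }
            return 2 + body;
        }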

    Read the article

  • Something about Stream

    - by sforester
    I've been working on something that makes use of streams, and I found myself unclear about some stream concepts (you can also view another question posted by me at http://stackoverflow.com/questions/2933923/about-redirected-stdout-in-system-diagnostics-process).

    1. How do you indicate that you have finished writing a stream - by writing something like an EOF?

    2. Following the previous question: if I have written an EOF (or something like that) to a stream but didn't close the stream, and I then want to write something else to the same stream, can I just start writing to it with no more setup required?

    3. If a procedure tries to read a stream (like stdin) that no one has written anything to, the reading procedure will be blocked; eventually some data arrives and the procedure will read until the writing is done, which is indicated by getting a return of 0 bytes read rather than being blocked. Now, if the procedure issues another read to the same stream, it will still get a 0 count and return immediately, while I was expecting it to block, since no one is writing to the stream now. So does the stream hold different states between when the stream is opened but no one has written to it yet and when someone has finished a writing session?

    I'm using Windows and the .NET Framework, if anything here is platform-specific. Thanks a lot!

    Read the article

  • How to store unlimited characters in Oracle 11g?

    - by vicky21
    We have a table in Oracle 11g with a varchar2 column. We use a proprietary programming language where this column is defined as string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like BLOB or LONG datatypes for maintenance reasons.

    The solution I can think of is to remove this column from the original table, have a separate table for this column, and then store each character in a row in order to get unlimited characters. This table would be joined with the original table for queries. Is there any better solution to this problem?

    UPDATE: The proprietary programming language allows defining variables of type string and blob; there is no option of CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.

    Read the article

  • C++ casted realloc causing memory leak

    - by wyatt
    I'm using a function I found here to save a webpage to memory with cURL:

        struct WebpageData {
            char *pageData;
            size_t size;
        };

        size_t storePage(void *input, size_t size, size_t nmemb, void *output) {
            size_t realsize = size * nmemb;
            struct WebpageData *page = (struct WebpageData *)output;
            page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);
            if(page->pageData) {
                memcpy(&(page->pageData[page->size]), input, realsize);
                page->size += realsize;
                page->pageData[page->size] = 0;
            }
            return realsize;
        }

    and find the line:

        page->pageData = (char *)realloc(page->pageData, page->size + realsize + 1);

    is causing a memory leak of a few hundred bytes per call. The only real change I've made from the original source is casting the line in question to (char *), which my compiler insisted upon (gcc, or g++ specifically if it's a C/C++ issue, but gcc also wouldn't compile the uncast statement), but I assume this is the source of the leak. Can anyone elucidate? Thanks
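
    As an aside, the cast itself can't leak; two other things in this pattern can. First, realloc's result should go into a temporary: if it ever returns NULL, the original page->pageData pointer is overwritten and orphaned. Second, whoever owns the WebpageData must free(page->pageData) after the transfer - cURL never frees it - and a steady few hundred bytes per call usually points there. A sketch of the defensive idiom for the callback above:

        /* Sketch: keep the old pointer until the new one is known valid. */
        char *grown = (char *)realloc(page->pageData, page->size + realsize + 1);
        if (grown == NULL) {
            free(page->pageData);   /* old block is still valid here */
            page->pageData = NULL;
            return 0;               /* returning != realsize makes cURL abort */
        }
        page->pageData = grown;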

    Read the article

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247. I'm up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag - right before it starts talking about problems with caching.

    Safari works as intended: when the server is running, the page is served. From the server logs I can see that Safari makes a single request to the server every time for the items page. When I turn off the server, the page displays as well, even after shutting down the browser and restarting. It appears to be pulling from the application.manifest (cache manifest).

    Firefox does not work as intended: when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, which I allow. After clicking on allow, Firefox makes 5 requests to the server for the page (from the server log). The hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest.

    Firefox also gives you a way to see which sites are storing things locally by going to Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on Apple). I see localhost there, but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files.

    Read the article

  • jQuery ajax post of jpg image to .net webservice. Image results corrupted

    - by sosergio
    I have a PhoneGap jQuery app that opens the camera and takes a picture. I then POST this picture to a .NET web service which I've coded. I can't use PhoneGap's FileTransfer because it isn't supported by Bada OS, which is a requirement. I believe I've successfully loaded the image from the PhoneGap FileSystem API, attached it to an .ajax type:post, and even received it on the .NET side, but when .NET saves the image to the server, the image is corrupted. It seems to me that the two sides of the communication have different data types. Has anyone experience of this? Any help will be appreciated. This is my code:

        //PHONEGAP CAMERA ACCESS (summed up)
        navigator.camera.getPicture(onGetPictureSuccess, onGetPictureFail,
            { quality: 50, destinationType: Camera.DestinationType.FILE_URI });
        window.resolveLocalFileSystemURI(imageURI, onResolveFileSystemURISuccess, onResolveFileSystemURIError);
        fileEntry.file(gotFileSuccess, gotFileError);
        new FileReader().readAsDataURL(file);

        //UPLOAD FILE
        function onDataReadSuccess(evt) {
            var image_data = evt.target.result;
            var filename = unique_id();
            var filext = "jpg";
            $.ajax({
                type : 'POST',
                url : SERVICE_BASE_URL + "/fotos/" + filename + "?ext=" + filext,
                cache: false,
                timeout: 100000,
                processData: false,
                data: image_data,
                contentType: 'image/jpeg',
                success : function(data) {
                    console.log("Data Uploaded with success. Message: " + data);
                    $.mobile.hidePageLoadingMsg();
                    $.mobile.changePage("ok.html");
                }
            });
        }

    On my .NET web service this is the method that gets invoked:

        public string FotoSave(string filename, string extension, Stream fileContent)
        {
            string filePath = HttpContext.Current.Server.MapPath("~/foto_data/") + "\\" + filename;
            FileStream writeStream = new FileStream(filePath, FileMode.OpenOrCreate, FileAccess.Write);
            int Length = 256;
            Byte[] buffer = new Byte[Length];
            int bytesRead = fileContent.Read(buffer, 0, Length);
            // write the required bytes
            while (bytesRead > 0)
            {
                writeStream.Write(buffer, 0, bytesRead);
                bytesRead = fileContent.Read(buffer, 0, Length);
            }
            fileContent.Close();
            writeStream.Close();
            return filePath;
        }

    Read the article

  • Bluetooth txt file byte increases????

    - by cheesebunz
    Hi everyone, I am able to transfer files from one mobile device to another. When the sender sends a text file of 8 bytes, on the receiver end it becomes a 256-byte txt file, and when I open the contents of the txt file there is my info plus a lot of square boxes. Here is my code from the sender:

        string fileName = @"SendTest.txt";
        System.Uri uri = new Uri("obex://" + selectedAddr + "/" + System.IO.Path.GetFileName(fileName));
        ObexWebRequest request = new ObexWebRequest(uri);
        Stream requestStream = request.GetRequestStream();
        FileStream fs = File.OpenRead(fileName);
        byte[] buffer = new byte[1024];
        int readBytes = 1;
        while (readBytes != 0)
        {
            readBytes = fs.Read(buffer, 0, buffer.Length);
            requestStream.Write(buffer, 0, readBytes);
        }
        requestStream.Close();
        ObexWebResponse response = (ObexWebResponse)request.GetResponse();
        MessageBox.Show(response.StatusCode.ToString());
        response.Close();

    Does anyone know how I can solve this?

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by GNU tar and GNU-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink.

    My question is: where is the format of the next block described?

    The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry.

    If the filename is longer than will fit in the space allocated in the header block, GNU tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it.

    When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
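
    For what it's worth, the convention - described in the GNU tar manual and its sources rather than in any formal tar spec, so treat this as a hedged reading - appears to be: the 'L' header's own size field holds the length of the long name; the name itself follows as ordinary NUL-terminated data padded out to whole 512-byte blocks, so it is not limited to 512 bytes; and the header after those blocks is the real entry, whose truncated name field is then ignored in favour of the long one. A sketch of the reading side:

        // Sketch, assuming the standard tar header layout: the size field is
        // 12 octal digits at offset 124, the type flag is at offset 156.
        // Error handling elided.
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <string>
        #include <vector>

        std::string readLongName(std::FILE* f, const char header[512])
        {
            long nameLen = std::strtol(header + 124, 0, 8);  // size field, octal
            long blocks  = (nameLen + 511) / 512;            // data is block-padded
            std::vector<char> data(blocks * 512, '\0');
            std::fread(&data[0], 1, data.size(), f);
            return std::string(&data[0]);                    // stops at the NUL
        }
        // Caller: saw header[156] == 'L', calls readLongName(), then reads
        // the next 512-byte header, which is the real entry for that name.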

    Read the article

  • can't read XML via PHP

    - by jasmine
    I can't find the reason; I only see the following error message:

        Input is not proper UTF-8, indicate encoding ! Bytes: 0x00 0x5D 0x5D 0x3E

    The following is my PHP code:

        $reader2 = new XMLReader();
        $reader2->XML($xmlstring);
        $user_data = "";
        while ($reader2->read()) {
            if ($reader2->name == "user_id" && $reader2->nodeType == XMLReader::ELEMENT) {
                $reader2->read();
                $user_data .= $reader2->value;
            }
        }
        $reader2->close();

    The following is the XML data:

        <?xml version="1.0" encoding="UTF-8" ?>
        <SOAP:Envelope xmlns:SOAP="http://www.w3.org/2003/05/soap-envelope" >
        <SOAP:Body >
        <user_id>1234567890</user_id>
        <greeting_name><![CDATA[ABCDEF ..yl/?]]></greeting_name>
        </SOAP:Body>
        </SOAP:Envelope>

    I have tried a lot of ways but still can't find the solution. The greeting tag value may be Chinese or English words.

    Read the article

  • Strange results from OdbcDataReader reading Sqlite DB

    - by stout
    This method returns some strange results, and I was wondering if someone could explain why this is happening, and possibly suggest a solution to get my desired results.

    Results:

    - FileName = what I'd expect
    - FileSize = what I'd expect
    - Buffer = all bytes = 0
    - BytesRead = 0
    - BlobString = string of binary data
    - FieldType = BLOB (what I'd expect)
    - ColumnType = System.String

    Furthermore, if the file is greater than a few KB, the reader throws an exception stating the StringBuilder capacity argument must be greater than zero (presumably because the size is greater than Int32.MaxValue). I guess my question is: how does one properly read large BLOBs from an OdbcDataReader?

        public static String SaveBinaryFile(String Key)
        {
            try
            {
                Connect();
                OdbcCommand Command = new OdbcCommand(
                    "SELECT [_filename_],[_filesize_],[_content_] FROM [_sys_content] WHERE [_key_] = '" + Key + "';",
                    Connection);
                OdbcDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess);
                if (Reader.HasRows == false) return null;
                String FileName = Reader.GetString(0);
                int FileSize = int.Parse(Reader.GetString(1));
                byte[] Buffer = new byte[FileSize];
                long BytesRead = Reader.GetBytes(2, 0, Buffer, 0, FileSize);
                String BlobString = (String)Reader["_content_"];
                String FieldType = Reader.GetDataTypeName(2);
                Type ColumnType = Reader.GetFieldType(2);
                return null;
            }
            catch (Exception ex)
            {
                Tools.ErrorHandler.Catch(ex);
                return null;
            }
        }

    Read the article

  • Integer array or struct array - which is better?

    - by MusiGenesis
    In my app, I'm storing bitmap data in a two-dimensional integer array (int[,]). To access the R, G and B values I use something like this:

        // read:
        int i = _data[x, y];
        byte B = (byte)(i >> 0);
        byte G = (byte)(i >> 8);
        byte R = (byte)(i >> 16);

        // write:
        _data[x, y] = BitConverter.ToInt32(new byte[] { B, G, R, 0 }, 0);

    I'm using integer arrays instead of an actual System.Drawing.Bitmap because my app runs on Windows Mobile devices where the memory available for creating bitmaps is severely limited. I'm wondering, though, if it would make more sense to declare a structure like this:

        public struct RGB
        {
            public byte R;
            public byte G;
            public byte B;
        }

    ... and then use an array of RGB instead of an array of int. This way I could easily read and write the separate R, G and B values without having to do bit-shifting and BitConverter-ing. I vaguely remember something from days of yore about byte variables being block-aligned on 32-bit systems, so that a byte actually takes up 4 bytes of memory instead of just 1 (but maybe this was just a Visual Basic thing). Would using an array of structs (like the RGB example above) be faster than using an array of ints, and would it use 3/4 the memory or 3 times the memory of ints?
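
    (For the alignment worry specifically, here is the same question answered in C++ terms - offered as an analogy only, not as a statement about CLR layout rules: a struct of three 1-byte fields has alignment 1, so an array of it packs at 3 bytes per element - 3/4 the memory of ints, not 3 times.)

        // C++ analogy (hedged - the CLR has its own layout rules):
        #include <cstdio>

        struct RGB { unsigned char R, G, B; };

        int main()
        {
            std::printf("sizeof(RGB)      = %u\n", (unsigned)sizeof(RGB));       // 3
            std::printf("sizeof(RGB[100]) = %u\n", (unsigned)sizeof(RGB[100]));  // 300
            std::printf("sizeof(int[100]) = %u\n", (unsigned)sizeof(int[100]));  // 400
            return 0;
        }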

    Read the article

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all! What should the buffer offset values for U and V be for the YUV444 format type? For example, if I am using the YV12 format the values are as follows:

        ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4;
        ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth;

    iInputHeight = 160 and iInputWidth = 112. ppData is an object of the following structure:

        typedef struct ppConfigDataStruct
        {
            //---------------------------------------------------------------
            // General controls
            //---------------------------------------------------------------
            UINT8 IntType;             // FIRSTMODULE_INTERRUPT: the interrupt will be
                                       // raised once the first sub-module finished its job.
                                       // FRAME_INTERRUPT: the interrupt will be raised
                                       // after all sub-modules finished their jobs.

            //---------------------------------------------------------------
            // Format controls
            //---------------------------------------------------------------
            // For input
            idmaChannel inputIDMAChannel;
            BOOL bCombineEnable;
            idmaChannel inputcombIDMAChannel;
            UINT8 inputcombAlpha;
            UINT32 inputcombColorkey;
            icAlphaType alphaType;

            // For output
            idmaChannel outputIDMAChannel;
            CSCEQUATION CSCEquation;   // Selects R2Y or Y2R CSC Equation
            icCSCCoeffs CSCCoeffs;     // Selects R2Y or Y2R CSC Equation
            icFlipRot FlipRot;         // Flip/Rotate controls for VF

            BOOL allowNopPP;           // flag to indicate we need a NOP PP processing
        }*pPpConfigData, ppConfigData;

    and the idmaChannel structure is as follows:

        typedef struct idmaChannelStruct
        {
            icFormat FrameFormat;      // YUV or RGB
            icFrameSize FrameSize;     // frame size
            UINT32 LineStride;         // stride in bytes
            icPixelFormat PixelFormat; // Input frame RGB format, set NULL
                                       // to use standard settings.
            icDataWidth DataWidth;     // Bits per pixel for RGB format
            UINT32 UBufOffset;         // offset of U buffer from Y buffer start address
                                       // ignored if non-planar image format
            UINT32 VBufOffset;         // offset of V buffer from Y buffer start address
                                       // ignored if non-planar image format
        } idmaChannel, *pIdmaChannel;

    I want the formulae for ppData.inputIDMAChannel.UBufOffset and ppData.inputIDMAChannel.VBufOffset for YUV444. Thanks in advance.
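
    In YUV444 there is no chroma subsampling, so each plane is a full iInputWidth * iInputHeight bytes. By analogy with the YV12 example above (where the V plane directly follows Y, and U follows after that), a plausible sketch - hedged, since the required plane order is ultimately the IPU driver's to define - would be:

        // Sketch by analogy with the YV12 offsets above: full-size planes,
        // V immediately after Y, U after V. Swap the two assignments if the
        // driver expects Y,U,V order instead.
        UINT32 planeSize = iInputHeight * iInputWidth;
        ppData.inputIDMAChannel.VBufOffset = planeSize;      // V plane after Y
        ppData.inputIDMAChannel.UBufOffset = 2 * planeSize;  // U plane after V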

    Read the article

  • On C++ global operator new: why it can be replaced

    - by Jimmy
    I wrote a small program in VS2005 to test whether the C++ global operator new can be overloaded. It can.

        #include "stdafx.h"
        #include "iostream"
        #include "iomanip"
        #include "string"
        #include "new"
        using namespace std;

        class C {
        public:
            C() { cout<<"CTOR"<<endl; }
        };

        void * operator new(size_t size)
        {
            cout<<"my overload of global plain old new"<<endl;
            // try to allocate size bytes
            void *p = malloc(size);
            return (p);
        }

        int main()
        {
            C* pc1 = new C;
            cin.get();
            return 0;
        }

    In the above, my definition of operator new is called. If I remove that function from the code, then the operator new in C:\Program Files (x86)\Microsoft Visual Studio 8\VC\crt\src\new.cpp gets called. All is good.

    However, in my opinion, my implementation of operator new does NOT overload the new in new.cpp; it CONFLICTS with it and violates the one-definition rule. Why doesn't the compiler complain about it? Or does the standard say that since operator new is so special, the one-definition rule does not apply here? Thanks.
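
    (The standard does treat these functions as special: the global allocation and deallocation functions are "replaceable", so a program-supplied definition replaces the library's rather than violating the one-definition rule. Note that a replaced operator new is conventionally paired with a matching operator delete, so memory from your malloc goes back through free - a minimal sketch in the style of the code above:)

        // Minimal companion to the operator new above: without it, delete
        // would hand malloc'd memory to the default deallocator.
        void operator delete(void *p)
        {
            cout<<"my overload of global plain old delete"<<endl;
            free(p);
        }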

    Read the article

  • one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for send and recv) or to use 2 uni-directed TCP sockets?

    The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe).

    The benefit of 2 uni-directed connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847/*
        3848 * TCP receive function for the ESTABLISHED state.
        3849 *
        3850 * It is split into a fast path and a slow path. The fast path is
        3851 * disabled when:
        ...
        3859 * - Data is sent in both directions. Fast path only supports pure senders
        3860 * or pure receivers (this means either the sequence number or the ack
        3861 * value must stay constant)
        ...
        3863 *
        3864 * When these conditions are not satisfied it drops into a standard
        3865 * receive procedure patterned after RFC793 to handle all cases.
        3866 * The first three cases are guaranteed by proper pred_flags setting,
        3867 * the rest is checked inline. Fast processing is turned on in
        3868 * tcp_data_queue when everything is OK.

    All other conditions for disabling the fast path are false, so only a non-unidirected socket stops the kernel from taking the fast path on receive.

    Read the article

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big endians to little ones:

        uint32_t ntoh32(uint32_t v)
        {
            return (v << 24) | ((v & 0x0000ff00) << 8) |
                   ((v & 0x00ff0000) >> 8) | (v >> 24);
        }

    Works. Like a charm. I read 4 bytes from a big endian file into char v[4] and pass it into the above function as

        ntoh32 (* reinterpret_cast<uint32_t *> (v))

    That doesn't work - because my compiler (VS 2005) automatically converts the big endian char[4] into a little endian uint32_t when I do the cast. AFAIK, this automatic conversion will not be portable, so I use:

        uint32_t ntoh_4b(char v[])
        {
            uint32_t a = 0;
            a |= (unsigned char)v[0];
            a <<= 8;
            a |= (unsigned char)v[1];
            a <<= 8;
            a |= (unsigned char)v[2];
            a <<= 8;
            a |= (unsigned char)v[3];
            return a;
        }

    Yes, the (unsigned char) is necessary. Yes, it is dog slow. There must be a better way. Anyone?
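
    One widely used alternative (a sketch; the intrinsics are compiler-specific) is a single memcpy load - which sidesteps the aliasing and alignment trouble of the reinterpret_cast and typically compiles to one instruction - followed by a byte-swap intrinsic:

        // Sketch: one unconditional load, then a byte swap. The fallback
        // branch swaps with shifts, same as the ntoh32 above (all branches
        // assume a little-endian host, like the VS2005 target here).
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        uint32_t load_be32(const char v[4])
        {
            uint32_t x;
            memcpy(&x, v, 4);               /* safe for any alignment */
        #if defined(_MSC_VER)
            return _byteswap_ulong(x);      /* declared in <stdlib.h> */
        #elif defined(__GNUC__)
            return __builtin_bswap32(x);
        #else
            return (x << 24) | ((x & 0x0000ff00) << 8) |
                   ((x & 0x00ff0000) >> 8) | (x >> 24);
        #endif
        }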

    Read the article

  • Static Vs Non-Static Method Performance C#

    - by dotnetguts
    Hello all. I have a few global methods declared in a public class in my ASP.NET web application. I have a habit of declaring all global methods in a public class in the following format:

        public static string MethodName(parameters)
        {
        }

    I want to know what impact this has from a performance point of view.

    1) Which one is better: a static method or a non-static method?
    2) What is the reason it is better?

    The following link says non-static methods are good because static methods use locks to be thread-safe - they always do an internal Monitor.Enter() and Monitor.Exit() to ensure thread-safety: http://bytes.com/topic/c-sharp/answers/231701-static-vs-non-static-function-performance

    And the following link says static methods are good: static methods are normally faster to invoke on the call stack than instance methods. There are several reasons for this in the C# programming language. Instance methods actually use the 'this' instance pointer as the first parameter, so an instance method will always have that overhead. Instance methods are also implemented with the callvirt instruction in the intermediate language, which imposes a slight overhead. Please note that changing your methods to static methods is unlikely to help much on ambitious performance goals, but it can help a tiny bit and possibly lead to further reductions. http://dotnetperls.com/static-method

    I am a little confused about which one to use. Thanks.

    Read the article

  • error in C++, what to do?: could not find a match for ostream::write(long *, unsigned int)

    - by Shantanu Gupta
    I am trying to write data to a binary file using Turbo C++, but it shows me the error:

        could not find a match for ostream::write(long *, unsigned int)

    I want to write 4-byte long data into that file. When I try to write data using a char pointer, it runs successfully, but I want to store a large value, e.g. 2454545454, which can be stored in a long only. I don't know how to convert 1 byte into bits; I have 1 byte of data as a character. Moreover, what I am trying to do is to convert 4 chars into a long and store the data in it, and on the other side I want to reverse this so as to retrieve how many bytes of data I have written.

        long *lmem;
        lmem = new long;
        *lmem = Tsize;
        fo.write(lmem, sizeof(long)); // error occurs here
        delete lmem;

    I am implementing steganography; I have successfully stored a txt file inside an image and am now trying to retrieve that file's data.
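
    The overload genuinely doesn't exist: ostream::write takes a char* because it writes raw bytes, so the usual fix is to pass the address of the value cast to char* - no heap allocation needed. A sketch in standard C++ (under Turbo C++, the old-style (char *)&value cast does the same job):

        // Sketch: write() and read() both take char*, so cast the address.
        // 2454545454 needs an unsigned 32-bit type, hence unsigned long.
        #include <fstream>

        int main()
        {
            unsigned long value = 2454545454UL;

            std::ofstream fo("data.bin", std::ios::binary);
            fo.write(reinterpret_cast<const char*>(&value), sizeof value);
            fo.close();

            unsigned long back = 0;
            std::ifstream fi("data.bin", std::ios::binary);
            fi.read(reinterpret_cast<char*>(&back), sizeof back);
            return back == value ? 0 : 1;   // round-trips the same 4 bytes
        }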

    Read the article

  • pointer in c and the c program

    - by sandy101
    Hello, I am studying pointers and I came across this program:

        #include <stdio.h>

        void swap(int *, int *);

        int main()
        {
            int a = 10;
            int b = 20;
            swap(&a, &b);
            printf("the value is %d and %d", a, b);
            return 0;
        }

        void swap(int *a, int *b)
        {
            int t;
            t = *a;
            *a = *b;
            *b = t;
            printf("%d and %d\n", *a, *b);
        }

    Can anyone tell me why the main function prints the values reversed? The thing I have understood till now is that a function call in C does not affect the values in the main function. I also want to know how much space a pointer variable occupies - like an integer occupies 2 bytes - and the various applications, uses and advantages of pointers. Please, anyone, help.
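
    The answer is that C passes arguments by value: swap normally receives copies, but here it receives copies of the addresses of a and b, so *a and *b are main's own variables, and that is why the change sticks. A side-by-side sketch:

        /* Sketch: the first swap only exchanges its local copies; the second
           follows the addresses back into main's memory. */
        #include <stdio.h>

        void swap_by_value(int a, int b)     { int t = a;  a = b;   b = t;  }
        void swap_by_pointer(int *a, int *b) { int t = *a; *a = *b; *b = t; }

        int main(void)
        {
            int a = 10, b = 20;
            swap_by_value(a, b);
            printf("by value:   %d %d\n", a, b);  /* still 10 20 */
            swap_by_pointer(&a, &b);
            printf("by pointer: %d %d\n", a, b);  /* now 20 10 */
            return 0;
        }

    As for size: a pointer occupies sizeof(int *) bytes regardless of what it points to - commonly 2 on the 16-bit compilers where int is also 2 bytes (near/far quirks aside), and 4 or 8 on 32-bit and 64-bit platforms.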

    Read the article

  • asp file system object

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <%
        option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
        %>
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "" & sDir & "" & vbCRLF
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0
        sParent = objFSO.GetParentFolderName(objFolder.Path)
        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder
        ' eg. if parent folder is "c:\webfiles\subfolder1\subfolder2" then we just want "subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)
        Response.Write ""
        ' Give a link to the parent folder. This is just a link to this page, only passing in
        ' the new folder as a parameter
        Response.Write "Parent folder" & vbCRLF
        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "" & objSubFolder.Name & "" & vbCRLF
        Next
        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp passing in the directory and the file
            ' as parameters
            Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
        Next
        Response.Write ""
        %>

    It works fine, but when I try to access something on a shared path like "\\cvrdd0110:share" it gives an error. How do I access these files?

    Read the article
