Search Results

Search found 1282 results on 52 pages for 'overhead'.

Page 2/52 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How much overhead does a msg_send call incur?

    - by pxl
    I'm attempting to piece together and run a list of tasks put together by a user. These task lists can be hundreds or thousands of items long. From what I know, the easiest and most obvious way would be to build an array and then iterate through it:

        NSArray *arrayOfTasks = ...; // init and fill with thousands of tasks
        for (id eachTask in arrayOfTasks) {
            if (eachTask && [eachTask respondsToSelector:@selector(execute)])
                [eachTask execute];
        }

    For a desktop this may be no problem, but for an iPhone or iPad it may be. Is this a good way to go about it, or is there a faster way to accomplish the same thing? The reason I'm asking how much overhead a msg_send call incurs is that I could also do a straight C implementation. For example, I could put together a linked list and use a block to handle the next task. Will I gain anything from that, or is it really more trouble than it's worth?
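    A sketch, of my own and not from the question, of the "straight C" alternative the poster mentions: a linked list of tasks dispatched through plain function pointers, so each call is direct rather than going through objc_msgSend. The Task struct and the run_all, execute and context names are illustrative:

        #include <stddef.h>

        typedef struct Task {
            void (*execute)(void *context);  /* direct function pointer, no dynamic method lookup */
            void *context;                   /* whatever state the task needs */
            struct Task *next;
        } Task;

        static void run_all(Task *head)
        {
            for (Task *t = head; t != NULL; t = t->next) {
                if (t->execute)
                    t->execute(t->context);
            }
        }

    Whether the saved dispatch cost is measurable for hundreds or thousands of items is something only profiling on the device can answer.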

    Read the article

  • Where is the virtual function call overhead?

    - by Semen Semenych
    Hello everybody, I'm trying to benchmark the difference between a function pointer call and a virtual function call. To do this, I have written two pieces of code that do the same mathematical computation over an array. One variant uses an array of pointers to functions and calls those in a loop. The other variant uses an array of pointers to a base class and calls its virtual function, which is overridden in the derived classes to do exactly the same thing as the functions in the first variant. Then I print the time elapsed and use a simple shell script to run the benchmark many times and compute the average run time. Here is the code:

        #include <iostream>
        #include <cstdlib>
        #include <ctime>
        #include <cmath>

        using namespace std;

        long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
        {
            return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
                   ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
        }

        void function_not( double *d ) { *d = sin(*d); }
        void function_and( double *d ) { *d = cos(*d); }
        void function_or ( double *d ) { *d = tan(*d); }
        void function_xor( double *d ) { *d = sqrt(*d); }

        void ( * const function_table[4] )( double* ) =
            { &function_not, &function_and, &function_or, &function_xor };

        int main(void)
        {
            srand(time(0));
            void ( * index_array[100000] )( double * );
            double array[100000];
            for ( long int i = 0; i < 100000; ++i ) {
                index_array[i] = function_table[ rand() % 4 ];
                array[i] = ( double )( rand() / 1000 );
            }

            struct timespec start, end;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
            for ( long int i = 0; i < 100000; ++i ) {
                index_array[i]( &array[i] );
            }
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

            unsigned long long time_elapsed = timespecDiff(&end, &start);
            cout << time_elapsed / 1000000000.0 << endl;
        }

    and here is the virtual function variant:

        #include <iostream>
        #include <cstdlib>
        #include <ctime>
        #include <cmath>

        using namespace std;

        long long timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
        {
            return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
                   ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
        }

        class A
        {
        public:
            virtual void calculate( double *i ) = 0;
        };

        class A1 : public A
        {
        public:
            void calculate( double *i ) { *i = sin(*i); }
        };

        class A2 : public A
        {
        public:
            void calculate( double *i ) { *i = cos(*i); }
        };

        class A3 : public A
        {
        public:
            void calculate( double *i ) { *i = tan(*i); }
        };

        class A4 : public A
        {
        public:
            void calculate( double *i ) { *i = sqrt(*i); }
        };

        int main(void)
        {
            srand(time(0));
            A *base[100000];
            double array[100000];
            for ( long int i = 0; i < 100000; ++i ) {
                array[i] = ( double )( rand() / 1000 );
                switch ( rand() % 4 ) {
                    case 0: base[i] = new A1(); break;
                    case 1: base[i] = new A2(); break;
                    case 2: base[i] = new A3(); break;
                    case 3: base[i] = new A4(); break;
                }
            }

            struct timespec start, end;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
            for ( int i = 0; i < 100000; ++i ) {
                base[i]->calculate( &array[i] );
            }
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

            unsigned long long time_elapsed = timespecDiff(&end, &start);
            cout << time_elapsed / 1000000000.0 << endl;
        }

    My system is Linux, Fedora 13, gcc 4.4.2. The code is compiled with g++ -O3. The first one is test1, the second is test2. Now I see this in the console:

        [Ignat@localhost circuit_testing]$ ./test2 && ./test2
        0.0153142
        0.0153166

    Well, more or less, I think. And then, this:

        [Ignat@localhost circuit_testing]$ ./test2 && ./test2
        0.01531
        0.0152476

    Where is the 25% difference that should be visible? How can the first executable be even slower than the second one? I'm asking because I'm working on a project that involves calling a lot of small functions in a row like this in order to compute the values of an array, and the code I've inherited does some very complex manipulation to avoid the virtual function call overhead. So where is this famous call overhead?
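    A small additional baseline that might be worth timing (my sketch, not from the question): both programs above measure an indirect call, one through a function pointer and one through a vtable, so they can easily come out nearly identical. Comparing either of them against a plain direct call in the same loop is what isolates the indirect-call overhead. The calculate_direct and run_direct names are illustrative:

        #include <cmath>

        static void calculate_direct( double *d ) { *d = std::sin(*d); }

        void run_direct( double *array, long n )
        {
            // Direct call: no pointer load, and the compiler is free to inline it,
            // which is often where the real savings come from.
            for ( long i = 0; i < n; ++i )
                calculate_direct( &array[i] );
        }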

    Read the article

  • Combining deflate and minify - am I creating overhead?

    - by Mark Nolan
    I minify my CSS and JS files on the fly with the Minify project from Google Code. I have also set my .htaccess to use deflate on all my CSS and JS files - the reason being that some JS files (like Shadowbox and TinyMCE) reference other JS files in their code. So I'm compressing with Apache's deflate, and Minify also gzip-compresses some JS and CSS files. Am I creating overhead by doing this - first gzipping (Minify) and then running zlib (deflate) over the result again? Or will Apache's deflate ignore the already-gzipped files because of the headers Minify sets? Anyone have any experience with this?

    Read the article

  • Apache Cassandra overwhelming bandwidth overhead

    - by tanyehzheng
    While testing Apache Cassandra, I inserted 1000 rows of data and allowed them to propagate to the other machine on the LAN. This is a 2-machine cluster, and I monitor the network connection between the two machines. The total data I expected to flow between the two servers should be around 25Mb (including all column names, column values and timestamps). But the actual data sent and received between them was a whopping 362Mb! Does anybody know why there is such overwhelming overhead? Thank you.

    Read the article

  • Is there any performance overhead in using RaiseEvent in .NET?

    - by Sachin
    Is there any performance overhead in using RaiseEvent in .NET? I have code similar to the following:

        Dim _startTick As Integer = Environment.TickCount
        ' Do some task
        Dim duration As Integer = Environment.TickCount - _startTick
        Logger.Debug("Time taken : {0}", duration)
        RaiseEvent Datareceived()

    The above code logs:

        Time taken : 1200
        Time taken : 1400

    But if I remove the RaiseEvent it logs:

        Time taken : 110
        Time taken : 121

    I am surprised, because the RaiseEvent is called after the time taken is logged. How can it affect the total time taken? I am working on the Compact Framework.
    Update: In the event handler I had a MsgBox. When I removed the message box, it now shows the time taken as 110, 121, etc., i.e. less than 500 milliseconds. If I put the MsgBox back in the event handler, it shows 1200, 1400, etc., i.e. more than a second. Even more surprised now (the event is raised after the logging part).

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes at a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that leads me to this piece of Sun documentation, which explains:

        The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

    This tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond? I'm trying to determine the best approach to actually solving this issue, rather than just using -XX:-UseGCOverheadLimit, but I feel I need to better understand the issue I'm solving.

    Read the article

  • Trying to reduce the speed overhead of an almost-but-not-quite-int number class

    - by Fumiyo Eda
    I have implemented a C++ class which behaves very similarly to the standard int type. The difference is that it has an additional concept of "epsilon", which represents some tiny value that is much less than 1 but greater than 0. One way to think of it is as a very wide fixed-point number with 32 MSBs (the integer parts), 32 LSBs (the epsilon parts) and a huge sea of zeros in between. The following class works, but introduces a ~2x speed penalty in the overall program. (The program includes code that has nothing to do with this class, so the actual speed penalty of this class is probably much greater than 2x.) I can't paste the code that is using this class, but I can say the following:

    - +, -, +=, <, > and >= are the only heavily used operators.
    - Use of setEpsilon() and getInt() is extremely rare.
    - * is also rare, and does not even need to consider the epsilon values at all.

    Here is the class:

        #include <limits>

        struct int32Uepsilon {
            typedef int32Uepsilon Self;

            int32Uepsilon ()             { _value = 0; _eps = 0; }
            int32Uepsilon (const int &i) { _value = i; _eps = 0; }

            void setEpsilon() { _eps = 1; }

            Self operator+(const Self &rhs) const {
                Self result = *this;
                result._value += rhs._value;
                result._eps   += rhs._eps;
                return result;
            }

            Self operator-(const Self &rhs) const {
                Self result = *this;
                result._value -= rhs._value;
                result._eps   -= rhs._eps;
                return result;
            }

            Self operator-() const {
                Self result = *this;
                result._value = -result._value;
                result._eps   = -result._eps;
                return result;
            }

            Self operator*(const Self &rhs) const {
                return this->getInt() * rhs.getInt();  // XXX: discards epsilon
            }

            bool operator<(const Self &rhs) const {
                return (_value < rhs._value) ||
                       (_value == rhs._value && _eps < rhs._eps);
            }

            bool operator>(const Self &rhs) const {
                return (_value > rhs._value) ||
                       (_value == rhs._value && _eps > rhs._eps);
            }

            bool operator>=(const Self &rhs) const {
                return (_value > rhs._value) ||
                       (_value == rhs._value && _eps >= rhs._eps);
            }

            Self &operator+=(const Self &rhs) {
                this->_value += rhs._value;
                this->_eps   += rhs._eps;
                return *this;
            }

            Self &operator-=(const Self &rhs) {
                this->_value -= rhs._value;
                this->_eps   -= rhs._eps;
                return *this;
            }

            int getInt() const { return(_value); }

        private:
            int _value;
            int _eps;
        };

        namespace std {
            template<>
            struct numeric_limits<int32Uepsilon> {
                static const bool is_signed = true;
                static int max() { return 2147483647; }
            };
        }

    The code above works, but it is quite slow. Does anyone have any ideas on how to improve performance? There are a few hints/details I can give that might be helpful:

    - 32 bits are definitely insufficient to hold both _value and _eps. In practice, up to 24 ~ 28 bits of _value are used and up to 20 bits of _eps are used.
    - I could not measure a significant performance difference between using int32_t and int64_t, so memory overhead itself is probably not the problem here.
    - Saturating addition/subtraction on _eps would be cool, but isn't really necessary.
    - Note that the signs of _value and _eps are not necessarily the same! This broke my first attempt at speeding this class up.
    - Inline assembly is no problem, so long as it works with GCC on a Core i7 system running Linux!
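    As a way to put a number on the class's own cost, here is a minimal micro-benchmark sketch of my own (not from the question, and it uses the C++11 chrono header purely for timing), assuming the int32Uepsilon class above is in scope. It hammers only the operators the poster describes as hot, so their overhead can be timed in isolation from the rest of the program; the loop count is illustrative:

        #include <chrono>
        #include <cstdio>
        #include <vector>

        int main()
        {
            const std::size_t n = 10 * 1000 * 1000;            // illustrative size
            std::vector<int32Uepsilon> v(n);
            for (std::size_t i = 0; i < n; ++i)
                v[i] = int32Uepsilon(static_cast<int>(i % 1024));
            v[1].setEpsilon();                                 // touch the rare path once

            int32Uepsilon acc(0);
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 1; i < n; ++i) {
                acc += v[i] - v[i - 1];                        // hot arithmetic: -, +=
                if (v[i] >= v[i - 1])                          // hot comparison: >=
                    acc = acc + int32Uepsilon(1);
            }
            auto t1 = std::chrono::steady_clock::now();

            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            // Print the result so the compiler cannot optimize the loop away.
            std::printf("acc=%d  elapsed=%.1f ms\n", acc.getInt(), ms);
            return 0;
        }

    Running the same loop over plain int (dropping the setEpsilon() and getInt() calls) gives the baseline to compare against.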

    Read the article

  • How atomic *should* I make an Ajax form?

    - by b. e. hollenbeck
    I have some web forms that I'm bringing over with AJAX, and as I was dealing with the database on the back end, I thought that it might be easier to just handle each input on the form atomically with AJAX, saving the form in 'real time' as the user edits it. The forms are ~20 fields of administrative settings. Would this create massive overhead with the app, cause it to be error-prone, or is this a feasible idea? Of course, contingent operations (like a checkbox that then requires a text entry) would be held until the textbox gained and lost focus. Comments?

    Read the article

  • Is it good practice to pass a struct object as a parameter to a function in C++?

    - by tsubasa
    I tried an example like the one below:

        typedef struct point {
            int x;
            int y;
        } point;

        void cp(point p)
        {
            cout << p.x << endl;
            cout << p.y << endl;
        }

        int main()
        {
            point p1;
            p1.x = 1;
            p1.y = 2;
            cp(p1);
        }

    The result that's printed out is:

        1
        2

    which is what I expected. My question is: does parameter p get a full copy of object p1? If so, is this good practice? (I assume that when the struct gets big, this will create a lot of copy overhead.)
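    For comparison, here is a short sketch of my own (not from the question) of the usual alternative when a struct grows large: passing by const reference, which avoids the copy while keeping the argument read-only. The function names are illustrative:

        #include <iostream>

        struct point {
            int x;
            int y;
        };

        void cp_by_value(point p)               // the whole struct is copied
        {
            std::cout << p.x << "\n" << p.y << "\n";
        }

        void cp_by_const_ref(const point &p)    // no copy; the callee gets read-only access
        {
            std::cout << p.x << "\n" << p.y << "\n";
        }

        int main()
        {
            point p1 = { 1, 2 };
            cp_by_value(p1);
            cp_by_const_ref(p1);
            return 0;
        }

    For a struct this small the copy is cheap either way; the reference version mainly pays off once the struct carries many or large members.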

    Read the article

  • [C#] WebClient construction overhead

    - by Barguast
    I have a client which makes a limited number of concurrent web requests, and I use WebClient for this purpose. I currently have a pool of WebClients which I create once and use whichever one is idle. This approach is becoming a little cumbersome, though, and I'm wondering whether there is any real benefit to keeping a collection of pre-constructed WebClient instances, or whether creating them on the fly would be too much trouble.

    Read the article

  • Lowest latency, least overhead app server?

    - by Mark Harrison
    I'm designing an application which will have a network interface for serving large numbers of very small metadata requests. The application code itself is very fast, basically looking up data cached in memory and sending it to the client. What's the absolute lowest latency I can get for a network application server running on a Linux box? This will be an internal app running on gigE with no authentication. Any language/framework will be considered, with a preference for C, C++, or Python. Likewise for the protocol, although HTTP would be nice.

    Read the article

  • How do I know when my MongoDB database is close to its size limit?

    - by shingara
    I installed a MongoDB database on my server. The server is 32-bit and I can't change that soon. When you use MongoDB on a 32-bit architecture you have a limit of about 2.5 GB of data, as mentioned in this MongoDB blog post. The thing is that I have several databases. So how can I know whether or not I am close to this limit?

    Read the article

  • Does Google Analytics have performance overhead?

    - by Mohit Nanda
    To what extent does Google Analytics impact performance? I'm looking for the following:

    - Benchmarks (including response times/page-load times, et al.)
    - Links or results to similar benchmarks

    One (possible) method of testing Google Analytics (GA) on your site: serve ga.js (the Google Analytics JavaScript file) from your own server, updating it from Google daily (test 1) and weekly (test 2). I would be interested to see how this reduces the communication between the client webserver and the GA server. Has anyone conducted any of these tests? If so, can you provide your results? If not, does anyone have a better method for testing the performance hit (or lack thereof) of using GA?

    Read the article

  • Low overhead Java Web Services container?

    - by trojanfoe
    I want to provide a Java-based web service, but I don't require the features of a full-blown J2EE application server. I would like it to start as quickly as possible, though that's not a hard requirement. The web service will handle multiple connections and require access to an Oracle database, so it will at least need a thread pool and a database connection pool. I may want to put a JSP interface onto it later to provide an internal maintenance interface. I have looked at Jetty with an Apache CXF stack, but it looks like I'll have to do a fair amount of configuration before even coding the web service. Will it be worth it? Will it even work? Should I forget about the complexity and simply go with JBoss/WebLogic/etc. and put up with the bloat and extra start-up time?

    Read the article

  • Minimizing Java Thread Context Switching Overhead

    - by binil
    I have a Java application running on a Sun 1.6 32-bit VM / Solaris 10 (x86) / Nehalem 8-core (2 threads per core). A specific use case in the application is to respond to some external message. In my performance test environment, when I prepare and send the response in the same thread that receives the external input, I get about a 50 us advantage over handing the message off to a separate thread to send the response. I use a ThreadPoolExecutor with a SynchronousQueue to do the handoff. In your experience, what is an acceptable delay between scheduling a task to a thread pool and it getting picked up for execution? What ideas have worked for you in the past to improve this?

    Read the article

  • Why is there overhead when allocating objects/arrays in Java?

    - by Gnijuohz
    How many bytes does an array occupy in Java? Assume it's a 64-bit machine and also assume there are N elements in the array, so all these elements would take up 2*N, 4*N or 8*N bytes, depending on the element type. A lecture on Coursera says that an N-element array would occupy 2*N+24, 4*N+24 or 8*N+24 bytes, and that the 24 bytes are called overhead, but it didn't explain why the overhead is needed. Objects also have an overhead, which is 16 bytes. What exactly are these overheads? What are these 24/16 bytes composed of? Also, do these overheads only exist in Java? How about C, C++ and Python?
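    For the C/C++ part of that comparison, a tiny sketch of my own (not from the question): a raw C++ array is just its elements, so sizeof reports exactly N times the element size with no per-object header (any bookkeeping a heap allocator adds lives outside the object that sizeof sees):

        #include <cstdio>

        int main()
        {
            int    a[10];
            double b[10];
            // Typically prints 40 and 80 on a 64-bit machine: N * sizeof(element), no header.
            std::printf("%zu %zu\n", sizeof(a), sizeof(b));
            return 0;
        }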

    Read the article

  • What's the overhead when allocating objects/arrays in Java?

    - by Gnijuohz
    How many bytes does an array occupy in Java? Assume it's a 64-bit machine and also assume there are N elements in the array, so all these elements would take up 2*N, 4*N or 8*N bytes, depending on the element type. A lecture on Coursera says that an N-element array would occupy 2*N+24, 4*N+24 or 8*N+24 bytes, and that the 24 bytes are called overhead, but it didn't explain it. Objects also have an overhead, which is 16 bytes. What exactly are these overheads? Also, do these overheads only exist in Java? How about C, C++ and Python?

    Read the article

  • Overhead of TLS/SSL on a TCP socket connection?

    - by TK Kocheran
    Is there any bandwidth overhead to using SSL on a TCP connection? I understand, of course, the processing/memory overhead of encrypting and decrypting packets, but as far as bandwidth is concerned, what is the difference, if any? For example, given an XML file which is 64 KB, will there be any tangible difference in the transfer size of the file over HTTP vs. HTTPS? (Ignoring mod_deflate and mod_gzip, of course.)

    Read the article

  • Is ASP.NET MVC too much overhead for smaller projects?

    - by Alexander Ryan Baggett
    I will be honest: I don't really know much about MVC other than the stuff you can read online in 5 minutes, and unfortunately that doesn't really tell me whether it's suited to smaller projects or not. I also read this related question and its chosen answer, but the business perspective is not a concern for me in this case, as I am the only one building the project. The next answer proceeds to say why MVC is more flexible. Sure, that's great. But my question is, again, whether it's an ideal choice for a small project. For example, I would rather use WinForms to make a simple mockup of a small desktop program than do it in WPF, because of the overhead of custom styling. So: I have a project that will essentially have about 6-8 pages that read Excel files and user input, use that to pull a bit of data from databases, and output resulting Excel files. I will be the only one working on this project. If I used Web Forms I would expect it to take no more than 2-3 weeks. I am 100% comfortable with Web Forms, and I know it's easy to do a small project in Web Forms. But I have only heard good things about MVC, so I am seriously considering it.

    Read the article

  • Do the benefits of Resin/Quercus outweigh the overhead?

    - by Craige
    Lately, I've been looking more and more into Resin + Quercus as a technology for developing an application of mine. The reason I started looking into it is that this application has high reporting needs, a lot of which cannot (or realistically, should not) be generated in real time. Java would offer a nice backend to queue and generate reports. Also, with Quercus I would be able to develop my data models in Hibernate and use them "from PHP", thus effectively stretching these models across front and back end. This same concept would also apply to any common front/back-end business logic, which could be developed in Java libraries. Now, the downside is that whichever front-end (PHP) MVC framework I choose (my goal was Symfony 2), it is unlikely to work without some heavy modification, if it can work at all. Quercus is a pretty close implementation of PHP, and is supposed to be compatible with PHP 5.3, so namespaces and closures SHOULDN'T be a problem, but when I tried to run an existing Symfony 1.4 app, I failed miserably. So, my question to you is: do you think the benefits of Resin + Quercus outweigh the overhead of using a not-so-perfect/stable implementation of PHP? If this were your application, and your goal was an end product rather than educational purposes, what would you decide?

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros v. inline functions and found the following discussion: Pros and Cons of Different macro function / inline methods in C ... but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) v. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;
        EnergyContribution   += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so that I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
        double RepulsiveTerm  = pow(SigmaSquared/RadialDistanceSquared,9);
        EnergyContribution   += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (Note the difference in the second line.) This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence why I'm wasting time worrying about the overhead of inlining v. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the errors that result from it)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!
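    To make the comparison concrete, here is a sketch of my own (not from the question) of the two forms being weighed, built around the poster's snippet; the wrapper names and parameter list are illustrative. With optimization enabled, an inlined function and a macro usually compile to the same machine code; the practical differences are scoping, type checking, and possible multiple evaluation of macro arguments.

        #include <math.h>

        /* Macro form: pure textual substitution; arguments may be evaluated more than once. */
        #define ADD_ENERGY_CONTRIBUTION(Energy, Epsilon, SigmaSquared, RadialDistanceSquared)  \
            do {                                                                               \
                double AttractiveTerm = pow((SigmaSquared) / (RadialDistanceSquared), 3);      \
                double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;                       \
                (Energy) += 4 * (Epsilon) * (RepulsiveTerm - AttractiveTerm);                  \
            } while (0)

        /* Inline-function form: real scoping and type checking; the compiler decides whether
           to inline the call (and at -O3 it almost certainly will for something this small). */
        static inline void add_energy_contribution(double *Energy, double Epsilon,
                                                   double SigmaSquared,
                                                   double RadialDistanceSquared)
        {
            double AttractiveTerm = pow(SigmaSquared / RadialDistanceSquared, 3);
            double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;
            *Energy += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);
        }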

    Read the article

  • Just one client bound to an address and port: does broadcast versus unicast make a difference in terms of overhead?

    - by chrisapotek
    Scenario: I am implementing failover for a network node, so my idea is to have the master node listen on a broadcast IP address and port. If the master node fails, another failover node will start listening on this broadcast address (and port) and take over. Question: my concern is that I will be using a broadcast IP address just for a single node: the master. The failover node only binds if the master fails, in other words, almost never. In terms of network/traffic overhead, is it bad to talk to a single node through a broadcast address, or is the network somehow smart enough to know that nobody else is listening on this broadcast address and treat it roughly like unicast in terms of overhead? My concern is that I will be flooding my network with packets from this broadcast address even though I am really just talking to a single node (the master). But I can't use unicast, because the failover node has to be able to pick up the master's stream quickly and transparently if it fails.

    Read the article

  • Will an SRS be sufficient for the programmers to do their work, without the additional overhead of an FS?

    - by SixSickSix
    We always produce two documents for the coders, aka programmers: the SRS (Software Requirements Specification) and the FS (Functional Specification). From what I have seen, the SRS contains both functional and non-functional requirements, whereas the FS deals only with the functional requirements. To cut it short: will the SRS be sufficient for the programmers to do their work, so that we no longer need to produce an FS at all?

    Read the article
