Search Results

Search found 4580 results on 184 pages for 'faster'.

  • Static Vs Non-Static Method Performance C#

    - by dotnetguts
    Hello All, I have a few global methods declared in a public class in my asp.net web application. I have a habit of declaring all global methods in a public class in the following format:

        public static string MethodName(parameters) { }

    I want to know how this choice impacts performance. 1) Which one is better, a static method or a non-static method? 2) Why is it better? The following link argues that non-static methods are good because static methods use locks to be thread-safe; they always internally do a Monitor.Enter() and Monitor.Exit() to ensure thread-safety. http://bytes.com/topic/c-sharp/answers/231701-static-vs-non-static-function-performance And the following link argues that static methods are good: static methods are normally faster to invoke on the call stack than instance methods. There are several reasons for this in the C# programming language. Instance methods actually use the 'this' instance pointer as the first parameter, so an instance method will always have that overhead. Instance methods are also implemented with the callvirt instruction in the intermediate language, which imposes a slight overhead. Please note that changing your methods to static methods is unlikely to help much on ambitious performance goals, but it can help a tiny bit and possibly lead to further reductions. http://dotnetperls.com/static-method I am a little confused about which one to use. Thanks
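
    When the two sources disagree, it is easy enough to measure on your own runtime. Below is a minimal benchmark sketch (all names are made up, and the JIT may inline either call, so treat the numbers with suspicion):

        using System;
        using System.Diagnostics;

        class CallBenchmark
        {
            private static string StaticEcho(string s) { return s; }
            private string InstanceEcho(string s) { return s; }

            static void Main()
            {
                const int N = 100000000;
                var target = new CallBenchmark();

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) StaticEcho("x");
                Console.WriteLine("static:   {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < N; i++) target.InstanceEcho("x");
                Console.WriteLine("instance: {0} ms", sw.ElapsedMilliseconds);
            }
        }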

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently, we have been doing topic branches to keep track of features, then merging them into master locally and pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, and a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So, we end up with merge commits everywhere and a git log rivaling a friendship bracelet. So, rebasing is the obvious choice. What I would like is to:

        - create topic branches holding several commits
        - checkout master and pull (fast-forward because I haven't committed to master)
        - rebase topic branches onto the new head of master, so the topics start at master's head
        - bring master up to my topic head

    My current way of doing this is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?
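
    A slightly shorter sketch of the same flow, assuming topic_2 was branched from topic_1. The --ff-only flags and the reflog reference are just safety rails, and for the single-branch case `git pull --rebase` does the whole job on its own:

        git checkout master
        git pull --ff-only
        git rebase master topic_1
        git rebase --onto topic_1 topic_1@{1} topic_2  # replay only topic_2's own commits
        git checkout master
        git merge --ff-only topic_2                    # fast-forward master to the stack tip
        git branch -d topic_1 topic_2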

  • One bi-directional TCP socket or two uni-directional sockets? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Will it be faster with one TCP socket (for send and recv) or with two uni-directional TCP sockets? The test for this task is very like the NetPIPE network benchmark - measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe). The benefit of 2 uni-directional connections comes from the Linux kernel: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false in my case, so only a socket that is not uni-directional keeps the kernel off the receive fast path.
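
    For reference, a minimal sketch of the two-socket variant on the client side (error handling omitted; the address is a placeholder, and the shutdown() calls just make the intended direction explicit to both ends):

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <string.h>
        #include <unistd.h>

        static int connect_to(const char *ip, int port)
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, ip, &addr.sin_addr);
            connect(fd, (struct sockaddr *)&addr, sizeof(addr));
            return fd;
        }

        int main(void)
        {
            int tx = connect_to("192.0.2.1", 5000);  /* used only for send() */
            int rx = connect_to("192.0.2.1", 5001);  /* used only for recv() */
            shutdown(tx, SHUT_RD);
            shutdown(rx, SHUT_WR);
            /* ... ping-pong loop: send() on tx, recv() on rx ... */
            close(tx);
            close(rx);
            return 0;
        }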

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for( i = 0; i < 1000000; i++ ) {
            current = myVec[ i ];
            doSomethingWith( current );
            doAlotMoreWith( current );
            messAroundWith( current );
            checkSomeValuesOf( current );
        }

    or

        vector<myType> myVec;
        int i;

        for( i = 0; i < 1000000; i++ ) {
            doSomethingWith( myVec[ i ] );
            doAlotMoreWith( myVec[ i ] );
            messAroundWith( myVec[ i ] );
            checkSomeValuesOf( myVec[ i ] );
        }

    I'm currently using the first solution. There are really millions of calls per second, and every single copy/move is a performance problem.
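
    A third option worth considering - a sketch, assuming the helper functions accept a (const) reference - binds a reference instead of copying the element or re-evaluating operator[] four times:

        vector<myType> myVec;

        for( size_t i = 0; i < myVec.size(); i++ ) {
            myType& current = myVec[ i ];  // alias into the vector, no copy constructor call
            doSomethingWith( current );
            doAlotMoreWith( current );
            messAroundWith( current );
            checkSomeValuesOf( current );
        }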

  • Initial capacity of collection types, i.e. Dictionary, List

    - by Neil N
    Certain collection types in .Net have an optional "Initial Capacity" constructor param, i.e.:

        Dictionary<string, string> something = new Dictionary<string,string>(20);
        List<string> anything = new List<string>(50);

    I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is: assuming the capacity grows like it does for a StringBuilder, which doubles each time the capacity is hit, and each re-allocation is costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? If the initial capacity is 100, and I know I will only need a dozen or so, it seems as though the rest of that allocated RAM is allocated for nothing. Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory; this is mostly out of curiosity.
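
    For List<T> the growth is observable directly through its Capacity property, so you can probe the defaults on whatever runtime you're running (on the versions I've tried, it starts at 0, jumps to 4 on the first Add, then doubles; Dictionary exposes no such property and sizes itself differently). A quick sketch:

        using System;
        using System.Collections.Generic;

        class CapacityProbe
        {
            static void Main()
            {
                var list = new List<string>();
                int last = -1;
                for (int i = 0; i < 100; i++)
                {
                    if (list.Capacity != last)   // print only when a re-allocation happened
                    {
                        Console.WriteLine("count={0,3}  capacity={1}", list.Count, list.Capacity);
                        last = list.Capacity;
                    }
                    list.Add("x");
                }
            }
        }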

  • java.math.BigInteger pow(exponent) question

    - by Jan Kraus
    Hi, I did some tests on the pow(exponent) method. Unfortunately, my math skills are not strong enough to handle the following problem. I'm using this code:

        BigInteger.valueOf(2).pow(var);

    Results:

        var     | time in ms
        --------|-----------
        2000000 | 11450
        2500000 | 12471
        3000000 | 22379
        3500000 | 32147
        4000000 | 46270
        4500000 | 31459
        5000000 | 49922

    See? The 2,500,000 exponent is calculated almost as fast as 2,000,000. 4,500,000 is calculated much faster than 4,000,000. Why is that? To give you some help, here's the original implementation of BigInteger.pow(exponent):

        public BigInteger pow(int exponent) {
            if (exponent < 0)
                throw new ArithmeticException("Negative exponent");
            if (signum==0)
                return (exponent==0 ? ONE : this);

            // Perform exponentiation using repeated squaring trick
            int newSign = (signum<0 && (exponent&1)==1 ? -1 : 1);
            int[] baseToPow2 = this.mag;
            int[] result = {1};

            while (exponent != 0) {
                if ((exponent & 1)==1) {
                    result = multiplyToLen(result, result.length,
                                           baseToPow2, baseToPow2.length, null);
                    result = trustedStripLeadingZeroInts(result);
                }
                if ((exponent >>>= 1) != 0) {
                    baseToPow2 = squareToLen(baseToPow2, baseToPow2.length, null);
                    baseToPow2 = trustedStripLeadingZeroInts(baseToPow2);
                }
            }
            return new BigInteger(result, newSign);
        }
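
    Before reasoning about the algorithm, it is worth ruling out measurement noise (JIT warm-up, GC pauses). A timing-harness sketch that warms up first and reports the median of several runs per exponent:

        import java.math.BigInteger;
        import java.util.Arrays;

        public class PowTiming {
            public static void main(String[] args) {
                int[] exponents = {2000000, 2500000, 3000000, 3500000,
                                   4000000, 4500000, 5000000};
                BigInteger.valueOf(2).pow(500000);  // warm up the JIT first
                for (int e : exponents) {
                    long[] runs = new long[5];
                    for (int r = 0; r < runs.length; r++) {
                        long t0 = System.nanoTime();
                        BigInteger.valueOf(2).pow(e);
                        runs[r] = (System.nanoTime() - t0) / 1000000;  // ms
                    }
                    Arrays.sort(runs);
                    System.out.println(e + " -> " + runs[runs.length / 2] + " ms (median)");
                }
            }
        }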

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass {
            '
            '
            unsigned Flags;

            int IsFlag1Set()  { return Flags & FLAG1; }
            void SetFlag1()   { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            '
            '
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass {
            '
            '
            struct Flags {
                unsigned Flag1:1;
                unsigned Flag2:1;
                unsigned Flag3:1;
            };
            '
            '
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.
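
    Absent hard reference material, the honest answer is usually to benchmark both forms on the target compiler. A minimal sketch with <chrono> (C++11; the loop bodies are deliberately trivial, so verify the optimizer hasn't deleted them before trusting the numbers):

        #include <chrono>
        #include <cstdio>

        struct Masked  { unsigned flags; };
        struct Fielded { unsigned f1:1, f2:1, f3:1; };

        int main() {
            const long N = 200000000L;
            Masked m = {0};
            Fielded b = {0, 0, 0};
            volatile unsigned sink = 0;  // defeat dead-code elimination

            auto t0 = std::chrono::steady_clock::now();
            for (long i = 0; i < N; ++i) { m.flags |= 1u; sink += m.flags & 1u; }
            auto t1 = std::chrono::steady_clock::now();
            for (long i = 0; i < N; ++i) { b.f1 = 1; sink += b.f1; }
            auto t2 = std::chrono::steady_clock::now();

            std::printf("mask:     %lld ms\n", (long long)
                std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());
            std::printf("bitfield: %lld ms\n", (long long)
                std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count());
            return (int)sink;
        }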

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all different group id values, performing statistical analysis on each group in turn, and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups
        results <- matrix(rep("NA",300), ncol=3, nrow=100)

        for(i in 1:100) {
            tempmat <- subset(data.mat, ID==groupids[i])
            # Run various stats on tempmat (correlations, regressions, etc), checking to
            # make sure this specific group doesn't have NAs in the variables I'm using,
            # and assign results to x, y, and z, for example.
            results[i,1] <- x
            results[i,2] <- y
            results[i,3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, this can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loops into something else (something like an apply with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?
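
    One common reshaping of this pattern is split() plus an apply, which partitions data.mat once instead of re-scanning the whole frame for every group. A sketch with placeholder statistics (mean, sd and a correlation of two hypothetical columns, since the real ones aren't shown):

        groups <- split(data.mat, data.mat$ID)   # one pass over the data

        results <- t(sapply(groups, function(g) {
            # placeholder stats; substitute the real correlations/regressions here
            x <- mean(g$var1, na.rm=TRUE)
            y <- sd(g$var1, na.rm=TRUE)
            z <- cor(g$var1, g$var2, use="complete.obs")
            c(x, y, z)
        }))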

  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library. I am having two problems with a PUT message; I am just trying to send some small content like "hello" via PUT. My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return a 0 indicating I have finished the content (on OS X). When I return something > 0 this succeeds much faster (< 1 sec). When I run this on my Linux machine (Ubuntu 10.4) the blocking appears to NEVER return when I return 0; if I change the behavior to return the size written, libcurl appends all the data in the http body, sending way more data, and it fails with a "too much data" message from the server. My readfunction is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if(stream) {
                postdata *ud = (postdata*)stream;
                if(ud->bytes_remaining) {
                    if(ud->body_size > size*nmemb) {
                        memcpy(ptr, ud->data+ud->bytes_written, size*nmemb);
                        ud->bytes_written += size*nmemb;
                        ud->bytes_remaining = ud->body_size - size*nmemb;
                        return size*nmemb;
                    } else {
                        memcpy(ptr, ud->data+ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }
            return 0;
        }
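
    The likely culprit in the code above is the final branch: it copies the last chunk into ptr but returns 0, so libcurl never transmits those bytes. The usual pattern - sketched below - is to return however many bytes were copied on each call, and return 0 only on a later call when nothing is left; setting CURLOPT_INFILESIZE also tells libcurl the exact body length up front:

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            postdata *ud = (postdata *)stream;
            size_t room = size * nmemb;

            if (ud == NULL || ud->bytes_remaining <= 0)
                return 0;  /* end of body: nothing was copied on this call */

            size_t chunk = (size_t)ud->bytes_remaining < room
                         ? (size_t)ud->bytes_remaining : room;
            memcpy(ptr, (char *)ud->data + ud->bytes_written, chunk);
            ud->bytes_written   += chunk;
            ud->bytes_remaining -= chunk;
            return chunk;  /* the 0 comes naturally on the following call */
        }

        /* when setting up the handle (curl and ud assumed initialized elsewhere): */
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, readfunc);
        curl_easy_setopt(curl, CURLOPT_READDATA, &ud);
        curl_easy_setopt(curl, CURLOPT_INFILESIZE, (long)ud.body_size);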

  • Partial class or "chained inheritance"

    - by Charlie boy
    Hi. From my understanding, partial classes are a bit frowned upon by professional developers, but I've come up against a bit of an issue. I have made an implementation of the RichTextBox control that uses user32.dll calls for faster editing of large texts. That results in quite a bit of code. Then I added spellchecking capabilities to the control; this was made in another class inheriting the RichTextBox control as well. That also makes up a bit of code. These two functionalities are quite separate, but I would like them to be merged so that I can drop one control on my form that has both fast editing capabilities and spellchecking built in. I feel that simply adding the code from one class to the other would result in a too-large code file, especially since there are two very distinct areas of functionality, so I seem to need another approach. Now to my question: to merge these two classes, should I make the spellchecking RichTextBox inherit from the fast-edit one, which in turn inherits RichTextBox? Or should I make the two classes partials of a single class, thus making them more "equal" so to speak? This is more a question of OO principles and exercise on my part than me trying to reinvent the wheel; I know there are plenty of good text editing controls out there. But this is just a hobby for me and I just want to know how this kind of solution would be managed by a professional. Thanks!
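
    For reference, the partial-class variant is just a file split: the two concerns live in separate files but compile into one control. A sketch (all names hypothetical):

        // EnhancedRichTextBox.Editing.cs
        public partial class EnhancedRichTextBox : RichTextBox
        {
            // user32.dll-backed fast-editing members go here
        }

        // EnhancedRichTextBox.Spelling.cs
        public partial class EnhancedRichTextBox
        {
            // spellchecking members go here
        }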

  • How can I accelerate the generation of an MD5 checksum within vb.net?

    - by Richard
    I'm working with some very large files residing on P2 (Panasonic) cards. Part of the process we employ is to first generate a checksum of the file we are going to copy, then copy the file, then run a checksum on the file to confirm that it copied OK. The problem is that the files are large (70 GB+) and take a long time to complete. It's an issue since we will eventually be dealing with thousands of these files. I would like to find a faster way to generate the checksum other than using the System.Security.Cryptography.MD5CryptoServiceProvider. I don't care if this means using a specialized hardware card, provided it works and is not too ungodly expensive. I would prefer to have a method of encoding that provides some feedback as to how far the process has gone along, so I can display it like I do now. The application is written in vb.net. I would prefer to be able to use it as a component, library, or reference within my application, but I'm willing to call an outside application if there is enough improvement in the speed of generating the checksum. Needless to say, the checksum must be consistent and correct. :-) Thank you in advance for your time and efforts, Richard
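
    With 70 GB files the wall-clock time is usually dominated by disk reads rather than by MD5 itself, but hashing in explicit chunks at least provides the progress feedback for free. A sketch (the buffer size and names are arbitrary choices):

        Imports System.IO
        Imports System.Security.Cryptography

        Module ChecksumHelper
            Function Md5WithProgress(ByVal path As String) As String
                Using md5 As MD5 = MD5.Create()
                    Using fs As New FileStream(path, FileMode.Open, FileAccess.Read)
                        Dim buffer(4 * 1024 * 1024 - 1) As Byte ' 4 MB chunks
                        Dim total As Long = fs.Length
                        Dim done As Long = 0
                        Dim read As Integer = fs.Read(buffer, 0, buffer.Length)
                        While read > 0
                            done += read
                            If done < total Then
                                md5.TransformBlock(buffer, 0, read, Nothing, 0)
                            Else
                                md5.TransformFinalBlock(buffer, 0, read)
                            End If
                            Console.WriteLine("{0:P0} complete", done / total) ' progress hook
                            read = fs.Read(buffer, 0, buffer.Length)
                        End While
                        If total = 0 Then md5.TransformFinalBlock(buffer, 0, 0) ' empty-file case
                        Return BitConverter.ToString(md5.Hash).Replace("-", "")
                    End Using
                End Using
            End Function
        End Module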

  • Splitting a set of object into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn, and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I have currently is to iteratively add each next object from S into the Si where this results in the least possible (at this stage) increase in combined Di size:

        for x in S:
            i = the i for which size(f(Si + {x})) - size(f(Si)) is minimal
            Si = Si + {x}

    This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iteratively improving algorithm?

  • Any utility to test expand C/C++ #define macros?

    - by Randy
    It seems I often spend way too much time trying to get a #define macro to do exactly what I want. I'll post my current dilemma below, and any help is appreciated. But really the bigger question is whether there is any utility someone could recommend to quickly display what a macro is actually doing? It seems like even the slow trial-and-error process would go much faster if I could see what is wrong. Currently, I'm dynamically loading a long list of functions from a DLL I made. The way I've set things up, the function pointers have the same names as the exported functions, and the typedefs used to prototype them have the same names, but with a prepended underscore. So I want to use a define to simplify the assignments of a long, long list of function pointers. For example, in the code statement below, 'hexdump' is the name of a typedef'd function pointer and is also the name of the function, while _hexdump is the name of the typedef. If GetProcAddress() fails, a failure counter is incremented.

        if (!(hexdump = (_hexdump)GetProcAddress(h, "hexdump"))) --iFail;

    So let's say I'd like to replace each line like the above with a macro, like this...

        GETADDR_FOR(hexdump)

    Well, this is the best I've come up with so far. It doesn't work (my // comment is just to prevent text formatting in the message)...

        // #define GETADDR_FOR(a) if (!(a = (#_#a)GetProcAddress(h, "/""#a"/""))) --iFail;

    And again, while I'd APPRECIATE an insight into what silly mistake I've made, it would make my day to have a utility that would show me the error of my ways, by simply plugging in my macro.
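
    On the utility question: most compilers will show preprocessed output directly - `gcc -E file.c` (or `cl /E file.c` with MSVC) prints every macro fully expanded, which is exactly the "see what it's doing" view. As for the macro itself, a sketch of what it seems to be after, using ## to paste tokens and # to stringize:

        /* ## builds the _hexdump typedef name; #a produces the "hexdump" string */
        #define GETADDR_FOR(a) \
            if (!(a = (_##a)GetProcAddress(h, #a))) --iFail;

        /* GETADDR_FOR(hexdump) then expands to:
           if (!(hexdump = (_hexdump)GetProcAddress(h, "hexdump"))) --iFail; */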

  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw tool". At the moment the algorithm looks like this (I don't insert any code because the methods are quite big; I will try to explain the idea):

    Drawing:

        - In the touchesBegan: method I create an NSMutableArray *pointsArray and add the point to it. Call the setNeedsDisplay: method.
        - In the touchesMoved: method I calculate the points between the last added point in pointsArray and the current point. Add all the points to pointsArray. Call the setNeedsDisplay: method.
        - In the touchesEnded: event I calculate the points between the last added point in the array and the current point. Set the flag touchesWereFinished. Call setNeedsDisplay:.

    Render:

        - The drawRect: method checks that pointsArray != nil and that there is any data in it. If there is, it starts to draw circles at each point of this array. If the flag touchesWereFinished is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil and resets the flag.

    There are a lot of disadvantages to this method:

        - It is slow
        - It becomes extremely slow when the user touches and moves the finger for a long time; the array becomes enormous
        - "Lines" composed of circles are ugly

    I would like to change my algorithm to make it a bit faster and the lines smoother. As a result I would like to have lines like in the picture at the following URL (sorry, not enough reputation to insert an image): http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png Can you advise me how I can draw lines this way (smooth and slim on the edges)? I thought about drawing circles with an alpha gradient on the edges (to make the lines smoother), but that would be extremely slow IMHO. Thanks for the help
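
    A sketch of the usual alternative: instead of stamping circles, stroke a single path through the stored points with round caps and joins, which gives smooth edges almost for free. This assumes the points are stored in pointsArray as NSValue-wrapped CGPoints; the width and color are arbitrary:

        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSetLineCap(ctx, kCGLineCapRound);    // soft, rounded line ends
            CGContextSetLineJoin(ctx, kCGLineJoinRound);  // no spikes at corners
            CGContextSetLineWidth(ctx, 4.0);
            CGContextSetRGBStrokeColor(ctx, 0, 0, 0, 1);

            if ([pointsArray count] < 2) return;
            CGPoint p = [[pointsArray objectAtIndex:0] CGPointValue];
            CGContextBeginPath(ctx);
            CGContextMoveToPoint(ctx, p.x, p.y);
            for (NSUInteger i = 1; i < [pointsArray count]; i++) {
                p = [[pointsArray objectAtIndex:i] CGPointValue];
                CGContextAddLineToPoint(ctx, p.x, p.y);
            }
            CGContextStrokePath(ctx);  // one stroke instead of thousands of circles
        }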

  • How would I instruct extconf.rb to use additional g++ optimization flags, and which are advisable?

    - by mohawkjohn
    I'm using Rice to write a C++ extension for a Ruby gem. The extension is in the form of a shared object (.so) file. This requires 'mkmf-rice' instead of 'mkmf', but the two (AFAIK) are pretty similar. By default, the compiler uses the flags -g -O2. Personally, I find this kind of silly, since it's hard to debug with any optimization enabled. I've resorted to editing the Makefile to take out the flags I don't like (e.g., removing -fPIC -shared when I need to debug using main() instead of Ruby's hooks). But I figure there's got to be a better way. I know I can just do $CPPFLAGS += " -DRICE" to add additional flags. But how do I remove things without editing the Makefile directly? A secondary question: what optimizations are safe for shared objects loaded by Ruby? Can I do things like -funroll-loops? What do you all recommend? It's a scientific computing project, so the faster the better. Memory is not much of an issue. Many thanks!
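
    A sketch of doing both (scrubbing the unwanted defaults and appending the wanted flags) inside extconf.rb itself, so the Makefile never needs hand edits. The flag choices are just examples, and depending on the mkmf version the C++ flags may live in $CXXFLAGS rather than $CFLAGS:

        require 'mkmf-rice'

        # strip the default optimization/debug flags mkmf picked up
        $CFLAGS = $CFLAGS.split.reject { |f| f =~ /^-O\d$|^-g$/ }.join(' ')

        if ENV['DEBUG']
          $CFLAGS << ' -O0 -ggdb'            # debuggable build
        else
          $CFLAGS << ' -O3 -funroll-loops'   # release build
        end

        create_makefile('my_extension')      # hypothetical extension name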

  • Java queue and multi-dimension array question? [Beginner level]

    - by javaLearner.java
    First of all, this is my code (I just started learning Java):

        Queue<String> qe = new LinkedList<String>();
        qe.add("b");
        qe.add("a");
        qe.add("c");
        qe.add("d");
        qe.add("e");

    My questions:

        1. Is it possible to add an element to the queue with two values, like qe.add("a", 1) (where 1 is an integer), so that I know element "a" has the value 1? If I then add the number 2 to element a, I would have a = 3. If this can't be done, what else in the Java class library can handle this? I tried to use a multi-dimension array, but it's kinda hard to do the queue operations with it, like pop, push etc. (maybe I am wrong).
        2. How do I get at a specific element in the queue - for example, element a, to check its value?

    [Note] Please don't give me links that ask me to read the Java docs. I was reading them, and I still don't get it. The reason I ask here is that I know I can find the answer faster and easier.
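
    One beginner-friendly route is to queue a tiny value class instead of bare strings; scanning the queue for a key then stands in for "getting at a specific element". A sketch (all names made up):

        import java.util.LinkedList;
        import java.util.Queue;

        class Entry {
            final String key;
            int value;
            Entry(String key, int value) { this.key = key; this.value = value; }
        }

        public class QueueDemo {
            public static void main(String[] args) {
                Queue<Entry> qe = new LinkedList<Entry>();
                qe.add(new Entry("b", 5));
                qe.add(new Entry("a", 1));

                // add 2 to element "a": find it and update in place
                for (Entry e : qe) {
                    if (e.key.equals("a")) e.value += 2;
                }

                for (Entry e : qe) {
                    System.out.println(e.key + " = " + e.value);  // prints b = 5, a = 3
                }
            }
        }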

  • Creating a C++ client app for some abstract windows server - how to manage TCP connection speed to the server?

    - by Kabumbus
    So we have some server with some address, port and IP; we are developing that server, so we can implement on it whatever we need to help. What are standard/best practices for data transfer speed management between a C++ Windows client app and the server (C++)? My main point is how to find out how much data can be uploaded/downloaded from/to the client over his slow network to my relatively super fast server. (I need it to set up the bit rate of his live audio/video stream.) My try at explaining point 3: We do not care how fast our server is. It is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server. But he has, say, ADSL with 500 kb/s of outgoing traffic. Also he uses some ICQ or whatsoever, so he has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can provide. We develop the server side and the client side. We need a way of finding out how much the user can upload per second currently (so the value can change dynamically over time).
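
    One common building block - sketched below, with arbitrary constants - is to time each send of known size and keep an exponentially weighted moving average, then feed that estimate to the encoder's bitrate setting. Note the timing is only meaningful if the sends are application-paced or acknowledged by the server; a plain send() into a non-full kernel buffer returns immediately and hides the true rate:

        #include <chrono>
        #include <cstddef>

        class UplinkEstimator {
            double bps_ = 0.0;  // smoothed estimate, bits per second
        public:
            void onChunkSent(std::size_t bytes, std::chrono::duration<double> elapsed) {
                double instant = bytes * 8.0 / elapsed.count();
                bps_ = (bps_ == 0.0) ? instant : 0.8 * bps_ + 0.2 * instant;
            }
            double bitsPerSecond() const { return bps_; }
            // leave headroom so other traffic (ICQ etc.) doesn't starve the stream
            double suggestedVideoBitrate() const { return bps_ * 0.75; }
        };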

  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in python that is included below. This serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized, arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case the ratio of the close values is greater than 1.0001 and less than 1.5 and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native python? Alternatively, is there a better way to do this in general (e.g. will it be much faster to write this loop in C++ and use weave)?

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
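
    A sketch of vectorizing the inner loop (same Python 2 / NumPy setup as above): a cumulative sum turns every windowed volume total into an O(1) subtraction, and the j-loop collapses into array operations:

        import numpy as np

        csum = np.concatenate(([0.0], np.cumsum(volume)))  # csum[m] == volume[:m].sum()

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            j = np.arange(i + 1, i + INTERVAL_LENGTH)
            ret = close[j] / close[i]
            vol = csum[j + 1] - csum[i + 1]                # == volume[i+1:j+1].sum() per j
            hits = np.where((ret > 1.0001) & (ret < 1.5) & (vol > 100))[0]
            for k in hits:
                results.append([i, j[k], ret[k], vol[k]])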

  • How to hold a queue of messages and have a group of working threads without polling?

    - by Mark
    I have a workflow that I want to look something like this:

                                                  / Worker 1 \
        =Request Channel= - [Holding Queue|||] -    Worker 2   - =Response Channel=
                                                  \ Worker 3 /

    That is:

        - Requests come in and enter a FIFO queue
        - Identical workers then pick up tasks from the queue
        - At any given time any worker may work on only one task
        - When a worker is free and the holding queue is non-empty, the worker should immediately pick up another task
        - When tasks are complete, a worker places the result on the Response Channel

    I know there are QueueChannels in Spring Integration, but these channels require polling (which seems suboptimal). In particular, if a worker can be busy, I'd like the worker to be busy. Also, I've considered avoiding the queue altogether and simply letting tasks round-robin to all workers, but it's preferable to have a single waiting line, as some tasks may be completed faster than others. Furthermore, I'd like insight into how many jobs are remaining (which I can get from the queue) and the ability to cancel all or particular jobs. How can I implement this message queuing/work distribution pattern while avoiding polling? Edit: It appears I'm looking for the Message Dispatcher pattern - how can I implement this using Spring/Spring Integration?
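
    Outside Spring Integration, plain java.util.concurrent already implements this dispatcher shape without polling: the pool's worker threads block on the internal queue's take() rather than poll it. A sketch (task and result types are placeholders):

        import java.util.concurrent.*;

        public class DispatcherSketch {
            public static void main(String[] args) {
                // the holding queue lives inside the executor; getQueue() exposes its depth
                ThreadPoolExecutor workers =
                    (ThreadPoolExecutor) Executors.newFixedThreadPool(3);
                final BlockingQueue<String> responseChannel =
                    new LinkedBlockingQueue<String>();

                for (int i = 0; i < 10; i++) {
                    final int task = i;
                    workers.submit(new Runnable() {
                        public void run() {
                            responseChannel.add("result of task " + task);
                        }
                    });
                }
                System.out.println("jobs waiting: " + workers.getQueue().size());
                workers.shutdown();  // the submitted Futures could also be cancelled one by one
            }
        }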

  • Need help tuning a SQL statement

    - by jeffself
    I've got a table that has two fields (custno and custno2) that need to be searched from a query. I didn't design this table, so don't scream at me. :-) I need to find all records where either custno or custno2 matches the value returned from a query on the same table based on a titleno. In other words, the user types in 1234 for the titleno. My query searches the table to find the custno associated with the titleno. It also looks for the custno2 for that titleno. Then it needs to do a search on the same table for all other records that have either the custno or custno2 returned in the previous search in the custno or custno2 fields of those other records. Here is what I've come up with:

        SELECT BILLYR, BILLNO, TITLENO, VINID, TAXPAID, DUEDATE, DATEPIF, PROPDESC
        FROM TRCDBA.BILLSPAID
        WHERE CUSTNO IN (select custno from trcdba.billspaid where titleno = '1234'
                         union
                         select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')
           OR CUSTNO2 IN (select custno from trcdba.billspaid where titleno = '1234'
                          union
                          select custno2 from trcdba.billspaid where titleno = '1234' and custno2 != '')

    The query takes about 5-10 seconds to return data. Can it be rewritten to work faster?
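
    One rewrite worth trying - a sketch, assuming the DBMS supports common table expressions - computes the customer-number set once instead of four times; indexes on titleno, custno and custno2 are usually what decide the runtime here:

        WITH custs AS (
            SELECT custno FROM trcdba.billspaid WHERE titleno = '1234'
            UNION
            SELECT custno2 FROM trcdba.billspaid WHERE titleno = '1234' AND custno2 != ''
        )
        SELECT b.BILLYR, b.BILLNO, b.TITLENO, b.VINID,
               b.TAXPAID, b.DUEDATE, b.DATEPIF, b.PROPDESC
        FROM trcdba.billspaid b
        WHERE b.CUSTNO  IN (SELECT custno FROM custs)
           OR b.CUSTNO2 IN (SELECT custno FROM custs)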

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate asp.net application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = ...;
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy =
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (asp.net app running locally from VS2008, cache cluster running on a separate Windows Server 2008 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx ...but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via Web Platform Installer, I think. (Note: the IIS box running asp.net isn't in the cluster - it is just the client.) Any insights gratefully received!

  • Uniquing with Existing Core Data Entities

    - by warrenm
    I'm using Core Data to store lots (1000s) of items. A pair of properties on each item is used to determine uniqueness, so when a new item comes in, I compare it against the existing items before inserting it. Since the incoming data is in the form of an RSS feed, there are often many duplicates, and the cost of the uniquing step is O(N^2), which has become significant. Right now, I create a set of the existing items before iterating over the list of (possibly) new items. My theory is that on the first iteration, all the items will be faulted in, and assuming we aren't pressed for memory, most of those items will remain resident over the course of the iteration. I see my options thusly:

        1. Use string comparison for uniquing, iterating over all "new" items and comparing to all existing items (current approach)
        2. Use a predicate to filter the set of existing items against the properties of the "new" items
        3. Use a predicate with Core Data to determine the uniqueness of each "new" item (without retrieving the set of existing items)

    Is option 3 likely to be faster than my current approach? Do you know of a better way?
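
    A variant of option 3 that avoids one store round-trip per item is to batch the check: fetch, in a single request, whichever incoming keys already exist. A sketch (entity and attribute names are hypothetical, and the two-property uniqueness would need a second predicate clause or a derived compound key):

        // incomingKeys: an NSArray of the candidate items' uniquing values
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"Item"
                                       inManagedObjectContext:context]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"guid IN %@", incomingKeys]];

        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];

        // anything in incomingKeys but not in matches is genuinely new,
        // so one fetch replaces N per-item uniqueness queries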

  • A more efficient millisecond conversion method?

    - by cube
    I am currently using this method to convert milliseconds to min:sec:1/100sec. However, it does not seem to be efficient at all. Would anyone know of a faster, more efficient and optimized way of accomplishing the same?

        mills.prototype.formatTime = function(time) {
            var elapsedTime = (time * 1000);

            // Minutes
            var elapsedM = (elapsedTime / 60000) | 0;
            var remaining = elapsedTime - (elapsedM * 60000);
            // add a leading zero if it's a single digit number
            if (elapsedM < 10) { elapsedM = "0" + elapsedM; }

            // Seconds
            var elapsedS = ((remaining / 1000) | 0);
            remaining -= (elapsedS * 1000);
            // add leading zero
            if (elapsedS < 10) { elapsedS = "0" + elapsedS; }

            // Hundredths
            var elapsedFractions = ((remaining / 10) | 0);
            if (elapsedFractions < 10) { elapsedFractions = "0" + elapsedFractions; }

            // display results nicely
            var time_data = elapsedM + ":" + elapsedS + ":" + elapsedFractions;
            //return time_data;
            return [time_data, elapsedM, elapsedS, elapsedFractions];
        };
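
    Efficiency-wise the original is already just integer arithmetic, so most of the available win is tidiness rather than speed. A compact sketch with the same input convention (seconds in, scaled to milliseconds) and the same outputs:

        mills.prototype.formatTime = function(time) {
            var ms = (time * 1000) | 0;
            var pad = function(n) { return (n < 10 ? "0" : "") + n; };
            var m = pad((ms / 60000) | 0);
            var s = pad(((ms / 1000) | 0) % 60);
            var f = pad(((ms / 10) | 0) % 100);   // hundredths
            return [m + ":" + s + ":" + f, m, s, f];
        };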

  • STL vector performance

    - by iAdam
    The STL vector class stores a copy of the object, using the copy constructor, each time I call push_back. Wouldn't that slow down the program? I could have a custom linked-list kind of class which deals with pointers to objects. Though it would not have some of the benefits of the STL, it still should be faster. See this code below:

        #include <vector>
        #include <iostream>
        #include <cstring>
        using namespace std;

        class myclass {
        public:
            char* text;

            myclass(const char* val) {
                text = new char[10];
                strcpy(text, val);
            }

            myclass(const myclass& v) {
                cout << "copy\n";
                text = new char[10];   // copy data
                strcpy(text, v.text);
            }
        };

        int main() {
            vector<myclass> list;
            myclass m1("first");
            myclass m2("second");

            cout << "adding first...";
            list.push_back(m1);
            cout << "adding second...";
            list.push_back(m2);

            cout << "returning...";
            myclass& ret1 = list.at(0);
            cout << ret1.text << endl;
            return 0;
        }

    Its output comes out as:

        adding first...copy
        adding second...copy
        copy

    The output shows the copy constructor is called every time I add a value - and twice for the second push_back. Does it have any effect on performance, especially when we have larger objects?
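
    If the copies themselves are the worry, two standard mitigations - sketched below - are reserving capacity up front (which eliminates the extra reallocation copy seen on the second push_back) and storing pointers when elements are expensive to duplicate:

        vector<myclass> list2;
        list2.reserve(2);        // capacity set once: no reallocation copies
        list2.push_back(m1);     // one copy per push_back remains
        list2.push_back(m2);

        vector<myclass*> list3;  // pointer storage: no element copies at all,
        list3.push_back(&m1);    // but object lifetimes must be managed by hand
        list3.push_back(&m2);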

  • Protecting my app's security from disassembling

    - by sandis
    So I recently tested disassembling one of my android apps, and to my horror I discovered that the code was quite readable. Even worse, all my variable names were intact! I thought those would be compressed to something unreadable at compile time. The app is triggered to expire after a certain time. However, now it was trivial for me to find my function named checkIfExpired() and find the variable "expired". Is there any good way of making it harder for a potential hacker to mess with my app? Before someone states the obvious: yes, it is security through obscurity. But obviously this is my only option, since the user will always have access to all my code. This is the same for all apps. The details of my deactivation-thingy are unimportant; the point is that I don't want a disassembler to understand some of the things I do. Side questions: Why are the variable names not compressed? Could it be the case that my program would run faster if I stopped using really long variable names, as is my habit?
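
    The standard tool for this on Android is ProGuard, which renames classes, fields and methods to meaningless short identifiers - which also answers the side questions: names survive unless you strip them, and their length has no effect on execution speed, since names are only used when resolving symbols, not on each call. A minimal configuration sketch (the exact keep rules depend on the project):

        # proguard.cfg - obfuscate, but keep the entry points Android needs
        -dontusemixedcaseclassnames
        -optimizationpasses 3

        -keep public class * extends android.app.Activity
        -keep public class * extends android.app.Service
        -keep public class * extends android.content.BroadcastReceiver
        -keep public class * extends android.content.ContentProvider

        # after this, checkIfExpired() becomes something like a(), and the
        # "expired" field becomes a meaningless single letter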
