Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • Why Switch/Case and not If/Else If?

    - by OB OB
    This question is mainly aimed at C/C++, but I guess other languages are relevant as well. I can't understand why switch/case is still being used instead of if/else if. It seems to me much like using gotos, and it results in the same sort of messy code, while the same results could be achieved with if/else if in a much more organized manner. Still, I see these blocks around quite often. A common place to find them is near a message loop (WndProc...), which is also where they wreak the heaviest havoc: variables are shared across the entire block, even when that isn't appropriate (and they can't be initialized inside it). Extra attention has to be paid to not dropping breaks, and so on... Personally, I avoid using them, and I wonder whether I'm missing something. Are they more efficient than if/else? Are they carried on by tradition?
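
    As an illustration of the scoping complaint above, a minimal C sketch (the message values and the handler are made up, not from the question): giving each case its own braces scopes its locals to that case, and every case still needs an explicit break.

        #include <stdio.h>

        /* Hypothetical message handler: a block per case keeps its variables local. */
        void handle(int msg)
        {
            switch (msg) {
            case 1: {
                int count = 0;                        /* visible only inside this case */
                printf("first message, count=%d\n", count);
                break;
            }
            case 2: {
                double ratio = 0.5;
                printf("second message, ratio=%f\n", ratio);
                break;
            }
            default:
                printf("unhandled message %d\n", msg);
                break;
            }
        }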

    Read the article

  • cached schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform" and after it sank in I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought: why not use a hash to cache the results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books, and in general just better circulated? On first glance I think the cached version should be more efficient.
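
    For comparison, a minimal sketch of the standard Schwartzian Transform itself (reusing foo and @unsorted_list from the code above): the first map step already calls foo() exactly once per element, so a cache only pays off when the list contains many duplicate values.

        my @sorted_list =
            map  { $_->[0] }                 # unwrap the original values
            sort { $a->[1] <=> $b->[1] }     # compare the precomputed keys
            map  { [ $_, foo($_) ] }         # compute foo() once per element
            @unsorted_list;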

    Read the article

  • Jquery load() help

    - by mtwallet
    Hi. I am creating a portfolio page for my personal site. I have a slider with approx. 20 anchors that link to projects I have worked on; each one contains a client logo that, when clicked, should load some HTML content and then fade that content into a container div on the same page. I have been advised to use the jQuery method load(), which seems straightforward. The question I have is: do I have to repeat the following code for each of the 20 anchors, since the URL is different for each one, or is there a more efficient way?

        $('a#project1').click(function() {
            $('#work').load('ajax/project1.html');
        });

    Also, would I have to use the unload() method first to ensure the div I am loading into is empty? Many thanks in advance.
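
    One common way to avoid twenty near-identical handlers is a single handler that derives the URL from the clicked anchor. A minimal jQuery sketch, assuming each anchor carries a shared class (project here) and an id that matches its HTML fragment name; those names are assumptions, not from the question. Since load() replaces the container's existing contents, there is no need to empty it first.

        $('a.project').click(function (event) {
            event.preventDefault();
            var url = 'ajax/' + this.id + '.html';   // e.g. id="project1" -> ajax/project1.html
            $('#work').hide().load(url, function () {
                $(this).fadeIn();                    // fade the freshly loaded content in
            });
        });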

    Read the article

  • git-diff in another directory

    - by ABach
    I'm currently writing a little zsh function that checks all of my git repositories to see whether they're dirty or not and then prints out the ones that need a commit. Thus far, I've figured out that the quickest way to determine a git repository's clean/dirty status is via git-diff and git-ls-files:

        if ! git diff --quiet || git ls-files --others --exclude-standard; then
            state=":dirty"
        fi

    I have two questions for you folks: Does anyone know of a quicker, more efficient way to check for file changes/additions in a git repo? I want my zsh function to be handed a file path (say ~/Code/git-repos/) and check all of the repositories in it. Is there a way to do this without having to cd into each directory and run those commands? Something like git-diff --quiet --git-dir="~/Code/git-repos/..." would be fantastic. Thanks! :)
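
    A minimal zsh sketch of the second part, assuming every immediate subdirectory of the given path is a non-bare repository (the function and variable names are made up): git's global --git-dir and --work-tree options avoid the cd, and the ls-files output is tested explicitly so that untracked files actually mark the repo dirty.

        check_repos() {
            local base=$1 repo
            for repo in $base/*(/); do                   # zsh glob qualifier (/) = directories only
                if ! git --git-dir=$repo/.git --work-tree=$repo diff --quiet ||
                    [[ -n "$(git --git-dir=$repo/.git --work-tree=$repo ls-files --others --exclude-standard)" ]]; then
                    print "${repo:t} is dirty"           # :t = basename of the repo path
                fi
            done
        }

    Calling check_repos ~/Code/git-repos would then print one line per dirty repository.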

    Read the article

  • Agile Approach for WCM

    - by cameron.f.logan
    Can anyone provide me with advice, opinions, or experience with using an agile methodology to deliver an enterprise-scale Web Content Management system (e.g., Interwoven TeamSite, Tridion)? My current opinion is that to implement a CM system there is a certain--relatively high--amount of upfront work that needs to happen to make sure the system is going to be scalable and efficient for future projects over the multi-year lifespan a WCM is expected to have. This suggests a hybrid approach at best, if not a more waterfall-like approach. I'm really interested to learn what approaches others have taken. Thanks.

    Read the article

  • C# Getting Just Date From Timestamp

    - by Soo
    If I have a timestamp in the form yyyy-mm-dd hh:mm:ss:mmm, how can I extract just the date from it? For instance, if a timestamp reads "2010-05-18 08:36:52:236", what is the best way to get just 2010-05-18 from it? What I'm trying to do is isolate the date portion of the timestamp and define a custom time for it to create a new timestamp. Is there a more efficient way to define the time of the timestamp without first taking out the date and then adding a new time?
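
    A minimal C# sketch of the parse-then-recombine approach (the 17:30 replacement time is just an example): ParseExact handles the colon before the milliseconds, .Date drops the time-of-day, and adding a TimeSpan builds the new timestamp.

        using System;
        using System.Globalization;

        class TimestampDemo
        {
            static void Main()
            {
                DateTime stamp = DateTime.ParseExact(
                    "2010-05-18 08:36:52:236",
                    "yyyy-MM-dd HH:mm:ss:fff",            // note the ':' before the milliseconds
                    CultureInfo.InvariantCulture);

                DateTime dateOnly = stamp.Date;                         // 2010-05-18 00:00:00
                DateTime rebuilt  = dateOnly + new TimeSpan(17, 30, 0); // same date, custom time

                Console.WriteLine(dateOnly.ToString("yyyy-MM-dd"));
                Console.WriteLine(rebuilt);
            }
        }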

    Read the article

  • localtime_r supposed to be thread safe, but causing errors in Valgrind DRD

    - by Nik
    I searched Google as much as I could but I couldn't find any good answers to this. localtime_r is supposed to be a thread-safe function for getting the system time. However, when checking my application with Valgrind --tool=drd, it consistently tells me that there is a data race condition on this function. Are the common search results lying to me, or am I just missing something? It doesn't seem efficient to surround each localtime_r call with a mutex, especially if it is supposed to be thread safe in the first place. Here is how I'm using it:

        timeval handlerTime;
        gettimeofday(&handlerTime, NULL);
        tm handlerTm;
        localtime_r(&handlerTime.tv_sec, &handlerTm);

    Any ideas?

    Read the article

  • Update tableview instantly as data pushed in core data iphone

    - by user336685
    I need to update the table view as soon as content is pushed into the Core Data database. For this, AppDelegate.m contains the following code:

        NSManagedObjectContext *moc = [self managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"FeedItem" inManagedObjectContext:moc]];

        // for loop
        // push data into Core Data & then save context
        [moc save:&error];
        ZAssert(error == nil, @"Error saving context: %@", [error localizedDescription]);
        // for loop ends

    This code triggers the following code in RootviewController.m:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [[self tableView] beginUpdates];
        }

    But this updates the table view only at the end of the for loop; the table does not get updated immediately after each push into the DB. I tried the following code but that didn't work:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            // In the simplest, most efficient, case, reload the table view.
            [self.tableView reloadData];
        }

    I have been stuck on this problem for several days. Please help. Thanks in advance.

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I have also implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of CPU power, which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which makes the application unresponsive for a few seconds. I am running it on a 64-bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with experience using this tool, or Kinect in general, has made it more efficient and less of a performance hog. Right now I have removed the sections which display the RGB video and the depth video, but even doing that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • Sequence Generators in T-SQL

    - by PaoloFCantoni
    We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values. We use the following Oracle trigger snippet to determine what to do with the surrogate key on insert:

        IF :NEW.SurrogateKey IS NULL THEN
            SELECT SurrogateKey_SEQ.NEXTVAL INTO :NEW.SurrogateKey FROM DUAL;
        END IF;

    If the supplied surrogate key is null, then get a value from the nominated sequence, else pass the supplied surrogate key through to the row. I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none of them uses the notion of a sequence generator the way Oracle and other SQL-92 compliant DBs do. Anybody know of a really efficient way to do this in SQL Server T-SQL? BTW, we're using SQL Server 2008 if that's any help. TIA, Paolo
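
    SQL Server 2008 has no CREATE SEQUENCE (that only arrives in SQL Server 2012), so one workaround is a one-row counter table that is incremented atomically. A minimal T-SQL sketch; the table name SurrogateKeySequence and the variable @SuppliedKey are made up for illustration.

        CREATE TABLE dbo.SurrogateKeySequence (NextValue BIGINT NOT NULL);
        INSERT INTO dbo.SurrogateKeySequence (NextValue) VALUES (0);
        GO

        DECLARE @SuppliedKey BIGINT = NULL;   -- stands in for the caller-supplied key
        DECLARE @NewKey BIGINT;

        IF @SuppliedKey IS NULL
            -- increment and read back in one atomic statement
            UPDATE dbo.SurrogateKeySequence
            SET @NewKey = NextValue = NextValue + 1;
        ELSE
            SET @NewKey = @SuppliedKey;

        SELECT @NewKey AS SurrogateKey;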

    Read the article

  • How to improve performance

    - by Ram
    Hi, in one of my applications I am dealing with graphics objects. I am using the open source GPC library to clip/merge two shapes. To improve accuracy I am sampling (adding multiple points between two edges of) the existing shapes. But before displaying the merged shape I need to remove all of those points again. I am not able to find an efficient algorithm that will remove all points lying between two edges that have the same slope, with minimum CPU utilization. Currently all points are of type PointF. Any pointer on this would be a great help.
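
    A minimal C# sketch of one way to do the cleanup (this is an assumption, not part of the GPC API): walk the closed point list once and drop every point whose neighbours are collinear with it, using a cross-product test so there is no division and no special case for vertical edges.

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        static class PolygonSimplifier
        {
            // Returns the closed polygon with collinear intermediate points removed.
            public static List<PointF> RemoveCollinear(IList<PointF> pts, float epsilon)
            {
                var result = new List<PointF>();
                int n = pts.Count;
                for (int i = 0; i < n; i++)
                {
                    PointF prev = pts[(i - 1 + n) % n];
                    PointF cur  = pts[i];
                    PointF next = pts[(i + 1) % n];

                    // z-component of (cur - prev) x (next - cur); ~0 means the three points are collinear
                    float cross = (cur.X - prev.X) * (next.Y - cur.Y)
                                - (cur.Y - prev.Y) * (next.X - cur.X);

                    if (Math.Abs(cross) > epsilon)
                        result.Add(cur);           // keep only genuine corners
                }
                return result;
            }
        }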

    Read the article

  • Make xargs execute the command once for each line of input

    - by Readonly
    How can I make xargs execute the command exactly once for each line of input given? Its default behavior is to chunk the lines and execute the command once, passing multiple lines to each instance. From http://en.wikipedia.org/wiki/Xargs:

        find /path -type f -print0 | xargs -0 rm

    In this example, find feeds the input of xargs with a long list of file names. xargs then splits this list into sublists and calls rm once for every sublist. This is more efficient than this functionally equivalent version:

        find /path -type f -exec rm '{}' \;

    I know that find has the "exec" flag. I am just quoting an illustrative example from another resource.
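
    For reference, a minimal sketch of the relevant xargs options: -n 1 limits each invocation to a single argument, while -L 1 consumes at most one input line per invocation (useful when a line contains spaces that should stay together).

        # one rm per file instead of one rm per batch
        find /path -type f -print0 | xargs -0 -n 1 rm

        # one invocation per input line
        printf '%s\n' "first line" "second line" | xargs -L 1 echo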

    Read the article

  • Power Analysis in [R] for Two-Way Anova

    - by Thomas
    I am trying to calculate the necessary sample size for a 2x2 factorial design. I have two questions. 1) I am using the package pwr and the one-way ANOVA function to calculate the necessary sample size with the following code:

        pwr.anova.test(k = , n = , f = , sig.level = , power = )

    However, I would like to look at two-way ANOVA, since this is more efficient at estimating group means than one-way ANOVA. There is no two-way ANOVA function that I could find. Is there a package or routine in [R] to do this? 2) Moreover, am I safe in assuming that since I am using a one-way ANOVA power calculation, the sample size will be more conservative (i.e. larger)?

    Read the article

  • Minimizing SQL queries using join with one-to-many relationship

    - by Brian
    So let me preface this by saying that I'm not an SQL wizard by any means. What I want to do is simple as a concept, but has presented me with a small challenge when trying to minimize the number of database queries I'm performing. Let's say I have a table of departments. Within each department is a list of employees. What is the most efficient way of listing all the departments and which employees are in each department? So for example if I have a department table with:

        id  name
        1   sales
        2   marketing

    And a people table with:

        id  department_id  name
        1   1              Tom
        2   1              Bill
        3   2              Jessica
        4   1              Rachel
        5   2              John

    What is the best way to list all departments and all employees for each department, like so:

        Sales
            Tom
            Bill
            Rachel
        Marketing
            Jessica
            John

    Pretend both tables are actually massive. (I want to avoid getting a list of departments and then looping through the result, doing an individual query for each department.) Think similarly of selecting the statuses/comments in a Facebook-like system, where statuses and comments are stored in separate tables.
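
    A minimal SQL sketch of the usual single-query approach: join the two tables once, order by department, and let the application notice the department change while looping over the rows. The LEFT JOIN keeps departments that currently have no employees; drop the LEFT if those should be skipped.

        SELECT d.name AS department, p.name AS employee
        FROM department AS d
        LEFT JOIN people AS p ON p.department_id = d.id
        ORDER BY d.name, p.name;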

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linking it to error log databases and a performance counter database to compare usage with errors, and so on. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5 GB database with around 10 million records. This is making any queries against this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Is either of these suitable for reporting queries: COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Best data-structure to use for two ended sorted list

    - by fmark
    I need a collection data-structure that can do the following:

      - Be sorted
      - Allow me to quickly pop values off the front and back of the list
      - Remain sorted after I insert a new value
      - Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
      - Thread-safety is not required
      - Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash-table for this though)

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate less than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations in Python? I would really like to avoid writing this code myself.
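
    A minimal Python sketch of one way to cover most of those points with the standard library (the class and method names are made up): bisect keeps the list sorted on a user-supplied key at insert time, and both ends can be popped; inserts still shift elements, which is O(n) but usually fine below a couple of hundred thousand items.

        import bisect
        import itertools


        class TwoEndedSortedList:
            """Stays sorted on key(item); supports pops from either end."""

            def __init__(self, key=lambda item: item):
                self._key = key
                self._seq = itertools.count()   # tie-breaker so equal keys never compare items
                self._data = []                 # list of (key, seq, item), kept sorted

            def insert(self, item):
                bisect.insort(self._data, (self._key(item), next(self._seq), item))

            def pop_front(self):
                return self._data.pop(0)[2]     # smallest key

            def pop_back(self):
                return self._data.pop()[2]      # largest key

            def __len__(self):
                return len(self._data)


        queue = TwoEndedSortedList(key=lambda t: t[1])   # sort tuples on their second field
        queue.insert(("a", 3))
        queue.insert(("b", 1))
        print(queue.pop_front())                         # ('b', 1)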

    Read the article

  • Java multi-threading - what is the best way to monitor the activity of a number of threads?

    - by MalcomTucker
    I have a number of threads that are performing a long-running task. These threads themselves have child threads that do further subdivisions of the work. What is the best way for me to track the following:

      - How many total threads my process has created
      - What the state of each thread currently is
      - What part of my process each thread has currently got to

    I want to do it in as efficient a way as possible, and once threads finish I don't want any references to them hanging around, because I need to be freeing up memory as early as possible. Any advice?
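
    As a sketch of one low-overhead approach (the class and method names are illustrative, not a standard API): each task reports its own phase into a ConcurrentHashMap and removes its entry when it finishes, so the monitor holds no reference to completed work, while a counter records how many tasks were ever started.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.atomic.AtomicInteger;

        public class TaskMonitor {
            private final Map<String, String> phases = new ConcurrentHashMap<String, String>();
            private final AtomicInteger started = new AtomicInteger();

            public void submit(ExecutorService pool, final Runnable work) {
                final String name = "task-" + started.incrementAndGet();
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            phases.put(name, "running");   // a task updates its own phase as it goes
                            work.run();
                        } finally {
                            phases.remove(name);           // drop the entry so nothing is retained
                        }
                    }
                });
            }

            public int totalStarted() { return started.get(); }

            public Map<String, String> currentPhases() { return phases; }
        }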

    Read the article

  • How to do a range query

    - by Walter H
    I have a bunch of numeric timestamps that I want to check against a range to see if they match a particular range of dates, basically like a BETWEEN .. AND .. match in SQL. The obvious data structure would be a B-tree, but while there are a number of B-tree implementations on CPAN, they only seem to implement exact matching. Berkeley DB has the same problem; there are B-tree indices, but no range matching. What would be the simplest way to do this? I don't want to use an SQL database unless I have to. Clarification: I have a lot of these, so I'm looking for an efficient method, not just a grep over an array.
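
    A minimal Perl sketch of one simple approach (the sub names are made up): keep the timestamps in a sorted array, binary-search the lower bound of the range, then walk forward until the upper bound is passed, so each query costs O(log n) plus the number of matches rather than a full grep.

        use strict;
        use warnings;

        # first index whose value is >= $target, in an ascending-sorted array
        sub lower_bound {
            my ($aref, $target) = @_;
            my ($lo, $hi) = (0, scalar @$aref);
            while ($lo < $hi) {
                my $mid = int(($lo + $hi) / 2);
                if ($aref->[$mid] < $target) { $lo = $mid + 1 } else { $hi = $mid }
            }
            return $lo;
        }

        sub range_query {
            my ($aref, $from, $to) = @_;
            my @hits;
            for (my $i = lower_bound($aref, $from); $i < @$aref && $aref->[$i] <= $to; $i++) {
                push @hits, $aref->[$i];
            }
            return @hits;
        }

        # usage (assuming @timestamps is already sorted ascending):
        #   my @in_range = range_query(\@timestamps, $from, $to);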

    Read the article

  • Synchronizing mobile phone contacts with contacts from social networks

    - by Pentium10
    I retrieve a JSON list of contacts from a social network site. It contains firstname, lastname, displayname, data1, data2, etc. What is an efficient way to quickly look up my local phone contacts database and "match" them based on their name? Since there are firstname, lastname and displayname, this can vary. What do you think, how can the best match be achieved? Also, how do I make sure I don't scan the whole database for each JSON item? I want to avoid having JSON_COUNT x MOBILE COUNT steps.
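
    A minimal Java sketch of the usual way to avoid the JSON_COUNT x MOBILE COUNT blow-up (the class and the normalisation rule are assumptions): index the local contacts once by a normalised name, then each JSON contact costs a single hash lookup; fuzzier matching on firstname/lastname vs. displayname would layer on top of this.

        import java.util.HashMap;
        import java.util.Locale;
        import java.util.Map;

        public class ContactMatcher {
            private final Map<String, Long> byName = new HashMap<String, Long>();

            private static String normalise(String first, String last, String display) {
                String name = (display != null && display.length() > 0)
                        ? display
                        : first + " " + last;
                return name.trim().toLowerCase(Locale.US).replaceAll("\\s+", " ");
            }

            // Call once per local contact while iterating the phone's contacts database.
            public void indexLocal(long contactId, String first, String last, String display) {
                byName.put(normalise(first, last, display), contactId);
            }

            // Call once per JSON contact: a single O(1) lookup instead of a database scan.
            public Long match(String first, String last, String display) {
                return byName.get(normalise(first, last, display));
            }
        }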

    Read the article

  • Keeping iPhone application in sync with GWT application.

    - by Reflog
    Hi, I'm working on an iPhone application that should work in offline and online modes. In its online mode it's supposed to feed all the information the user enters to a web service backed by GWT/GAE. In its offline mode it's supposed to store the information locally, and when a connection is available, sync it up to the web service. Currently my plan is as follows:

      - Provide a connection between the app and the web service using Protobuffers for efficient over-the-wire communication
      - Work with the local DB using Core Data
      - Poll the network status, and when available sync the database and keep some sort of local-db-to-remote-db key synchronization

    The question is: am I heading in the right direction? Are there standard patterns for implementing this? Maybe someone can point me to an open-source application that works in a similar fashion? I am really new to iPhone coding, and would be very glad to hear any suggestions. Thanks

    Read the article

  • Efficiency of Java code with primitive types

    - by super89
    Hello! I want to ask which piece of code is more efficient in Java.

    Code 1:

        void f() {
            for (int i = 0; i < 99999; i++) {
                for (int j = 0; j < 99999; j++) {
                    // Some operations
                }
            }
        }

    Code 2:

        void f() {
            int i, j;
            for (i = 0; i < 99999; i++) {
                for (j = 0; j < 99999; j++) {
                    // Some operations
                }
            }
        }

    My teacher said that the second is better, but I can't agree with that opinion.

    Read the article

  • Is there a SortedList<T> class in .net ? (not SortedList<Key,Value> which is actually a kind of Sort

    - by Brann
    I need to sort some objects according to their contents (in fact according to one of their properties, which is NOT the key and may be duplicated between different objects). .NET provides two classes (SortedDictionary and SortedList), and both are some kind of dictionary. The only difference (AFAIK) is that SortedDictionary uses a binary tree to maintain its state whereas SortedList does not and is accessible via an index. I could achieve what I want using a List, and then using its Sort() method with a custom implementation of IComparer, but it wouldn't be time-efficient, as I would sort the whole List each time I insert a new object, whereas a good SortedList would just insert the item at the right position. What I need is a SortedList class with a RefreshPosition(int index) method to move only the changed (or inserted) object rather than resorting the whole list each time an object inside changes. Am I missing something obvious?
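
    A minimal C# sketch of the insert-in-place idea (SimpleSortedList is illustrative, not a framework type): List<T>.BinarySearch finds the insertion point using the supplied IComparer<T>, so adding an item costs a binary search plus one Insert rather than a full re-sort. A RefreshPosition-style move could likewise remove the item at its old index and re-insert it where BinarySearch says it now belongs.

        using System.Collections.Generic;

        public class SimpleSortedList<T>
        {
            private readonly List<T> _items = new List<T>();
            private readonly IComparer<T> _comparer;

            public SimpleSortedList(IComparer<T> comparer) { _comparer = comparer; }

            public void Add(T item)
            {
                int index = _items.BinarySearch(item, _comparer);
                if (index < 0) index = ~index;   // BinarySearch returns the bitwise complement
                _items.Insert(index, item);      // of the insertion point when no exact match exists
            }

            public T this[int i] { get { return _items[i]; } }
            public int Count     { get { return _items.Count; } }
        }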

    Read the article

  • Task management algorithm in C#

    - by silverwizz
    Hi guys, I am looking for efficient task management for C#. What I mean by task management is executing each task at a pre-defined interval. Example:

      - task a needs to be run every 1 min
      - task b needs to be run every 3 mins
      - task c needs to be run every 5 mins

    These tasks can be added and removed at arbitrary times... And the tasks I mentioned can number 100,000 or more... Each task will be executed forever until it is removed... Are you guys familiar with this kind of algorithm? I am thinking of implementing it in either C# or PHP. Thanks
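
    A minimal C# sketch of one common design (the type names are made up, and task removal is left out): tasks live in a SortedDictionary keyed by their next due time, so even with 100,000+ tasks only the ones actually due are touched; a single loop pops whatever is due, runs it, and reschedules it.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        class ScheduledTask
        {
            public string Name;
            public TimeSpan Interval;
            public Action Work;
        }

        class Scheduler
        {
            // earliest due time first; the list handles tasks due at the same instant
            private readonly SortedDictionary<DateTime, List<ScheduledTask>> _due =
                new SortedDictionary<DateTime, List<ScheduledTask>>();
            private readonly object _lock = new object();

            public void Add(ScheduledTask task)
            {
                lock (_lock) { Enqueue(task, DateTime.UtcNow + task.Interval); }
            }

            private void Enqueue(ScheduledTask task, DateTime when)
            {
                List<ScheduledTask> bucket;
                if (!_due.TryGetValue(when, out bucket))
                    _due[when] = bucket = new List<ScheduledTask>();
                bucket.Add(task);
            }

            public void Run()
            {
                while (true)
                {
                    List<ScheduledTask> ready = null;
                    lock (_lock)
                    {
                        if (_due.Count > 0)
                        {
                            var first = _due.First();           // smallest key = earliest due time
                            if (first.Key <= DateTime.UtcNow)
                            {
                                ready = first.Value;
                                _due.Remove(first.Key);
                            }
                        }
                    }
                    if (ready == null) { Thread.Sleep(50); continue; }

                    foreach (var task in ready)
                    {
                        task.Work();                            // run, then put it back on the queue
                        lock (_lock) { Enqueue(task, DateTime.UtcNow + task.Interval); }
                    }
                }
            }
        }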

    Read the article

  • Perform Grouping of Resultset in Code

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?
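
    A minimal sketch, assuming the resultset has been read into a C#/.NET list of rows (the question doesn't name a language, and the Row type is made up): LINQ's GroupBy copes with whatever Category values turn up, with one Sum per column.

        using System.Collections.Generic;
        using System.Linq;

        class Row { public string Category; public int Column2; public decimal Column3; }

        static class Report
        {
            // One output row per distinct Category, with Column2 and Column3 summed.
            public static IEnumerable<Row> Summarise(IEnumerable<Row> rows)
            {
                return rows
                    .GroupBy(r => r.Category)
                    .Select(g => new Row
                    {
                        Category = g.Key,
                        Column2  = g.Sum(r => r.Column2),
                        Column3  = g.Sum(r => r.Column3)
                    });
            }
        }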

    Read the article

  • Code understanding, reverse engineering, best concepts and tools. Java.

    - by core07
    One of the most demanding tasks for any programmer or architect is understanding someone else's code. E.g. I am a contractor, hired to rescue some project very quickly: fix bugs, plan a global refactoring, and therefore I need the most efficient way to understand the code. What is the list of concepts, their priority, and the best tools for this? From what I know: reverse engineering the code to create object models (creating a diagram per package is not so convenient), creating sequence diagrams (the tool connects to the system in debug mode and generates diagrams from runtime). Some visualizing techniques, using some tools to work not just with .java but also with e.g. JPA implementors like Hibernate. Generating a diagram not for the whole codebase, but adding some class and then the classes used by it. Is Sparx Enterprise Architect state of the art in reverse engineering, or far from that? Any other better tools? Ideally the tool would make me understand the code as if I had written it myself :)

    Read the article
