Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.

  • Cache-efficient code

    - by goldenmean
    This may sound like a subjective question, but what I am looking for is specific instances you have encountered. 1) How do you make code cache-effective/cache-friendly (more cache hits, as few cache misses as possible), from both perspectives, the data cache and the program (instruction) cache? In other words, what should one take care of in one's code, in terms of data structures and code constructs, to make it cache-effective? 2) Are there particular data structures one must use or avoid, or particular ways of accessing the members of a structure, to make code cache-effective? 3) Are there program constructs (if, for, switch, break, goto, ...) or code flows (a for inside an if, an if inside a for, etc.) one should follow or avoid in this matter? I am looking forward to hearing individual experiences with writing cache-efficient code in general. It can be any programming language (C, C++, Assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc. More variety will help in understanding the topic deeply.
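
    As a concrete illustration of the data-cache side, here is a minimal C++ sketch (not from the original question): on a row-major array, traversal order alone decides whether each fetched cache line is fully used or mostly wasted.

        #include <cstddef>

        constexpr std::size_t ROWS = 1024, COLS = 1024;

        // Sequential walk: consecutive elements share cache lines, so each
        // line fetched from memory is fully consumed before it is evicted.
        double sum_row_major(const double (&m)[ROWS][COLS]) {
            double s = 0.0;
            for (std::size_t r = 0; r < ROWS; ++r)
                for (std::size_t c = 0; c < COLS; ++c)
                    s += m[r][c];
            return s;
        }

        // Strided walk: jumps COLS * sizeof(double) bytes per step, touching
        // a fresh cache line almost every iteration - far more misses.
        double sum_column_major(const double (&m)[ROWS][COLS]) {
            double s = 0.0;
            for (std::size_t c = 0; c < COLS; ++c)
                for (std::size_t r = 0; r < ROWS; ++r)
                    s += m[r][c];
            return s;
        }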

  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows:

        // DAL
        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post select p;
            return posts;
        }

        // Service
        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        // Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> posts = GetPosts();
            // CODE TO CREATE FEEDS HERE
            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database via GetPosts(), just to extract 5 from it? If each post is, say, 5 KB in the database and there are 1 million records, won't that be 5 GB of RAM used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
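
    A minimal sketch of the usual fix (assuming a LINQ-to-SQL/Entity Framework style provider and a DatePosted column, both hypothetical here): keep composing on the IQueryable so the ordering and Take() are translated into the SQL statement, and only the rows you need are materialized.

        using System.Collections.Generic;
        using System.Linq;

        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> latest = repository.GetPosts()   // still IQueryable<Post>
                .OrderByDescending(p => p.DatePosted)   // hypothetical date column
                .Take(latestCount)                      // becomes TOP/LIMIT in SQL
                .ToList();                              // query runs here: latestCount rows

            // CODE TO CREATE FEEDS HERE (now over only latestCount posts)
            return feeds;
        }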

  • Should I Make These Vectors Classes or Structs in C#

    - by dewald
    I am creating a geometry library in C# and I will need the following immutable types:

        Vector2f (2 floats  -  8 bytes)
        Vector2d (2 doubles - 16 bytes)
        Vector3f (3 floats  - 12 bytes)
        Vector3d (3 doubles - 24 bytes)
        Vector4f (4 floats  - 16 bytes)
        Vector4d (4 doubles - 32 bytes)

    I am trying to determine whether to make them structs or classes. MSDN suggests only using a struct if the size is going to be no greater than 16 bytes. That reference seems to be from 2005. Is 16 bytes still the maximum suggested size? I am sure that using structs for the float vectors would be more efficient than using a class, but what should I do about the double vectors? Should I make them structs as well, for consistency, or should I make them classes?
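
    A minimal sketch of one of the candidates as an immutable struct (modern C# readonly struct syntax; the question itself predates it):

        // 24 bytes of payload, copied by value, no per-vector heap allocation.
        public readonly struct Vector3d
        {
            public readonly double X, Y, Z;

            public Vector3d(double x, double y, double z) { X = x; Y = y; Z = z; }

            public Vector3d Add(Vector3d other) =>
                new Vector3d(X + other.X, Y + other.Y, Z + other.Z);
        }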

  • How to find the right balance between "quick & dirty" and "nice & general" code?

    - by Frank
    This is not a direct programming question, but a little help from the programming community would be appreciated. I am suffering from an over-generalization disease; I can't stop spending valuable time making my code as general and abstract as possible. I could also call it the toolkit/library disease: I tend to turn every programming task into a general problem and try to "write a toolkit" that would work for many similar problems. I know it's a good thing in general, if there is enough time, but sometimes I should be writing a quick prototype and just can't seem to write the quick-and-dirty code that works only for the special case. I often get excited about an idea that makes the code more general and user-configurable, and underestimate the time it takes to actually implement it that way. Does anyone else have this experience? How can I force myself to find the right balance between "quick hack" and "nice solution"?

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
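
    A minimal sketch of the "just scan" approach (the element name and one-line layout are hypothetical assumptions; an XML parser would not need them):

        #!/usr/bin/perl
        use strict;
        use warnings;

        sub grab_value {
            my ($path) = @_;
            open my $fh, '<', $path or die "cannot open $path: $!";
            while (my $line = <$fh>) {
                # assumes e.g. <status>ok</status> near the top of the file
                if ($line =~ m{<status>([^<]*)</status>}) {
                    close $fh;
                    return $1;        # stop reading as soon as we have the value
                }
                last if $. > 100;     # value is near the start; give up quickly
            }
            close $fh;
            return undef;
        }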

  • Remove an element from an XML file using JDOM

    - by Llistes Sugra
    I have a 300 KB XML file with 70 elements in it, and I need to remove one of the root's elements efficiently. What is the best approach? Should I detach the element in memory, save the result, and overwrite the original file by moving it? Is there a better option? I like org.jdom, but any improvement is welcome.
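
    A minimal JDOM sketch (file and element names hypothetical); at 300 KB the whole document fits comfortably in memory, so parse, detach, and rewrite is usually fine:

        import java.io.File;
        import java.io.FileWriter;
        import org.jdom.Document;
        import org.jdom.input.SAXBuilder;
        import org.jdom.output.XMLOutputter;

        public class RemoveElement {
            public static void main(String[] args) throws Exception {
                Document doc = new SAXBuilder().build(new File("data.xml"));

                // detaches the first child of the root with this name, if any
                doc.getRootElement().removeChild("obsoleteEntry");

                try (FileWriter out = new FileWriter("data.xml")) {
                    new XMLOutputter().output(doc, out);
                }
            }
        }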

  • Doxygen, too heavy to maintain?

    - by Phong
    I am currently starting to use doxygen to document my source code. I have noticed that the syntax is very heavy: every time I modify the source code I also need to change the comments, and I really have the impression that I spend too much time updating the comments for every change I make in the source code. Do you have any tips for documenting source code efficiently? Does an editor (or a plugin for an existing editor) exist that supports doxygen in the following ways?

        - automatically track unsynchronized code/comments and warn the programmer about them
        - automatically insert the doxygen comment format (a template with the parameter names in it, for example) for every new item

    PS: I am working on a C/C++ project.
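
    For reference, a typical block of the kind being discussed (a sketch; the function is hypothetical). Every parameter rename forces a matching comment edit, which is exactly the maintenance cost described above:

        #include <stddef.h>

        /**
         * @brief  Copy bytes into a caller-supplied buffer.
         *
         * @param  buf  Destination buffer (must hold at least @p len bytes).
         * @param  len  Number of bytes to copy.
         * @return 0 on success, a negative error code on failure.
         */
        int copy_bytes(char *buf, size_t len);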

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to write a query. I have two tables, one with element information, and another with records related to the elements of the first table. The idea is to get, in the same row, the element information plus the information of several records. The structure could be explained like this:

        table  [ id, name ]
        [1, '1']
        [2, '2']

        table2 [ id, type, value ]
        [1, 1, '2009-12-02']
        [1, 2, '2010-01-03']
        [1, 4, '2010-01-03']
        [2, 1, '2010-01-02']
        [2, 2, '2010-01-02']
        [2, 2, '2010-01-03']
        [2, 3, '2010-01-07']
        [2, 4, '2010-01-07']

    And this is what I would like to achieve:

        result [ id, name, Column1, Column2, Column3, Column4 ]
        [1, '1', '2009-12-02', '2010-01-03', , '2010-01-03']
        [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']

    The following query gets the proper result, but it seems extremely inefficient to me, having to iterate table2 for each column. Would it be possible in any way to write the subquery once and reuse it?

        SELECT a.id, a.name,
               (SELECT MIN(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 1 GROUP BY t.type) AS Column1,
               (SELECT MIN(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 2 GROUP BY t.type) AS Column2,
               (SELECT MIN(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 3 GROUP BY t.type) AS Column3,
               (SELECT MIN(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 4 GROUP BY t.type) AS Column4
        FROM (SELECT DISTINCT id FROM table2 t
              WHERE t.type IN (1, 2, 3, 4)
                AND t.value BETWEEN '2010-01-01' AND '2010-01-07') AS subquery
        LEFT JOIN table a ON a.id = subquery.id
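
    A common rewrite (a sketch, not from the post) scans table2 once and pivots with conditional aggregation instead of running four correlated subqueries. Note one assumption: it applies the date filter to the pivoted values as well, which the original subqueries did not.

        SELECT a.id, a.name,
               MIN(CASE WHEN t.type = 1 THEN t.value END) AS Column1,
               MIN(CASE WHEN t.type = 2 THEN t.value END) AS Column2,
               MIN(CASE WHEN t.type = 3 THEN t.value END) AS Column3,
               MIN(CASE WHEN t.type = 4 THEN t.value END) AS Column4
        FROM table2 t
        JOIN table a ON a.id = t.id
        WHERE t.type IN (1, 2, 3, 4)
          AND t.value BETWEEN '2010-01-01' AND '2010-01-07'
        GROUP BY a.id, a.name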

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates, in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());

            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;

                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }

                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated, so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example, a custom sorting algorithm, or copying elements in the second loop to eliminate the erase() call.
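
    One copy-based sketch along the lines the post already suggests: sort a working copy, then keep the first element of every run of length two or more. O(n log n) overall, and no erase() calls:

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        std::vector<int> duplicates_of(std::vector<int> v)   // copy on purpose
        {
            std::sort(v.begin(), v.end());

            std::vector<int> dups;
            for (std::size_t i = 1; i < v.size(); ++i) {
                // start of a run, and not already recorded
                if (v[i] == v[i - 1] && (dups.empty() || dups.back() != v[i]))
                    dups.push_back(v[i]);
            }
            return dups;   // {4,4,1,2,3,2,3} -> {2,3,4}
        }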

  • Open Source Utilization Questions: How do you lone wolf programmers best take advantage of open source?

    - by Funkyeah
    For clarity: so you come up with an idea for a new program and want to start hacking, but you also happen to be a one-man army. How do you programming dynamos best find and utilize existing open-source software to give yourselves the highest jumping-off point possible when diving into a new project? When you do jump in, where the shit do you start from? Any imaginary scenarios would be welcome; e.g., a shitty example might be utilizing an open-source database with an open-source IM client as a starting-off point to make a new client where you could tag and store conversations and query those tags at a later time.

  • What is considered a long execution time?

    - by stjowa
    I am trying to figure out just how "efficient" my server-side code is. Using start and end microtime(true) values, I am able to calculate the time it took my script to run. I am getting times from 0.3 to 0.5 seconds. These scripts do a number of database queries to return different values to the user. What is considered an efficient execution time for PHP scripts that will be run online for a website? Note: I know it depends on exactly what is being done, but just consider this a standard script that reads from a database and returns values to the user. Also, I look at Google searching the web in 0.15 seconds and I feel like my script is crap. Thanks.
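
    For reference, a minimal sketch of the timing harness described above; microtime(true) returns the Unix timestamp as a float, so subtracting two samples gives elapsed seconds:

        <?php
        $start = microtime(true);

        // ... run the page's database queries here ...

        $elapsed = microtime(true) - $start;
        printf("Generated in %.3f seconds\n", $elapsed);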

  • What Simple Changes Made the Biggest Improvements to Your Delphi Programs

    - by lkessler
    I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible without using too much memory. What small, simple changes have you made to your Delphi code that had the biggest impact on the performance of your program, by noticeably reducing execution time or memory use? Thanks everyone for all your answers. Many great tips. For completeness, I'll post a few important articles on Delphi optimization that I found:

        - Before you start optimizing Delphi code, at About.com
        - Speed and Size: Top 10 Tricks, also at About.com
        - Code Optimization Fundamentals and Delphi Optimization Guidelines, at High Performance Delphi (relating to Delphi 7 but still very pertinent)

  • C# Sorting Question

    - by betamoo
    I wonder what the best C# data structure is for sorting efficiently. Is it List, an array, or something else? And why does the standard array [] not implement a sort method on itself? Thanks
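
    For what it's worth, a minimal sketch of how sorting looks for both containers in the BCL: arrays are sorted through the static Array.Sort helper rather than an instance method, while List<T> exposes Sort() directly:

        using System;
        using System.Collections.Generic;

        class SortDemo
        {
            static void Main()
            {
                int[] arr = { 3, 1, 2 };
                Array.Sort(arr);                 // in-place, static helper

                var list = new List<int> { 3, 1, 2 };
                list.Sort();                     // in-place, instance method

                Console.WriteLine(string.Join(", ", arr));    // 1, 2, 3
                Console.WriteLine(string.Join(", ", list));   // 1, 2, 3
            }
        }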

  • More efficient way of updating UI from Service than intents?

    - by Donal Rafferty
    I currently have a Service in Android that is a sample VOIP client, so it listens for SIP messages, and if it receives one it starts up an Activity screen with UI components. The following SIP messages then determine what the Activity displays on the screen. For example, if it's an incoming call it will display Answer or Reject; for an outgoing call it will show a dialling screen. At the minute I use Intents to let the Activity know what state it should display. An example is as follows:

        Intent i = new Intent();
        i.setAction(SIPEngine.SIP_TRYING_INTENT);
        i.putExtra("com.net.INCOMING", true);
        sendBroadcast(i);

        Intent x = new Intent();
        x.setAction(CallManager.SIP_INCOMING_CALL_INTENT);
        sendBroadcast(x);
        Log.d("INTENT SENT", "INTENT SENT INCOMING CALL AFTER PROCESSINVITE");

    The Activity has a BroadcastReceiver registered for these intents and switches its state according to the last intent it received. Sample code as follows:

        SipCallListener = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if (SIPEngine.SIP_RINGING_INTENT.equals(action)) {
                    Log.d("cda ", "Got RINGING action SIPENGINE");
                    ringingSetup();
                }
                if (CallManager.SIP_INCOMING_CALL_INTENT.equals(action)) {
                    Log.d("cda ", "Got PHONE RINGING action");
                    incomingCallSetup();
                }
            }
        };

        IntentFilter filter = new IntentFilter(CallManager.SIP_INCOMING_CALL_INTENT);
        filter.addAction(CallManager.SIP_RINGING_CALL_INTENT);
        registerReceiver(SipCallListener, filter);

    This works, but it does not seem very efficient: the Intents are broadcast system-wide, and having an Intent fire for each state seems like it could become inefficient (and complex) the more states I have to include. So I was wondering: is there a different, more efficient and cleaner way to do this? Is there a way to keep Intents broadcasting only inside an application? Would callbacks be a better idea? If so, why, and how should they be implemented?
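
    A minimal sketch of the callback alternative (all names hypothetical): if the Activity binds to the Service, it can register a listener directly, so state changes stay inside the process and no system-wide broadcast is needed:

        // Minimal in-process hub the Service owns and the Activity
        // subscribes to after binding.
        public class CallStateHub {
            public interface SipStateListener {
                void onRinging();
                void onIncomingCall();
            }

            private SipStateListener listener;

            public void setListener(SipStateListener l) { listener = l; }

            public void notifyRinging() {
                if (listener != null) listener.onRinging();   // direct call, no Intent
            }

            public void notifyIncomingCall() {
                if (listener != null) listener.onIncomingCall();
            }
        }

    For keeping broadcasts app-internal without restructuring, the support library's LocalBroadcastManager is another option.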

  • Time Saving Minutiae

    - by Dave Jarvis
    What little tricks do you use to save time when developing? For example: in JDeveloper, change the default editor to Source view from Design view for JSP files:

        Tools » Preferences » File Types » Default Editors » JSP Tag File

    By default, JDeveloper opens JSP files in Design view, which can take 10 to 15 seconds before you can begin editing a file. Opening files in Source view takes only 2 seconds.

  • Why is Perl commonly used for writing CGI scripts?

    - by Michael Vasquez
    I plan to add a better search feature to my site, so I thought that I would write it in C and use the CGI as a means to access it. But it seems that Perl is the most popular language when it comes to CGI-based stuff. Why is that? Wouldn't it be faster programmed in C or machine code? What advantages, if any, are there to writing it in a scripting language? Thanks.

  • Efficient cron job utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to poll the IMAP account only for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this actually do what I expect them to do. So my questions are: Does the iterator position of Zend_Mail_Storage_Imap actually resemble an IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session? Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database, for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract? Also, do I need to poll every IMAP folder, or is everything accumulated when I poll INBOX? If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.
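
    A minimal sketch of one polling pass under those assumptions (credentials and paths hypothetical). It keys off getUniqueId(), which for IMAP maps to the server-side UID, rather than the iterator position, since positions shift when messages are deleted:

        <?php
        require_once 'Zend/Mail/Storage/Imap.php';

        $mail = new Zend_Mail_Storage_Imap(array(
            'host'     => 'imap.example.com',
            'user'     => 'user',
            'password' => 'secret',
        ));

        $stateFile = '/var/tmp/last_uid';            // state between cron runs
        $lastUid   = (int) @file_get_contents($stateFile);
        $newest    = $lastUid;

        for ($n = 1; $n <= $mail->countMessages(); $n++) {
            $uid = (int) $mail->getUniqueId($n);     // IMAP UIDs only ever grow
            if ($uid > $lastUid) {
                // ... queue a notification for $mail->getMessage($n) ...
                $newest = max($newest, $uid);
            }
        }

        file_put_contents($stateFile, $newest);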

  • In Perl, is a while loop generally faster than a for loop?

    - by Mike
    I've done a small experiment, as shown below, and it looks like a while loop is faster than a for loop in Perl. But since the experiment was rather crude, and the subject might be a lot more complicated than it seems, I'd like to hear what you have to say about this. Thanks as always for any comments/suggestions :) In the following two small scripts, I've tried while and for loops separately to calculate the factorial of 100,000. The one with the while loop took 57 minutes 17 seconds to finish, while the for-loop equivalent took 1 hour 7 minutes 54 seconds. Script with the while loop:

        use strict;
        use warnings;
        use bigint;

        my $now = time;
        my $n   = shift;
        my $s   = 1;
        while (1) {
            $s *= $n;
            $n--;
            last if $n == 2;
        }
        print $s * $n;

        $now = time - $now;
        printf("\n\nTotal running time: %02d:%02d:%02d\n\n",
               int($now / 3600), int(($now % 3600) / 60), int($now % 60));

    Script with the for loop:

        use strict;
        use warnings;
        use bigint;

        my $now = time;
        my $n   = shift;
        my $s   = 1;
        for (my $i = 2; $i <= $n; $i++) {
            $s = $s * $i;
        }
        print $s;

        $now = time - $now;
        printf("\n\nTotal running time: %02d:%02d:%02d\n\n",
               int($now / 3600), int(($now % 3600) / 60), int($now % 60));

  • How efficient is Python substring extraction?

    - by Cameron
    I've got the entire contents of a text file (at least a few KB) in string myStr. Will the following code create a copy of the string (less the first character) in memory?

        myStr = myStr[1:]

    I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this? Thanks! Note: I'm using Python 2.5.
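
    For what it's worth: in CPython, slicing a string copies, so myStr[1:] allocates a whole new string one character shorter. A zero-copy view exists for byte data; the sketch below uses memoryview (Python 2.7+/3.x; on the 2.5 mentioned above, the analogous tool is buffer()):

        # Zero-copy view over byte data instead of a sliced copy.
        data = b"header" + b"x" * 10000      # stand-in for the file contents

        view = memoryview(data)[6:]          # references the same buffer, no copy
        assert view.tobytes() == data[6:]    # bytes are materialized only here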

  • computing hash values, integral types versus struct/class

    - by aaa
    Hello, I would like to know if there is a difference in speed between computing the hash value (for example, as an std::map key) of a primitive integral type, such as int64_t, and of a POD type, for example struct { int16_t v[4]; };. I know this is going to be implementation-specific, so my question ultimately pertains to the GNU standard library. Thanks
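
    One related trick, sketched below (not from the post): when the POD has exactly the size of an integral type, it can be repacked bit-for-bit, so comparisons and hashing see a single 64-bit integer instead of four members:

        #include <cstdint>
        #include <cstring>

        struct Key4 { int16_t v[4]; };           // 8 bytes, same as int64_t

        int64_t pack(const Key4& k) {
            int64_t out;
            static_assert(sizeof k == sizeof out, "sizes must match");
            std::memcpy(&out, &k, sizeof out);   // bit-identical; compiles to a load
            return out;
        }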

  • Polymorphic Numerics on .Net and In C#

    - by Bent Rasmussen
    It's a real shame that in .Net there is no polymorphism for numbers, i.e., no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme, one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx How would you express this in C#, in order to retrofit it, without having influence over .Net or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric, or something more abstract than that) and then defining structs that implement these and wrap types such as int, while providing operations that return the new type (e.g. Integer32 : INumeric), where addition would be defined as

        public Integer32 Add(Integer32 other)
        {
            return Return(Value + other.Value);
        }

    I am somewhat afraid of the execution speed of this code, but at least it is abstract. No operator-overloading goodness... Any other ideas? .Net doesn't look like a viable long-term platform if it cannot have this kind of abstraction, I think - and be efficient about it. Abstraction is reuse.
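
    A minimal sketch of that retrofit (the generic self-referencing interface is an assumption beyond the post's fragment). Called through a constraint such as where T : INumeric<T>, the struct's methods avoid boxing:

        public interface INumeric<T>
        {
            T Add(T other);
        }

        public struct Integer32 : INumeric<Integer32>
        {
            public readonly int Value;
            public Integer32(int value) { Value = value; }

            // Wrap-and-return keeps results inside the abstraction.
            public Integer32 Add(Integer32 other)
            {
                return new Integer32(Value + other.Value);
            }
        }

        public static class Numeric
        {
            // Boxing-free generic use via the constraint.
            public static T Sum<T>(T a, T b) where T : INumeric<T>
            {
                return a.Add(b);
            }
        }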

  • learning to type - tips for programmers?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now I am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semicolons and quotes? Or should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-pecker, like typing the opening and closing braces right away with my index and middle finger and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer?

  • Please help with an efficient query

    - by user278618
    I want an efficient query to get some rows from my table. Here is, I think, the best presentation of my table:

        - somedate is not duplicated; it is the date of modifiedon
        - a, b, c are parent ids, let's say countrycode
        - 1, 2, 3, 4, ... are subparent ids, let's say citycode
        - the GUIDs are the ids of the rows
        - true/false are the values of the rows; call this column freshair

        a 1 GUID somedate true
        a 1 GUID somedate true
        a 2 GUID somedate false
        a 2 GUID somedate false
        b 3 GUID somedate false
        b 3 GUID somedate false
        b 3 GUID somedate false
        b 4 GUID somedate false
        c 5 GUID somedate true
        c 6 GUID somedate true
        c 6 GUID somedate false
        c 6 GUID somedate false
        c 7 GUID somedate false

    I want the most recent rows, MAX(modifiedon), grouped by countrycode and citycode, but only from countrycode groups whose rows do not all share the same value (true, false). As the result I want:

        a 1 GUID somedate true
        a 2 GUID somedate false
        c 5 GUID somedate true
        c 6 GUID somedate false
        c 7 GUID somedate false

    Note that the result contains no records for "b", because all of b's rows have the same value (false). Best regards
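
    A sketch of one standard shape for this (table and column names assumed from the description above): pick the latest row per (countrycode, citycode), restricted to countries whose rows disagree on the value:

        SELECT t.countrycode, t.citycode, t.id, t.modifiedon, t.freshair
        FROM mytable t
        JOIN (SELECT countrycode, citycode, MAX(modifiedon) AS last_mod
              FROM mytable
              GROUP BY countrycode, citycode) latest
          ON latest.countrycode = t.countrycode
         AND latest.citycode    = t.citycode
         AND latest.last_mod    = t.modifiedon
        WHERE t.countrycode IN (SELECT countrycode
                                FROM mytable
                                GROUP BY countrycode
                                HAVING COUNT(DISTINCT freshair) > 1)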

  • Python: How efficient is substring extraction?

    - by Cameron
    I've got the entire contents of a text file (at least a few KB) in string myStr. Will the following code create a copy of the string (less the first character) in memory?

        myStr = myStr[1:]

    I'm hoping it just refers to a different location in the same internal buffer. If not, is there a more efficient way to do this? Thanks!

  • How can I efficiently group a large list of URLs by their host name in Perl?

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign URLs to groups, based on host address:

        {
            'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
            'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the size of the file is about 300 MB). Could you suggest some more efficient approaches? My current solution simply reads line by line, extracts the host address by regex, and puts the URL into a hash. EDIT Here is my implementation (I've cut off irrelevant things):

        while ($line = <STDIN>) {
            chomp($line);
            $line =~ /(http:\/\/.+?)(\/|$)/i;
            $host = "$1";
            push @{ $urls{$host} }, $line;
        }

        store \%urls, 'out.hash';
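
    One constant-memory sketch (bucket filenames hypothetical; it assumes the number of distinct hosts is modest, since each host keeps an open handle): stream the input and append each URL to a per-host file on disk instead of accumulating everything in one in-memory hash:

        use strict;
        use warnings;

        my %fh;   # one open handle per host seen so far

        while (my $line = <STDIN>) {
            chomp $line;
            next unless $line =~ m{^(http://[^/]+)}i;
            my $host = lc $1;

            unless ($fh{$host}) {
                (my $safe = $host) =~ s/[^\w.-]/_/g;   # filename-safe bucket name
                open $fh{$host}, '>>', "bucket_$safe.txt"
                    or die "cannot open bucket for $host: $!";
            }
            print { $fh{$host} } "$line\n";
        }

        close $_ for values %fh;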
