Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 47/184

  • Performance of Java matrix math libraries?

    - by dfrankow
    We are computing something whose runtime is bound by matrix operations (some details below, if interested). This experience prompted the following question: do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example:

    - JAMA: http://math.nist.gov/javanumerics/jama/
    - COLT: http://acs.lbl.gov/~hoschek/colt/
    - Apache Commons Math: http://commons.apache.org/math/

    I searched and found nothing.

    Details of our speed comparison: we are using Intel Fortran (ifort (IFORT) 10.1 20070913). We have reimplemented the computation in Java (1.6) using Apache Commons Math 1.2 matrix ops, and it agrees to all of its digits of accuracy. (We have reasons for wanting it in Java; Java doubles, Fortran real*8.) Fortran: 6 minutes; Java: 33 minutes; same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (which appear to be gone in the unreleased Apache Commons Math 2.0, but 2.0 is no faster). Fortran is using ATLAS BLAS routines (dpotrf, etc.). Obviously this could depend on our code in each language, but we believe most of the time is spent in equivalent matrix operations. In several other computations that do not involve libraries, Java has not been much slower, and has sometimes been much faster.
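
    As a point of reference for what is being measured, here is a minimal pure-Java timing sketch of a naive multiply (illustrative only: the size, loop order, and lack of JIT warm-up are arbitrary choices, and in a real comparison the library's own calls would replace the inner loops):

        // Naive O(n^3) matrix multiply, timed with System.nanoTime().
        public class MatMulBench {
            public static void main(String[] args) {
                int n = 500;
                double[][] a = new double[n][n], b = new double[n][n], c = new double[n][n];
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++) { a[i][j] = i + j; b[i][j] = i - j; }
                long t0 = System.nanoTime();
                for (int i = 0; i < n; i++)
                    for (int k = 0; k < n; k++) {
                        double aik = a[i][k];          // i-k-j order walks b row-by-row (cache-friendly)
                        for (int j = 0; j < n; j++)
                            c[i][j] += aik * b[k][j];
                    }
                System.out.printf("%.1f ms%n", (System.nanoTime() - t0) / 1e6);
            }
        }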

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

    - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
    - I tend to look at everything at a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development time
    - I am obsessed with good directory structures; file naming conventions; and class, method, and variable naming conventions
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I tend to combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and the refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't have to consider how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila! Working code! Clients are happy, and they get paid fast, at the expense of really unmaintainable, hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often hear is that clients don't care how you write the code, but they do care how quickly you deliver it. If it works, all is good.

    Now, might my "purist" approach to programming have been the wrong way to start? Should I just dump these purist concepts and code away, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain-text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to base64-encode the array and send it as binary. This is indeed faster. My question now is: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done the base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
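
    Since the question invites a decode example in any language, here is a hedged Java sketch for (2). It assumes the producer wrote native little-endian float64s (numpy's default on x86; standardizing the dtype to '<f8' before encoding removes the guesswork) and Java 8+ for java.util.Base64:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.DoubleBuffer;
        import java.util.Base64;

        public class NumpyDecode {
            // Turns a base64 string of packed float64s into a double[].
            public static double[] decode(String b64) {
                byte[] raw = Base64.getDecoder().decode(b64);
                DoubleBuffer db = ByteBuffer.wrap(raw)
                                            .order(ByteOrder.LITTLE_ENDIAN) // must match the numpy dtype
                                            .asDoubleBuffer();
                double[] out = new double[db.remaining()];
                db.get(out);
                return out;
            }
        }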

    Read the article

  • Precision of cos(atan2(y,x)) versus using complex<double>, C++

    - by Ivan
    Hi all, I'm writing some coordinate transformations (more specifically the Joukowsky transform; see Wikipedia: Joukowsky Transform), and I'm interested in performance, but of course also in precision. I'm trying to do the coordinate transformation in two ways:

    1) Calculating the real and imaginary parts separately, using double precision, as below:

        double r2 = chi.x*chi.x + chi.y*chi.y;
        //double sq = pow(r2,-0.5*n) + pow(r2,0.5*n); //slow!!!
        double sq = sqrt(r2); //way faster!
        double co = cos(atan2(chi.y,chi.x));
        double si = sin(atan2(chi.y,chi.x));
        Z.x = 0.5*(co*sq + co/sq);
        Z.y = 0.5*si*sq;

    where chi and Z are simple structures with double x and y as members.

    2) Using complex<double>:

        Z = 0.5 * (chi + (1.0 / chi));

    where Z and chi are complex<double>. The interesting part is that case 1) is indeed faster (by about 20%), but the precision is bad, giving an error in the third decimal place after the inverse transform, while the complex version gives back the exact number. So, is the problem in cos(atan2) and sin(atan2)? But if it is, how does complex<double> handle that? Thanks!
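
    One algebraic note that bears on both speed and accuracy: cos(atan2(y, x)) is simply x / sqrt(x*x + y*y), and sin(atan2(y, x)) is y over the same root, so the angle round-trip can be skipped entirely. A small sketch of the identity (in Java, for consistency with the other examples on this page; Math.hypot guards against overflow in the intermediate squares):

        // cos(atan2(y,x)) == x / hypot(x,y);  sin(atan2(y,x)) == y / hypot(x,y)
        double x = 0.3, y = -1.7;
        double r  = Math.hypot(x, y);  // sqrt(x*x + y*y), computed without overflowing x*x
        double co = x / r;             // replaces Math.cos(Math.atan2(y, x))
        double si = y / r;             // replaces Math.sin(Math.atan2(y, x))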

    Read the article

  • Grails or Play! for an ex-RoR developer?

    - by Kedare
    Hello, I plan to start learning a Java web framework (I love the Java API). I have already used Rails and Django. I want something close to Java, but without all the complexity of J2EE. I've found two frameworks that could be good for me:

    Grails: Grails looks great. It uses Groovy, which is better than Java for web applications (I think...) but slower. It uses pure Java components (Hibernate, Struts, Spring), it looks pretty simple to deploy (send a .war and it's done!), and GSP is great! It's a bit harder to debug (you need to restart the server at each modification, and the stack trace is a mix of Java and Groovy frames that is not always understandable).

    Play!: This framework also looks great, and it's faster than Grails (it uses Java directly), but I don't really like how it uses Java: it modifies the source code to transform property accesses into setXXX/getXXX calls, and I'm not keen on that... The framework also has caching functionality that Grails doesn't (already) have. I don't really like the template engine. It's also easier to debug (no need to restart the server, and the stack trace is clear).

    Which do you recommend? I am looking for something easy to learn (I have used Ruby a lot, but only a little Java, though I love the Java API), that is full-featured (that's not really a problem with all the Java libraries available, but if it's bundled and integrated I prefer that), that scales, and that is not too slow (faster than Ruby). If possible, I would like something with a decent community, so I can easily find support and answers to my questions ;)

    PS: no JRuby on Rails. Thank you!

    Read the article

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario: a user can create a post, and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected to each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster:

    1) I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread.

    2) After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and the Linq-to-SQL entity model. I'd be glad if you could tell me your opinions on the two approaches, or share another, faster approach. Best regards, Kiril

    Read the article

  • wxGraphicsContext dreadfully slow on Windows

    - by Jonatan
    I've implemented a plotter using wxGraphicsContext. The development was done using wxGTK, and the graphics were very fast. Then I switched to Windows (XP), using wxWidgets 2.9.0, and the same code is extremely slow. It takes about 350 ms to render a frame. Since the user is able to drag the plotter with the mouse to navigate, it feels very sluggish with such a slow update rate. I've implemented some parts using wxDC and benchmarked the difference. With wxDC the code runs about 100 times faster. As far as I know, both Cairo and GDI+ are implemented in software at this point, so there's no real reason Cairo should be so much faster than GDI+. Am I doing something wrong? Or is the GDI+ implementation just not on par with Cairo? One small note: I'm rendering to a wxBitmap now, with the wxGraphicsContext created from a wxMemoryDC. This is to avoid flicker on XP, since double buffering doesn't work there.

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose that you want to check if an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ... ACTIONn, where n is, say, 1000. I guess that, besides being more elegant,

        case MyAction of
          ACTION1: {code};
          ACTION2: {code};
          ...
          ACTIONn: {code};
        end;

    is much faster than

        if MyAction = ACTION1 then
          // code
        else if MyAction = ACTION2 then
          // code
        ...
        else if MyAction = ACTIONn then
          // code;

    I guess that the if variant takes time O(n) to complete (i.e. to find the right action) if the right action ACTIONi has a high value of i, whereas the case variant takes a lot less time (O(1)?). Am I correct that the case statement is much faster? Am I correct that the time required to find the right action in the case statement is actually independent of n? I.e., is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?
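
    Whether a Delphi case compiles to a jump table depends on the compiler and on how dense the constants are, but the underlying idea is the same one javac uses for switch: dense int constants become a tableswitch instruction (a single indexed jump, independent of n), while sparse constants fall back to a binary-search lookupswitch (O(log n)). A minimal Java illustration of the dense case:

        class Dispatch {
            static void doA() {}
            static void doB() {}
            static void doDefault() {}

            static void handle(int myAction) {
                switch (myAction) {        // dense constants: javac emits a tableswitch,
                    case 1: doA(); break;  // an O(1) indexed jump regardless of the case count
                    case 2: doB(); break;
                    default: doDefault();
                }
            }
        }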

    Read the article

  • What are all the optimization tricks that you know for ASP.NET code?

    - by Aristos
    After some time spent programming on ASP.NET, I discovered the very big speed difference between string and StringBuilder. I know this is very common and well known, but I mention it as a starting point.

    The second thing that I have found to speed up code is to use const, not static, to declare my configuration constants (especially the strings). With const, the compiler does not create a new object; it just places the value at the point where you ask for it. With a static declaration, a new object is created and kept in memory.

    My third trick: when I search for strings, I use hash values rather than the strings themselves. For example, if I need a List<string> SomeValues holding strings that I need to search, I prefer a List<int> SomeHashValues and use the hash values to locate the strings.

    My fourth thought was whether it is better to place big strings on one line, or to separate them onto different lines with the + symbol so they are easier to read. I ran some tests and saw that the compiler does a good job when you split a string across many lines using the + symbol.

    What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is speed. Because speed counts.
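
    The string-versus-StringBuilder point is not C#-specific; a minimal Java sketch of why repeated concatenation gets quadratic (the .NET types behave analogously):

        class ConcatDemo {
            public static void main(String[] args) {
                // Each += copies the whole string accumulated so far: O(n^2) total work.
                String s = "";
                for (int i = 0; i < 10000; i++) s += i;

                // StringBuilder appends into a growable buffer: amortized O(n).
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 10000; i++) sb.append(i);
                String t = sb.toString();
            }
        }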

    Read the article

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan, and it is the same as before (see above). So it still uses a sequential scan and ignores the index! Where is the problem?

    Read the article

  • Trying to write up a C daemon, but don't know enough C to continue

    - by JamesM-SiteGen
    Okay, so I want this daemon to run in the background with little to no interaction. I plan to have it work with Apache, lighttpd, etc. to send the session and request information, to allow C to generate a website from an object DB. Saving will have to be an option, so you can start the daemon with an existing DB, but it will not save to it unless you log in to the admin area, enable saving, and restart the daemon.

    Summary of the daemon:

    - Load a database from file.
    - Have a function to restart the daemon.
    - Allow Apache, lighttpd, etc. to get the necessary data about the request and session.
    - A variable to allow the database to be saved back to the file; otherwise it will only be stored in RAM. If it is set to save back to the file, then keep only the necessary data in RAM.
    - Use SQLite for the database file.
    - Build a webpage from template files: $(myVar) for getting variables; get templates from a directory, e.g. ./templates/01-test/{index.html,template.css,template.js}

    Live version of the code and more information: http://typewith.me/YbGB1h1g1p

    Also, I am working on a website CMS in PHP, but I am trying to switch to C, as it is faster than PHP. (PHP is quite fast, but making a few MySQL requests for every webpage is quite inefficient, and I'm sure it can be far better, so an object DB that we can recall data from in C would have to be faster.)

    P.S. I am using Arch Linux, not MS Windows, with the package group base-devel for the common developer tools such as make and makepkg.

    Edit: Oops, I forgot the question ;) Okay, so the question is: how can I turn this basic C daemon into a base for what I am attempting to do here?

    Read the article

  • Using List<T> when dealing with pointers in C#.

    - by Gorchestopher H
    How can I add an item to a list if that item is essentially a pointer, and avoid changing every item in my list to the newest instance of that item? Here's what I mean: I am doing image processing, and there is a chance that I will need to deal with images that come in faster than I can process them (for a short period of time). After this "burst" of images I will rely on the fact that I can process faster than the average image rate, and will eventually catch up. So, what I want to do is put my images into a List when I acquire them; then, if my processing thread isn't busy, I can take an image from that list and hand it over. My issue is that I am worried that since I am adding the image Image1 to the list, then filling Image1 with a new image (during the next image acquisition), I will be replacing the image stored in the list with the new image as well (as the image variable is actually just a pointer). So, my code looks a little like this:

        while (!exitcondition)
        {
            if (ImageAvailable())
            {
                Image1 = AcquireImage();
                ImgList.Add(Image1);
            }
            if (ImgList.Count > 0)
            {
                ProcessEngine.NewImage(ImgList[0]);
                ImgList.RemoveAt(0);
            }
        }

    Given the above, how can I ensure that:

    - I don't replace all items in the list every time Image1 is modified;
    - I don't need to pre-declare a number of images in order to do this kind of processing;
    - I don't create a memory-devouring monster.

    Any advice is greatly appreciated.
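
    The same reference semantics can be sketched in Java (a hypothetical Image stand-in type): rebinding the local variable never touches what the list already holds, while mutating the shared object does.

        import java.util.ArrayList;
        import java.util.List;

        class RefDemo {
            static final class Image { int[] pixels = new int[4]; }

            public static void main(String[] args) {
                List<Image> imgList = new ArrayList<>();
                Image img = new Image();
                imgList.add(img);              // the list stores a reference to this object
                img = new Image();             // rebinding the local: the list entry is unaffected
                imgList.get(0).pixels[0] = 42; // mutating the stored object IS visible everywhere
            }
        }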

    Read the article

  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. In my experience, I have seen how big the log table can get, and I have also noticed that errors and exceptions are repeated. For instance, I just queried a log table with more than 132,000 records; using DISTINCT, I found that only about 2,500 records (~2%) are unique, and the others (~98%) are just duplicates.

    So I came up with this idea to improve logging: have a couple of new columns, counter and updated_dt, that are updated every time an insert of the same record is attempted. If we want to track the user that caused the exception, we need to create a user_log or log_user table to map the N-to-N relationship.

    This model may make the system slow and inefficient, since it has to compare all these long texts. Here is the trick: we should also have a binary hash column of 16 or 32 bytes that hashes the message and the exception, with an index configured on it. We can use HASHBYTES to help us. I am not a DB expert, but I think that would be the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it would help to locate similar records much faster, and then compare by message or exception directly to make sure they are unique.

    This is a theoretical/practical solution, but will it work, or will it bring more complexity? What aspects am I leaving out, or what other considerations should I have? A trigger would do the job of insert-or-update, but is a trigger the best way to do it?
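
    If the hash is computed in the application rather than in a trigger, the same fixed-width dedup-key idea looks like this in Java (a sketch; SHA-256 is an arbitrary choice here, and HASHBYTES is the SQL Server counterpart):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        class LogDedup {
            static byte[] dedupKey(String message, String exception) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(message.getBytes(StandardCharsets.UTF_8));
                md.update((byte) 0); // separator so ("ab","c") and ("a","bc") hash differently
                md.update(exception.getBytes(StandardCharsets.UTF_8));
                return md.digest();  // 32 fixed-width bytes, suitable for an indexed column
            }
        }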

    Read the article

  • Java Collection performance question

    - by Shervin
    I have created a method that takes two Collection<String> as input and copies one to the other. However, I am not sure if I should check whether the collections contain the same elements before I start copying, or if I should just copy regardless. This is the method:

        /**
         * Copies from one collection to the other. Does not allow empty strings.
         * Removes duplicates.
         * Clears the dest Collection first.
         * @param target
         * @param dest
         */
        public static void copyStringCollectionAndRemoveDuplicates(Collection<String> target, Collection<String> dest) {
            if (target == null || dest == null)
                return;

            // Is this faster to do? Or should I just comment this block out?
            if (target.containsAll(dest))
                return;

            dest.clear();
            Set<String> uniqueSet = new LinkedHashSet<String>(target.size());
            for (String f : target)
                if (!"".equals(f))
                    uniqueSet.add(f);
            dest.addAll(uniqueSet);
        }

    Maybe it is faster to just remove the if (target.containsAll(dest)) return; line, because this method will iterate over the entire collection anyway.
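
    One detail worth making explicit: containsAll is itself a scan (roughly O(|dest| x |target|) on lists), so the early-out can cost more than the copy it tries to avoid. A minimal variant without the check, otherwise the same semantics (a sketch, not a verdict on the original):

        import java.util.Collection;
        import java.util.LinkedHashSet;
        import java.util.Set;

        class CollectionCopy {
            static void copyStrings(Collection<String> target, Collection<String> dest) {
                if (target == null || dest == null) return;
                Set<String> unique = new LinkedHashSet<String>(target.size());
                for (String s : target)
                    if (!"".equals(s)) unique.add(s); // skip empty strings, dedupe, keep order
                dest.clear();
                dest.addAll(unique);
            }
        }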

    Read the article

  • Efficient way to store order in MySQL for a list of items

    - by ninumedia
    I want to write cleaner and more efficient code, and I wanted to ask for suggestions on the following problem: I have a MySQL database that holds data about a set of photograph names; oh, say 100 photograph names.

    Table 1 (photos) has the following fields: photo_id, photo_name. Example data:

        1 | sunshine.jpg
        2 | cloudy.jpg
        3 | rainy.jpg
        4 | hazy.jpg
        ...

    Table 2 (categories) has the following fields: category_id, category_name, category_order. Example data:

        1 | Summer Shots | 1,2,4
        2 | Winter Shots | 2,3
        3 | All Seasons  | 1,2,3,4
        ...

    Is it efficient to store the order of the photos per entry as comma-delimited values in this manner? It's one approach I have seen used before, but I wanted to know whether something else is faster at run time. Using this approach, I don't think it is possible to do a direct INNER JOIN on the category table and the photo table to get a single matched list of all the photographs per category, e.g. Summer Shots - sunshine.jpg, cloudy.jpg, hazy.jpg, because it was matched against 1,2,4. Iterating through all the categories and then the photos would be O(n^2), and there has to be a better/faster way. Please educate me :)

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h
        ...

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible.

    - Can I use some MMX/SSE instructions to process all bits at once?
    - Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.)
    - Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me?
    - Any other ideas on how to make it faster?
    - Any ideas on how to do the inverse bitmap transformation? (I have to implement it too, but it's less critical.)
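
    A branch-free alternative is the classic "interleave by binary magic numbers" spreading from the Bit Twiddling Hacks collection; a sketch of exactly the transform described above, moving bit i to bit 2i (written in Java to keep one language across the examples on this page; the shifts and masks translate to C unchanged):

        class BitSpread {
            // Moves bit i of the low 13 bits to bit 2*i (0 <= i < 13), with no branches.
            static int spread13(int x) {
                x &= 0x1FFF;                     // keep the 13 input bits
                x = (x | (x << 8)) & 0x00FF00FF; // spread apart by bytes
                x = (x | (x << 4)) & 0x0F0F0F0F; // ...then by nibbles
                x = (x | (x << 2)) & 0x33333333; // ...then by bit pairs
                x = (x | (x << 1)) & 0x55555555; // ...then by single bits
                return x;
            }
        }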

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index allows a string that matches my query to be found quickly. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                     QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches are really slower. Is there any way to make these "is not equal to" searches faster?

    Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                          QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                          QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).

    Read the article

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a problem with how the PHP engine handles identical files that differ only in their file extension.

    Problem: "An If-Modified-Since conditional request returned the full content unchanged." Also, I measured that the .php version loads much faster than its identical twin with the .xxx extension, even though the file contents are identical and they differ only in their file extension.

    "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation."

    (Compare: the homepage ending with .php, versus the exact same file ending with .ast.)

    Given: a home.php file is copied as home.xxx, and this extension is added to .htaccess so it is recognized as a PHP file. The .php file listens to php.ini, where freshness is set to 3 hrs; the non-.php files have to listen to .htaccess, where freshness is set to 2 hrs, according to:

        AddType application/x-httpd-php .php .ast .abc .xxx .etc
        <IfModule mod_headers.c>
            ExpiresActive On
            ExpiresDefault M2419200
            Header unset ETag
            FileETag None
            Header unset Pragma
            Header set Cache-Control "max-age=2419200"
            ##### DYNAMIC PAGES
            <FilesMatch "\.(ast|php|abc|xxx)$">
                ExpiresDefault M7200
                Header set Cache-Control "public, max-age=7200"
            </FilesMatch>
        </IfModule>

    So far so good, and everything loads, except that the non-.php file doesn't cache properly; or rather, it caches well but doesn't validate well, to be more specific. See the images enclosed. Only the non-.php file extension causes the error and loads more slowly. The entire page.php loads faster, as all the elements in it are then loaded properly from cache, while page.abc has the full request returned when it ought to be cached, meaning the entire page is slower.

    Bottom line: what should be changed in order to eliminate the If-Modified-Since conditional request returning the full content unchanged?

    Read the article

  • Fix N+1 query in "declarative_authorization" gem using gem "bullet"

    - by makaroni4
    Currently I am working on one big web application, and to make it work faster I decided to refactor all N+1 queries (to decrease the number of requests to the database; see http://rails-bestpractices.com/posts/29-fix-n-1-queries). So I installed the "bullet" gem, which doesn't work with Rails 3.1.1 right now (you can use the fork from https://github.com/flyerhzm/bullet). When using the declarative_authorization gem, I get the same alerts on each page:

        N+1 Query detected
          Role => [:permissions]
          Add to your finder: :include => [:permissions]

        N+1 Query detected
          Permission => [:permission_rules]
          Add to your finder: :include => [:permission_rules]

        CACHE (0.0ms) SELECT "roles".* FROM "roles"
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 1
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 2
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 3
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 4
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 6
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 7
        CACHE (0.0ms) SELECT "permissions".* FROM "permissions" WHERE "permissions"."role_id" = 8
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 30
        CACHE (0.0ms) SELECT "permission_rules".* FROM "permission_rules" INNER JOIN "permission_rules_permissions" ON "permission_rules"."id" = "permission_rules_permissions"."permission_rule_id" WHERE "permission_rules_permissions"."permission_id" = 31
        ...

    Could you please help me with that, and with making these queries faster?

    Read the article

  • Using static variables for Strings

    - by Vivart
    The content below is taken from "Best practice: Writing efficient code", but I don't understand why

        private static String x = "example";

    is faster than

        private static final String x = "example";

    Can anybody explain this?

    "Using static variables for Strings: when you define static fields (also called class fields) of type String, you can increase application speed by using static variables (not final) instead of constants (final). The opposite is true for primitive data types, such as int. For example, you might create a String object as follows:

        private static final String x = "example";

    For this static constant (denoted by the final keyword), each time you use the constant, a temporary String instance is created. The compiler eliminates "x" and replaces it with the string "example" in the bytecode, so that the BlackBerry® Java® Virtual Machine performs a hash table lookup each time you reference "x". In contrast, for a static variable (no final keyword), the String is created once. The BlackBerry JVM performs the hash table lookup only when it initializes "x", so access is faster:

        private static String x = "example";

    You can use public constants (that is, final fields), but you must mark variables as private."
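
    The quoted performance claim is specific to the BlackBerry JVM, but the final/non-final distinction it relies on is standard: a static final String initialized from a literal is a compile-time constant that javac inlines into every use site, while a plain static field is read through getstatic at runtime. A minimal sketch (running javap -c on the compiled classes shows the difference):

        class Flags {
            static final String CONST = "example"; // compile-time constant: javac inlines it
            static String var = "example";         // ordinary field: read via getstatic at runtime
        }

        class Use {
            String a() { return Flags.CONST; } // compiles to: ldc "example"
            String b() { return Flags.var; }   // compiles to: getstatic Flags.var
        }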

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, I'm hoping to get some help with this query. I've worked at it for a while now and can't get it any faster:

        SELECT date, count(id) as 'visits'
        FROM dates
        LEFT OUTER JOIN visits ON (dates.date = DATE(visits.start) and account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id, and visits.start+visits.account_id, and can't get it to run any faster. Table structure (only showing the relevant columns of the visits table):

        create table visits (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `account_id` int(11) NOT NULL,
          `start` DATETIME NOT NULL,
          `end` DATETIME NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
          `date` date NOT NULL,
          PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows, dating from 2010-6-1 to yesterday. I'm using the dates table so the join will return 0 visits for days with no visits. The results I want, for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!

    Read the article

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between 2 linked servers. I basically have to get the data from a multiple-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    db1 = linked server; db2 = my own database. After this, I use an insert into + select to add this data to a table located in db2 (usually a few hundred records; this import runs once a minute).

    My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and this slows down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same, it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks

    Read the article

  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo-C# code:

        using System;
        using System.Data;
        using System.Linq;
        using System.Collections.Generic;

        public IEnumerable<IDataRecord> GetRecords(string sql)
        {
            // DB logic goes here
        }

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0);
            return ids.Select(employerID => new Employer(employerID) as IEmployer);
        }

    Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster?

        public IEnumerable<IEmployer> Employers()
        {
            string sql = "select EmployerID from employer";
            return GetRecords(sql).Select(record => new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer);
        }

    I think the first example is more readable if there is no difference in performance.
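
    LINQ to Objects' Select is deferred, so two chained Selects still make a single pass over the source, with one extra delegate invocation per element. Java streams behave the same way, which this analogous sketch illustrates (Java used for consistency with the other examples on this page):

        import java.util.List;
        import java.util.stream.Collectors;

        class ChainDemo {
            public static void main(String[] args) {
                List<Integer> ids = List.of(1, 2, 3);

                // Two chained maps: the pipeline is lazy, so this is still one pass,
                // with one extra lambda call per element.
                List<String> two = ids.stream()
                                      .map(i -> i * 10)
                                      .map(i -> "employer" + i)
                                      .collect(Collectors.toList());

                // Fused into one map: same asymptotic cost, slightly less indirection.
                List<String> one = ids.stream()
                                      .map(i -> "employer" + (i * 10))
                                      .collect(Collectors.toList());
            }
        }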

    Read the article

  • Slow MySQL count because of subselect

    - by frgt10
    How can I make this select statement faster? The first left join, with the subselect, is making it slower...

        mysql> SELECT COUNT(DISTINCT w1.id) AS AMOUNT
            FROM tblWerbemittel w1
            JOIN tblVorgang v1 ON w1.object_group = v1.werbemittel_id
            INNER JOIN (
                SELECT wmax.object_group, MAX(wmax.object_revision) wmaxobjrev
                FROM tblWerbemittel wmax
                GROUP BY wmax.object_group
            ) AS wmaxselect
                ON w1.object_group = wmaxselect.object_group
               AND w1.object_revision = wmaxselect.wmaxobjrev
            LEFT JOIN (
                SELECT vmax.object_group, MAX(vmax.object_revision) vmaxobjrev
                FROM tblVorgang vmax
                GROUP BY vmax.object_group
            ) AS vmaxselect
                ON v1.object_group = vmaxselect.object_group
               AND v1.object_revision = vmaxselect.vmaxobjrev
            LEFT JOIN tblWerbemittel_has_tblAngebot wha ON wha.werbemittel_id = w1.object_group
            LEFT JOIN tblAngebot ta ON ta.id = wha.angebot_id
            LEFT JOIN tblLieferanten tl ON tl.id = ta.lieferant_id
               AND wha.zuschlag = (SELECT MAX(zuschlag) FROM tblWerbemittel_has_tblAngebot
                                   WHERE werbemittel_id = w1.object_group)
            WHERE w1.flags = 0 AND v1.flags = 0;

        +--------+
        | AMOUNT |
        +--------+
        |   1982 |
        +--------+
        1 row in set (1.30 sec)

    Some indexes have already been set, and as EXPLAIN shows, they are used:

        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        | 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 2072 | |
        | 1 | PRIMARY | v1 | ref | werbemittel_group,werbemittel_id_index | werbemittel_group | 4 | wmaxselect.object_group | 2 | Using where |
        | 1 | PRIMARY | <derived3> | ALL | NULL | NULL | NULL | NULL | 3376 | |
        | 1 | PRIMARY | w1 | eq_ref | object_revision,or_og_index | object_revision | 8 | wmaxselect.wmaxobjrev,wmaxselect.object_group | 1 | Using where |
        | 1 | PRIMARY | wha | ref | PRIMARY,werbemittel_id_index | werbemittel_id_index | 4 | dpd.w1.object_group | 1 | |
        | 1 | PRIMARY | ta | eq_ref | PRIMARY | PRIMARY | 4 | dpd.wha.angebot_id | 1 | |
        | 1 | PRIMARY | tl | eq_ref | PRIMARY | PRIMARY | 4 | dpd.ta.lieferant_id | 1 | Using index |
        | 4 | DEPENDENT SUBQUERY | tblWerbemittel_has_tblAngebot | ref | PRIMARY,werbemittel_id_index | werbemittel_id_index | 4 | dpd.w1.object_group | 1 | |
        | 3 | DERIVED | vmax | index | NULL | object_revision_uq | 8 | NULL | 4668 | Using index; Using temporary; Using filesort |
        | 2 | DERIVED | wmax | range | NULL | or_og_index | 4 | NULL | 2168 | Using index for group-by |
        10 rows in set (0.01 sec)

    The main reason the statement above takes about 2 seconds seems to be the subselect, where no index can be used. How can I write the statement to run even faster? Thanks for the help. MT

    Read the article
