Search Results

Search found 20275 results on 811 pages for 'general performance'.


  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my questions will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with this situation, it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. This database is used to store articles and powers the Developer IT web site. Normally the web pages load quickly, say in 2 or 3 seconds, BUT the sqlserver process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and could not find one. It seems to be every read against the table that contains the articles (there are about 155,000 of them, and 20 or so get added every 15 minutes). I added a few indexes, but without luck... Is it because the table is full-text indexed? Should I order by the primary key instead of by date? I never had any problems with ordering by dates before... Should I use dynamic SQL? Should I add the primary key to the URL of the articles? Should I use multiple indexes on separate columns or one big composite index? If you want more details or code bits, just ask. Basically, every little hint is much appreciated. Thanks.

    Read the article

  • Delphi Performance: Case Versus If

    - by Andreas Rejbrand
    I guess there might be some overlap with previous SO questions, but I could not find a Delphi-specific question on this topic. Suppose you want to check whether an unsigned 32-bit integer variable MyAction is equal to any of the constants ACTION1, ACTION2, ..., ACTIONn, where n is, say, 1000. I guess that, besides being more elegant,

        case MyAction of
          ACTION1: {code};
          ACTION2: {code};
          ...
          ACTIONn: {code};
        end;

    is much faster than

        if MyAction = ACTION1 then
          // code
        else if MyAction = ACTION2 then
          // code
        ...
        else if MyAction = ACTIONn then
          // code;

    I guess that the if variant takes O(n) time to complete (i.e. to find the right action) when the matching ACTIONi has a high value of i, whereas the case variant takes much less time (O(1)?). Am I correct that case is much faster? Am I correct that the time required to find the right action with case is actually independent of n? I.e. is it true that it does not really take any longer to check a million cases than to check 10 cases? How, exactly, does this work?
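
    Not Delphi, but a minimal micro-benchmark sketch in Java (made-up constants and workloads) shows the shape of the experiment: a dense switch, which compilers typically turn into a jump table, versus an equivalent if-else chain. A Delphi case over dense constants is usually compiled the same way, so the idea carries over, though absolute numbers will not, and with only four arms the gap is small; it grows with n.

        // Sketch: compares dispatch through a dense switch (typically a jump
        // table) with an equivalent if-else chain. Constants are hypothetical.
        public class DispatchBench {
            static int viaSwitch(int action) {
                switch (action) {
                    case 0: return 10;
                    case 1: return 20;
                    case 2: return 30;
                    case 3: return 40;
                    default: return -1;
                }
            }

            static int viaIfChain(int action) {
                if (action == 0) return 10;
                else if (action == 1) return 20;
                else if (action == 2) return 30;
                else if (action == 3) return 40;
                else return -1;
            }

            public static void main(String[] args) {
                int sum = 0; // accumulated so the JIT cannot discard the calls
                long t0 = System.nanoTime();
                for (int i = 0; i < 100000000; i++) sum += viaSwitch(i & 3);
                long t1 = System.nanoTime();
                for (int i = 0; i < 100000000; i++) sum += viaIfChain(i & 3);
                long t2 = System.nanoTime();
                System.out.println("switch: " + (t1 - t0) / 1e6 + " ms, if-chain: "
                        + (t2 - t1) / 1e6 + " ms (sum=" + sum + ")");
            }
        }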

    Read the article

  • Javascript fine grain performance tweaking

    - by thermal7
    I have been writing my first jQuery plugin and am struggling to find a way to time how long different pieces of code take to run. I can use Firebug and console.time/profile. However, because my code executes so fast, I get no results with profile, and time just spits out 0ms. (http://stackoverflow.com/questions/2690697/firebug-profiling-issue-no-activity-to-profile/2690846#2690846) Is there a way to get timings at a finer granularity than milliseconds in JavaScript?
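
    The usual workaround, in any language, is to run the snippet many times in a loop and divide the elapsed time by the iteration count. A minimal sketch of the idea, written here in Java for concreteness, with a hypothetical doWork() standing in for the code under test:

        public class MicroTimer {
            static double sink = 0; // accumulated so the JIT cannot discard the work

            // hypothetical stand-in for the code under test
            static void doWork(int i) {
                sink += Math.sqrt(i);
            }

            public static void main(String[] args) {
                final int iterations = 1000000;
                long start = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    doWork(i);
                }
                long elapsed = System.nanoTime() - start;
                // average cost per call, far below one-millisecond resolution
                System.out.printf("%.1f ns per call (sink=%f)%n",
                        (double) elapsed / iterations, sink);
            }
        }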

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went pretty well during testing; query execution time was perfect. But today, when the changes were transferred to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, so the server can't keep up and everything is extremely slow. The table that stores the data has approximately 2 million records, and each select looks like this:

        SELECT *
        FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    That is, 3 loan types and a price that needs to fall between CENA_OD and CENA_DO, so 3 rows are returned. But since I need to display 10 products per page, I have to run it through a modified select using OR, since I didn't find any other solution. I have asked about it here, but got no answer. As mentioned in the referenced post, this has to be done separately since there is no column that could be used in a join (except, of course, price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are each indexed:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Selecting all loan types and filtering them through PHP afterwards doesn't work well either, since each type has over 50k records and that select takes too much time as well... Any ideas about improving the speed are appreciated.

    Read the article

  • mysql statement with nested SELECT - how to improve performance

    - by ernie
    This statement appears inefficient because only one out of 10 header records is selected and only 1 in 100 entries has comments. What can I do to improve it?

        $query = "SELECT A, B, C,
                    (SELECT COUNT(*) FROM comments
                      WHERE comments.nid = header_file.nid) AS my_comment_count
                  FROM header_file
                  WHERE A = 'admin'";

    edit: I want header records even if no comments are found.

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with floats. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be ensured by simply casting the float to a short, this fails miserably in Java, since at the bytecode level it results in F2I followed by I2S. So instead of a simple:

        int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

        int sample = (int) floatVal;
        if (sample > 32767) {
            sample = 32767;
        } else if (sample < -32768) {
            sample = -32768;
        }

    Is there a faster way to do this? (About 6% of the total runtime seems to be spent on casting; while 6% may not seem like much at first glance, it is astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.) EDIT: The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200 MB of raw audio data). The 6% figure comes from the runtime difference when the cast assignment int sample = (int) floatVal was replaced by assigning the loop index to sample. EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).
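
    One alternative worth measuring is a clamp built from Math.min/Math.max instead of explicit branches; in Java a float-to-int cast already saturates at Integer.MIN_VALUE/MAX_VALUE, so the result is correct even for wildly out-of-range floats. A minimal sketch; whether it actually beats the branching version depends on the JIT and the data, so measure both:

        // Sketch: clamp a float into the signed 16-bit range using
        // Math.min/Math.max rather than explicit branches.
        public class Clamp {
            static int clampToShortRange(float floatVal) {
                // (int) floatVal saturates for out-of-range values in Java
                return Math.max(-32768, Math.min(32767, (int) floatVal));
            }

            public static void main(String[] args) {
                System.out.println(clampToShortRange(40000.0f));  // 32767
                System.out.println(clampToShortRange(-40000.0f)); // -32768
                System.out.println(clampToShortRange(1234.5f));   // 1234
            }
        }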

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins: sis_login (administrators), tb_rb_estrutura (coordinators), tb_usuario (clients). I created a VIEW to unite all these users, separating them by level, as follows:

        create view `login_names` as
        select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name`
          from `dados`.`sis_login` `n1`
        union all
        select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name`
          from `tb_rb_estrutura` `n2`
        union all
        select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name`
          from `tb_usuario` `n3`;

    So up to three repeated ids can occur for different users, which is why I separated them by level. This VIEW exists only to return a user's name, given his id and level. Considering there are about 500,000 registered users, this view takes about 1 second to load. That is already too much time, and it becomes a serious problem when I need to return the latest posts in the forums of my website. The forum tables store the user id and level, which I then look up in this VIEW to get the name. I have 18 forums registered. When I run the query, it takes one second per forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`name`
        from (
            select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level`
              from `tb_forum_topics` `t`
            union all
            select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level`
              from `tb_forum_answers` `a`
        ) `x`
        left outer join `login_names` `l`
          on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    EXPLAIN output (columns: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra):

        1     PRIMARY   <derived2>    ALL  NULL NULL NULL NULL       6  Using temporary; Using filesort
        1     PRIMARY   <derived4>    ALL  NULL NULL NULL NULL  530415
        4     DERIVED   n1            ALL  NULL NULL NULL NULL     114
        5     UNION     n2            ALL  NULL NULL NULL NULL       2
        6     UNION     n3            ALL  NULL NULL NULL NULL  530299
        NULL  UNION RESULT            ALL  NULL NULL NULL NULL    NULL
        2     DERIVED   t             ALL  NULL NULL NULL NULL       3
        3     UNION     r             ALL  NULL NULL NULL NULL       3
        NULL  UNION RESULT            ALL  NULL NULL NULL NULL    NULL

    Can somebody help me or give a suggestion?

    Read the article

  • Cassandra performance slow down with counter column

    - by tubcvt
    I have a cluster (4 nodes), and each node has 16 cores and 24 GB of RAM:

        192.168.23.114  datacenter1  rack1  Up  Normal  44.48 GB  25.00%
        192.168.23.115  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.116  datacenter1  rack1  Up  Normal  44.51 GB  25.00%
        192.168.23.117  datacenter1  rack1  Up  Normal  44.51 GB  25.00%

    We use about 10 column families (counter columns) to build some system statistics reports. The problem is that when I set the replication_factor of this keyspace (which contains the 10 counter column families) from 1 to 2, the CPU usage on every node increases from 10% (with replication factor 1) to 90%. Who can help me work around this? Why do counter columns consume so much CPU time? Thanks all.

    Read the article

  • Java Collection performance question

    - by Shervin
    I have created a method that takes two Collection<String> instances as input and copies one to the other. However, I am not sure if I should check whether the collections contain the same elements before I start copying, or if I should just copy regardless. This is the method:

        /**
         * Copies from one collection to the other. Does not allow the empty string.
         * Removes duplicates. Clears the dest collection first.
         * @param target
         * @param dest
         */
        public static void copyStringCollectionAndRemoveDuplicates(
                Collection<String> target, Collection<String> dest) {
            if (target == null || dest == null)
                return;

            // Is this faster to do? Or should I just comment this block out?
            if (target.containsAll(dest))
                return;

            dest.clear();
            Set<String> uniqueSet = new LinkedHashSet<String>(target.size());
            for (String f : target)
                if (!"".equals(f))
                    uniqueSet.add(f);
            dest.addAll(uniqueSet);
        }

    Maybe it is faster to just remove the if (target.containsAll(dest)) return; line, because this method will iterate over the entire collection anyway.
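
    For what it's worth, containsAll on list-backed collections is itself O(n*m) (a linear scan per element), so the pre-check can easily cost more than the copy it tries to avoid. A minimal sketch of the method without the check; note it always rewrites dest, which is a slight behavior change when the collections already match:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Collection;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Set;

        public class CollectionCopy {
            // Sketch: same copy logic, without the containsAll() pre-check.
            public static void copyUnique(Collection<String> target, Collection<String> dest) {
                if (target == null || dest == null) return;
                Set<String> unique = new LinkedHashSet<String>(target.size());
                for (String s : target) {
                    if (!"".equals(s)) unique.add(s); // drop empty strings
                }
                dest.clear();
                dest.addAll(unique);
            }

            public static void main(String[] args) {
                List<String> src = Arrays.asList("a", "", "b", "a");
                List<String> dst = new ArrayList<String>();
                copyUnique(src, dst);
                System.out.println(dst); // [a, b]
            }
        }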

    Read the article

  • What performance overhead do IoC containers involve?

    - by Sosh
    Hi, loose coupling is wonderful of course, but I have often wondered what overhead wiring things up dynamically with an IoC container (for example, Castle Windsor) has over a tightly coupled system. I know that a detailed answer would depend on what the IoC container is being used for, but I'm really just trying to get a feel for the magnitude of the overhead involved in the IoC work. Does anyone have any stats or other resources regarding this? Thanks

    Read the article

  • Performance of C# method polymorphism with generics

    - by zildjohn01
    I noticed in C#, unlike C++, you can combine virtual and generic methods. For example:

        using System.Diagnostics;

        class Base {
            public virtual void Concrete() { Debug.WriteLine("base concrete"); }
            public virtual void Generic<T>() { Debug.WriteLine("base generic"); }
        }

        class Derived : Base {
            public override void Concrete() { Debug.WriteLine("derived concrete"); }
            public override void Generic<T>() { Debug.WriteLine("derived generic"); }
        }

        class App {
            static void Main() {
                Base x = new Derived();
                x.Concrete();
                x.Generic<PerformanceCounter>();
            }
        }

    Given that any number of versions of Generic<T> could be instantiated, it doesn't look like the standard vtbl approach could be used to resolve method calls, and in fact it's not. Here's the generated code:

        x.Concrete();
            mov ecx, dword ptr [ebp-8]
            mov eax, dword ptr [ecx]
            call dword ptr [eax+38h]

        x.Generic<PerformanceCounter>();
            push 989A38h
            mov ecx, dword ptr [ebp-8]
            mov edx, 989914h
            call 76A874F1
            mov dword ptr [ebp-4], eax
            mov ecx, dword ptr [ebp-8]
            call dword ptr [ebp-4]

    The extra code appears to be looking up a dynamic vtbl according to the generic parameters, and then calling into it. Has anyone written about the specifics of this implementation? How well does it perform compared to the non-generic case?

    Read the article

  • Predicting performance for an iPhone/iPod Touch App

    - by Avizz
    I don't have an iPhone Developer Program account yet and will be getting one in the next couple of days. Can Instruments be used with the simulator to give a rough estimate of how well my app may perform? Using Instruments I checked and fixed all the leaks it was detecting, and it appears that my memory usage maxes out at about 5.77 MB. Are there any other tests I could perform with Instruments to judge how well my app would perform? I realize there is no way other than the actual device to get a definite answer, but it would be nice to get an estimate.

    Read the article

  • Visual Studio 2008 awful performance

    - by Nima
    Hi, I have ported a piece of C++ code that works out of core from Linux (Ubuntu) to Windows (Vista), and I realized that it runs about 50 times slower when built with VS2008! I removed all the out-of-core parts, and now I just have a piece of code that has nothing to do with the hard disk. I set the compiler to O2 in Project Properties, but it is still about 10 times slower than g++ on Linux! Does anybody have an idea why it is this much slower under VS? I really appreciate any kind of hint! Thanks,

    Read the article

  • Performance Problem with Clojure Array

    - by dbyrne
    This piece of code is very slow. Execution from the slime-repl on my netbook takes a couple of minutes. Am I doing something wrong?

        (def test-array (make-array Integer/TYPE 400 400 3))

        (doseq [x (range 400), y (range 400), z (range 3)]
          (aset test-array x y z 0))
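
    For a sense of scale (and assuming the usual culprit, reflective array access in aset on an array the compiler cannot type): the equivalent loop over a plain Java int[][][] finishes in a few milliseconds, which is the baseline the Clojure version is implicitly compared against. A sketch:

        // Sketch: the same 400x400x3 fill as a plain Java nested loop.
        public class ArrayFill {
            public static void main(String[] args) {
                int[][][] testArray = new int[400][400][3];
                long start = System.nanoTime();
                for (int x = 0; x < 400; x++)
                    for (int y = 0; y < 400; y++)
                        for (int z = 0; z < 3; z++)
                            testArray[x][y][z] = 0;
                long elapsed = System.nanoTime() - start;
                // print an element so the JIT cannot discard the loop
                System.out.println(elapsed / 1e6 + " ms, sample="
                        + testArray[399][399][2]);
            }
        }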

    Read the article

  • Performance with timestamp conditions

    - by Tim Whitlock
    Which of the following is faster, or are they equivalent? (The goal is grabbing the most recent records from a TIMESTAMP column.)

        SELECT UNIX_TIMESTAMP(`modified`) stamp
        FROM `some_table`
        HAVING stamp > 127068799
        ORDER BY stamp DESC

    or

        SELECT UNIX_TIMESTAMP(`modified`) stamp
        FROM `some_table`
        WHERE UNIX_TIMESTAMP(`modified`) > 127068799
        ORDER BY `modified` DESC

    or even another combination?

    Read the article

  • weird performance in C++ (VC 2010)

    - by raicuandi
    Hello, I have this loop written in C++ that, compiled with MSVC 2010, takes a long time to run (300 ms):

        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                if (buf[i*w + j] > 0) {
                    const int sy = max(0, i - hr);
                    const int ey = min(h, i + hr + 1);
                    const int sx = max(0, j - hr);
                    const int ex = min(w, j + hr + 1);
                    float val = 0;
                    for (int k = sy; k < ey; k++) {
                        for (int m = sx; m < ex; m++) {
                            val += original[k*w + m] * ds[k - i + hr][m - j + hr];
                        }
                    }
                    heat_map[i*w + j] = val;
                }
            }
        }

    It seemed a bit strange to me, so I did some tests, then changed a few bits to inline assembly (specifically, the code that sums val):

        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                if (buf[i*w + j] > 0) {
                    const int sy = max(0, i - hr);
                    const int ey = min(h, i + hr + 1);
                    const int sx = max(0, j - hr);
                    const int ex = min(w, j + hr + 1);
                    __asm { fldz }
                    for (int k = sy; k < ey; k++) {
                        for (int m = sx; m < ex; m++) {
                            float val = original[k*w + m] * ds[k - i + hr][m - j + hr];
                            __asm {
                                fld val
                                fadd
                            }
                        }
                    }
                    float val1;
                    __asm { fstp val1 }
                    heat_map[i*w + j] = val1;
                }
            }
        }

    Now it runs in half the time, 150 ms. It does exactly the same thing, but why is it twice as quick? In both cases it was built in Release mode with optimizations on. Am I doing anything wrong in my original C++ code?

    Read the article

  • Usual hibernate performance pitfall

    - by Antoine Claval
    Hi, we have just finished profiling our application (it is beginning to get slow). The problem seems to be "in Hibernate". It is a legacy mapping, which works and does its job, and the relational schema behind it is OK too. But some requests are slow as hell. So we would appreciate any input on common and usual mistakes made with Hibernate that end up causing slow responses. Example: Eager in place of Lazy fetching can change the response time dramatically...
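
    To make the Eager-versus-Lazy point concrete, here is a generic JPA-style sketch with hypothetical Order/Customer entities (not the poster's legacy mapping). EAGER on a @ManyToOne loads the association on every parent load; LAZY defers it until first access:

        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;

        @Entity
        public class Order {
            @Id
            private Long id;

            // FetchType.EAGER would fetch the customer on every Order load;
            // LAZY defers the extra query until getCustomer() is called.
            @ManyToOne(fetch = FetchType.LAZY)
            private Customer customer;

            public Customer getCustomer() { return customer; }
        }

        @Entity
        class Customer {
            @Id
            private Long id;
        }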

    Read the article

  • Improve SQL Server 2005 Query Performance

    - by user366810
    I have a course search engine, and when I try to do a search, it takes too long to show the results. You can try a search here: http://76.12.87.164/cpd/testperformance.cfm At that page you can also see the database tables and indexes, if any. I'm not using stored procedures - the queries are inline, using ColdFusion. I think I need to create some indexes, but I'm not sure what kind (clustered, non-clustered) or on which columns. Thanks

    Read the article

  • Poor performance / speed of regex with lookahead

    - by Hugo Zaragoza
    I have been observing extremely slow execution times with expressions that use several lookaheads. I suppose this is due to the underlying data structures, but it seems pretty extreme, and I wonder if I am doing something wrong or if there are known workarounds. The problem is determining whether a set of words is present in a string, in any order. For example, we want to find out if two terms "term1" AND "term2" are somewhere in a string. I do this with the expression:

        (?=.*\bterm1\b)(?=.*\bterm2\b)

    But what I observe is that this is an order of magnitude slower than first checking just \bterm1\b and only then \bterm2\b. This seems to indicate that I should use an array of patterns instead of a single pattern with lookaheads... is this right? It seems wrong... Here is some example test code and the resulting times:

        public static void speedLookAhead() {
            Matcher m, m1, m2;
            boolean find;
            int its = 1000000;

            // create long non-matching string
            char[] str = new char[2000];
            for (int i = 0; i < str.length; i++) {
                str[i] = 'x';
            }
            // note: new String(str), not str.toString(), builds the test string
            String test = new String(str);

            // First method: use one expression with lookaheads
            m = Pattern.compile("(?=.*\\bterm1\\b)(?=.*\\bterm2\\b)").matcher(test);
            long time = System.currentTimeMillis();
            for (int i = 0; i < its; i++) {
                m.reset(test);
                find = m.find();
            }
            time = System.currentTimeMillis() - time;
            System.out.println(time);

            // Second method: use two expressions and AND the results
            m1 = Pattern.compile("\\bterm1\\b").matcher(test);
            m2 = Pattern.compile("\\bterm2\\b").matcher(test);
            time = System.currentTimeMillis();
            for (int i = 0; i < its; i++) {
                m1.reset(test);
                m2.reset(test);
                find = m1.find() && m2.find();
            }
            time = System.currentTimeMillis() - time;
            System.out.println(time);
        }

    This outputs on my computer:

        1754
        150

    Read the article

  • SQL Native Client 10 Performance miserable (due to server-side cursors)

    - by namezero
    We have an application that uses ODBC via CDatabase/CRecordset in MFC (VS2010). We have two backends implemented, MSSQL and MySQL. Now, when we use MSSQL (with Native Client 10.0), retrieving records with SELECT is dramatically slow over slow links (VPN, for example). The MySQL ODBC driver does not exhibit this nasty behavior. For example:

        CRecordset r(&m_db);
        r.Open(CRecordset::snapshot,
               L"SELECT a.something, b.sthelse FROM TableA AS a "
               L"LEFT JOIN TableB AS b ON a.ID = b.Ref");
        r.MoveFirst();
        while (!r.IsEOF()) {
            // Retrieve
            CString strData;
            r.GetFieldValue(L"a.something", strData);
            r.MoveNext();
        }

    Now, with the MySQL driver, everything runs as it should: the query is returned, and everything is lightning fast. However, with the MSSQL Native Client, things slow down, because on every MoveNext() the driver communicates with the server. I think it is due to server-side cursors, but I didn't find a way to disable them. I have tried using:

        ::SQLSetConnectAttr(m_db.m_hdbc, SQL_ATTR_ODBC_CURSORS,
                            SQL_CUR_USE_ODBC, SQL_IS_INTEGER);

    But this didn't help either. There are still long-running execs of sp_cursorfetch() et al. in SQL Profiler. I have also tried a small reference project with SQLAPI and bulk fetch, but that hangs in FetchNext() for a long time too (even if there is only one record in the result set). This, however, only happens on queries with LEFT JOINs, table-valued functions, etc. Note that the query itself doesn't take that long: executing the same SQL via SQL Studio over the same connection returns in a reasonable time. Question 1: Is it possible to somehow get the Native Client to "cache" all results locally (use local cursors) in a similar fashion to what the MySQL driver seems to do? Maybe this is the wrong approach altogether, but I'm not sure how else to do this. All we want is to retrieve all data at once from a SELECT, then never talk to the server again until the next query. We don't care about recordset updates, deletes, etc., or any of that nonsense. We only want to retrieve data: we take the recordset, get all the data, and delete it. Question 2: Is there a more efficient way to just retrieve data in MFC with ODBC?

    Read the article

  • Java: Calculate distance between a large number of locations and performance

    - by Ally
    I'm creating an application that will tell a user how far away a large number of points are from their current position. Each point has a longitude and latitude. I've read over this article http://www.movable-type.co.uk/scripts/latlong.html and seen this post http://stackoverflow.com/questions/837872/calculate-distance-in-meters-when-you-know-longitude-and-latitude-in-java There are a number of calculations (50-200) that need to be carried out. If speed is more important than the accuracy of these calculations, which one is best?
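
    If accuracy can be traded for speed over short distances, the equirectangular approximation from the movable-type page avoids most of the trigonometry of the haversine formula (one cosine per pair instead of several trig calls). A minimal sketch; the method and constant names are mine:

        public class FastDistance {
            static final double EARTH_RADIUS_M = 6371000.0;

            // Equirectangular approximation: treat the two points as lying on
            // a flat grid scaled by cos(mean latitude). Good enough for short
            // distances, much cheaper than the haversine formula.
            static double approxDistanceMeters(double lat1, double lon1,
                                               double lat2, double lon2) {
                double x = Math.toRadians(lon2 - lon1)
                         * Math.cos(Math.toRadians((lat1 + lat2) / 2));
                double y = Math.toRadians(lat2 - lat1);
                return Math.sqrt(x * x + y * y) * EARTH_RADIUS_M;
            }

            public static void main(String[] args) {
                // Roughly London -> Paris (hypothetical sample points).
                System.out.println(approxDistanceMeters(51.5074, -0.1278,
                                                        48.8566, 2.3522));
            }
        }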

    Read the article

  • FOR loop performance in Javascript

    - by AndrewMcLagan
    My research leads me to believe that for loops are the fastest iteration construct in the JavaScript language. I was also thinking that hoisting the length into a variable for the loop condition would be faster. To make it clearer, which of the following do you think would be faster?

    Example ONE

        for (var i = 0; i < myLargeArray.length; i++) {
            console.log(myLargeArray[i]);
        }

    Example TWO

        var count = myLargeArray.length;
        for (var i = 0; i < count; i++) {
            console.log(myLargeArray[i]);
        }

    My logic is that in Example ONE, accessing the length of myLargeArray on each iteration is more computationally expensive than accessing a simple integer value as in Example TWO.

    Read the article
