Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • slow mysql count because of subselect

    - by frgt10
    How can I make this SELECT statement faster? The first LEFT JOIN with its subselect is what slows it down:

        SELECT COUNT(DISTINCT w1.id) AS AMOUNT
        FROM tblWerbemittel w1
        JOIN tblVorgang v1 ON w1.object_group = v1.werbemittel_id
        INNER JOIN (
            SELECT wmax.object_group, MAX(wmax.object_revision) wmaxobjrev
            FROM tblWerbemittel wmax
            GROUP BY wmax.object_group
        ) AS wmaxselect
            ON w1.object_group = wmaxselect.object_group
           AND w1.object_revision = wmaxselect.wmaxobjrev
        LEFT JOIN (
            SELECT vmax.object_group, MAX(vmax.object_revision) vmaxobjrev
            FROM tblVorgang vmax
            GROUP BY vmax.object_group
        ) AS vmaxselect
            ON v1.object_group = vmaxselect.object_group
           AND v1.object_revision = vmaxselect.vmaxobjrev
        LEFT JOIN tblWerbemittel_has_tblAngebot wha
            ON wha.werbemittel_id = w1.object_group
        LEFT JOIN tblAngebot ta ON ta.id = wha.angebot_id
        LEFT JOIN tblLieferanten tl
            ON tl.id = ta.lieferant_id
           AND wha.zuschlag = (SELECT MAX(zuschlag)
                               FROM tblWerbemittel_has_tblAngebot
                               WHERE werbemittel_id = w1.object_group)
        WHERE w1.flags = 0 AND v1.flags = 0;

        +--------+
        | AMOUNT |
        +--------+
        |   1982 |
        +--------+
        1 row in set (1.30 sec)

    Some indexes have already been set, and as EXPLAIN shows, they are used:

        +----+--------------------+-------------------------------+--------+----------------------------------------+----------------------+---------+-----------------------------------------------+------+----------------------------------------------+
        | id | select_type        | table                         | type   | possible_keys                          | key                  | key_len | ref                                           | rows | Extra                                        |
        +----+--------------------+-------------------------------+--------+----------------------------------------+----------------------+---------+-----------------------------------------------+------+----------------------------------------------+
        |  1 | PRIMARY            | <derived2>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 2072 |                                              |
        |  1 | PRIMARY            | v1                            | ref    | werbemittel_group,werbemittel_id_index | werbemittel_group    | 4       | wmaxselect.object_group                       |    2 | Using where                                  |
        |  1 | PRIMARY            | <derived3>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 3376 |                                              |
        |  1 | PRIMARY            | w1                            | eq_ref | object_revision,or_og_index            | object_revision      | 8       | wmaxselect.wmaxobjrev,wmaxselect.object_group |    1 | Using where                                  |
        |  1 | PRIMARY            | wha                           | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           |    1 |                                              |
        |  1 | PRIMARY            | ta                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.wha.angebot_id                            |    1 |                                              |
        |  1 | PRIMARY            | tl                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.ta.lieferant_id                           |    1 | Using index                                  |
        |  4 | DEPENDENT SUBQUERY | tblWerbemittel_has_tblAngebot | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           |    1 |                                              |
        |  3 | DERIVED            | vmax                          | index  | NULL                                   | object_revision_uq   | 8       | NULL                                          | 4668 | Using index; Using temporary; Using filesort |
        |  2 | DERIVED            | wmax                          | range  | NULL                                   | or_og_index          | 4       | NULL                                          | 2168 | Using index for group-by                     |
        +----+--------------------+-------------------------------+--------+----------------------------------------+----------------------+---------+-----------------------------------------------+------+----------------------------------------------+
        10 rows in set (0.01 sec)

    The main problem, given that the statement above takes about 2 seconds, seems to be the dependent subselect, where no index can be used. How can I write the statement to be even faster? Thanks for help. MT
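
    One standard rewrite is to compute MAX(zuschlag) once per werbemittel_id in a grouped derived table, which is materialized a single time, instead of re-running the dependent subquery for every row. A hedged sketch of just the changed joins (the alias zmax and column max_zuschlag are placeholders):

        -- replaces the dependent subquery with a derived table
        LEFT JOIN (
            SELECT werbemittel_id, MAX(zuschlag) AS max_zuschlag
            FROM tblWerbemittel_has_tblAngebot
            GROUP BY werbemittel_id
        ) AS zmax ON zmax.werbemittel_id = w1.object_group
        -- and the tblLieferanten join condition becomes:
        --   AND wha.zuschlag = zmax.max_zuschlag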

  • 2k rows update is very slow in MySQL

    - by sergeik
    Hi all, I have 2 tables:

        1. news (450k rows)
        2. news_tags (3m rows)

    There are triggers on news table updates which update listings. This SQL takes too long to execute:

        UPDATE news
        SET news_category = some_number
        WHERE news_id IN (SELECT news_id
                          FROM news_tags
                          WHERE tag_id = some_number);  -- about 3k rows

    How can I make it faster? Thanks in advance, S.
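
    A common rewrite for MySQL is the multi-table UPDATE form: older MySQL versions often execute IN (SELECT ...) as a dependent subquery re-evaluated per row, while the join form is evaluated once. A hedged sketch of the same update:

        UPDATE news n
        JOIN news_tags nt ON nt.news_id = n.news_id
        SET n.news_category = some_number
        WHERE nt.tag_id = some_number;

    Note that the triggers still fire once per updated row, so if the listing maintenance they do is heavy, that cost remains either way.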

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError = NULL WHERE LastError IS NOT NULL;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster.

    The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages USING btree (missingings)
            WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
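
    In PostgreSQL, an UPDATE writes a new version of every affected row (plus, in many cases, new index entries), so clearing one small column still rewrites 490k wide rows; that is consistent with the runtime here. A hedged sketch of a batched approach, with an assumed index name:

        -- a partial index keeps each batch's candidate lookup cheap
        CREATE INDEX idx_pages_lasterror ON indexer.pages (id)
            WHERE lasterror IS NOT NULL;

        -- run repeatedly until it reports 0 rows updated
        UPDATE indexer.pages
        SET lasterror = NULL
        WHERE id IN (SELECT id FROM indexer.pages
                     WHERE lasterror IS NOT NULL
                     LIMIT 10000);

    Batching doesn't reduce the total work, but it bounds each transaction, lets autovacuum reclaim space as you go, and avoids one 93-minute statement holding locks the whole time.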

  • postgres too slow

    - by Killercode
    Hi, I'm doing massive tests on a Postgres database. I have 2 tables: I inserted 40,000,000 records into table1 and 80,000,000 into table2, and after this I deleted all those records. Now if I do SELECT * FROM table1 it takes 199,000 ms. I can't understand what's happening. Can anyone help me with this?
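
    A likely explanation: in PostgreSQL, DELETE only marks rows dead, so a sequential scan still has to read the whole 40M-row heap until the space is reclaimed. A hedged sketch of the usual remedies:

        -- make dead space reusable and refresh planner statistics
        VACUUM ANALYZE table1;

        -- shrink the file on disk (takes an exclusive lock and rewrites the table)
        VACUUM FULL table1;

        -- for scratch test data, truncation is near-instant
        TRUNCATE table1;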

  • PHP – Slow String Manipulation

    - by Simon Roberts
    I have some very large data files, and for business reasons I have to do extensive string manipulation (replacing characters and strings). This is unavoidable. The number of replacements runs into hundreds of thousands, and it's taking longer than I would like. PHP is generally very quick, but I'm doing so many of these string manipulations that it's slowing down, and script execution is running into minutes. This is a pain because the script is run frequently. I've done some testing and found that str_replace is fastest, followed by strstr, followed by preg_replace. I've also tried individual str_replace statements as well as constructing arrays of patterns and replacements. I'm toying with the idea of isolating the string-manipulation step and writing it in a different language, but I don't want to invest time in that option only to find that the improvements are negligible. Plus, I only know Perl, PHP and COBOL, so for any other language I would have to learn it first. I'm wondering how other people have approached similar problems? I have searched, and I don't believe this duplicates any existing questions.
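
    One option worth benchmarking before switching languages is strtr(), which applies an array of replacement pairs in a single pass over the string (longest keys matched first), whereas looping str_replace rescans the whole string once per pattern. A hedged sketch; the pairs are hypothetical placeholders:

        <?php
        // strtr with an array does all replacements in one pass over $subject
        $pairs = array(
            'OLD_TOKEN_A' => 'new_a',
            'OLD_TOKEN_B' => 'new_b',
        );
        $result = strtr($subject, $pairs);

    Processing the files in chunks with fgets() rather than loading them whole can also keep memory pressure from dominating the runtime.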

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement still takes about a minute to execute. Isn't it true that this query only has to search through 449 rows? Does anyone have any idea what could be slowing it down so much?
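
    The row counts in EXPLAIN are per-step estimates, not a total: the 446 line rows each trigger two eq_ref primary-key lookups, and on a cold cache those are random I/Os, on top of the temporary table and filesort. If the line rows are wide, one hedged option is a covering index so the join keys and the summed column come straight out of the index (the index name and column order are assumptions):

        ALTER TABLE line
            ADD INDEX idx_vend_slip_subtotal (vend_id, slip_id, line_subtotal);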

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great, but it seems that the first request always takes up to 3 seconds, while subsequent requests are much faster (<1s). I activated access logs in lighttpd to track the issue, but I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21):

        2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi
        2012-12-01 10:04:17: (response.c.470) -- before doc_root
        2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.473) Path :
        2012-12-01 10:04:17: (response.c.521) -- after doc_root
        2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:17: (response.c.541) -- logical -> physical
        2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:21: (response.c.128) Response-Header:
        HTTP/1.1 200 OK
        Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT
        Expires: Sat, 01 Dec 2012 09:14:21 GMT
        Content-Type: text/html; charset=utf-8
        Cache-Control: max-age=600
        Transfer-Encoding: chunked
        Date: Sat, 01 Dec 2012 09:04:21 GMT
        Server: lighttpd/1.4.28

    I guess that if there is a problem, it's with my configuration. So here is the way I launch my django app:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=3033

    And here is my lighttpd conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_accesslog",
        )
        server.document-root = "/var/www"
        server.upload-dirs   = ( "/var/cache/lighttpd/uploads" )
        server.errorlog      = "/var/log/lighttpd/error.log"
        server.pid-file      = "/var/run/lighttpd.pid"
        server.username      = "www-data"
        server.groupname     = "www-data"
        accesslog.filename   = "/var/log/lighttpd/access.log"
        debug.log-request-header           = "enable"
        debug.log-response-header          = "enable"
        debug.log-file-not-found           = "enable"
        debug.log-request-handling         = "enable"
        debug.log-timeouts                 = "enable"
        debug.log-ssl-noise                = "enable"
        debug.log-condition-cache-handling = "enable"
        debug.log-condition-handling       = "enable"
        fastcgi.server = (
            "/finderauto.fcgi" => (
                "main" => (
                    # Use host / port instead of socket for TCP fastcgi
                    "host" => "127.0.0.1",
                    "port" => 3033,
                    #"socket" => "/home/finderadmin/finderauto.sock",
                    "check-local" => "disable",
                    "fix-root-scriptname" => "enable",
                )
            ),
        )
        alias.url = (
            "/media" => "/home/user/django/contrib/admin/media/",
        )
        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/finderauto.fcgi$1",
        )
        index-file.names = ( "index.php", "index.html", "index.htm",
                             "default.htm", "index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing   = "enable"
        compress.cache-dir   = "/var/cache/lighttpd/compress/"
        compress.filetype    = ( "application/x-javascript", "text/css",
                                 "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!
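
    The gap sits between lighttpd handing the request to mod_fastcgi and the Django process answering, which usually points at the FastCGI worker rather than lighttpd: a backend that has gone idle or been swapped out pays imports and setup on the first hit. A hedged variant using prefork with spare workers kept warm (the counts are guesses to tune):

        python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 \
            minspare=2 maxspare=4 maxchildren=8

    Warming the site with a periodic local request (e.g. a cron'd curl against 127.0.0.1) is a cruder workaround that also tells you whether idle spin-down is really the cause.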

  • Does jquery live slow down websites?

    - by chobo2
    Hi, I have a problem. I am using jQuery UI tabs that load everything with AJAX. Right now, every time you click on a tab, a partial view is loaded into that tab. In this partial view there are javascript files that use jQuery to bind all the events needed in that tab, plus some jQuery plugins I am using. So every time the tab is loaded, all those scripts are loaded too. If the tab is clicked 10 times, the scripts are loaded 10 times, meaning each of my buttons now has 10 copies of the same handler, and one click on a button fires 10 events that all do the same thing. So I need to find a solution: either move all the script out onto the main page and use jquery.live, or something else. I tried jQuery caching for the UI tabs, but that won't work, since some changes in Tab A affect Tab B, meaning I need Tab B to be reloaded, but the scripts can't be reloaded or I run into the same problem as now.
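
    Moving the bindings to the main page and delegating them is the usual fix, and it answers the title question too: one delegated handler is dispatched at event time instead of many element-bound copies, so it is usually cheaper, not slower. A hedged sketch (the container and button selectors are assumptions):

        // bound once on page load; survives any number of AJAX tab reloads
        $("#tabs").on("click", ".my-button", function () {
            // handler runs exactly once per click
        });

        // on jQuery < 1.7 the delegated form was:
        // $(".my-button").live("click", function () { /* ... */ });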

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB, like this:

        var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList();

    I am then using LINQ to create a list of records that no longer exist in remoteData but do exist in localData, so that I can delete them from the local database, like this:

        var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList();

    I am then using LINQ to do the opposite of the above to add the new data to the local database, like this:

        var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList();

    Each collection holds about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using:

        internal class DataModel
        {
            public string Key1 { get; set; }
            public string Key2 { get; set; }
            public string Value1 { get; set; }
            public string Value2 { get; set; }
            public byte? Value3 { get; set; }
        }

    The comparer used to check for outdated records:

        class OutdatedDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                var e = string.Equals(x.Key1, y.Key1)
                    && string.Equals(x.Key2, y.Key2)
                    && (!string.Equals(x.Value1, y.Value1)
                        || !string.Equals(x.Value2, y.Value2)
                        || x.Value3 != y.Value3);
                return e;
            }

            public int GetHashCode(DataModel obj)
            {
                return 0;
            }
        }

    The comparer used to find old and new records:

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj)
            {
                return 0;
            }
        }
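
    The GetHashCode implementations that return 0 are almost certainly the whole problem: Intersect and Except build a hash set internally, and with every element in a single bucket they degrade to pairwise Equals calls, on the order of 70,000 squared comparisons. Hashing on the same keys Equals compares restores near-linear behaviour; a hedged sketch that works for both comparers here, since both require Key1/Key2 to match before Equals can return true:

        public int GetHashCode(DataModel obj)
        {
            unchecked
            {
                int hash = 17;
                hash = hash * 31 + (obj.Key1 ?? string.Empty).GetHashCode();
                hash = hash * 31 + (obj.Key2 ?? string.Empty).GetHashCode();
                return hash;
            }
        }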

  • Computer/application slow while accessing SQL Server through LAN, though the network is fine

    - by user614297
    Hi, I have a web project in ASP.NET on one of my computers, which are connected internally through a LAN. The database is in SQL Server on another computer. The LAN connection is fine, and there is no connection error in my code either. But sometimes, on this particular system, a page takes a long time to open, and sometimes it throws exceptions (not every time). What could the problem be, and how can it be solved? The same project runs well on the other computers, and my network seems OK, since I can access the computer hosting SQL Server.

  • Google Chrome hardware acceleration making game run slow

    - by powerc9000
    So I have been working on a game in HTML5 canvas and noticed that the game lags and performs much slower when hardware acceleration is turned on in Google Chrome than when it is turned off. From doing some profiling I see that the problem lies in drawImage, more specifically in drawing one canvas onto another, which I do a lot of. (My profiling screenshots, one with hardware acceleration on and one with it off, showed the difference there.) Is there something fundamental I am missing about drawing one canvas onto another? Why would the difference be that profound?

  • Database is very slow when creating an index

    - by kaliyaperumal M
    My database has around 25 crore (250 million) numbers. On a weekly basis I need to create an index and drop it again. Creating the index takes a long time to complete, and my log file also keeps growing; deleting numbers from that table also takes too much time (every week I have to delete 30 to 50 lakh (3 to 5 million) numbers and add 30 to 40 lakh new numbers). Can you please give me a proper solution?
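
    The question doesn't name the database, so the sketch below is generic and every name in it is a placeholder. The usual pattern for weekly bulk maintenance is: drop the index first, do the deletes and inserts in batches (keeping each transaction, and therefore the log growth, bounded), and rebuild the index once at the end:

        DROP INDEX idx_numbers_number;                -- stop per-row index maintenance

        DELETE FROM numbers WHERE week_tag = 'old';   -- run in batches of e.g. 100k rows
        INSERT INTO numbers (...) SELECT ...;         -- bulk-load the new numbers

        CREATE INDEX idx_numbers_number ON numbers (number);  -- one rebuild at the end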

  • SQL running slow when I execute it through Java

    - by user270885
    I am trying to execute a query against an Oracle DB. When I run the query through SQLTools it executes in 2 seconds, but when I run the same query through Java it takes more than a minute. I am using the hint /*+ordered use_nl (a, b)*/ and the ojdbc6.jar driver. Is it because of any JARs? Please help me figure out what's causing this.
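
    One frequent cause of exactly this gap is fetch size rather than the plan: GUI tools show only the first page of rows, while the Oracle JDBC driver's default fetch size of 10 rows costs Java a network round trip per 10 rows when it reads the whole result. A hedged sketch (500 is a guess to tune):

        // raise rows-per-round-trip before executing
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setFetchSize(500);
        ResultSet rs = ps.executeQuery();

    If that doesn't close the gap, compare the actual execution plans from both clients; a hint comment that gets mangled in the Java string is silently ignored by Oracle.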

  • Slow query execution time

    - by rotor
    select p.id, p.title, p.slug, p.content,
           (select url from gallery where postid = p.id limit 1) as url,
           t.name
    from posts as p
    inner join termrel as tr on (tr.object = p.id)
    inner join termtax as tx on (tx.id = tr.termtax_id)
    inner join terms as t on (t.id = tx.term_id)
    where tx.taxonomy_id = 3
      and p.post_status is null
    order by t.name asc

    This query takes about 0.2407s to execute. How can I make it faster?
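
    Two usual suspects here are the correlated subselect, which runs once per result row, and the filesort for ORDER BY t.name. The subselect can be folded into a join that is materialized once; a hedged sketch, assuming gallery.id can stand in for "any one row per post" (the aliases g and gal are placeholders):

        left join (
            select postid, min(id) as gid
            from gallery
            group by postid
        ) g on g.postid = p.id
        left join gallery gal on gal.id = g.gid
        -- then select gal.url instead of the correlated subselect

    Indexes on gallery(postid), termrel(object), and termtax(taxonomy_id) are also worth confirming with EXPLAIN.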

  • Slow performance of query

    - by user642378
    Hi, I asked about the performance of this query before and have tried to simplify it, but it still performs poorly. I am adding my query below; can you simplify it more effectively?

        select r.parent_itemid f_id,
               parent_item.name f_name,
               parent_item.typeid f_typeid,
               parent_item.ownerid f_ownerid,
               parent_item.created f_created,
               parent_item.modifiedby f_modifiedby,
               parent_item.modified f_modified,
               pt.name f_tname,
               child_item.id i_id,
               t.name i_tname,
               child_item.typeid i_typeid,
               child_item.name i_name,
               child_item.ownerid i_ownerid,
               child_item.created i_created,
               child_item.modifiedby i_modifiedby,
               child_item.modified i_modified,
               r.ordinal i_ordinal
        from item child_item, type t, relation r, item parent_item, type pt
        where r.child_itemid = child_item.id
          and t.id = child_item.typeid
          and parent_item.id = r.parent_itemid
          and pt.id = parent_item.typeid
          and parent_item.id in (
              select itemid from permission
              where itemid = parent_item.id
                and (holder_itemid in (10, 100) and level > 0)
          )
        order by r.parent_itemid, r.relation_typeid, r.ordinal

    Thank you, regards, jennie
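
    The IN subquery is correlated (it references parent_item.id inside itself), so it can run once per candidate parent row. Rewriting it as EXISTS states the same condition directly and usually optimizes at least as well; a hedged sketch of just that clause (the alias perm is a placeholder):

        and exists (
            select 1
            from permission perm
            where perm.itemid = parent_item.id
              and perm.holder_itemid in (10, 100)
              and perm.level > 0
        )

    An index on permission (itemid, holder_itemid, level) would let this probe be a single index lookup per parent row.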

  • Are there well-known examples of web products that were killed by slow service?

    - by Jeremy Wadhams
    It's a basic tenet of UX design that users prefer fast pages:

        http://www.useit.com/alertbox/response-times.html
        http://www.nytimes.com/2012/03/01/technology/impatient-web-users-flee-slow-loading-sites.html?pagewanted=all

    It's supposedly even baked into Google's ranking algorithm now: fast sites rank higher, all else being equal. But are there well-known examples of web services where the popular narrative is "it was great, but it was so slow people took their money elsewhere"? I can pretty easily think of example problems with scale (Twitter's fail whale) or reliability (Netflix and Pinterest outages caused by a single datacenter in a storm). But can (lack of) speed really kill?

  • Compiz runs almost at 100% and the system is slow, what can I do?

    - by Heiner Valverde
    My system became very slow all of a sudden. Yesterday Compiz was running extremely smoothly; today it started working very slowly and is slowing down the whole computer. What I've done so far: I resized my swap partition to 6 gigabytes (my computer has 3 gigabytes of RAM); before, it was 5.1 gigabytes, so I thought that was the reason, but apparently not. I also tested running only Metacity with metacity --replace, and likewise with Mutter. With Metacity it works really well, no problem, but with Mutter the computer works even slower than with Compiz. I am using the Nvidia driver version 173.14.28, and my X Server version, as reported by NVidia X Server Settings, is 11.0. The Linux kernel running on this computer is 2.6.35-25-generic and my Ubuntu version is 10.10. Any help will be appreciated.

  • How to debug slow session start of Gnome 3?

    - by user65521
    After upgrading from 11.10 to 12.04, the login process for GNOME 3 is extremely slow: it takes on the order of 60 seconds, where it took a few seconds before the upgrade (and the hard disk is an SSD!). Running top in a VT shows that gnome-shell produces about 90% CPU load while dbus-daemon takes roughly 10%. The moment the CPU load of gnome-shell drops to normal levels (around 2-3%) corresponds to the time the login process finishes and the desktop is displayed. Deactivating the four gnome-shell extensions I have installed (Alternative Status Menu, Quit Button, Remove Accessibility, system-monitor) has no effect on session startup time. Logging in to GNOME Classic does not show the slow session start, and the system logs show nothing suspicious. So, what is the best way to identify the underlying problem?
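
    Since the CPU time is going to gnome-shell itself, one hedged way to see where it is spent is to attach strace with timestamps during a fresh login and look for long gaps or hot loops in the trace (the output path is an arbitrary choice):

        # from a VT, right after entering the password at the GDM prompt
        strace -f -tt -o /tmp/gnome-shell.trace -p $(pidof gnome-shell)
        # afterwards, scan /tmp/gnome-shell.trace for jumps in the timestamps

    Testing a brand-new user account is also worth a try: if that account logs in fast, the culprit is per-user state carried over from 11.10 (old extension or theme caches) rather than the packages themselves.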

  • Ruby using the Gosu framework: why does it run slow the first time?

    - by Omega
    I'm creating a Ruby game using the Gosu framework. All good. Sometimes, when I run the game, it has some kind of slow startup, and it will probably stay rather slow during the whole game. So I close it and open it again, and it is very likely that it will start up quickly and the whole game will run smoothly and fast. Why is that? What is this phenomenon? Is it faster because some cache has been stored since the first run? (But why would a cache be kept? If the app dies, I would expect no references left at all...) Ruby, Windows 7.

  • best way of rendering many 3D models in three.js without slowing down the page?

    - by GDevLearner
    I am creating a 3D web game using the three.js library. It is a multi-player game where players appear as 3D human models, and I need to add a human 3D model for each player that enters the game. Additionally, I want to animate the humans while they walk. The problem is that adding and animating a separate 3D model for each player will slow down the game or maybe even crash the browser. Question: what is a better way of showing and animating the players' models that will not slow down the game?
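
    The standard mitigation is to load the model's geometry and materials once and share them across every player's mesh, since the memory and GPU upload cost are per-geometry, not per-mesh. A hedged sketch with a placeholder box standing in for the human model (a real model would be loaded once and its geometry reused the same way):

        // one shared geometry + material for all players (assumed placeholder shape)
        var playerGeometry = new THREE.BoxGeometry(1, 2, 1);
        var playerMaterial = new THREE.MeshLambertMaterial({ color: 0x8888ff });

        function addPlayer(scene, x, z) {
            // each player gets its own Mesh, but no new geometry is allocated
            var mesh = new THREE.Mesh(playerGeometry, playerMaterial);
            mesh.position.set(x, 0, z);
            scene.add(mesh);
            return mesh;
        }

    Other common levers: keep the player model low-poly with few bones, and skip animation updates for players outside the camera frustum.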
