Search Results

Search found 7671 results on 307 pages for 'slow browsing'.


  • Batch insert mode with Hibernate and Oracle: seems to be dropping back to slow mode silently

    - by Chris
    I'm trying to get a batch insert working with Hibernate into Oracle, according to what I've read here: http://docs.jboss.org/hibernate/core/3.3/reference/en/html/batch.html , but by my benchmarking it doesn't seem any faster than before. Can anyone suggest a way to prove whether Hibernate is using batch mode or not? I hear that there are numerous reasons why it may silently drop into normal mode (e.g. associations and generated ids), so is there some way to find out why it has gone non-batch?

    My hibernate.cfg.xml contains this line, which I believe is all I need to enable batch mode:

    ```xml
    <property name="jdbc.batch_size">50</property>
    ```

    My insert code looks like this:

    ```java
    List<LogEntry> entries = ...; // a list of 100 LogEntry data classes
    Session sess = sessionFactory.getCurrentSession();
    for (LogEntry e : entries) {
        sess.save(e);
    }
    sess.flush();
    sess.clear();
    ```

    My LogEntry class has no associations; the only interesting field is the id:

    ```java
    @Entity
    @Table(name = "log_entries")
    public class LogEntry {
        @Id
        @GeneratedValue
        public Long id;
        // ...other fields - strings and ints...
    }
    ```

    However, since it is Oracle, I believe the @GeneratedValue will use the sequence generator, and I believe that only the 'identity' generator will stop bulk inserts. So if anyone can explain why it isn't running in batch mode, how I can find out for sure whether it is, or why Hibernate is silently dropping back to slow mode, I'd be most grateful. Thanks
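
    One way to sanity-check the benchmark is to compare against a plain JDBC batch insert of the same 100 rows; if that runs dramatically faster than Hibernate with jdbc.batch_size set, something is disabling batching. A minimal baseline sketch - the connection details and the two-column table shape are assumptions, not from the post:

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class JdbcBatchBaseline {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - substitute your own.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XE", "user", "pass")) {
                con.setAutoCommit(false);
                // Assumed minimal shape of the log_entries table.
                String sql = "INSERT INTO log_entries (id, message) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 100; i++) {
                        ps.setLong(1, i);
                        ps.setString(2, "entry " + i);
                        ps.addBatch();     // queue the row client-side
                    }
                    ps.executeBatch();     // send all 100 rows in one round trip
                }
                con.commit();
            }
        }
    }
    ```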


  • VSTO Outlook - Contact iteration is SO SLOW!

    - by DustinDavis
    I'm working on an Outlook add-in and I have a dialog window that allows the user to select contacts. I haven't been able to find a way to use the Outlook contact window, so I am looping through ContactFolder.Items and doing my work that way. The problem is that I have to handle up to 70K contacts. I tried multi-threading and many other things, but it is just so slow: it takes 15 seconds to load 30K contacts. I can load and bind 500K POCO objects in milliseconds, but when I need to get the contact items from Outlook it just takes forever. The problem seems to be that when you actually need to get a property from the ContactItem, it has to fetch it from the database or something. Is there a contact cache I can pull from? I only need Display and Email, nothing else. An ID would be nice, but I don't need it. Can someone please tell me a better way of getting contacts from Outlook, or at least tell me how to open the Outlook contact selection window? I was able to find code to open it, but it won't let me because I'm showing a modal dialog, and it won't open if there is a modal open.
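
    One commonly suggested alternative, assuming Outlook 2007 or later interop, is the Table object, which streams just the requested columns instead of opening a full ContactItem per contact. A rough sketch; the column names are assumptions, and some properties need MAPI schema names instead:

    ```csharp
    // using Outlook = Microsoft.Office.Interop.Outlook;
    Outlook.Folder contacts = (Outlook.Folder)application.Session
        .GetDefaultFolder(Outlook.OlDefaultFolders.olFolderContacts);

    // GetTable returns a lightweight rowset; no ContactItem is materialized.
    Outlook.Table table = contacts.GetTable(
        Type.Missing, Outlook.OlTableContents.olUserItems);

    table.Columns.RemoveAll();
    table.Columns.Add("FullName");       // display name (assumed column name)
    table.Columns.Add("Email1Address");  // may require the MAPI schema name

    while (!table.EndOfTable)
    {
        Outlook.Row row = table.GetNextRow();
        string display = row["FullName"] as string;
        string email = row["Email1Address"] as string;
        // ...feed the selection dialog...
    }
    ```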


  • Word frequency tally script is too slow

    - by Dave Jarvis
    Background: I created a script to count the frequency of words in a plain text file. The script performs the following steps:

    1. Count the frequency of words from a corpus.
    2. Retain each word in the corpus found in a dictionary.
    3. Create a comma-separated file of the frequencies.

    The script is at: http://pastebin.com/VAZdeKXs

    Problem: The following lines continually cycle through the dictionary to match words:

    ```bash
    for i in $(awk '{if( $2 ) print $2}' frequency.txt); do grep -m 1 ^$i\$ dictionary.txt >> corpus-lexicon.txt; done
    ```

    It works, but it is slow because it scans the dictionary for every single word in order to remove the words that are not in the dictionary. (The -m 1 parameter stops the scan when the match is found.)

    Question: How would you optimize the script so that the dictionary is not scanned from start to finish for every single word? The majority of the words will not be in the dictionary. Thank you!
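
    A sketch of one common fix: compute the intersection of the two word lists in a single pass instead of one grep per word. It assumes one word per line in dictionary.txt and is not taken from the pastebin script:

    ```bash
    # Extract the word column once, then intersect the two sorted lists.
    # comm -12 prints only the lines common to both inputs.
    awk '$2 { print $2 }' frequency.txt | sort -u > words.txt
    sort -u dictionary.txt > dictionary.sorted.txt
    comm -12 words.txt dictionary.sorted.txt > corpus-lexicon.txt
    ```

    grep -Fxf dictionary.txt words.txt gives the same result without pre-sorting, at the cost of holding the whole dictionary in memory as patterns.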


  • FreeText COUNT query on multiple tables is super slow

    - by Eric P
    I have two tables:

    **Product**: ID, Name, SKU
    **Brand**: ID, Name

    The Product table has about 120K records; the Brand table has 30K records. I need to find the count of all the products with name and brand matching a specific keyword. I use the full-text CONTAINS predicate like this:

    ```sql
    SELECT count(*) FROM Product
    INNER JOIN Brand ON Product.BrandID = Brand.ID
    WHERE contains(Product.Name, 'pants') OR contains(Brand.Name, 'pants')
    ```

    This query takes about 17 secs. I rebuilt the full-text index before running it. If I only check Product.Name, the query takes less than 1 sec; same if I only check Brand.Name. The issue occurs only when I use the OR condition. If I switch the query to use LIKE:

    ```sql
    SELECT count(*) FROM Product
    INNER JOIN Brand ON Product.BrandID = Brand.ID
    WHERE Product.Name LIKE '%pants%' OR Brand.Name LIKE '%pants%'
    ```

    it takes 1 sec. I read on MSDN (http://msdn.microsoft.com/en-us/library/ms187787.aspx) that to search on multiple tables, you should use a joined table in your FROM clause to search on a result set that is the product of two or more tables. So I added an inner-joined derived table to FROM:

    ```sql
    SELECT count(*) FROM
      (SELECT Product.Name ProductName, Product.SKU ProductSKU, Brand.Name as BrandName
       FROM Product INNER JOIN Brand ON Product.BrandID = Brand.ID) as TempTable
    WHERE contains(TempTable.ProductName, 'pants') OR contains(TempTable.BrandName, 'pants')
    ```

    This results in an error: "Cannot use a CONTAINS or FREETEXT predicate on column 'ProductName' because it is not full-text indexed." So the question is: why could the OR condition be causing such a slow query?
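
    A workaround often suggested for this pattern (a sketch, not from the post): run each full-text predicate against its own table with CONTAINSTABLE and combine the key sets, so each full-text index is consulted independently instead of forcing one plan across both predicates:

    ```sql
    SELECT count(*)
    FROM Product
    INNER JOIN Brand ON Product.BrandID = Brand.ID
    WHERE Product.ID IN (SELECT [KEY] FROM CONTAINSTABLE(Product, Name, 'pants'))
       OR Brand.ID   IN (SELECT [KEY] FROM CONTAINSTABLE(Brand,  Name, 'pants'));
    ```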


  • Running multiple threads in Java

    - by owca
    My task is to simulate the activity of a couple of persons. Each of them has a few activities to perform in some random time: fast (0-5s), medium (5-10s), slow (10-20s) and very slow (20-30s). Each person performs their tasks independently, at the same time. At the beginning of each new task I should print its random time, start the task, and then after the time passes show the next task's time and start it. I've written a run() function that counts time, but now it looks like the threads are done one after another and not at the same time - or maybe they're just printed that way.

    ```java
    public class People {
        public static void main(String[] args) {
            Task tasksA[] = { new Task("washing", "fast"),
                              new Task("reading", "slow"),
                              new Task("shopping", "medium") };
            Task tasksM[] = { new Task("sleeping zzzzzzzzzz", "very slow"),
                              new Task("learning", "slow"),
                              new Task(" :** ", "slow"),
                              new Task("passing an exam", "slow") };
            Task tasksJ[] = { new Task("listening music", "medium"),
                              new Task("doing nothing", "slow"),
                              new Task("walking", "medium") };
            BusyPerson friends[] = { new BusyPerson("Alice", tasksA),
                                     new BusyPerson("Mark", tasksM),
                                     new BusyPerson("John", tasksJ) };
            System.out.println("STARTING.....................");
            for (BusyPerson f : friends)
                (new Thread(f)).start();
            System.out.println("DONE.........................");
        }
    }

    class Task {
        private String task;
        private int time;
        private Task[] tasks;

        public Task(String t, String s) {
            task = t;
            Speed speed = new Speed();
            time = speed.getSpeed(s);
        }

        public Task(Task[] tab) {
            Task[] table = new Task[tab.length];
            for (int i = 0; i < tab.length; i++) {
                table[i] = tab[i];
            }
            this.tasks = table;
        }
    }

    class Speed {
        private static String[] hows = { "fast", "medium", "slow", "very slow" };
        private static int[] maxs = { 5000, 10000, 20000, 30000 };

        public Speed() { }

        public static int getSpeed(String speedString) {
            String s = speedString;
            int up_limit = 0;
            int down_limit = 0;
            int time = 0;
            // get limits of time
            for (int i = 0; i < hows.length; i++) {
                if (s.equals(hows[i])) {
                    up_limit = maxs[i];
                    if (i > 0) {
                        down_limit = maxs[i - 1];
                    } else {
                        down_limit = 0;
                    }
                }
            }
            // get random time within the limits
            Random rand = new Random();
            time = rand.nextInt(up_limit) + down_limit;
            return time;
        }
    }

    class BusyPerson implements Runnable {
        private String name;
        private Task[] person_tasks;
        private BusyPerson[] persons;

        public BusyPerson(String s, Task[] t) {
            name = s;
            person_tasks = t;
        }

        public BusyPerson(BusyPerson[] tab) {
            BusyPerson[] table = new BusyPerson[tab.length];
            for (int i = 0; i < tab.length; i++) {
                table[i] = tab[i];
            }
            this.persons = table;
        }

        public void run() {
            int time = 0;
            double t1 = 0;
            for (Task t : person_tasks) {
                t1 = (double) t.time / 1000;
                System.out.println(name + " is... " + t.task + " " + t.speed +
                        " (" + t1 + " sec)");
                while (time == t.time) {
                    try {
                        Thread.sleep(10);
                    } catch (InterruptedException exc) {
                        System.out.println("End of thread.");
                        return;
                    }
                    time = time + 100;
                }
            }
        }
    }
    ```

    And my output:

    ```
    STARTING.....................
    DONE.........................
    Mark is... sleeping zzzzzzzzzz very slow (36.715 sec)
    Mark is... learning slow (10.117 sec)
    Mark is... :** slow (29.543 sec)
    Mark is... passing an exam slow (23.429 sec)
    Alice is... washing fast (1.209 sec)
    Alice is... reading slow (23.21 sec)
    Alice is... shopping medium (11.237 sec)
    John is... listening music medium (8.263 sec)
    John is... doing nothing slow (13.576 sec)
    John is... walking medium (11.322 sec)
    ```

    Whilst it should be like this:

    ```
    STARTING.....................
    DONE.........................
    John is... listening music medium (7.05 sec)
    Alice is... washing fast (3.268 sec)
    Mark is... sleeping zzzzzzzzzz very slow (23.71 sec)
    Alice is... reading slow (15.516 sec)
    John is... doing nothing slow (13.692 sec)
    Alice is... shopping medium (8.371 sec)
    Mark is... learning slow (13.904 sec)
    John is... walking medium (5.172 sec)
    Mark is... :** slow (12.322 sec)
    Mark is... passing an exam very slow (27.1 sec)
    ```
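
    For what it's worth, the loop in run() is the immediate problem: time starts at 0, so while (time == t.time) is almost never entered and each thread prints all of its lines immediately. A minimal sketch of a fix (assuming BusyPerson can read Task's fields; the t.speed reference is dropped because Task has no such field as posted):

    ```java
    public void run() {
        for (Task t : person_tasks) {
            double seconds = (double) t.time / 1000;
            System.out.println(name + " is... " + t.task + " (" + seconds + " sec)");
            try {
                Thread.sleep(t.time);   // block this thread for the task's duration
            } catch (InterruptedException exc) {
                System.out.println("End of thread.");
                return;
            }
        }
    }
    ```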


  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2GHz, 2GB RAM machines.

    A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option, unfortunately, so I don't want this to descend into a VSS bash.)

    We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.

    I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass and you are wasting half the day watching the status bar deliver compile messages?

    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the C++ suggestions, but it's been a few years since I've had to worry about headers. At a distance I'd say I miss C++, but I'm not sure I want to go back.

    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):

    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable anti-virus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used just for compile-time performance.

    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing in the latest DLLs as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.


  • Is my fragment usage correct, seems to be slow on Android

    - by Robertoq
    My app structure is that I have a menu with 5 menu points on the left side, and the content on the right side.

    MainActivity.xml:

    ```xml
    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent" >

        <fragment
            android:id="@+id/fragmentMenu"
            android:name="com.example.FragmentMenu"
            android:layout_width="@dimen/MenuWidth"
            android:layout_height="match_parent" />

        <LinearLayout
            android:id="@+id/content"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android_layout_toRightOf="@+id/fragmentMenu"
            android:orientation="vertical"/>

    </RelativeLayout>
    ```

    MainActivity.java:

    ```java
    public class FragmentActivityMain extends FragmentActivity {
        @Override
        protected void onCreate(final Bundle arg0) {
            super.onCreate(arg0);
            setContentView(R.layout.fragment_activity_main);
            FragmentManager fm = getSupportFragmentManager();
            FragmentMenu fragmentMenu = (FragmentMenu) fm.findFragmentById(R.id.fragmentMenu);
            fragmentMenu.init();
        }
    }
    ```

    And certainly I have a FragmentMenu class:

    ```java
    public class FragmentMenu extends ListFragment {
        @Override
        public View onCreateView(final LayoutInflater inflater, final ViewGroup container,
                final Bundle savedInstanceState) {
            View view = inflater.inflate(R.layout.fragment_menu, container, false);
            return view;
        }

        public init() {
            FragmentManager fm = getFragmentManager();
            FragmentTransaction ft = fm.beginTransaction();
            FragmentNowListView lw = new FragmentCarListView();
            ft.add(R.id.content, lw);
            ft.commit();
        }
    }
    ```

    The FragmentCarList is a simple list, for now with static test data - only five items in a List.

    My problem: it's slow. I tested the app on my phone (Galaxy S3) and I see a white screen when the app starts, for around 0.5 second, and this is the log:

    ```
    10-29 11:43:44.093: D/dalvikvm(29710): GC_CONCURRENT freed 267K, 5% free 13903K/14535K, paused 10ms+2ms
    10-29 11:43:44.133: D/dalvikvm(29710): GC_FOR_ALLOC freed 215K, 6% free 13896K/14663K, paused 12ms
    10-29 11:43:44.233: D/dalvikvm(29710): GC_FOR_ALLOC freed 262K, 6% free 13901K/14663K, paused 12ms
    10-29 11:43:44.258: D/dalvikvm(29710): GC_FOR_ALLOC freed 212K, 6% free 13897K/14663K, paused 13ms
    10-29 11:43:44.278: D/dalvikvm(29710): GC_FOR_ALLOC freed 208K, 6% free 13897K/14663K, paused 12ms
    10-29 11:43:44.328: D/dalvikvm(29710): GC_FOR_ALLOC freed 131K, 4% free 14098K/14663K, paused 12ms
    10-29 11:43:44.398: D/dalvikvm(29710): GC_CONCURRENT freed 20K, 3% free 14559K/14919K, paused 1ms+4ms
    ```

    And when I tested on an Xperia Ray, the white screen appears for a longer time. How can I optimize my fragments? Thx


  • Web service filling gridview awfully slow, as is paging/sorting

    - by nat
    I am making a page which calls a web service to fill a gridview. This is returning a lot of data, and is horribly slow.

    I ran svcutil.exe on the WSDL page and it generated the class and config for me, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects, grabbing the necessary information as I go; but for each row in the grid I need to loop around an object, and grab another list of objects (from the same request) and loop around each of them - one-to-many, parent object to child. All of this then gets dropped into a custom DataTable a row at a time. Hope that makes sense.

    I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it is doing; at the moment it appears to take as long to page/sort as it does to load initially. I thought that if, when I first loaded, I put the data source of the grid in the session, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below:

    ```csharp
    protected void Page_Load(object sender, EventArgs e)
    {
        // init the datatable
        // grab the filter vars (if there are any)
        WebServiceObj WS = WSClient.Method(args);
        // fill the datatable (around and around we go)
        foreach (ParentObject po in WS.ReturnedObj)
        {
            var COs = from ChildObject c in WS.AnotherReturnedObj
                      where c.whatever.equals(...) // ...etc
            foreach (ChildObject c in COs)
            {
                myDataTable.Rows.Add(tlo.this, tlo.that, c.thisthing, c.thatthing, etc......);
            }
        }
        grdListing.DataSource = myDataTable;
        Session["dt"] = myDataTable;
        grdListing.DataBind();
    }

    protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        grdListing.PageIndex = e.NewPageIndex;
        grdListing.DataSource = Session["dt"] as DataTable;
        grdListing.DataBind();
    }

    protected void Listing_Sorting(object sender, GridViewSortEventArgs e)
    {
        DataTable dt = Session["dt"] as DataTable;
        DataView dv = new DataView(dt);
        string sortDirection = " ASC";
        if (e.SortDirection == SortDirection.Descending)
            sortDirection = " DESC";
        dv.Sort = e.SortExpression + sortDirection;
        grdListing.DataSource = dv.ToTable();
        grdListing.DataBind();
    }
    ```

    Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in/returned from the web service? There are maybe 15 columns(ish) and a whole load of rows, with more being added to the data the web service queries all the time. Any suggestions/tips happily received. Thanks


  • Slow MySQL query... only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log:

    ```
    # Query_time: 543  Lock_time: 0  Rows_sent: 0  Rows_examined: 124948974
    use statsdb;
    SELECT count(distinct Visits.visitorid) as 'uniques'
    FROM Visits,Visitors
    WHERE Visits.visitorid=Visitors.visitorid
      and candidateid in (32)
      and visittime>=1275721200 and visittime<=1275807599
      and (omit=0 or omit>=1275807599)
      AND Visitors.segmentid=9
      AND Visits.visitorid NOT IN
        (SELECT Visits.visitorid
         FROM Visits,Visitors
         WHERE Visits.visitorid=Visitors.visitorid
           and candidateid in (32)
           and visittime<1275721200
           and (omit=0 or omit>=1275807599)
           AND Visitors.segmentid=9);
    ```

    It's basically counting unique visitors, and it's doing that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load.

    Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up:

    ```
    id  select_type         table     type    possible_keys                  key                  key_len  ref                       rows   Extra
    1   PRIMARY             Visits    range   visittime_visitorid,visitorid  visittime_visitorid  4        NULL                      82500  Using where; Using index
    1   PRIMARY             Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
    2   DEPENDENT SUBQUERY  Visits    ref     visittime_visitorid,visitorid  visitorid            8        func                      1      Using where
    2   DEPENDENT SUBQUERY  Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
    ```

    I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods.

    Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time.

    Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing?

    In case you want server info: accessing via PHP 4.4.4; MySQL 4.1.22; all tables are InnoDB; we run optimize table on all tables weekly; the sum of both tables used in the query is 500 MB. MySQL config:

    ```
    key_buffer = 350M
    max_allowed_packet = 16M
    thread_stack = 128K
    sort_buffer = 14M
    read_buffer = 1M
    bulk_insert_buffer_size = 400M
    set-variable = max_connections=150
    query_cache_limit = 1048576
    query_cache_size = 50777216
    query_cache_type = 1
    tmp_table_size = 203554432
    table_cache = 120
    thread_cache_size = 4
    wait_timeout = 28800
    skip-external-locking
    innodb_file_per_table
    innodb_buffer_pool_size = 3512M
    innodb_log_file_size=100M
    innodb_log_buffer_size=4M
    ```
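
    For what it's worth, on MySQL 4.x a dependent NOT IN subquery is a frequent cause of exactly this kind of unstable runtime; the standard rewrite is an exclusion (anti) join. A simplified sketch, deliberately omitting the Visitors join and the omit/candidate conditions, which a full rewrite would need to replicate on both sides:

    ```sql
    SELECT COUNT(DISTINCT cur.visitorid) AS uniques
    FROM Visits cur
    LEFT JOIN Visits prior
           ON prior.visitorid = cur.visitorid
          AND prior.visittime < 1275721200
    WHERE cur.visittime >= 1275721200
      AND cur.visittime <= 1275807599
      AND prior.visitorid IS NULL;  -- keep only visitors with no earlier visit
    ```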


  • Remote PostgreSQL - extremely slow

    - by Muffinbubble
    I have set up PostgreSQL on a VPS I own - the software that accesses the database is a program called PokerTracker. PokerTracker logs all your hands and statistics whilst playing online poker. I wanted this accessible from several different computers, so I decided to install it on my VPS, and after a few hiccups I managed to get it connecting without errors.

    However, the performance is dreadful. I have done tons of research on 'remote postgresql slow' etc. and am yet to find an answer, so am hoping someone is able to help. Things to note:

    - The query I am trying to execute is very small. While connecting locally on the VPS, the query runs instantly. While running it remotely, it takes about 1 minute and 30 seconds.
    - The VPS is on a 100Mbps connection and the computer I'm connecting from is on an 8MB line. The network communication between the two is almost instant: I am able to connect remotely with no lag whatsoever, and I host several websites running MSSQL where all the queries run instantly, whether connected remotely or locally, so it seems specific to PostgreSQL.
    - I'm running their newest version of the software and the newest compatible version of PostgreSQL.
    - The database is new and contains hardly any data, and I've run vacuum/analyze etc., all to no avail; I see no improvements.

    I don't understand how MSSQL can query almost instantly yet PostgreSQL struggles so much. I am able to telnet to port 5432 on the VPS IP with no problems, and as I say, the query does execute - it just takes an extremely long time. What I do notice on the router while the query is running is that hardly any bandwidth is being used - but then again I wouldn't expect it to be for a simple query, so I'm not sure if this is the issue. I've tried connecting remotely on 3 different networks now (including different routers) but the problem remains. Connecting remotely via another machine on the LAN is instant. I have also edited the postgres conf file to allow for more memory/buffers etc., but I don't think this is the problem - what I am asking it to do is very simple; it shouldn't be intensive at all. Thanks, Ricky
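
    One way to split the problem (a diagnostic sketch, not from the original post) is to measure server-side execution separately from the wire: if the server reports milliseconds while the remote client waits 90 seconds, the time is going into connection setup or the protocol path rather than the query itself.

    ```sql
    -- In psql on the VPS itself:
    \timing on
    EXPLAIN ANALYZE SELECT 1;  -- substitute the actual PokerTracker query here
    ```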


  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (it runs over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).

    ```sql
    CREATE TABLE `prices` (
      `prices_id` int(11) NOT NULL auto_increment,
      `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
      `prices_watches_ID` char(10) NOT NULL,
      `prices_date` datetime NOT NULL,
      `prices_am` varchar(10) default NULL,
      `prices_new` varchar(10) default NULL,
      `prices_used` varchar(10) default NULL,
      PRIMARY KEY (`prices_id`),
      KEY `prices_am` (`prices_am`),
      KEY `prices_locale` (`prices_locale`),
      KEY `prices_watches_ID` (`prices_watches_ID`),
      KEY `prices_date` (`prices_date`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764 ;

    CREATE TABLE `watches_US` (
      `watches_ID` char(10) NOT NULL,
      `watches_date_added` datetime NOT NULL,
      `watches_last_update` datetime default NULL,
      `watches_title` varchar(255) default NULL,
      `watches_small_image_height` int(11) default NULL,
      `watches_small_image_width` int(11) default NULL,
      `watches_description` text,
      PRIMARY KEY (`watches_ID`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    ```

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

    ```sql
    SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
    ( SELECT prices_am FROM prices
      WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
      ORDER BY prices_date DESC LIMIT 1 ) as new_price,
    ( SELECT prices_date FROM prices
      WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
      ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
    ( SELECT prices_am FROM prices
      WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US')
        AND ( prices_date >= DATE_SUB(new_price_date,INTERVAL 30 HOUR) )
      ORDER BY prices_date ASC LIMIT 1 ) as old_price,
    ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
    ( SELECT (new_price - old_price) ) as absolute_change
    FROM watches_US
    LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
    WHERE ( prices_locale = 'US' ) AND ( prices_am IS NOT NULL ) AND ( prices_am != '' )
    HAVING ( old_price IS NOT NULL ) AND ( old_price != 0 ) AND ( old_price != '' )
       AND ( absolute_change < 0 ) AND ( prices.prices_date = new_price_date )
    ORDER BY absolute_change ASC
    LIMIT 10
    ```

    How would I rewrite this to use joins instead, or otherwise optimize this so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
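
    A sketch of the join-based shape (not a drop-in replacement): derive each watch's newest price date once with a GROUP BY, then join back to prices for that row. The old price can be fetched the same way using MIN(prices_date) restricted to the 30-hour window, and the change columns computed from the two:

    ```sql
    SELECT w.watches_ID, newp.prices_am AS new_price, newp.prices_date AS new_price_date
    FROM watches_US w
    JOIN ( SELECT prices_watches_ID, MAX(prices_date) AS max_date
           FROM prices
           WHERE prices_locale = 'US'
           GROUP BY prices_watches_ID ) latest
      ON latest.prices_watches_ID = w.watches_ID
    JOIN prices newp
      ON newp.prices_watches_ID = latest.prices_watches_ID
     AND newp.prices_locale = 'US'
     AND newp.prices_date = latest.max_date;
    ```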


  • Solving Slow Query

    - by Chris
    We are installing a new forum (YAF) for our site. One of the stored procedures is extremely slow - in fact it always times out in the browser. If I run it in SSMS it takes nearly 10 minutes to complete. Is there a way to find out what part of this query is taking so long?

    The query:

    ```sql
    DECLARE @BoardID int
    DECLARE @UserID int
    DECLARE @CategoryID int = null
    DECLARE @ParentID int = null

    SET @BoardID = 1
    SET @UserID = 2

    select a.CategoryID, Category = a.Name, ForumID = b.ForumID, Forum = b.Name, Description,
        Topics = [dbo].[yaf_forum_topics](b.ForumID),
        Posts = [dbo].[yaf_forum_posts](b.ForumID),
        Subforums = [dbo].[yaf_forum_subforums](b.ForumID, @UserID),
        LastPosted = t.LastPosted,
        LastMessageID = t.LastMessageID,
        LastUserID = t.LastUserID,
        LastUser = IsNull(t.LastUserName,
            (select Name from [dbo].[yaf_User] x where x.UserID=t.LastUserID)),
        LastTopicID = t.TopicID,
        LastTopicName = t.Topic,
        b.Flags,
        Viewing = (select count(1) from [dbo].[yaf_Active] x
                   JOIN [dbo].[yaf_User] usr ON x.UserID = usr.UserID
                   where x.ForumID=b.ForumID AND usr.IsActiveExcluded = 0),
        b.RemoteURL,
        x.ReadAccess
    from [dbo].[yaf_Category] a
    join [dbo].[yaf_Forum] b on b.CategoryID=a.CategoryID
    join [dbo].[yaf_vaccess] x on x.ForumID=b.ForumID
    left outer join [dbo].[yaf_Topic] t
      ON t.TopicID = [dbo].[yaf_forum_lasttopic](b.ForumID,@UserID,b.LastTopicID,b.LastPosted)
    where a.BoardID = @BoardID
      and ((b.Flags & 2)=0 or x.ReadAccess<>0)
      and (@CategoryID is null or a.CategoryID=@CategoryID)
      and ((@ParentID is null and b.ParentID is null) or b.ParentID=@ParentID)
      and x.UserID = @UserID
    order by a.SortOrder, b.SortOrder
    ```

    IO statistics:

    ```
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_Active'. Scan count 14, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_User'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_Topic'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_Category'. Scan count 0, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_Forum'. Scan count 0, logical reads 488, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_UserGroup'. Scan count 231, logical reads 693, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_ForumAccess'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_AccessMask'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'yaf_UserForum'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    ```

    Client statistics:

    ```
    Client Execution Time                                      11:54:01
    Query Profile Statistics
      Number of INSERT, DELETE and UPDATE statements           0       0.0000
      Rows affected by INSERT, DELETE, or UPDATE statements    0       0.0000
      Number of SELECT statements                              8       8.0000
      Rows returned by SELECT statements                       19      19.0000
      Number of transactions                                   0       0.0000
    Network Statistics
      Number of server roundtrips                              3       3.0000
      TDS packets sent from client                             3       3.0000
      TDS packets received from server                         34      34.0000
      Bytes sent from client                                   3166    3166.0000
      Bytes received from server                               128802  128802.0000
    Time Statistics
      Client processing time                                   156478  156478.0000
      Total execution time                                     572009  572009.0000
      Wait time on server replies                              415531  415531.0000
    Execution Plan
    ```
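
    A sketch of one way to narrow this down (not from the post): the scalar functions in the select list, [dbo].[yaf_forum_topics] and friends, execute once per row and their cost barely shows in execution plans, so time the pieces in isolation:

    ```sql
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- 1) The bare join with the scalar UDF calls removed:
    SELECT a.CategoryID, b.ForumID
    FROM [dbo].[yaf_Category] a
    JOIN [dbo].[yaf_Forum] b ON b.CategoryID = a.CategoryID
    JOIN [dbo].[yaf_vaccess] x ON x.ForumID = b.ForumID
    WHERE a.BoardID = 1 AND x.UserID = 2;

    -- 2) One scalar UDF in isolation:
    SELECT [dbo].[yaf_forum_topics](b.ForumID)
    FROM [dbo].[yaf_Forum] b;
    ```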


  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

    ```
    Widgets
    |
    +--- Anvils (1:1)
    |    |
    |    +--- AnvilTestData (1:N)
    |
    +--- WidgetHistory (1:N)
         |
         +--- WidgetHistoryDetails (1:N)
    ```

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

    ```sql
    DELETE FROM Widgets
    WHERE WidgetID = @WidgetID
    ```

    Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

    ```sql
    DECLARE @AnvilID int
    SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID

    DELETE FROM AnvilTestData WHERE AnvilID = @AnvilID
    DELETE FROM WidgetHistory WHERE HistoryID IN (
        SELECT HistoryID FROM WidgetHistory WHERE WidgetID = @WidgetID)
    DELETE FROM Widgets WHERE WidgetID = @WidgetID
    ```

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data.

    Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    - Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)
    - If not, is there another explanation for the behaviour I'm seeing?
    - If it is a known issue, why is it an issue, and are there better workarounds I could be using?


  • Rotating images makes UI slow

    - by 5w4rley
    I'm trying to implement a kind of speedometer. I'm getting information about rounds per minute, boost and load of an engine over Bluetooth, and I try to display them on the screen with 3 arrows which should point in the right direction. I tried to use a rotate animation every time I get data (10-100 ms) to set up the arrows, but that makes my UI extremely slow - 500 ms to react to a button click. Does someone know how to make it work better? Source code:

    ```java
    public void setTacho() {
        // rotate Tachonadel
        Rpmcurrentdegree = Rpmcurrentdegree + Rpmdegree;
        Rpmdegree = ((rpms - lastrpm) * RPMtoDegree);
        RpmAnim = new RotateAnimation((float) Rpmcurrentdegree, (float) Rpmdegree,
                ivNadel.getWidth() / 2, ivNadel.getHeight() / 2);
        RpmAnim.setFillEnabled(true);
        RpmAnim.setFillAfter(true);
        ivNadel.setAnimation(RpmAnim);
        RpmAnim.start();

        // rotate Boostbalken
        currentBoostDegree = currentBoostDegree + BoostDegree;
        BoostDegree = (boost - lastBoost) * BOOSTtoDegree;

        // rotate Loadbalken
        currentLoadDegree = currentLoadDegree + LoadDegree;
        LoadDegree = (load - lastLoad) * LOADtoDegree;

        BoostAnim = new RotateAnimation((float) -currentBoostDegree, (float) -BoostDegree,
                ivBoost.getWidth() / 2, ivBoost.getHeight() / 2);
        BoostAnim.setFillEnabled(true);
        BoostAnim.setFillAfter(true);
        ivBoost.setAnimation(BoostAnim);
        BoostAnim.start();

        LoadAnim = new RotateAnimation((float) currentLoadDegree, (float) LoadDegree,
                ivLoad.getWidth() / 2, ivLoad.getHeight() / 2);
        LoadAnim.setFillEnabled(true);
        LoadAnim.setFillAfter(true);
        ivLoad.setAnimation(LoadAnim);
        LoadAnim.start();
    }
    ```

    When I try to make the rotation only if the values have changed, it works only while they are changing; but if they aren't, the arrows jump back to the zero position. Isn't setFillAfter supposed to tell the image that it should hold the new position?

    ```java
    public void setTacho() {
        // rotate Tachonadel
        Rpmcurrentdegree = Rpmcurrentdegree + Rpmdegree;
        Rpmdegree = ((rpms - lastrpm) * RPMtoDegree);
        if (Rpmdegree != 0) {
            RpmAnim = new RotateAnimation((float) Rpmcurrentdegree, (float) Rpmdegree,
                    ivNadel.getWidth() / 2, ivNadel.getHeight() / 2);
            RpmAnim.setFillEnabled(true);
            RpmAnim.setFillAfter(true);
            ivNadel.setAnimation(RpmAnim);
            RpmAnim.start();
        }

        // rotate Boostbalken
        currentBoostDegree = currentBoostDegree + BoostDegree;
        BoostDegree = (boost - lastBoost) * BOOSTtoDegree;

        // rotate Loadbalken
        currentLoadDegree = currentLoadDegree + LoadDegree;
        LoadDegree = (load - lastLoad) * LOADtoDegree;

        if (BoostDegree != 0) {
            BoostAnim = new RotateAnimation((float) -currentBoostDegree, (float) -BoostDegree,
                    ivBoost.getWidth() / 2, ivBoost.getHeight() / 2);
            BoostAnim.setFillEnabled(true);
            BoostAnim.setFillAfter(true);
            ivBoost.setAnimation(BoostAnim);
            BoostAnim.start();
        }

        if (LoadDegree != 0) {
            LoadAnim = new RotateAnimation((float) currentLoadDegree, (float) LoadDegree,
                    ivLoad.getWidth() / 2, ivLoad.getHeight() / 2);
            LoadAnim.setFillEnabled(true);
            LoadAnim.setFillAfter(true);
            ivLoad.setAnimation(LoadAnim);
            LoadAnim.start();
        }
    }
    ```

    I don't get it =( Thx 4 help

    EDIT: part of the Bluetooth thread that calls the callback:

    ```java
    while (run) {
        try {
            bytes = mmInStream.read(buffer);
            if (connection.btCallback != null) {
                connection.btCallback.getData(buffer, bytes);
            }
        } catch (IOException e) {
            break;
        }
    }
    ```

    The callback method of the Bluetooth thread:

    ```java
    public void getData(byte[] bytes, int len) {
        setTacho();
    }
    ```
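
    A lighter-weight sketch, assuming API level 11+ and that the update reaches the view on the UI thread: set the rotation property directly instead of allocating a new RotateAnimation for every packet, and the angle persists without setFillAfter. Note also that getData() is invoked from the Bluetooth thread, so setTacho() currently touches views off the UI thread; post it through a Handler or runOnUiThread.

    ```java
    // Called once per data packet; no animation objects are created.
    ivNadel.setPivotX(ivNadel.getWidth() / 2f);
    ivNadel.setPivotY(ivNadel.getHeight() / 2f);
    ivNadel.setRotation((float) Rpmcurrentdegree);  // absolute angle of the needle
    ```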


  • Spork servers super slow (>3m) to start for RSpec & Cucumber BDD

    - by Eric M.
    I recently installed a fresh development setup on my laptop and now notice that my instances of Spork take several minutes to start up. This is also most likely the cause of the RSpec and Cucumber tests' startup times running super slow. I ran in diagnostic mode with the -d flag and received the output below. Anyone have a clue why this is suddenly happening?

    ```
    Spork Diagnosis -
    -- Summary --
    config/boot.rb
    config/environment.rb
    config/initializers/backtrace_silencers.rb
    config/initializers/devise.rb
    config/initializers/hoptoad.rb
    config/initializers/inflections.rb
    config/initializers/mime_types.rb
    config/initializers/new_rails_defaults.rb
    config/initializers/session_store.rb
    spec/spec_helper.rb
    -- Detail --
    --- config/boot.rb ---
    config/environment.rb:7
    /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require'
    /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire'
    spec/spec_helper.rb:9
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork'
    spec/spec_helper.rb:7
    --- config/environment.rb ---
    spec/spec_helper.rb:9
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork'
    spec/spec_helper.rb:7
    --- config/initializers/backtrace_silencers.rb ---
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun'
    config/environment.rb:9
    /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require'
    /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire'
    spec/spec_helper.rb:9
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork'
    spec/spec_helper.rb:7
    --- config/initializers/devise.rb ---
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork'
    /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun'
    config/environment.rb:9
    /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require'
/Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- config/initializers/hoptoad.rb --- /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun' config/environment.rb:9 /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- config/initializers/inflections.rb --- /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun' config/environment.rb:9 /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- config/initializers/mime_types.rb --- /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork' 
/Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun' config/environment.rb:9 /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- config/initializers/new_rails_defaults.rb --- /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun' config/environment.rb:9 /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- config/initializers/session_store.rb --- /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:147:in load' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:622:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:in each' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:621:inload_application_initializers' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:176:in process' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:insend' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/rails-2.3.5/lib/initializer.rb:113:in run_without_spork' /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/lib/spork/app_framework/rails.rb:18:inrun' config/environment.rb:9 /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Users/Eric/.rvm/rubies/ruby-1.8.7-p249/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:inrequire' spec/spec_helper.rb:9 /Users/Eric/.rvm/gems/ruby-1.8.7-p249@33n/gems/spork-0.8.2/bin/../lib/spork.rb:23:in `prefork' spec/spec_helper.rb:7 --- spec/spec_helper.rb ---

    ```

  • Multiple full joins in Postgres is slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)
      Merge Cond: (t6.id = t7.movie_id)
-> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)
      Sort Key: t6.id
      Sort Method: quicksort Memory: 17kB
-> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)
      Hash Cond: (t2.movie_id = t6.id)
-> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)
      Hash Cond: (t2.role_id = t5.id)
-> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)
      Hash Cond: (t2.person_role_id = t4.id)
-> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)
      Hash Cond: (n1.id = t3.person_id)
-> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)
-> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)
-> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)
      Index Cond: (id = 2003)
-> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)
      Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))
-> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)
      Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))
-> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)
-> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)
      Index Cond: (person_id = 2003)
-> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)
-> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)
-> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)
-> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)
-> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)
-> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)
-> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)
-> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)
      Sort Key: t7.movie_id
      Sort Method: external merge Disk: 24880kB
-> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)
-> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)
-> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)
      Sort Key: t8.movie_id
      Sort Method: external merge Disk: 3136kB
-> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)
-> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)
-> Seq Scan on kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)
-> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)
-> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)
      Sort Key: t10.movie_id
      Sort Method: external merge Disk: 90960kB
-> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)
-> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)
-> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)
      Sort Key: t11.movie_id
      Sort Method: external merge Disk: 735512kB
-> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)
-> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)
-> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)
      Sort Key: t19.movie_id
      Sort Method: external merge Disk: 31720kB
-> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)
-> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)
-> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)
      Sort Key: t12.movie_id
      Sort Method: external merge Disk: 78888kB
-> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)
-> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)
-> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)
      Sort Key: t13.linked_movie_id
      Sort Method: external merge Disk: 29632kB
-> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)
-> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)
-> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)
-> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)
-> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)
-> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)
-> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)
-> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)
-> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)
-> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)
-> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)
Total runtime: 147055.016 ms

Is there any way to force the name.id = 2003 filter to be applied before it tries to join all the tables together? As you can see, the end result is 4 tuples, so it seems like this should be a fast join: once the name clause narrows things down, the available indexes could drive the rest, complex as the query is.
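
One approach worth trying (a sketch, not the original query: it assumes the statement joins everything to name n1, and OFFSET 0 is the classic optimization fence in PostgreSQL of this vintage) is to fence the person lookup in a subquery so the planner cannot flatten it into the big join, or to pin the join order with the documented join_collapse_limit setting:

-- Fence the id = 2003 lookup so it runs before the joins:
SELECT ...
FROM (SELECT * FROM name WHERE id = 2003 OFFSET 0) AS n1
LEFT JOIN cast_info t2 ON t2.person_id = n1.id
-- ... the remaining joins unchanged ...

-- Or make the planner join tables in the order the FROM clause lists them:
SET join_collapse_limit = 1;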

    Read the article

  • Advantage Database Server: slow stored procedure performance.

    - by ie
    I have a question about the performance of stored procedures in ADS. I created a simple database with the following structure:

    CREATE TABLE MainTable
    (
        Id INTEGER PRIMARY KEY,
        Name VARCHAR(50),
        Value INTEGER
    );

    CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name );

    CREATE TABLE SubTable
    (
        Id INTEGER PRIMARY KEY,
        MainId INTEGER,
        Name VARCHAR(50),
        Value INTEGER
    );

    CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId );
    CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name );

    CREATE PROCEDURE CreateItems
    (
        MainName VARCHAR ( 20 ),
        SubName VARCHAR ( 20 ),
        MainValue INTEGER,
        SubValue INTEGER,
        MainId INTEGER OUTPUT,
        SubId INTEGER OUTPUT
    )
    BEGIN
        DECLARE @MainName VARCHAR ( 20 );
        DECLARE @SubName VARCHAR ( 20 );
        DECLARE @MainValue INTEGER;
        DECLARE @SubValue INTEGER;
        DECLARE @MainId INTEGER;
        DECLARE @SubId INTEGER;

        @MainName = (SELECT MainName FROM __input);
        @SubName = (SELECT SubName FROM __input);
        @MainValue = (SELECT MainValue FROM __input);
        @SubValue = (SELECT SubValue FROM __input);

        @MainId = (SELECT MAX(Id)+1 FROM MainTable);
        @SubId = (SELECT MAX(Id)+1 FROM SubTable);

        INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue);
        INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue);

        INSERT INTO __output SELECT @MainId, @SubId FROM system.iota;
    END;

    CREATE PROCEDURE UpdateItems
    (
        MainName VARCHAR ( 20 ),
        MainValue INTEGER,
        SubValue INTEGER
    )
    BEGIN
        DECLARE @MainName VARCHAR ( 20 );
        DECLARE @MainValue INTEGER;
        DECLARE @SubValue INTEGER;
        DECLARE @MainId INTEGER;

        @MainName = (SELECT MainName FROM __input);
        @MainValue = (SELECT MainValue FROM __input);
        @SubValue = (SELECT SubValue FROM __input);

        @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);

        UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId;
        UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId;
    END;

    CREATE PROCEDURE SelectItems
    (
        MainName VARCHAR ( 20 ),
        CalculatedValue INTEGER OUTPUT
    )
    BEGIN
        DECLARE @MainName VARCHAR ( 20 );

        @MainName = (SELECT MainName FROM __input);

        INSERT INTO __output
        SELECT m.Value * s.Value
        FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId
        WHERE m.Name = @MainName;
    END;

    CREATE PROCEDURE DeleteItems
    (
        MainName VARCHAR ( 20 )
    )
    BEGIN
        DECLARE @MainName VARCHAR ( 20 );
        DECLARE @MainId INTEGER;

        @MainName = (SELECT MainName FROM __input);
        @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName);

        DELETE FROM SubTable WHERE MainId = @MainId;
        DELETE FROM MainTable WHERE Id = @MainId;
    END;

    The problem: even such lightweight stored procedures run very slowly (about 50-150 ms per call) compared to the equivalent plain queries (0-5 ms).
    To test the performance, I created a simple test (in F#, using the ADS ADO.NET provider):

    open System;
    open System.Data;
    open System.Diagnostics;
    open Advantage.Data.Provider;

    let mainName = "main name #";
    let subName = "sub name #";

    // INSERT
    let cmdTextScriptInsert = "
        DECLARE @MainId INTEGER;
        DECLARE @SubId INTEGER;
        @MainId = (SELECT MAX(Id)+1 FROM MainTable);
        @SubId = (SELECT MAX(Id)+1 FROM SubTable);
        INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue);
        INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue);
        SELECT @MainId, @SubId FROM system.iota;";
    let cmdTextProcedureInsert = "CreateItems";

    // UPDATE
    let cmdTextScriptUpdate = "
        DECLARE @MainId INTEGER;
        @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
        UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId;
        UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;";
    let cmdTextProcedureUpdate = "UpdateItems";

    // SELECT
    let cmdTextScriptSelect = "
        SELECT m.Value * s.Value
        FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId
        WHERE m.Name = :MainName;";
    let cmdTextProcedureSelect = "SelectItems";

    // DELETE
    let cmdTextScriptDelete = "
        DECLARE @MainId INTEGER;
        @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName);
        DELETE FROM SubTable WHERE MainId = @MainId;
        DELETE FROM MainTable WHERE Id = @MainId;";
    let cmdTextProcedureDelete = "DeleteItems";

    let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;";
    let cnn = new AdsConnection(cnnStr);
    try
        cnn.Open();
        let cmd = cnn.CreateCommand();

        let parametrize ix prms =
            cmd.Parameters.Clear();
            let addParam = function
                | "MainName"  -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore;
                | "SubName"   -> cmd.Parameters.Add(":SubName"  , subName + ix.ToString())  |> ignore;
                | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3)                   |> ignore;
                | "SubValue"  -> cmd.Parameters.Add(":SubValue" , ix * 7)                   |> ignore;
                | _ -> ()
            prms |> List.iter addParam;

        let runTest testData =
            let (cmdType, cmdName, cmdText, cmdParams) = testData;
            let toPrefix cmdType cmdName =
                let prefix =
                    match cmdType with
                    | CommandType.StoredProcedure -> "Procedure-"
                    | CommandType.Text            -> "Script -"
                    | _                           -> "Unknown -"
                in prefix + cmdName;
            let stopWatch = new Stopwatch();
            let runStep ix prms =
                parametrize ix prms;
                stopWatch.Start();
                cmd.ExecuteNonQuery() |> ignore;
                stopWatch.Stop();
            cmd.CommandText <- cmdText;
            cmd.CommandType <- cmdType;
            let startId = 1500;
            let count = 10;
            // run exactly 'count' steps (the original range "startId .. startId+count"
            // ran count+1 iterations while dividing by count)
            for id in startId .. startId + count - 1 do
                runStep id cmdParams;
            let elapsed = stopWatch.Elapsed;
            Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms",
                toPrefix cmdType cmdName, elapsed,
                Convert.ToInt32(elapsed.TotalMilliseconds) / count);

        let lst =
            [ (CommandType.Text,            "Insert", cmdTextScriptInsert,    ["MainName"; "SubName"; "MainValue"; "SubValue"]);
              (CommandType.Text,            "Update", cmdTextScriptUpdate,    ["MainName"; "MainValue"; "SubValue"]);
              (CommandType.Text,            "Select", cmdTextScriptSelect,    ["MainName"]);
              (CommandType.Text,            "Delete", cmdTextScriptDelete,    ["MainName"]);
              (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]);
              (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]);
              (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]);
              (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"])];
        lst |> List.iter runTest;
    finally
        cnn.Close();

    And I'm getting the following results:

    Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms
    Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms
    Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms
    Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms
    Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms
    Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms
    Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms
    Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms

    The situation with the remote server is much better, but there is still a great gap between plain queries and stored procedures:

    Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms
    Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms
    Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms
    Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms
    Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms
    Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms
    Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms
    Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms

    Is there any chance to improve the SP performance? Please advise.

    ADO.NET driver version - 9.10.2.9
    Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN)

    Thanks!
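
    One quick experiment worth running (a sketch: EXECUTE PROCEDURE is the ADS SQL syntax for invoking a procedure, but whether it changes the timings here is an assumption to verify): call the same procedure through a plain text command. If that is fast, the overhead is in the provider's CommandType.StoredProcedure dispatch rather than in server-side execution:

    // Hypothetical experiment: invoke the stored procedure via SQL text
    // instead of CommandType.StoredProcedure, and compare timings.
    cmd.CommandType <- CommandType.Text;
    cmd.CommandText <- "EXECUTE PROCEDURE SelectItems( :MainName )";
    cmd.Parameters.Clear();
    cmd.Parameters.Add(":MainName", mainName + "1500") |> ignore;
    cmd.ExecuteNonQuery() |> ignore;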

    Read the article

  • Why does my MacBook Pro trackpad sometimes set its tracking speed to slow, all by itself?

    - by Paul D. Waite
    Every now and then, the trackpad on my MacBook Pro will seem to set its own tracking speed to slow. I'll notice that the cursor is moving slowly, and when I check in System Preferences, the tracking speed is indeed set to slow, even though I never set it to slow myself. This might happen before/after switching into a VMware virtual machine, but I'm not sure. It doesn't seem to happen on startup or anything, just during use. Anyone else seen this?

    Read the article

  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo is basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore difficult to generate enough problems. I've tried the following to get queries into the slow query log:

    Set the slow query time to 1 second.
    Deleted multiple indexes.
    Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
    Scripted some basic webpage calls using wget.

    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skills to write a complex JMeter or other load generator. I'm hoping perhaps for something built into MySQL, or another Linux trick beyond stress. A few ready-made options follow.
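
    For demo purposes, MySQL's built-in SLEEP() and BENCHMARK() functions can produce slow queries on demand; any statement whose total time exceeds long_query_time lands in the slow query log. A few one-liners (standard MySQL functions; the cartesian join uses the stock mysql.help_topic table, so no test data is needed):

    -- Sleeps inside the query; the sleep counts toward query time.
    SELECT SLEEP(5);

    -- Burns CPU evaluating MD5 a hundred million times in one statement.
    SELECT BENCHMARK(100000000, MD5('demo'));

    -- A cartesian self-join on any modest table explodes the row count.
    SELECT COUNT(*)
    FROM mysql.help_topic a
    JOIN mysql.help_topic b
    JOIN mysql.help_topic c;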

    Read the article

  • why is LZMA SDK (7-zip) so slow

    - by Tono Nam
    I find 7-zip great and I would like to use it in .NET applications. I have a 10MB file (a.001) and it takes 2 seconds to encode. Now it would be nice if I could do the same thing in C#. I have downloaded the LZMA SDK C# source code from http://www.7-zip.org/sdk.html. I basically copied the CS directory into a console application in Visual Studio, then compiled, and everything compiled smoothly. In the output directory I placed the file a.001, which is 10MB in size. In the main method that came with the source code I placed:

    [STAThread]
    static int Main(string[] args)
    {
        // e stands for encode
        args = "e a.001 output.7z".Split(' '); // added this line for debug
        try
        {
            return Main2(args);
        }
        catch (Exception e)
        {
            Console.WriteLine("{0} Caught exception #1.", e);
            // throw e;
            return 1;
        }
    }

    When I execute the console application it works great and I get output.7z in the working directory. The problem is that it takes so long: about 15 seconds to execute! I have also tried the approach from http://stackoverflow.com/a/8775927/637142 and it also takes very long. Why is it 10 times slower than the actual program? Also, even if I set 7-Zip to use only one thread, it still takes much less time (3 seconds vs 15).

    (Edit) Another possibility: could it be because C# is slower than assembly or C? I notice that the algorithm does a lot of heavy operations. For example, compare these two blocks of code; they both do the same thing:

    C

    #include <stdio.h> /* added: needed for printf */
    #include <time.h>  /* added: needed for time_t/time */

    int main(void) /* was "void main()", which is non-standard */
    {
        time_t now;
        int i, j, k, x;
        long counter;

        counter = 0;
        now = time(NULL);

        /* LOOP */
        for (x = 0; x < 10; x++)
        {
            counter = -1234567890 + x + 2;
            for (j = 0; j < 10000; j++)
                for (i = 0; i < 1000; i++)
                    for (k = 0; k < 1000; k++)
                    {
                        if (counter > 10000)
                            counter = counter - 9999;
                        else
                            counter = counter + 1;
                    }
            printf(" %d \n", (int)(time(NULL) - now)); /* display elapsed time */
        }
        printf("counter = %ld\n\n", counter); /* display result of counter */
        printf("Elapsed time = %d seconds ", (int)(time(NULL) - now));
        getchar(); /* was gets("Wait"), which writes into a string literal */
        return 0;
    }

    C#

    static void Main(string[] args)
    {
        DateTime now;
        int i, j, k, x;
        long counter;

        counter = 0;
        now = DateTime.Now;

        /* LOOP */
        for (x = 0; x < 10; x++)
        {
            counter = -1234567890 + x + 2;
            for (j = 0; j < 10000; j++)
                for (i = 0; i < 1000; i++)
                    for (k = 0; k < 1000; k++)
                    {
                        if (counter > 10000)
                            counter = counter - 9999;
                        else
                            counter = counter + 1;
                    }
            // note: .Seconds wraps at 60; .TotalSeconds would be safer
            Console.WriteLine((DateTime.Now - now).Seconds.ToString());
        }
        Console.Write("counter = {0} \n", counter.ToString());
        Console.Write("Elapsed time = {0} seconds", DateTime.Now - now);
        Console.Read();
    }

    Note how much slower C# was. Both programs were run outside Visual Studio in release mode. Maybe that is the reason why it takes so much longer in .NET than in C++.

    Conclusion: I cannot seem to find what is causing the problem. I guess I will use 7z.dll and invoke the necessary methods from C#. A library that does that is at http://sevenzipsharp.codeplex.com/, and that way I am using the same library that 7-Zip uses:

    // don't forget to add a reference to SevenZipSharp, located at the link above
    static void Main(string[] args)
    {
        // load the dll
        SevenZip.SevenZipCompressor.SetLibraryPath(@"C:\Program Files (x86)\7-Zip\7z.dll");
        SevenZip.SevenZipCompressor compress = new SevenZip.SevenZipCompressor();
        compress.CompressDirectory("MyFolderToArchive", "output.7z");
    }
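
    As an aside, the C#-side timing above can be sharpened: DateTime.Now has coarse resolution, and the first pass through a loop pays JIT-compilation cost. A minimal sketch of a fairer measurement (standard .NET APIs, nothing 7-Zip-specific; the loop bounds are reduced so the demo finishes quickly):

    using System;
    using System.Diagnostics;

    class Timing
    {
        static long RunLoop()
        {
            long counter = -1234567890;
            for (int j = 0; j < 1000; j++)
                for (int i = 0; i < 1000; i++)
                    for (int k = 0; k < 100; k++)
                        counter = counter > 10000 ? counter - 9999 : counter + 1;
            return counter;
        }

        static void Main()
        {
            RunLoop();                     // warm-up run: lets the JIT compile the loop first
            var sw = Stopwatch.StartNew(); // high-resolution timer
            long result = RunLoop();
            sw.Stop();
            Console.WriteLine("counter = {0}, elapsed = {1} ms", result, sw.ElapsedMilliseconds);
        }
    }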

    Read the article

  • Visual Studio 2010 remote debugging is very slow (across domains, over VPN)

    - by alex
    Overall, debugging works, but each step through code takes dozens of seconds. I've already closed all additional windows (stack trace, watches, autos) and deleted all breakpoints. The server and the dev machine are located in different domains, so I set up a local user on both with matching passwords. The remote debugger is running as a service. Looking at the security log, I found quite a lot of entries about the remote debugging account logging in (a record about every minute). Any suggestions on how I can speed up remote debugging? Dev computer: quad core, 8 GB mem, Win 7 x64, Visual Studio 2010 Ultimate. Target server: ASP.NET website, 2x dual-core Xeon, 2 GB mem, Remote Debugger 2010. Communication channel: VPN, 5 Mbit, latency about 20 ms (it seems that debugging never uses more than 20 kB/s).

    Read the article

  • TeamCity Perforce checkout is ridiculously slow

    - by Ed Woodcock
    Hi folks: I'm using TeamCity with Perforce on the build server I'm setting up at work, and it takes about 2 hours to check out the workspace each time I try to build. Does anyone have any idea why this would be the case, when it takes about two minutes to check out the full workspace from within P4V? Cheers, Ed

    Read the article

  • Slow Update/insert into SQL Server CE using LinqToDatasets

    - by Vaccano
    I have a mobile app that is using LinqToDatasets to update/insert into a SQL Server CE 3.5 file. My code looks like this:

    // All the MyClass updates
    MyTableAdapter myTableAdapter = new MyTableAdapter();
    foreach (MyClassToInsert myClass in updates.MyClassChanges)
    {
        // Update the row if it is already there
        int result = myTableAdapter.Update(myClass.FirstColumn,
                                           myClass.SecondColumn,
                                           myClass.FirstColumn);

        // If the row was not there then insert it.
        if (result == 0)
        {
            myTableAdapter.Insert(myClass.FirstColumn, myClass.SecondColumn);
        }
    }

    This code is used to keep the handheld database in sync with the server database. The problem is that a full update (the first time, for example) involves a lot of updates (about 125), which makes this code (and more loops like it) take a very long time: I have three such loops that take over 30 seconds each. Is there a faster or better way to do updates/inserts like this? (I did see this Codeplex project, but I could not see how to make it work with both updates and inserts.)
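
    One thing worth trying (a sketch, not verified against this app: the table and column names below simply reuse the question's FirstColumn/SecondColumn, and the connection string is hypothetical): keep a single open SqlCeConnection and one transaction across the whole loop. SQL CE pays a noticeable per-command cost for opening the database and committing each statement, so one commit for all ~125 upserts can help considerably:

    // Hypothetical sketch using System.Data.SqlServerCe directly.
    using (var conn = new SqlCeConnection(@"Data Source=\My Documents\app.sdf"))
    {
        conn.Open();
        using (SqlCeTransaction tx = conn.BeginTransaction())
        using (SqlCeCommand update = conn.CreateCommand())
        using (SqlCeCommand insert = conn.CreateCommand())
        {
            update.Transaction = tx;
            update.CommandText = "UPDATE MyTable SET SecondColumn = @p1 WHERE FirstColumn = @p2";
            insert.Transaction = tx;
            insert.CommandText = "INSERT INTO MyTable (FirstColumn, SecondColumn) VALUES (@p1, @p2)";

            foreach (MyClassToInsert myClass in updates.MyClassChanges)
            {
                update.Parameters.Clear();
                update.Parameters.AddWithValue("@p1", myClass.SecondColumn);
                update.Parameters.AddWithValue("@p2", myClass.FirstColumn);
                if (update.ExecuteNonQuery() == 0) // row not there: insert it
                {
                    insert.Parameters.Clear();
                    insert.Parameters.AddWithValue("@p1", myClass.FirstColumn);
                    insert.Parameters.AddWithValue("@p2", myClass.SecondColumn);
                    insert.ExecuteNonQuery();
                }
            }
            tx.Commit(); // one commit instead of ~125
        }
    }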

    Read the article

  • UIWebView loading slow if using baseUrl

    - by ashish
    This works very well and fast with a nil base URL, but then it does not load images; if I pass baseURL (myUrl) instead of nil, the images resolve but the page takes a few seconds to load.

    NSURL *myUrl = [[NSURL alloc] initFileURLWithPath:self.contentPath];
    //NSLog(@"Content url is %@", myUrl);
    NSString *string = [[NSString alloc] initWithContentsOfFile:self.contentPath
                                                       encoding:NSASCIIStringEncoding
                                                          error:nil];
    [contentDisplay loadHTMLString:string baseURL:nil];
    // here, if nil is replaced by myUrl, the webView takes time to load. Any help?
    [string release];
    [myUrl release];
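
    One thing to check (a hedged sketch; whether it helps depends on what self.contentPath points at): loadHTMLString resolves relative resources such as images against baseURL, and handing it the file URL of the HTML document itself, rather than of its directory, is a common cause of slow loads. Pointing the base at the containing directory is worth trying:

    // Sketch: use the directory containing the HTML file as the base URL.
    NSURL *fileUrl = [NSURL fileURLWithPath:self.contentPath];
    NSURL *baseDir = [fileUrl URLByDeletingLastPathComponent];
    [contentDisplay loadHTMLString:string baseURL:baseDir];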

    Read the article

  • Weblogic is slow to start (11mins) under VM (VirtualBox and VMware)

    - by Vladimir Dyuzhev
    (SOLVED! BY FAKING THE SYSTEM RANDOM GENERATOR, SEE BELOW) I'm setting up a VM image for my dev/build team. Inside that VM a WebLogic domain should be running. I use the Ubuntu server distro, WLS 9.2 MP3 + ALSB. Everything works OK and quite fast, but at start time WLS stalls twice for a measurable amount of time; the two stalls together add about 10 minutes of delay. For tasks where deployment requires a server restart it's very annoying. :-( The stall time is not constant: sometimes the server starts very fast, sometimes so-so, sometimes in 10 minutes or more. Interestingly, if I press Enter while watching the stalled server, it wakes up much faster, sometimes after a few seconds. WLST (the WebLogic Jython shell) also hangs for quite a while when executed in the VM, though it doesn't react to Enter. There must be other developers who run WLS in a VM; I wonder if they have the same problem? Was someone able to solve it? Here's the server output (just in case):

    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
    Java HotSpot(TM) Client VM (build 1.5.0_12-b04, mixed mode)
    Starting WLS with line:
    /shared2/beahome/jdk150_12/bin/java -client -Xmx256m -XX:MaxPermSize=128m -Xverify:none -da -Dplatform.home=/shared2/beahome/weblogic92 -Dwls.home=/shared2/beahome/weblogic92/server -Dwli.home=/shared2/beahome/weblogic92/integration -Dweblogic.management.discover=true -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/shared2/beahome/patch_weblogic923/profiles/default/sysext_manifest_classpath -Dweblogic.management.username=admin -Dweblogic.management.password=wlsadmin -Dweblogic.Name=LOGMGR-admin -Djava.security.policy=/shared2/beahome/weblogic92/server/lib/weblogic.policy weblogic.Server

    <1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000395> <Following extensions directory contents added to the end of the classpath: /shared2/beahome/weblogic92/platform/lib/p13n/p13n-schemas.jar:/shared2/beahome/weblogic92/platform/lib/p13n/p13n_common.jar:/shared2/beahome/weblogic92/platform/lib/p13n/p13n_system.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_common.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_schemas.jar:/shared2/beahome/weblogic92/platform/lib/wlp/netuix_system.jar:/shared2/beahome/weblogic92/platform/lib/wlp/wsrp-common.jar>
    <1-Apr-2010 12:47:22 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) Client VM Version 1.5.0_12-b04 from Sun Microsystems Inc.>
    <1-Apr-2010 12:47:23 o'clock PM GMT-05:00> <Info> <Management> <BEA-141107> <Version: WebLogic Server 9.2 MP3 Mon Mar 10 08:28:41 EDT 2008 1096261>
    <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WebLogicServer> <BEA-000215> <Loaded License : /shared2/beahome/license.bea>
    <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
    <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>
    <1-Apr-2010 12:47:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170019> <The server log file /shared2/wldomains/beaadmd/LOGMGR/servers/LOGMGR-admin/logs/LOGMGR-admin.log is opened. All server side log events will be written to this file.>

    Here we have the first delay, up to 5 mins...
    <1-Apr-2010 12:53:21 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090082> <Security initializing using security realm myrealm.>
    <1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY>
    <1-Apr-2010 12:53:24 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
    <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <Log Management> <BEA-170027> <The server initialized the domain log broadcaster successfully. Log messages will now be broadcasted to the domain log.>
    <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN>
    <1-Apr-2010 12:53:25 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING>
    <1-Apr-2010 12:53:28 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090171> <Loading the identity certificate and private key stored under the alias adminuialias from the jks keystore file /shared2/wldomains/beaadmd/LOGMGR/CustomIdentity.jks.>

    And here is the second, again up to 5 mins.

    <1-Apr-2010 12:58:56 o'clock PM GMT-05:00> <Notice> <Security> <BEA-090169> <Loading trusted certificates from the jks keystore file /shared2/wldomains/beaadmd/LOGMGR/CustomTrust.jks.>
    <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "DefaultSecure" is now listening on 192.168.56.102:7002 for protocols iiops, t3s, ldaps, https.>
    <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 192.168.56.102:8012 for protocols iiop, t3, ldap, http.>
    <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000331> <Started WebLogic Admin Server "LOGMGR-admin" for domain "LOGMGR" running in Development Mode>
    <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING>
    <1-Apr-2010 12:58:57 o'clock PM GMT-05:00> <Notice> <WebLogicServer> <BEA-000360> <Server started in RUNNING mode>

    UPDATE

    I think I'm on the right track: it must be the random seed initialization. That would explain why generating keyboard events releases the server. I made a thread dump, and one thread is in the runnable state but waiting:

    "[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=1 tid=0x0a7b06e8 nid=0xeda runnable [0x728a5000..0x728a6d80]
        at java.io.FileInputStream.readBytes(Native Method)
        at java.io.FileInputStream.read(FileInputStream.java:194)
        at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:185)
        at sun.security.provider.NativePRNG$RandomIO.implGenerateSeed(NativePRNG.java:202)
        - locked <0x7d928c78> (a java.lang.Object)
        at sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:108)
        at sun.security.provider.NativePRNG.engineGenerateSeed(NativePRNG.java:102)
        at java.security.SecureRandom.generateSeed(SecureRandom.java:475)
        at weblogic.security.AbstractRandomData.ensureInittedAndSeeded(AbstractRandomData.java:83)

    SOLVED

    WebLogic uses SecureRandom to initialize the security subsystem, and SecureRandom by default seeds from /dev/urandom. For some reason, reading that file inside a VM often comes to a halt; generating console events creates more entropy and releases WLS. For test purposes I changed the securerandom.source property in the jre/lib/security/java.security file to securerandom.source=file:/tmp/big.random.file. WebLogic now starts in 15 seconds.
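
    For reference, the same effect can be achieved per-JVM without editing java.security (a widely used workaround for exactly this blocking-entropy problem; note the /dev/./urandom spelling, which sidesteps older JDKs special-casing the literal path /dev/urandom):

    # Added to the WebLogic start script (e.g. setDomainEnv.sh):
    JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.security.egd=file:/dev/./urandom"
    export JAVA_OPTIONS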

    Read the article
