Search Results

Search found 28830 results on 1154 pages for 'multi table'.


  • How to get table cells evenly spaced?

    - by DaveDev
    I'm trying to create a page with a number of static HTML tables on it. What do I need to do to get them to display each column the same size as each other column in the table? The HTML is as follows: <span class="Emphasis">Interest rates</span><br /> <table cellpadding="0px" cellspacing="0px" class="PerformanceTable"> <tr><th class="TableHeader"></th><th class="TableHeader">Current rate as at 31 March 2010</th><th class="TableHeader">31 December 2009</th><th class="TableHeader">31 March 2009</th></tr> <tr class="TableRow"><td>Index1</td><td class="PerformanceCell">1.00%</td><td>1.00%</td><td>1.50%</td></tr> <tr class="TableRow"><td>index2</td><td class="PerformanceCell">0.50%</td><td>0.50%</td><td>0.50%</td></tr> <tr class="TableRow"><td>index3</td><td class="PerformanceCell">0.25%</td><td>0.25%</td><td>0.25%</td></tr> </table> <span>Source: Bt</span><br /><br /> <span class="Emphasis">Stock markets</span><br /> <table cellpadding="0px" cellspacing="0px" class="PerformanceTable"> <tr><th class="TableHeader"></th><th class="TableHeader">As at 31 March 2010</th><th class="TableHeader">1 month change</th><th class="TableHeader">QTD change</th><th class="TableHeader">12 months change</th></tr> <tr class="TableRow"><td>index1</td><td class="PerformanceCell">1169.43</td><td class="PerformanceCell">5.88%</td><td class="PerformanceCell">4.87%</td><td class="PerformanceCell">46.57%</td></tr> <tr class="TableRow"><td>index2</td><td class="PerformanceCell">1958.34</td><td class="PerformanceCell">7.68%</td><td class="PerformanceCell">5.27%</td><td class="PerformanceCell">58.31%</td></tr> <tr class="TableRow"><td>index3</td><td class="PerformanceCell">5679.64</td><td class="PerformanceCell">6.07%</td><td class="PerformanceCell">4.93%</td><td class="PerformanceCell">44.66%</td></tr> <tr class="TableRow"><td>index4</td><td class="PerformanceCell">2943.92</td><td class="PerformanceCell">8.30%</td><td class="PerformanceCell">-0.98%</td><td class="PerformanceCell">44.52%</td></tr> <tr class="TableRow"><td>index5</td><td class="PerformanceCell">978.81</td><td class="PerformanceCell">9.47%</td><td class="PerformanceCell">7.85%</td><td class="PerformanceCell">26.52%</td></tr> <tr class="TableRow"><td>index6</td><td class="PerformanceCell">3177.77</td><td class="PerformanceCell">10.58%</td><td class="PerformanceCell">6.82%</td><td class="PerformanceCell">44.84%</td></tr> </table> <span>Source: B</span><br /><br /> I'm also open to suggestions on how to tidy this up, if there are any? :-)
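
    A common fix is the CSS table-layout: fixed algorithm, which sizes columns from the table width rather than from their content; a minimal sketch (the class name is taken from the question, the rest is an assumption about the surrounding stylesheet):

        .PerformanceTable {
            table-layout: fixed;    /* distribute the width evenly across the columns */
            width: 100%;
            border-collapse: collapse;
        }
        .PerformanceTable th,
        .PerformanceTable td {
            padding: 0;             /* replaces the cellpadding/cellspacing attributes */
        }

    With table-layout: fixed, every column in a given table gets an equal share of the table width unless an explicit width says otherwise, which also lets the browser lay out the table faster.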

    Read the article

  • The Best Ways to Lock Down Your Multi-User Computer

    - by Lori Kaufman
    Whether you’re sharing a computer with other family members or friends at home, or securing computers in a corporate environment, there may be many reasons why you need to protect the programs, data, and settings on the computers. This article presents multiple ways of locking down a Windows 7 computer, depending on how the computer is used. You may need to use a combination of several of the following methods to protect your programs, data, and settings.

    Read the article

  • Is it possible to get dragging to work on a MacBook multi-touch touchpad

    - by lhahne
    I have a MacBook 5,1, which is to say the only 13-inch aluminium MacBook, as the later revisions were renamed MacBook Pro. Two-finger scrolling seems to work fine, but dragging doesn't. In OS X dragging works like this: you point at an object, click and keep one finger pressed on the touchpad, and slide another finger to move the cursor. This causes weird and undefined behavior in Ubuntu, as it seems the driver doesn't recognize it as dragging. Any ideas?

    Read the article

  • Configuring Multi-Tap on Synaptics Touchpad

    - by nunos
    I am having a hard time configuring my notebook's touchpad. The touchpad already works. It successfully responds to one-finger tap, two-finger tap and two-finger vertical scrolling. What I want to accomplish: (1) change the two-finger tap action from right-mouse click to middle-mouse click; (2) add three-finger tap functionality to yield a right-mouse click action (I have checked that three-finger tap is supported by my laptop's touchpad, since it works on Windows). I read on a forum to use this as a guide. I have successfully accomplished point 1 with synclient TapButton2=2. However, I have to do it every time I log in. I have tried to put that command in /etc/rc.local but the computer always boots and logs in with the default configuration. Regarding point 2, I have tried synclient TapButton3=3 but it doesn't do anything when I three-finger tap the touchpad. I am running Ubuntu 11.10 on an Asus N82JV. /etc/X11/xorg.conf: nuno@mozart:~$ cat /etc/X11/xorg.conf Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Option "TapButton1" "1" Option "TapButton2" "2" Option "TapButton3" "3" EndSection /usr/share/X11/xorg.conf.d/50-synaptics.conf: nuno@mozart:~$ cat /usr/share/X11/xorg.conf.d/50-synaptics.conf # Example xorg.conf.d snippet that assigns the touchpad driver # to all touchpads. See xorg.conf.d(5) for more information on # InputClass. # DO NOT EDIT THIS FILE, your distribution will likely overwrite # it when updating. Copy (and rename) this file into # /etc/X11/xorg.conf.d first. # Additional options may be added in the form of # Option "OptionName" "value" # Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Option "TapButton1" "1" Option "TapButton2" "2" Option "TapButton3" "3" EndSection xinput list: nuno@mozart:~$ xinput list ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=12 [slave pointer (2)] ⎜ ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=13 [slave pointer (2)] ⎜ ↳ ETPS/2 Elantech Touchpad id=16 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Video Bus id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ USB2.0 2.0M UVC WebCam id=10 [slave keyboard (3)] ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=11 [slave keyboard (3)] ↳ Asus Laptop extra buttons id=14 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=15 [slave keyboard (3)]
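
    On the persistence problem: synclient changes settings in the running X session only, and /etc/rc.local runs at boot, as root, before any session exists, so its effect is overwritten at login. A sketch of a per-login autostart entry instead (the file name is hypothetical), saved as ~/.config/autostart/touchpad-taps.desktop:

        [Desktop Entry]
        Type=Application
        Name=Touchpad tap settings
        # TapButton2=2: two-finger tap = middle click; TapButton3=3: three-finger tap = right click
        Exec=sh -c "synclient TapButton2=2 TapButton3=3"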

    Read the article

  • Configuring Touchpad Multi-Tap on Ubuntu 11.10

    - by nunos
    I am having a hard time configuring my notebook's touchpad. I can do everything I can in Windows with the exception of three-finger tap, which doesn't work, and two-finger tap, which gives me the equivalent of a right-mouse click when I want a middle-mouse click. I read on a forum to use this as a guide. The problem is that I can't even find the configuration file /etc/X11/xorg.conf.d/10-synaptics.conf. I tried running pacman -S xf86-input-synaptics, but I don't have the pacman program installed, and when I try to install it with sudo apt-get install I get a Pacman game instead! I know the guide is for Arch Linux, so maybe that's why it doesn't work for me. I am running Ubuntu 11.10 on an Asus N82JV. Any help on this is appreciated. Here's the output of xinput list: nuno@mozart:~$ xinput list ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=12 [slave pointer (2)] ⎜ ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=13 [slave pointer (2)] ⎜ ↳ ETPS/2 Elantech Touchpad id=16 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Video Bus id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ USB2.0 2.0M UVC WebCam id=10 [slave keyboard (3)] ↳ Microsoft Microsoft® Nano Transceiver v2.0 id=11 [slave keyboard (3)] ↳ Asus Laptop extra buttons id=14 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=15 [slave keyboard (3)]
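
    pacman is Arch Linux's package manager, which is why apt offers the arcade game of the same name instead. Rough Ubuntu equivalents of the guide's steps might be (a sketch; the package and paths are the ones Ubuntu 11.10 normally uses):

        # Arch's "pacman -S xf86-input-synaptics" corresponds to:
        sudo apt-get install xserver-xorg-input-synaptics

        # the stock config snippet lives under /usr/share on Ubuntu;
        # copy it to /etc/X11/xorg.conf.d to make an editable override
        sudo mkdir -p /etc/X11/xorg.conf.d
        sudo cp /usr/share/X11/xorg.conf.d/50-synaptics.conf /etc/X11/xorg.conf.d/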

    Read the article

  • Enabling Multi-touch features of the Apple Magic Trackpad on Ubuntu 12.04

    - by Martin
    I want to write a simple app that uses Apple's Magic Trackpad, nothing special, just so that it recognizes at least one gesture. The thing is, Ubuntu itself doesn't really recognize this device. I'm using Ubuntu 12.04, and by default the device works with 1 finger, but without tap-click or double-tap; 3 fingers move the window and a 3-finger spread makes it fullscreen. I managed to enable 2-finger scrolling with "xinput set-prop 8 'Two-Finger Scrolling' 1 1", but that's about it. No other gestures work: ginn doesn't start, giesview detects the device but doesn't respond to any of the gestures, and touchegg doesn't start either. I tried the example apps from Qt that come with Ubuntu but they don't work. So... what do I do? All I get from my Qt app is "Got touch without getting TouchBegin for id XX". What else can I use to get my app to work with multi-touch devices?

    Read the article

  • CSS Table Formatting for an HTML Table

    - by Rurigok
    I am attempting to apply CSS formatting to two HTML tables, but I cannot. I am setting up a webpage in HTML & CSS (with the CSS in an external sheet) and the layout of the website depends on the tables. There are 2 tables, one for the head and another for the body. They are set up so that content sits in one middle column of 60% width, with one column of 20% width on each side of the center, along with other table formatting. My question is: how can I format the tables in CSS? I successfully formatted them in HTML, but this will not do. This is the CSS code for the tables - each table has the id layouttable: #layouttable{border:0px;width:100%;} #layouttable td{width:20%;vertical-align:top;} #layouttable td{width:60%;vertical-align:top;background-color:#E8E8E8;} #layouttable td{width:20%;vertical-align:top;} The tables in the html document each have, in respective order, these elements (with content inside not shown): <table id="layouttable"><tr><td></td><td></td><td></td></tr></table> Does anyone have any idea why this CSS is not working, or can write some code to fix it? If further explanation is needed, please, ask.
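
    Two things stand out: an id must be unique on a page, so two tables sharing id layouttable is invalid (use a class instead), and the three identical #layouttable td selectors all target every cell, so the last rule simply overwrites the earlier two. A sketch that distinguishes the columns with :nth-child (selector names reuse the question's; :nth-child needs IE9 or any modern browser):

        .layouttable { border: 0; width: 100%; }
        .layouttable td { vertical-align: top; }
        .layouttable td:nth-child(1) { width: 20%; }
        .layouttable td:nth-child(2) { width: 60%; background-color: #E8E8E8; }
        .layouttable td:nth-child(3) { width: 20%; }

    with the markup changed to <table class="layouttable">...</table> on both tables.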

    Read the article

  • CSS Zebra Stripe a Specific Table tr:nth-child(even)

    - by BillR
    I want to zebra-stripe only selected tables using CSS alone; I do not want to use jQuery for this. tbody tr:nth-child(even) td, tbody tr.even td {background:#e5ecf9;} When I put that in a CSS file it affects all tables on all pages that call the same stylesheet. What I would like to do is selectively apply it to specific tables. I have tried this, but it doesn't work. // in stylesheet .zebra_stripe{ tbody tr:nth-child(even) td, tbody tr.even td {background:#e5ecf9;} } // in html <table class="zebra_even"> <colgroup> <col class="width_10em" /> <col class="width_15em" /> </colgroup> <tr> <td>Odd row nice and clear.</td> <td>Some Stuff</td> </tr> <tr> <td>Even row nice and clear but it should be shaded.</td> <td>Some Stuff</td> </tr> </table> And this: <table> <colgroup> <col class="width_10em" /> <col class="width_15em" /> </colgroup> <tbody class="zebra_even"> The stylesheet works, as it is properly formatting other elements of the html. Can someone help me with an answer to this problem? Thanks.
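
    The nested rule in the question is Sass/Less preprocessor syntax; plain CSS needs the descendant selector written out in full. A minimal sketch (note the class in the markup must also match the one in the stylesheet; the question mixes zebra_stripe and zebra_even):

        table.zebra_stripe tbody tr:nth-child(even) td,
        table.zebra_stripe tbody tr.even td {
            background: #e5ecf9;
        }

    applied with <table class="zebra_stripe">; tables without the class stay unstriped.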

    Read the article

  • add table row before or after a table row of known ID

    - by Perpetualcoder
    In a table like this: <table> <!-- Insert Row of bun here --> <tr id="meat"> <td>Hamburger</td> </tr> <!-- Insert Row of bun here --> </table> function AddBefore(rowId){} function AddAfter(rowId){} I need to create these methods without using jQuery. I am familiar with appending after and before in jQuery, but I am stuck with plain JS.
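
    A sketch in plain DOM JavaScript; a single insertBefore call covers both cases, because passing nextSibling (even when it is null, at the end of the table) inserts after the reference row:

        function AddBefore(rowId) {
            var row = document.getElementById(rowId);
            var bun = document.createElement('tr');
            bun.innerHTML = '<td>Bun</td>';
            row.parentNode.insertBefore(bun, row);
        }

        function AddAfter(rowId) {
            var row = document.getElementById(rowId);
            var bun = document.createElement('tr');
            bun.innerHTML = '<td>Bun</td>';
            // insertBefore with a null reference node behaves like appendChild
            row.parentNode.insertBefore(bun, row.nextSibling);
        }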

    Read the article

  • Problem with a table inside a table: my inner table contains some text

    - by Ayyappan.Anbalagan
    <tr style=" width:500px; float:left;"> <td style="border: thin ridge #008000; text-align:left;" align="left"; > <table class="" style=" border: 1px solid #800000; width:200px; float:left; height: 200px;"> <tr> <td>&nbsp;stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow&nbsp; </td> </tr> </table> stackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow stackoverflowstackoverflow statackoverflow sta</td> </tr> </table>

    Read the article

  • jquery: remove table row while iterating through table rows

    - by deostroll
    #exceptions is an HTML table. I try to run the code below, but it doesn't remove the table row. $('#exceptions').find('tr').each(function(){ var flag=false; var val = 'excalibur'; $(this).find('td').each(function(){ if($(this).text().toLowerCase() == val) flag = true; }); if(flag) $(this).parent().remove($(this)); }); What is the correct way to do it?
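
    For what it's worth, jQuery's .remove() removes the elements in the matched set itself; its optional argument is a selector string used as a filter, not the element to delete, so the row can simply remove itself. A sketch of the corrected loop:

        $('#exceptions').find('tr').each(function () {
            var val = 'excalibur';
            var $row = $(this);
            // does any cell in this row contain the value?
            var match = $row.find('td').filter(function () {
                return $(this).text().toLowerCase() === val;
            }).length > 0;
            if (match) {
                $row.remove();   // remove the <tr> directly, not via its parent
            }
        });

    Removing rows inside .each() is safe here because jQuery iterates over a snapshot of the matched elements.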

    Read the article

  • Update (ajax) only part of table without affecting whole table

    - by ile
    <table width="100%" border="0" cellspacing="0" cellpadding="0" class="list"> <tr> <th><a href="#" class="sortby">Full Name</a></th> <th><a href="#" class="sortby">City</a></th> <th><a href="#" class="sortby">Country</a></th> <th><a href="#" class="sortby">Status</a></th> <th><a href="#" class="sortby">Education</a></th> <th><a href="#" class="sortby">Tasks</a></th> </tr> <div class="dynamicData"> <tr> <td>Firstname Lastname</a></td> <td>Los Angeles</td> <td>USA</td> <td>Married</td> <td>High School</td> <td>4</td> </tr> </tr> <tr> <td>Firstname Lastname</a></td> <td>Los Angeles</td> <td>USA</td> <td>Married</td> <td>High School</td> <td>4</td> </tr> </div> </table> The idea is to update the table rows when clicking a link with class "sortby", which is part of the th tag. I tried inserting a div but this doesn't work. Separating this into two tables is not a good solution because the cell widths in the two tables would not follow each other. Any other solution? Thanks
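
    A div is not valid inside a table, which is why browsers silently drop it; tbody is the element meant for grouping data rows. A sketch that refreshes only the body rows over AJAX (the URL, parameter, and the server returning ready-made <tr> fragments are all assumptions):

        <!-- header row stays put; only the tbody contents get replaced -->
        <tbody id="dynamicData">
            ...data rows...
        </tbody>

        // jQuery
        $('a.sortby').click(function (e) {
            e.preventDefault();
            $.get('/contacts/sorted', { by: $(this).text() }, function (rows) {
                $('#dynamicData').html(rows);   // swap in the new <tr> rows
            });
        });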

    Read the article

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here. I want to delete from multiple tables in a single statement, how will I do it? I want to update multiple tables in a single statement, how will I do it? The answer is – no, you cannot and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables, you may want to write two different delete or update statements for it. This method has many issues – from the consistency of the data to SQL syntax. Now here is the real reason for this blog post – yesterday I was asked this question again and I replied with my canned answer saying it is not possible and it should not be implemented that way. In response to my reply I was pointed to my own blog post, where a user suggested that I had previously mentioned this is possible, with a demo example. Let us go over my conversation – you may find it interesting. Let us call the user DJ. DJ: Pinal, can we delete multiple tables in a single statement or with a single delete statement? Pinal: No, you cannot and you should not. DJ: Oh okay, if that is the case, why do you suggest doing that? Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible. DJ: Hmm… but in that case why did you blog about it earlier? Pinal: (What?) No, I did not. I am pretty confident. DJ: Well, I am confident as well. You did. Pinal: In that case, it is my word against your word. Isn’t it? DJ: I have proof. Do you want to see that you suggested it is possible? Pinal: Yes, I will be delighted to. (After 10 Minutes) DJ: Here are not one but two of your blog posts which talk about it - SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2 Pinal: Oh! DJ: I knew I was correct. Pinal: Well, oh man, I did not mean there what you mean here. DJ: I did not understand; can you explain it further? Pinal: Here we go. The example in the other blog is an example of cascading delete or cascading update. I think you may want to understand the concept of foreign keys and cascading update/delete. The concept of cascading exists to maintain data integrity. If primary key rows get deleted or updated, the change is reflected in the foreign key table to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between primary key and foreign key columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting or updating multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table 1 (cols) Table 2 (cols) VALUES … which is not a valid statement or syntax, nor is it ANSI standard. I guess, after this discussion with DJ, I realized I needed to write a blog post so I can add its link to my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
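
    For readers landing here while looking for the practical pattern: the supported approach is simply two statements, usually wrapped in a transaction so that both succeed or fail together. A minimal sketch (the table and column names are invented for illustration):

        BEGIN TRANSACTION;

        DELETE FROM dbo.OrderDetails WHERE OrderID = 42;   -- child rows first
        DELETE FROM dbo.Orders       WHERE OrderID = 42;   -- then the parent row

        COMMIT TRANSACTION;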

    Read the article

  • Display a JSON-string as a table

    - by Martin Aleksander
    I'm totally new to JSON, and have a JSON string I need to display as a user-friendly table. I have this file, http://ish.tek.no/json_top_content.php?project_id=11&period=week, which shows ID numbers for products (title) and the number of views. The title ID should be connected to this file, http://api.prisguide.no/export/product.php?id=158200, so I can get a table like this:
    ID     | Product Name        | Views
    158200 | Samsung Galaxy SIII | 21049
    How can I do this?
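
    A rough client-side sketch: fetch the top-content list, look up each product's name by id, and append rows to a table. The element id and the JSON field names (title, views, name) are assumptions, since the feeds' exact structure isn't shown in the question:

        fetch('http://ish.tek.no/json_top_content.php?project_id=11&period=week')
            .then(function (r) { return r.json(); })
            .then(function (items) {
                items.forEach(function (item) {
                    fetch('http://api.prisguide.no/export/product.php?id=' + item.title)
                        .then(function (r) { return r.json(); })
                        .then(function (product) {
                            var row = document.querySelector('#products').insertRow();
                            row.insertCell().textContent = item.title;    // product id
                            row.insertCell().textContent = product.name;  // assumed field
                            row.insertCell().textContent = item.views;    // assumed field
                        });
                });
            });

    where #products is a <table> element on the page; cross-origin requests would also need the APIs to allow them (CORS).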

    Read the article

  • MySQL table does not exist

    - by Phanindra
    I am getting the following error in the .err file: 110803 6:51:26 InnoDB: Error: table `ims`.`temp_discoveryjobdetails` already exists in InnoDB internal InnoDB: data dictionary. Have you deleted the .frm file InnoDB: and not used DROP TABLE? Have you used DROP DATABASE InnoDB: for InnoDB tables in MySQL version <= 3.23.43? InnoDB: See the Restrictions section of the InnoDB manual. InnoDB: You can drop the orphaned table inside InnoDB by InnoDB: creating an InnoDB table with the same name in another InnoDB: database and copying the .frm file to the current database. InnoDB: Then MySQL thinks the table exists, and DROP TABLE will InnoDB: succeed. InnoDB: You can look for further help from InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html When I do just that, copying the .frm file from another database here and dropping the table, I get the following error: InnoDB: Error: trying to load index PRIMARY for table ims/temp_discoveryjobdetails InnoDB: but the index tree has been freed! 110803 6:50:26 InnoDB: Error: table `ims`.`temp_discoveryjobdetails` does not exist in the InnoDB internal InnoDB: data dictionary though MySQL is trying to drop it. InnoDB: Have you copied the .frm file of the table to the InnoDB: MySQL database directory from another database? InnoDB: You can look for further help from InnoDB: http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html Can anyone help me out of this, and tell me why this error occurs? EDIT: The issue occurs only when the disk is full and when we use TRUNCATE TABLE. Also, it occurs only in version 5.1, not in 5.0.
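
    Spelled out as commands, the workaround the first error message itself describes looks roughly like this (the scratch database name and the MySQL data directory path are assumptions):

        -- 1. create an InnoDB table with the orphaned name in another database
        CREATE DATABASE IF NOT EXISTS scratch;
        CREATE TABLE scratch.temp_discoveryjobdetails (id INT) ENGINE=InnoDB;

        -- 2. copy its .frm file into the affected database's directory (shell):
        --      cp /var/lib/mysql/scratch/temp_discoveryjobdetails.frm /var/lib/mysql/ims/

        -- 3. MySQL now thinks the table exists, so the drop can succeed
        DROP TABLE ims.temp_discoveryjobdetails;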

    Read the article

  • Convert table to table with autofilter/order by function [on hold]

    - by evachristine
    How can I turn any normal HTML table: <table border=1 style='border:2px solid black;border-collapse:collapse;'><tr><td>foo1</td><td>foo2</td><td>foo3</td><td>foo3</td><td>foo4</td><td>foo5</td><td>foo6</td></tr> <tr><td><a href="https://foo.com/adsf">adsf</a></td><td>ksjdajsfljdsaljfxycaqrf</td><td><a href="mailto:[email protected]?Subject=adsf - ksjdajsfljdsaljfxycaqrf">[email protected]</a></td><td>nmasdfdsadfafd</td><td>INPROG</td><td>3</td><td>2014-03-04 10:37</td> <tr><td><a href="https://foo.com/adsflkjsadlf">adsflkjsadlf</a></td><td>alksjdlsadjfyxcvyx</td><td><a href="mailto:[email protected]?Subject=adsflkjsadlf - alksjdlsadjfyxcvyx">[email protected]</a></td><td>nmasdfdsadfafd</td><td>INPROG</td><td>3</td><td>2014-04-24 00:00</td> <tr><td><a href="https://foo.com/asdfasdfsadf">asdfasdfsadf</a></td><td>jdsalajslkfjyxcgrearafs</td><td><a href="mailto:[email protected]?Subject=asdfasdfsadf - jdsalajslkfjyxcgrearafs">[email protected]</a></td><td>nmasdfdsadfafd</td><td>INPROG</td><td>3</td><td>2014-04-24 00:00</td> </table> into a table whose first row (e.g. foo1, foo2, foo3) is clickable, so that clicking a heading orders the rows by that column, e.g. order by foo2, just like an order-by in an XLS? (Extra: how on earth can I add an autofilter too? :D)
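
    A self-contained sketch for a single flat table like the one above (modern browsers; the sort compares cell text, so a numeric column would need a numeric comparator):

        // make each cell in the first row clickable and sort the data rows
        // by that column's text
        document.querySelectorAll('table tr:first-child td').forEach(function (cell, col) {
            cell.style.cursor = 'pointer';
            cell.addEventListener('click', function () {
                var table = cell.closest('table');
                var rows = Array.from(table.rows).slice(1);   // skip the heading row
                rows.sort(function (a, b) {
                    return a.cells[col].textContent.localeCompare(b.cells[col].textContent);
                });
                rows.forEach(function (r) { table.tBodies[0].appendChild(r); });
            });
        });

    An autofilter could be built the same way: a dropdown per column that toggles style.display on the rows whose cell text doesn't match.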

    Read the article

  • Multi-statement Table Valued Function vs Inline Table Valued Function

    - by AndyC
    i.e.: CREATE FUNCTION MyNS.GetUnshippedOrders() RETURNS TABLE AS RETURN SELECT a.SaleId, a.CustomerID, b.Qty FROM Sales.Sales a INNER JOIN Sales.SaleDetail b ON a.SaleId = b.SaleId INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.ShipDate IS NULL GO versus: CREATE FUNCTION MyNS.GetLastShipped(@CustomerID INT) RETURNS @CustomerOrder TABLE (SaleOrderID INT NOT NULL, CustomerID INT NOT NULL, OrderDate DATETIME NOT NULL, OrderQty INT NOT NULL) AS BEGIN DECLARE @MaxDate DATETIME SELECT @MaxDate = MAX(OrderDate) FROM Sales.SalesOrderHeader WHERE CustomerID = @CustomerID INSERT @CustomerOrder SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty FROM Sales.SalesOrderHeader a INNER JOIN Sales.SalesOrderHeader b ON a.SalesOrderID = b.SalesOrderID INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.OrderDate = @MaxDate AND a.CustomerID = @CustomerID RETURN END GO Is there an advantage to using one over the other? Are there certain scenarios when one is better than the other, or are the differences purely syntactical? I realise the 2 example queries are doing different things, but is there a reason I would write them in that way? Reading about them, the advantages/differences haven't really been explained. Thanks
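
    The difference is more than syntax: an inline TVF is a single RETURN SELECT that the optimizer can expand into the calling query, much like a parameterized view, while a multi-statement TVF materializes its result into a table variable first, which tends to hurt cardinality estimates and performance on larger sets. For comparison, a sketch of the second function rewritten inline, assuming its self-join to Sales.SalesOrderHeader was meant to be Sales.SalesOrderDetail (the table that actually carries OrderQty and ProductID):

        CREATE FUNCTION MyNS.GetLastShipped_Inline (@CustomerID int)
        RETURNS TABLE
        AS RETURN
            SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty
            FROM Sales.SalesOrderHeader AS a
            INNER JOIN Sales.SalesOrderDetail AS b
                ON a.SalesOrderID = b.SalesOrderID
            INNER JOIN Production.Product AS c
                ON b.ProductID = c.ProductID
            WHERE a.CustomerID = @CustomerID
            AND a.OrderDate = (SELECT MAX(OrderDate)
                               FROM Sales.SalesOrderHeader
                               WHERE CustomerID = @CustomerID);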

    Read the article

  • Dell Studio 1558 multi-touch touchpad multi-finger problem

    - by christian130
    Recently I bought an Intel Core i7 Dell laptop, fully integrated. I have an ATI Radeon graphics card with 1 GB of RAM, and so on. Everything works fine, even the brightness, but when I use the touchpad's multi-finger multi-touch feature (two or more fingers), the pointer jumps around all over the place. It's very annoying: while writing this question I cannot keep the pointer still, and whenever I accidentally touch the touchpad the same thing happens. How can I fix this?

    Read the article

  • Multiple foreign keys in one table to one other table in MySQL

    - by djerry
    Hey guys, I have 2 tables in my database: user and call. User consists of 3 fields: id, name, number; and call of 5: id, 'source', 'destination', 'referred', date. I need to monitor calls in my app. The 3 quoted fields above are actually user id numbers. Now I'm wondering: can I make those 3 fields foreign keys referencing the id field in table user? Thanks in advance...
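
    Yes: several columns in one table can each carry a foreign key referencing the same parent column. A minimal MySQL sketch (column types are assumptions; InnoDB is required for enforced foreign keys):

        CREATE TABLE `user` (
            id     INT PRIMARY KEY,
            name   VARCHAR(100),
            number VARCHAR(30)
        ) ENGINE=InnoDB;

        CREATE TABLE `call` (
            id          INT PRIMARY KEY,
            source      INT,
            destination INT,
            referred    INT,
            `date`      DATETIME,
            FOREIGN KEY (source)      REFERENCES `user` (id),
            FOREIGN KEY (destination) REFERENCES `user` (id),
            FOREIGN KEY (referred)    REFERENCES `user` (id)
        ) ENGINE=InnoDB;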

    Read the article

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3GHz is in fact a processor running at 12GHz? I once got into a "Mac vs. PC" argument (which by the way is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1GHz machines because they were dual-processor G4s each running at 500MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2GHz = 1.2GHz" and that got me thinking again about whether there's a real answer to this. So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities: (1) Each core is in fact doing x calculations per second, thus the total number of calculations is x times the number of cores. (2) Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...) / cores.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: merge join does not require a memory grant; and merge join was the optimizer’s preferred join option for a single partition join. Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Multi-touch capabilities for Ubuntu 12.04? How can I configure the system to support it?

    - by dd dong
    I installed Ubuntu 13.10 and it supports multi-touch by default, perfectly! I need Ubuntu 12.04, however, and it does not appear to support multi-touch features. I wrote a touch test program; it receives touch and mouseMove events at the same time, but I just need the touch function. I know we can set the device to grab events, but how can I configure the system to support that? It gives me an error when I try to use multi-touch.

    Read the article

  • multiple pivot table consolidation to another pivot table

    - by phill
    I have two SQL Server views being drawn into 2 separate worksheets as pivot tables in an Excel 2007 file. The results on worksheet1 include example data:
    company_name, tickets, month, year
    company1, 3, 1, 2009
    company2, 4, 1, 2009
    company3, 5, 1, 2009
    company3, 2, 2, 2009
    Results from worksheet2 include example data:
    company_name, month, year, fee
    company1, 1, 2009, 2.00
    company2, 1, 2009, 3.00
    company3, 1, 2009, 4.00
    company3, 2, 2009, 2.00
    I would like the results of one worksheet to be reflected in the pivot table of the other, matched to the corresponding companies. For example, in this case:
    company_name, tickets, month, year, fee
    company1, 3, 1, 2009, 2
    company2, 4, 1, 2009, 3
    company3, 5, 1, 2009, 4
    company3, 2, 2, 2009, 2
    Is there a way to do this without VBA? Thanks in advance
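
    One VBA-free option is a helper column beside the first pivot that pulls the fee out of the second pivot with GETPIVOTDATA. A sketch, assuming the second pivot is anchored at Sheet2!$A$3 and the first pivot's company, month and year for the current row sit in A2, C2 and D2:

        =GETPIVOTDATA("fee", Sheet2!$A$3, "company_name", A2, "month", C2, "year", D2)

    Filled down, this gives the combined company/tickets/fee view without touching either source query.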

    Read the article

  • boot.ini Issue - Multi-boot System, Linux, XP and XP64 - Missing File in system32 Message

    - by nicorellius
    I have an interesting issue that has me stumped. Not that I'm a computer whiz or anything. I have a multi-boot system with two hard drives: one drive has CentOS and Windows XP 64-bit and the other drive has Windows XP 32-bit. The CentOS GRUB boot loader works great, and I have it set to default to Windows. But this is the problem. My boot.ini file seems to be in order, yet it still gives an error if I choose the default OS (which, consequently, is XP 32-bit): Windows could not start because the following file is missing or corrupt: (Windows root) \system32\ntoskrnl.exe. Please re-install a copy of the above file. But if I choose the actual boot ID, i.e., toggle to the Windows XP Pro selection, it boots just fine. In the boot.ini file, the entry for XP 32 is the same: [boot loader] timeout=30 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Pro" /noexecute=optin /fastdetect /usepmtimer [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Pro" /noexecute=optin /fastdetect /usepmtimer multi(0)disk(0)rdisk(1)partition(2)\WINDOWS="Windows XP Pro x64" /noexecute=optin /fastdetect /usepmtimer What am I missing?
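
    One detail worth checking in the quoted file: the default= value in [boot loader] is normally just the bare ARC path, with no ="description" or switches, so the extra text on that line may be what breaks the default selection while the explicit menu entries still work. A corrected sketch:

        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP Pro" /noexecute=optin /fastdetect /usepmtimer
        multi(0)disk(0)rdisk(1)partition(2)\WINDOWS="Windows XP Pro x64" /noexecute=optin /fastdetect /usepmtimer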

    Read the article
