Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.

  • Making TMUX use Alt+Num to select window

    - by Oxinabox
    I have been messing with tmux, and I am liking the configuration abilities. The window list at the bottom makes me think that the same shortcut for changing windows in Irssi should work in tmux, but it doesn't. So at the moment I have to press C-b and then a number to open that window. I am happy keeping C-b as my normal prefix (e.g. C-b ? for help, C-b : for command entry), but it would be cool if I could use both C-b+NumKey and Alt+NumKey for changing windows. It would be even cooler if tmux could detect when a window is showing Irssi and ignore Alt+NumKey there, so that I can still change between Irssi windows.
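    For reference, tmux can bind keys that work without the prefix via bind-key -n, so Alt+number switching is just a set of extra bindings; a minimal ~/.tmux.conf sketch (assuming the default window base-index of 0):

        # Alt+0..2 jump straight to that window; add lines for 3-9 as needed
        bind-key -n M-0 select-window -t 0
        bind-key -n M-1 select-window -t 1
        bind-key -n M-2 select-window -t 2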

    Read the article

  • Can't include JavaScript variable in PHP mysql_query call? [on hold]

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value, but I can't get this to work. Am I unable to include the JavaScript variable agency.value in PHP?

        <script type="text/javascript">
        var agency = document.getElementById("agency");
        var user = document.getElementById("user");
        agency.onchange = onchange; // change options when agency is changed
        function onchange() {
        <?php include 'dbConnect.php'; ?>
        <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName");?>
            option_html = "<option value=0 selected>- All Users -</option>";
        <?php while ($row1 = mysql_fetch_array($q)) {?>
            if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
                option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
            }
        <?php } ?>
            user.innerHTML = option_html;
        }
        </script>
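    For context, PHP runs on the server before the page (and its JavaScript) ever reaches the browser, so agency.value cannot be spliced into the query string at that point. The usual approach is to request the option list from a separate PHP script via Ajax; a minimal sketch, where getUsers.php is a hypothetical endpoint that echoes the <option> list for one agency:

        // client side: re-fetch the option list whenever the agency changes
        agency.onchange = function () {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "getUsers.php?agency=" + encodeURIComponent(agency.value), true);
            xhr.onload = function () { user.innerHTML = xhr.responseText; };
            xhr.send();
        };

        <?php
        // getUsers.php (hypothetical): builds the <option> list server-side
        include 'dbConnect.php';
        $agency = (int) $_GET['agency']; // cast defends against SQL injection here
        $q = mysql_query("SELECT id AS UserID, CONCAT(LastName, ', ', FirstName) AS UserName
                          FROM users WHERE Agency = $agency ORDER BY UserName");
        echo "<option value=0 selected>- All Users -</option>";
        while ($row = mysql_fetch_array($q)) {
            echo "<option value={$row['UserID']}>{$row['UserName']}</option>";
        }
        ?>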

    Read the article

  • Automatically select last row in a set in Excel

    - by Luke
    In Excel 2003, I am trying to keep track of some petty cash. I have it set up with the denominations along the top row, along with a subtotal and difference column. I want a small section that shows how many rolls of coins I should have, by taking the total amount, dividing it by how many coins belong in a roll, and rounding down to the nearest whole number. That part is fine. What I want is for that one section (how many rolls I should have) to be based on the last row that has information in it. For example, if the last row is row 13, it should read the data from B13, C13, D13, etc. I don't mind learning macros if that's what the solution requires, but I don't want to manually select the last row each time; I just want the worksheet to know automatically.
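    One macro-free approach is to point the rolls section at the last occupied cell in each column; two common formula sketches for column B (repeat per column):

        =INDEX(B:B, COUNTA(B:B))     finds the last entry when column B has no blank cells above it
        =LOOKUP(9.99E+307, B:B)      finds the last numeric entry even if the column has gaps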

    Read the article

  • Windows 8.1 Search does not automatically select first search match

    - by Miguel Sevilla
    When I search in Windows 8/8.1 (Start menu, start typing), it doesn't automatically highlight the first search match. For example, if I'm trying to open the "Internet Options" panel and type the entire thing out in search, I have to press the down arrow or Tab to reach the "Internet Options" search result. This is frustrating. I'm used to the Windows 7 style of search, where the first match is highlighted and I can easily just hit Return. First-match highlighting does work for other built-in things like "Control Panel", but it should work for all things in general, as it did in Windows 7 search. Anyway, if there is an option to enable this in Windows 8/8.1, I'd appreciate the tip. Thanks!

    Read the article

  • HIGH CPU USAGE + low memory usage

    - by hadi
    As you can see below, there is high CPU usage by httpd processes. Please help me decrease it. Thanks.

          PID USER    PR  NI  VIRT   RES   SHR  S %CPU %MEM    TIME+  COMMAND
        28577 apache  15   0  99676  53m  3488  S   21  0.2  1:13.67  httpd
        28568 apache  15   0  99676  53m  3496  S   19  0.2  1:14.92  httpd
        28608 apache  15   0  99676  53m  3428  R   19  0.2  0:28.28  httpd
        28615 apache  15   0  99676  53m  3436  R   19  0.2  0:25.33  httpd
        28616 apache  15   0  99676  53m  3440  S   19  0.2  0:25.83  httpd
        28619 apache  15   0  99676  53m  3436  R   19  0.2  0:26.12  httpd
        28635 apache  15   0  97.9m  54m  3416  S   19  0.2  0:24.86  httpd
        28558 apache  15   0  97.9m  54m  3432  R   17  0.2  1:40.75  httpd
        28560 apache  15   0  97.9m  54m  3496  R   17  0.2  1:40.02  httpd
        28621 apache  15   0  97.9m  54m  3420  S   17  0.2  0:25.61  httpd
        28641 apache  16   0  97.9m  54m  3428  R   17  0.2  0:21.52  httpd
        28642 apache  15   0  99756  53m  3424  R   15  0.2  0:21.46  httpd
        28643 apache  15   0  99676  53m  3424  S   15  0.2  0:21.59  httpd
        28594 apache  15   0  99756  53m  3428  R   13  0.2  0:44.41  httpd
        28618 apache  15   0  99676  53m  3420  S   13  0.2  0:26.15  httpd
        28654 apache  15   0  99676  53m  3472  S   13  0.2  0:04.27  httpd
        28575 apache  15   0  99756  53m  3436  R   11  0.2  1:14.02  httpd
        28576 apache  15   0  99676  53m  3496  S   11  0.2  1:16.79  httpd
        28634 apache  15   0  99676  53m  3436  S   11  0.2  0:25.36  httpd
        28653 apache  15   0  99676  53m  3424  S   11  0.2  0:04.35  httpd
        28574 apache  15   0  99676  53m  3440  S   10  0.2  1:13.05  httpd
        28592 apache  15   0  99676  53m  3492  R   10  0.2  0:45.78  httpd
        28595 apache  15   0  99676  53m  3432  R   10  0.2  0:47.02  httpd
        28617 apache  16   0  99676  53m  3436  S   10  0.2  0:25.32  httpd
        28620 apache  15   0  99676  53m  3432  S   10  0.2  0:25.35  httpd
        28597 apache  15   0  99676  53m  3428  S    8  0.2  0:43.56  httpd
        11345 mysql   15   0  2927m 198m  4472  R    4  0.6  1624:43  mysqld
            1 root    15   0   2036  648   552  S    0  0.0  0:16.97  init
            2 root    RT   0      0    0     0  S    0  0.0  0:48.50  migration/0
            3 root    34  19      0    0     0  S    0  0.0  0:26.72  ksoftirqd/0
            4 root    RT   0      0    0     0  S    0  0.0  0:00.00  watchdog/0
            5 root    RT   0      0    0     0  S    0  0.0  0:04.98  migration/1
            6 root    34  19      0    0     0  R    0  0.0  0:27.51  ksoftirqd/1
            7 root    RT   0      0    0     0  S    0  0.0  0:00.00  watchdog/1
            8 root    RT   0      0    0     0  S    0  0.0  0:15.42  migration/2
            9 root    34  19      0    0     0  S    0  0.0  0:26.50  ksoftirqd/2
           10 root    RT   0      0    0     0  S    0  0.0  0:00.00  watchdog/2

    Read the article

  • PC doesn't select right monitor sometimes

    - by Madhur Ahuja
    I have an Acer LCD monitor with an Intel G33/G31 display chipset. The monitor's preferred resolution is 1440x900. The drivers for both the monitor and the display adapter are correctly installed and show up in Device Manager. However, I have observed that sometimes, in Display Properties, my PC shows "Display Device on: VGA" instead of "Display Device on: Acer", and when that happens the resolution becomes distorted and I am unable to switch back to 1440x900. The problem sometimes resolves itself between reboots. Any idea what's going on?

    Read the article

  • Open mysql only to localhost and a particular address

    - by Rodrigo Asensio
    My config: Ubuntu Server 9 and MySQL 5.
    my.cnf: bind-address = 0.0.0.0
    My iptables script: iptables -A INPUT -i eth0 -s 99.88.77.66 -p tcp --destination-port 3306 -j ACCEPT
    I can connect to MySQL from any place, not only that IP. I ran iptables-save and /etc/init.d/networking restart, but I can still connect from any IP. Any clue?
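    For context, an ACCEPT rule on its own never blocks other traffic: if the INPUT chain's policy is ACCEPT and nothing later drops port 3306, every source still gets in. A minimal sketch of the usual pattern (interface and address taken from the question):

        # allow MySQL from the trusted address, then drop everyone else on eth0;
        # connections to 127.0.0.1 arrive on lo, so localhost is unaffected
        iptables -A INPUT -i eth0 -p tcp -s 99.88.77.66 --dport 3306 -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 3306 -j DROP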

    Read the article

  • How to select a server that supports Windows scheduled file IO

    - by Kristof Verbiest
    Background: I am developing an application that needs to read data from disk with a fairly consistent throughput. It is important that this throughput is not influenced by other actions that happen on the disk (e.g. by other processes). For this purpose, I was hoping to use the 'Scheduled File I/O' feature in Windows (through the GetFileBandwidthReservation and SetFileBandwidthReservation functions). However, this StackOverflow question has taught me that this feature is only available if the device driver supports it. Currently I have no computer at my disposal that seems to support this feature (I have an HP ProLiant server and a Dell Precision workstation). Question: If I were to order a new server, how can I know beforehand whether this feature will be supported by the device driver? How 'upscale' does the server have to be? Has anybody used this feature with success and cares to share their experiences?
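    One way to answer "does this box support it?" empirically is a small probe that opens a file on the target volume and queries the current reservation; on driver stacks without support the call should fail. A minimal sketch (the file path is a placeholder, and the exact failure code is hardware-dependent, ERROR_INVALID_FUNCTION being typical):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Any existing file on the volume under test; unbuffered, overlapped
               handles are the typical mode for bandwidth-reserved I/O. */
            HANDLE h = CreateFileA("C:\\probe.dat", GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING,
                                   FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, NULL);
            if (h == INVALID_HANDLE_VALUE) {
                printf("open failed: %lu\n", GetLastError());
                return 1;
            }

            DWORD periodMs, bytesPerPeriod, transferSize, outstandingRequests;
            BOOL discardable;
            if (GetFileBandwidthReservation(h, &periodMs, &bytesPerPeriod,
                                            &discardable, &transferSize,
                                            &outstandingRequests)) {
                printf("supported: %lu bytes per %lu ms\n", bytesPerPeriod, periodMs);
            } else {
                printf("not supported on this volume (error %lu)\n", GetLastError());
            }
            CloseHandle(h);
            return 0;
        }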

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers. I am about to write a web-based attendance system for a very large user base. I expect around 1,000 to 1,500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1,000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests; crosses fingers to avoid 100 concurrent requests).

    I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download user data in their locality and cache it for 1-2 weeks; attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for the attendance of those who do not belong in the current locality, and will pass the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax, so no HTML data is included, only JSON, and both types expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. As of this moment, userids may only reach 6 digits and eventids are 4- to 5-digit integers.

    I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1,500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary; the users table will only add around 500 to 1k rows a week. If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too broad; I just don't know what to ask or what you would want to know at this point.
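    As a quick sanity check using only the figures above:

        1,000-1,500 users x (1 request / 10 s)  =  100-150 requests/s sustained
        400,000 rows/day x 3 days/week          =  1.2M attendance rows/week
        1.2M rows/week x 52 weeks               =  ~62M attendance rows/year, if it runs year-round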

    Read the article

  • NetworkManager will not allow me to select WPA

    - by Mala
    Hi, I have a fresh install of CrunchBang (basically Ubuntu with Openbox instead of GNOME). In any case, I can't connect to my WPA network: when I click on it in nm-applet and it asks me to authenticate, my only options are WEP 10/128-bit key, WEP 128-bit Passphrase, LEAP, and Dynamic WEP (802.1x). wpa_supplicant is indeed installed, yet WPA does not get listed... any ideas?
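    The WPA options are usually hidden when NetworkManager believes the wireless driver does not advertise WPA/RSN capability. A quick check (a sketch; substitute your interface name for wlan0) is whether the card's scan results contain WPA information elements:

        # if nothing prints, the driver itself is not reporting WPA support
        sudo iwlist wlan0 scanning | grep -i -A 2 -E 'wpa|rsn'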

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/MySQL/Postgres sites and applications. We host two different types of site.

    i). PHP/MySQL/Zend Framework:

        <VirtualHost *:80>
            DocumentRoot "/home/websites/website.com/public"
            ServerName website.com
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory "/home/websites/website.com/public">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteCond %{REQUEST_FILENAME} -s [OR]
                RewriteCond %{REQUEST_FILENAME} -l [OR]
                RewriteCond %{REQUEST_FILENAME} -d
                RewriteRule ^.*$ - [NC,L]
                RewriteRule ^.*$ index.php [NC,L]
            </Directory>
        </VirtualHost>

    ii). Matrix CMS - PHP/Postgres + loads of PEAR classes:

        <VirtualHost *:80>
            ServerName server.example.com
            DocumentRoot /home/websites/mysource_matrix/core/web
            Options -Indexes FollowSymLinks
            <Directory /home/websites/mysource_matrix>
                Order deny,allow
                Deny from all
            </Directory>
            <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
                Order allow,deny
                Allow from all
            </DirectoryMatch>
            <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
                php_flag engine off
            </DirectoryMatch>
            <FilesMatch "\.inc$">
                Order allow,deny
                Deny from all
            </FilesMatch>
            <LocationMatch "/(CVS|\.FFV)/">
                Order allow,deny
                Deny from all
            </LocationMatch>
            Alias /__fudge /home/websites/mysource_matrix/fudge
            Alias /__data /home/websites/mysource_matrix/data/public
            Alias /__lib /home/websites/mysource_matrix/core/lib
            Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>

    My key requirements are:
    - I don't want to worry/know/care about the server/infrastructure
    - Secure, up-to-date software and OS
    - Good monitoring
    - Automatic scalability
    - An SLA

    I apologise for the length of the question. In short, all I want to do is: i). create a vhost, ii). create a db, iii). install the app/site, iv). relax. Thanks. Edit: I include the Matrix vhost because that is the only complication that I cannot really handle via a .htaccess file.

    Read the article

  • .htaccess to restrict access to only select files

    - by Bryan Ward
    I have a directory on my web server from which I would like to serve up only PDF files. I found I can restrict access using .htaccess, with something like:

        <FilesMatch "\.(text,doc)">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

    to serve up everything except files matching a regular expression. Is it possible to instead restrict access to only files which match some regular expression?
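    One common way to invert the logic in Apache 2.2-style config is to deny the directory as a whole and then re-allow only the extension you want; a sketch for PDF-only access:

        # .htaccess: deny everything in this directory...
        Order allow,deny
        Deny from all

        # ...then re-open access for PDF files only
        <FilesMatch "\.pdf$">
            Order allow,deny
            Allow from all
        </FilesMatch>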

    Read the article

  • Select different parts of a line

    - by Ricardo Sa
    I'm new to regexes and have a file that looks like this one:

        |60|493,93|1,6500|
        |60|95,72|1,6500|
        |60|43,88|1,6500|
        |60|972,46|1,6500|

    I used the regex (\|60|.*)(1,65) and I was able to find all the lines that have the information I wanted to change. How can I make a replace so that when Notepad++ finds (\|60|.*)(1,65), the 60 is replaced with 50:

        |50|493,93|1,6500|
        |50|95,72|1,6500|
        |50|43,88|1,6500|
        |50|972,46|1,6500|

    PS: here's an example of the full line:

        |C170|002|34067||44,14000|KG|493,93|0|0|020|1102||288,11|12,00|34,57|0|0|0|0|||0|0|0|60|493,93|1,6500|||8,15|60|493,93|7,6000|||
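    Notepad++ (6.0 and later) uses PCRE for its regex mode, so one way is to capture the part that identifies the right field and put it back in the replacement; a sketch:

        Find what:    \|60\|([\d,]+\|1,65)
        Replace with: |50|\1

    On |60|493,93|1,6500| this rewrites only the leading 60, and on the full |C170|... line it leaves the later |60|493,93|7,6000 field alone, because that occurrence is not followed by 1,65.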

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to the servers at our office. We would like to copy only a few of the tables, because the bandwidth required to log-ship the entire database (our current solution) is too high. Replication directly to our servers is also out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to use some form of replication to copy the tables we need into a second, smaller database on the same server, and then log-ship that smaller database instead, but management is concerned: the clients have broken replication on us in the past (it was between two servers on their internal network, however) and management would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against it; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.

    Read the article

  • ec2 LAMP REDhat distro change mysql password error

    - by t q
    I am on a plain Linux EC2 instance and wish to change my MySQL password. I've tried: sudo mysqladmin -u root -p '***old***' password '***new****'. It then prompts me to enter a password; I enter ***old***, but I keep getting an error message: mysqladmin: connect to server at 'localhost' failed. error: 'Access denied for user 'root'@'localhost' (using password: YES)'. Question: how do I change my current password?
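    For what it's worth, a space after -p makes mysqladmin prompt for the password and treat '***old***' as a separate argument rather than as the old password. The usual form attaches the old password directly to -p (the values below are placeholders):

        # no space between -p and the old password; quotes protect special characters
        mysqladmin -u root -p'OLD_PASSWORD' password 'NEW_PASSWORD'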

    Read the article

  • Receiving a 500 Internal Server error after login success?

    - by Jeremy Quick
    I created my first member login form, which takes the typical username and password and then sends them to the code below.

    checklogin.php:

        mysql_connect($db_host, $db_username, $db_password) or die(mysql_error());
        mysql_select_db($db_database) or die(mysql_error());

        $username = $_POST['username'];
        $password = $_POST['password'];
        $username = stripslashes($username);
        $password = stripslashes($password);
        $username = mysql_real_escape_string($username);
        $password = mysql_real_escape_string($password);

        $sql = "SELECT * FROM users WHERE username='$username' and password='$password'";
        $result = mysql_query($sql);
        $count = mysql_num_rows($result);

        if($count == 1) // ERROR APPEARS TO TAKE PLACE HERE
        {
            session_start();
            $_SESSION['username'] = $username;
            $_SESSION['password'] = $password;
            header('login_success.php');
        }
        else
        {
            header("location:login_fail.php");
        }

    If I type in the wrong information, everything works properly, so I know the error is taking place in the marked if statement. I have been searching the internet for solutions, but none seem to match mine, or I am overlooking them. I've made a few changes which brought me to this point; before, I was receiving deprecation warnings. Also, I have checked the logs and they are empty of errors relating to this.
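    For comparison, a redirect needs the Location: prefix; as written, header('login_success.php') sends a malformed header, which some servers surface as a 500. The success branch would normally end with something like:

        header('Location: login_success.php');
        exit; // stop execution so nothing else is sent after the redirect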

    Read the article

  • recommend a host for .NET+ffmpeg+MySql+UrlRewrite?

    - by acidzombie24
    I need a host that supports the following: ffmpeg (for audio) + oggenc, which I found online; a MySQL server (my code doesn't like MS SQL); and URL rewriting. One server I tried didn't support /path/title because there was no .aspx, and they did not allow modification to the server since it was shared. I configured Apache in the past but was clueless with Win Server 08 R2. I prefer not to configure my own server. Also, I am trying to keep the hosting price low.

    Read the article

  • On HP Mini, unable to select 800x600 resolution

    - by Roboto
    I have an HP Mini laptop. The only resolution setting I can make for my display is 1024x576, but the HP Deskjet 6988 driver only allows resolution settings of 800x600. I don't care how 800x600 would look on my laptop; I only want to install the driver for the printer and then set the resolution back. I went into the registry, but it was showing a resolution setting of 800x600. How else can I set the resolution, or at least add the 800x600 option to my Display Properties?

    Read the article

  • How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet

    - by mv
    In this blog I will write about how to extract a cert and key from an NSS DB, import them into a JKS keystore, and then import that JKS keystore into an Oracle Wallet.

    1. Set Java Home
    I pointed it to JRE 1.6.0_22:

        $ export JAVA_HOME=/usr/java/jre1.6.0_22/

    2. Create a self-signed ECC cert in NSS DB
    I created an NSS DB with a self-signed ECC certificate. If you already have an NSS DB with an ECC cert (and key), skip this step.

        $ export NSS_DIR=/export/home/nss/
        $ $NSS_DIR/certutil -N -d .
        $ $NSS_DIR/certutil -S -x -s "CN=test,C=US" -t "C,C,C" -n ecc-cert -k ec -q nistp192 -d .

    3. Export ECC cert and key using pk12util
    Use the NSS tool pk12util to export this cert and key into a p12 file:

        $ $NSS_DIR/pk12util -o ecc-cert.p12 -n ecc-cert -d . -W password

    4. Use keytool to create a JKS keystore and import this p12 file

    4.1 Import the p12 file created above into a JKS keystore

        $ $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v

    But an error like this may be encountered:

        keytool error: java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available
        java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available
                at com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source)
                at java.security.KeyStoreSpi.engineGetEntry(Unknown Source)
                at java.security.KeyStore.getEntry(Unknown Source)
                at sun.security.tools.KeyTool.recoverEntry(Unknown Source)
                at sun.security.tools.KeyTool.doImportKeyStoreSingle(Unknown Source)
                at sun.security.tools.KeyTool.doImportKeyStore(Unknown Source)
                at sun.security.tools.KeyTool.doCommands(Unknown Source)
                at sun.security.tools.KeyTool.run(Unknown Source)
                at sun.security.tools.KeyTool.main(Unknown Source)
        Caused by: java.security.NoSuchAlgorithmException: EC KeyFactory not available
                at java.security.KeyFactory.<init>(Unknown Source)
                at java.security.KeyFactory.getInstance(Unknown Source)
                ... 9 more

    4.2 Create a new PKCS11 provider
    If you didn't get an error as shown above, skip this step. Since we already have NSS libraries built with ECC, we can create a new PKCS11 provider. Create ${java.home}/jre/lib/security/nss.cfg as follows:

        name = NSS
        nssLibraryDirectory = ${nsslibdir}
        nssDbMode = noDb
        attributes = compatibility

    where nsslibdir should contain NSS libs with ECC support. Add the following line to ${java.home}/jre/lib/security/java.security:

        security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg

    Note that for those using Oracle iPlanet Web Server or Oracle Traffic Director, the NSS libs built with ECC are in <ws_install_dir>/lib or <otd_install_dir>/lib.

    4.3 Now keytool should work
    Now you can try the same keytool command and see that it succeeds:

        $ $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v
        [Storing ecc.jks]

    5. Convert the JKS keystore into an Oracle Wallet
    You can export this cert and key from the JKS keystore and import them into an Oracle Wallet if needed, using the orapki tool as shown below. Make sure that the orapki you use supports ECC. Also, for ECC you MUST use the "-jsafe" option.

        $ orapki wallet create -pwd password -wallet . -jsafe
        $ orapki wallet jks_to_pkcs12 -wallet . -pwd password -keystore ecc.jks -jkspwd password -jsafe
        $ orapki wallet display -wallet . -pwd welcome1 -jsafe
        Oracle PKI Tool : Version 11.1.2.0.0
        Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
        Requested Certificates:
        User Certificates:
        Subject:        CN=test,C=US
        Trusted Certificates:
        Subject:        OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
        Subject:        CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US
        Subject:        OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
        Subject:        OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US
        Subject:        CN=test,C=US

    As you can see, our ECC cert is in the wallet. You can follow the same steps for RSA certs as well.

    6. References
    http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=356
    http://old.nabble.com/-PATCH-FOR-REVIEW-%3A-Support-PKCS11-cryptography-via-NSS-p25282932.html
    http://www.mozilla.org/projects/security/pki/nss/tools/pk12util.html

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let’s look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows) but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows): the problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in show plan, so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), we also get the optimal execution plan. I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM Clause

        SELECT q1.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT COUNT_BIG(*)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                    GROUP BY ()
                )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT SUM(1)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

    Read the article
