Search Results

Search found 22897 results on 916 pages for 'query processing'.


  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved in questions from beginners. There is a big difference between a question from a beginner and a question from an advanced user. I have noticed that when an advanced user gets an error, they usually need just a small hint to resolve the problem. When a beginner gets an error, however, he or she sometimes sits on it for a long time, having no idea how to solve the problem and no idea what the product is capable of. I recently received a very novice-level question, and when I read it I could quickly see where the user was stuck. When I replied with the solution, he wrote a long email explaining how he had not been able to solve the problem, and he thanked me multiple times. This whole exchange inspired me to write this quick blog post. I have modified the user's question to use AdventureWorks and simplified it so it contains only the core content I want to discuss.

    Problem Statement: Find all the details of SalesOrderHeader for the latest ShipDate.

    He came up with the following T-SQL query:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = MAX(ShipDate)
        GO

    When he executed the above script it gave him the following error:

        Msg 147, Level 15, State 1, Line 3
        An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

    He was not able to resolve this problem, even though the solution is hinted at in the error message itself. Due to lack of experience he came up with another version of the query, based on the error message:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        HAVING ShipDate = MAX(ShipDate)
        GO

    When he ran the above query it produced another error:

        Msg 8121, Level 16, State 1, Line 3
        Column 'Sales.SalesOrderHeader.ShipDate' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.

    What he actually wanted was all the SalesOrderHeader rows for sales shipped on the last day. Based on the problem statement, the right solution is the following, which does not generate an error:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader])

    Well, that's it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution available to the problem stated? Please share in the comments.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
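    As one possible answer to the closing question, here is a hedged alternative of my own (a sketch against the same AdventureWorks table, not taken from the original post) that avoids the subquery by letting TOP (1) WITH TIES do the work:

        -- Order by ShipDate descending and keep every row that ties with the
        -- latest date; the result matches the MAX(ShipDate) subquery above.
        SELECT TOP (1) WITH TIES *
        FROM [Sales].[SalesOrderHeader]
        ORDER BY ShipDate DESC;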

    Read the article

  • RHEL 5.5 Yum Update Fails Dependency Error

    - by user65788
    I have 30 different RHEL 5.5 machines that will not update some 33 packages via Yum. Does anyone know why these packages will not install and how to correct this? Yum clean all does not fix the issue, however skip broken will allow other updates to install but I am really after a way to clear this up for good. They are stock boxes with RHEL subscription and not using any yum repositories other than Red Hat's own official repositories. They have not been updated for over a year! yum update Loaded plugins: rhnplugin, security rhel-i386-client-5 | 1.4 kB 00:00 rhel-i386-client-5/primary | 2.8 MB 00:09 rhel-i386-client-5 6607/6607 Skipping security plugin, no data Setting up Update Process Resolving Dependencies Skipping security plugin, no data --> Running transaction check ---> Package autofs.i386 1:5.0.1-0.rc2.143.el5_5.6 set to be updated ---> Package cpp.i386 0:4.1.2-48.el5 set to be updated --> Processing Dependency: curl = 7.15.5-2.1.el5_3.5 for package: curl-devel ---> Package curl.i386 0:7.15.5-9.el5 set to be updated --> Processing Dependency: cyrus-sasl-lib = 2.1.22-5.el5 for package: cyrus-sasl-devel ---> Package cyrus-sasl-lib.i386 0:2.1.22-5.el5_4.3 set to be updated ---> Package cyrus-sasl-md5.i386 0:2.1.22-5.el5_4.3 set to be updated ---> Package cyrus-sasl-plain.i386 0:2.1.22-5.el5_4.3 set to be updated --> Processing Dependency: db4 = 4.3.29-10.el5 for package: db4-devel ---> Package db4.i386 0:4.3.29-10.el5_5.2 set to be updated --> Processing Dependency: dbus = 1.1.2-12.el5 for package: dbus-devel ---> Package dbus.i386 0:1.1.2-14.el5 set to be updated ---> Package dbus-libs.i386 0:1.1.2-14.el5 set to be updated ---> Package dbus-x11.i386 0:1.1.2-14.el5 set to be updated ---> Package e2fsprogs.i386 0:1.39-23.el5_5.1 set to be updated --> Processing Dependency: e2fsprogs-libs = 1.39-23.el5 for package: e2fsprogs-devel ---> Package e2fsprogs-libs.i386 0:1.39-23.el5_5.1 set to be updated ---> Package esc.i386 0:1.1.0-12.el5 set to be updated --> Processing Dependency: expat = 1.95.8-8.2.1 for package: expat-devel ---> Package expat.i386 0:1.95.8-8.3.el5_5.3 set to be updated ---> Package firefox.i386 0:3.6.13-2.el5 set to be updated --> Processing Dependency: freetype = 2.2.1-21.el5_3 for package: freetype-devel ---> Package freetype.i386 0:2.2.1-28.el5_5.1 set to be updated --> Processing Dependency: gcc = 4.1.2-46.el5_4.1 for package: gcc-c++ --> Processing Dependency: gcc = 4.1.2-46.el5_4.1 for package: gcc-gfortran ---> Package gcc.i386 0:4.1.2-48.el5 set to be updated --> Processing Dependency: gd = 2.0.33-9.4.el5_1.1 for package: gd-devel ---> Package gd.i386 0:2.0.33-9.4.el5_4.2 set to be updated --> Processing Dependency: gnome-vfs2 = 2.16.2-4.el5 for package: gnome-vfs2-devel ---> Package gnome-vfs2.i386 0:2.16.2-6.el5_5.1 set to be updated ---> Package gnome-vfs2-smb.i386 0:2.16.2-6.el5_5.1 set to be updated --> Processing Dependency: gnutls = 1.4.1-3.el5_3.5 for package: gnutls-devel ---> Package gnutls.i386 0:1.4.1-3.el5_4.8 set to be updated --> Processing Dependency: gtk2 = 2.10.4-20.el5 for package: gtk2-devel ---> Package gtk2.i386 0:2.10.4-21.el5_5.6 set to be updated --> Processing Dependency: hal = 0.5.8.1-52.el5 for package: hal-devel ---> Package hal.i386 0:0.5.8.1-59.el5 set to be updated --> Processing Dependency: krb5-libs = 1.6.1-36.el5 for package: krb5-devel ---> Package krb5-libs.i386 0:1.6.1-36.el5_5.6 set to be updated ---> Package krb5-workstation.i386 0:1.6.1-36.el5_5.6 set to be updated --> Processing Dependency: libXi = 1.0.1-3.1 for 
package: libXi-devel ---> Package libXi.i386 0:1.0.1-4.el5_4 set to be updated --> Processing Dependency: libXrandr = 1.1.1-3.1 for package: libXrandr-devel ---> Package libXrandr.i386 0:1.1.1-3.3 set to be updated --> Processing Dependency: libXt = 1.0.2-3.1.fc6 for package: libXt-devel ---> Package libXt.i386 0:1.0.2-3.2.el5 set to be updated --> Processing Dependency: libgfortran = 4.1.2-46.el5_4.1 for package: gcc-gfortran ---> Package libgfortran.i386 0:4.1.2-48.el5 set to be updated --> Processing Dependency: libsepol = 1.15.2-2.el5 for package: libsepol-devel ---> Package libsepol.i386 0:1.15.2-3.el5 set to be updated --> Processing Dependency: libstdc++ = 4.1.2-46.el5_4.1 for package: gcc-c++ --> Processing Dependency: libstdc++ = 4.1.2-46.el5_4.1 for package: libstdc++-devel ---> Package libstdc++.i386 0:4.1.2-48.el5 set to be updated --> Processing Dependency: mesa-libGL = 6.5.1-7.7.el5 for package: mesa-libGL-devel ---> Package mesa-libGL.i386 0:6.5.1-7.8.el5 set to be updated --> Processing Dependency: mesa-libGLU = 6.5.1-7.7.el5 for package: mesa-libGLU-devel ---> Package mesa-libGLU.i386 0:6.5.1-7.8.el5 set to be updated --> Processing Dependency: newt = 0.52.2-12.el5_4.1 for package: newt-devel ---> Package newt.i386 0:0.52.2-15.el5 set to be updated --> Processing Dependency: nspr = 4.7.6-1.el5_4 for package: nspr-devel ---> Package nspr.i386 0:4.8.6-1.el5 set to be updated --> Processing Dependency: nss = 3.12.3.99.3-1.el5_3.2 for package: nss-devel ---> Package nss.i386 0:3.12.8-1.el5 set to be updated ---> Package nss-tools.i386 0:3.12.8-1.el5 set to be updated --> Processing Dependency: openldap = 2.3.43-3.el5 for package: openldap-devel ---> Package openldap.i386 0:2.3.43-12.el5_5.3 set to be updated ---> Package openldap-clients.i386 0:2.3.43-12.el5_5.3 set to be updated --> Processing Dependency: openssl = 0.9.8e-12.el5 for package: openssl-devel ---> Package openssl.i686 0:0.9.8e-12.el5_5.7 set to be updated --> Processing Dependency: pam = 0.99.6.2-6.el5 for package: pam-devel ---> Package pam.i386 0:0.99.6.2-6.el5_5.2 set to be updated --> Processing Dependency: popt = 1.10.2.3-18.el5 for package: rpm-devel --> Processing Dependency: popt = 1.10.2.3-18.el5 for package: rpm-build ---> Package popt.i386 0:1.10.2.3-20.el5_5.1 set to be updated --> Processing Dependency: python = 2.4.3-27.el5 for package: python-devel ---> Package python.i386 0:2.4.3-27.el5_5.3 set to be updated --> Processing Dependency: rpm = 4.4.2.3-18.el5 for package: rpm-devel --> Processing Dependency: rpm = 4.4.2.3-18.el5 for package: rpm-build ---> Package rpm.i386 0:4.4.2.3-20.el5_5.1 set to be updated --> Processing Dependency: rpm-libs = 4.4.2.3-18.el5 for package: rpm-devel --> Processing Dependency: rpm-libs = 4.4.2.3-18.el5 for package: rpm-build ---> Package rpm-libs.i386 0:4.4.2.3-20.el5_5.1 set to be updated ---> Package rpm-python.i386 0:4.4.2.3-20.el5_5.1 set to be updated ---> Package xulrunner.i386 0:1.9.2.13-3.el5 set to be updated ---> Package xulrunner-devel.i386 0:1.9.2.7-2.el5 set to be updated --> Processing Dependency: xulrunner = 1.9.2.7-2.el5 for package: xulrunner-devel --> Processing Dependency: nss-devel >= 3.12.6 for package: xulrunner-devel --> Processing Dependency: nspr-devel >= 4.8 for package: xulrunner-devel --> Processing Dependency: libnotify-devel for package: xulrunner-devel ---> Package yelp.i386 0:2.16.0-26.el5 set to be updated rhel-i386-client-5/filelists | 16 MB 00:45 --> Finished Dependency Resolution xulrunner-devel-1.9.2.7-2.el5.i386 from 
rhel-i386-client-5 has depsolving problems --> Missing Dependency: libnotify-devel is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) mesa-libGLU-devel-6.5.1-7.7.el5.i386 from installed has depsolving problems --> Missing Dependency: mesa-libGLU = 6.5.1-7.7.el5 is needed by package mesa-libGLU-devel-6.5.1-7.7.el5.i386 (installed) python-devel-2.4.3-27.el5.i386 from installed has depsolving problems --> Missing Dependency: python = 2.4.3-27.el5 is needed by package python-devel-2.4.3-27.el5.i386 (installed) nss-devel-3.12.3.99.3-1.el5_3.2.i386 from installed has depsolving problems --> Missing Dependency: nss = 3.12.3.99.3-1.el5_3.2 is needed by package nss-devel-3.12.3.99.3-1.el5_3.2.i386 (installed) libstdc++-devel-4.1.2-46.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: libstdc++ = 4.1.2-46.el5_4.1 is needed by package libstdc++-devel-4.1.2-46.el5_4.1.i386 (installed) xulrunner-devel-1.9.2.7-2.el5.i386 from rhel-i386-client-5 has depsolving problems --> Missing Dependency: nspr-devel >= 4.8 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) gcc-c++-4.1.2-46.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: libstdc++ = 4.1.2-46.el5_4.1 is needed by package gcc-c++-4.1.2-46.el5_4.1.i386 (installed) rpm-devel-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: rpm-libs = 4.4.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) xulrunner-devel-1.9.2.7-2.el5.i386 from rhel-i386-client-5 has depsolving problems --> Missing Dependency: xulrunner = 1.9.2.7-2.el5 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) nspr-devel-4.7.6-1.el5_4.i386 from installed has depsolving problems --> Missing Dependency: nspr = 4.7.6-1.el5_4 is needed by package nspr-devel-4.7.6-1.el5_4.i386 (installed) libXrandr-devel-1.1.1-3.1.i386 from installed has depsolving problems --> Missing Dependency: libXrandr = 1.1.1-3.1 is needed by package libXrandr-devel-1.1.1-3.1.i386 (installed) libsepol-devel-1.15.2-2.el5.i386 from installed has depsolving problems --> Missing Dependency: libsepol = 1.15.2-2.el5 is needed by package libsepol-devel-1.15.2-2.el5.i386 (installed) libXt-devel-1.0.2-3.1.fc6.i386 from installed has depsolving problems --> Missing Dependency: libXt = 1.0.2-3.1.fc6 is needed by package libXt-devel-1.0.2-3.1.fc6.i386 (installed) mesa-libGL-devel-6.5.1-7.7.el5.i386 from installed has depsolving problems --> Missing Dependency: mesa-libGL = 6.5.1-7.7.el5 is needed by package mesa-libGL-devel-6.5.1-7.7.el5.i386 (installed) openldap-devel-2.3.43-3.el5.i386 from installed has depsolving problems --> Missing Dependency: openldap = 2.3.43-3.el5 is needed by package openldap-devel-2.3.43-3.el5.i386 (installed) openssl-devel-0.9.8e-12.el5.i386 from installed has depsolving problems --> Missing Dependency: openssl = 0.9.8e-12.el5 is needed by package openssl-devel-0.9.8e-12.el5.i386 (installed) dbus-devel-1.1.2-12.el5.i386 from installed has depsolving problems --> Missing Dependency: dbus = 1.1.2-12.el5 is needed by package dbus-devel-1.1.2-12.el5.i386 (installed) newt-devel-0.52.2-12.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: newt = 0.52.2-12.el5_4.1 is needed by package newt-devel-0.52.2-12.el5_4.1.i386 (installed) gnome-vfs2-devel-2.16.2-4.el5.i386 from installed has depsolving problems --> Missing Dependency: gnome-vfs2 = 2.16.2-4.el5 is needed by package gnome-vfs2-devel-2.16.2-4.el5.i386 
(installed) gnutls-devel-1.4.1-3.el5_3.5.i386 from installed has depsolving problems --> Missing Dependency: gnutls = 1.4.1-3.el5_3.5 is needed by package gnutls-devel-1.4.1-3.el5_3.5.i386 (installed) rpm-build-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: rpm-libs = 4.4.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) gd-devel-2.0.33-9.4.el5_1.1.i386 from installed has depsolving problems --> Missing Dependency: gd = 2.0.33-9.4.el5_1.1 is needed by package gd-devel-2.0.33-9.4.el5_1.1.i386 (installed) e2fsprogs-devel-1.39-23.el5.i386 from installed has depsolving problems --> Missing Dependency: e2fsprogs-libs = 1.39-23.el5 is needed by package e2fsprogs-devel-1.39-23.el5.i386 (installed) xulrunner-devel-1.9.2.7-2.el5.i386 from rhel-i386-client-5 has depsolving problems --> Missing Dependency: nss-devel >= 3.12.6 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) krb5-devel-1.6.1-36.el5.i386 from installed has depsolving problems --> Missing Dependency: krb5-libs = 1.6.1-36.el5 is needed by package krb5-devel-1.6.1-36.el5.i386 (installed) gcc-gfortran-4.1.2-46.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: libgfortran = 4.1.2-46.el5_4.1 is needed by package gcc-gfortran-4.1.2-46.el5_4.1.i386 (installed) curl-devel-7.15.5-2.1.el5_3.5.i386 from installed has depsolving problems --> Missing Dependency: curl = 7.15.5-2.1.el5_3.5 is needed by package curl-devel-7.15.5-2.1.el5_3.5.i386 (installed) pam-devel-0.99.6.2-6.el5.i386 from installed has depsolving problems --> Missing Dependency: pam = 0.99.6.2-6.el5 is needed by package pam-devel-0.99.6.2-6.el5.i386 (installed) rpm-build-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: rpm = 4.4.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) expat-devel-1.95.8-8.2.1.i386 from installed has depsolving problems --> Missing Dependency: expat = 1.95.8-8.2.1 is needed by package expat-devel-1.95.8-8.2.1.i386 (installed) gcc-c++-4.1.2-46.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: gcc = 4.1.2-46.el5_4.1 is needed by package gcc-c++-4.1.2-46.el5_4.1.i386 (installed) gtk2-devel-2.10.4-20.el5.i386 from installed has depsolving problems --> Missing Dependency: gtk2 = 2.10.4-20.el5 is needed by package gtk2-devel-2.10.4-20.el5.i386 (installed) gcc-gfortran-4.1.2-46.el5_4.1.i386 from installed has depsolving problems --> Missing Dependency: gcc = 4.1.2-46.el5_4.1 is needed by package gcc-gfortran-4.1.2-46.el5_4.1.i386 (installed) cyrus-sasl-devel-2.1.22-5.el5.i386 from installed has depsolving problems --> Missing Dependency: cyrus-sasl-lib = 2.1.22-5.el5 is needed by package cyrus-sasl-devel-2.1.22-5.el5.i386 (installed) rpm-devel-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: popt = 1.10.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) db4-devel-4.3.29-10.el5.i386 from installed has depsolving problems --> Missing Dependency: db4 = 4.3.29-10.el5 is needed by package db4-devel-4.3.29-10.el5.i386 (installed) rpm-build-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: popt = 1.10.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) rpm-devel-4.4.2.3-18.el5.i386 from installed has depsolving problems --> Missing Dependency: rpm = 4.4.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) libXi-devel-1.0.1-3.1.i386 from installed 
has depsolving problems --> Missing Dependency: libXi = 1.0.1-3.1 is needed by package libXi-devel-1.0.1-3.1.i386 (installed) hal-devel-0.5.8.1-52.el5.i386 from installed has depsolving problems --> Missing Dependency: hal = 0.5.8.1-52.el5 is needed by package hal-devel-0.5.8.1-52.el5.i386 (installed) freetype-devel-2.2.1-21.el5_3.i386 from installed has depsolving problems --> Missing Dependency: freetype = 2.2.1-21.el5_3 is needed by package freetype-devel-2.2.1-21.el5_3.i386 (installed) Error: Missing Dependency: libgfortran = 4.1.2-46.el5_4.1 is needed by package gcc-gfortran-4.1.2-46.el5_4.1.i386 (installed) Error: Missing Dependency: libsepol = 1.15.2-2.el5 is needed by package libsepol-devel-1.15.2-2.el5.i386 (installed) Error: Missing Dependency: libstdc++ = 4.1.2-46.el5_4.1 is needed by package gcc-c++-4.1.2-46.el5_4.1.i386 (installed) Error: Missing Dependency: mesa-libGL = 6.5.1-7.7.el5 is needed by package mesa-libGL-devel-6.5.1-7.7.el5.i386 (installed) Error: Missing Dependency: mesa-libGLU = 6.5.1-7.7.el5 is needed by package mesa-libGLU-devel-6.5.1-7.7.el5.i386 (installed) Error: Missing Dependency: freetype = 2.2.1-21.el5_3 is needed by package freetype-devel-2.2.1-21.el5_3.i386 (installed) Error: Missing Dependency: hal = 0.5.8.1-52.el5 is needed by package hal-devel-0.5.8.1-52.el5.i386 (installed) Error: Missing Dependency: libXt = 1.0.2-3.1.fc6 is needed by package libXt-devel-1.0.2-3.1.fc6.i386 (installed) Error: Missing Dependency: openldap = 2.3.43-3.el5 is needed by package openldap-devel-2.3.43-3.el5.i386 (installed) Error: Missing Dependency: libstdc++ = 4.1.2-46.el5_4.1 is needed by package libstdc++-devel-4.1.2-46.el5_4.1.i386 (installed) Error: Missing Dependency: nss-devel >= 3.12.6 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) Error: Missing Dependency: newt = 0.52.2-12.el5_4.1 is needed by package newt-devel-0.52.2-12.el5_4.1.i386 (installed) Error: Missing Dependency: gnutls = 1.4.1-3.el5_3.5 is needed by package gnutls-devel-1.4.1-3.el5_3.5.i386 (installed) Error: Missing Dependency: gnome-vfs2 = 2.16.2-4.el5 is needed by package gnome-vfs2-devel-2.16.2-4.el5.i386 (installed) Error: Missing Dependency: libXrandr = 1.1.1-3.1 is needed by package libXrandr-devel-1.1.1-3.1.i386 (installed) Error: Missing Dependency: python = 2.4.3-27.el5 is needed by package python-devel-2.4.3-27.el5.i386 (installed) Error: Missing Dependency: gcc = 4.1.2-46.el5_4.1 is needed by package gcc-c++-4.1.2-46.el5_4.1.i386 (installed) Error: Missing Dependency: libnotify-devel is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) Error: Missing Dependency: popt = 1.10.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: openssl = 0.9.8e-12.el5 is needed by package openssl-devel-0.9.8e-12.el5.i386 (installed) Error: Missing Dependency: curl = 7.15.5-2.1.el5_3.5 is needed by package curl-devel-7.15.5-2.1.el5_3.5.i386 (installed) Error: Missing Dependency: xulrunner = 1.9.2.7-2.el5 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) Error: Missing Dependency: nspr = 4.7.6-1.el5_4 is needed by package nspr-devel-4.7.6-1.el5_4.i386 (installed) Error: Missing Dependency: nss = 3.12.3.99.3-1.el5_3.2 is needed by package nss-devel-3.12.3.99.3-1.el5_3.2.i386 (installed) Error: Missing Dependency: popt = 1.10.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: libXi = 1.0.1-3.1 is needed by package 
libXi-devel-1.0.1-3.1.i386 (installed) Error: Missing Dependency: nspr-devel >= 4.8 is needed by package xulrunner-devel-1.9.2.7-2.el5.i386 (rhel-i386-client-5) Error: Missing Dependency: pam = 0.99.6.2-6.el5 is needed by package pam-devel-0.99.6.2-6.el5.i386 (installed) Error: Missing Dependency: rpm = 4.4.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: cyrus-sasl-lib = 2.1.22-5.el5 is needed by package cyrus-sasl-devel-2.1.22-5.el5.i386 (installed) Error: Missing Dependency: gtk2 = 2.10.4-20.el5 is needed by package gtk2-devel-2.10.4-20.el5.i386 (installed) Error: Missing Dependency: dbus = 1.1.2-12.el5 is needed by package dbus-devel-1.1.2-12.el5.i386 (installed) Error: Missing Dependency: db4 = 4.3.29-10.el5 is needed by package db4-devel-4.3.29-10.el5.i386 (installed) Error: Missing Dependency: rpm-libs = 4.4.2.3-18.el5 is needed by package rpm-build-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: gcc = 4.1.2-46.el5_4.1 is needed by package gcc-gfortran-4.1.2-46.el5_4.1.i386 (installed) Error: Missing Dependency: expat = 1.95.8-8.2.1 is needed by package expat-devel-1.95.8-8.2.1.i386 (installed) Error: Missing Dependency: gd = 2.0.33-9.4.el5_1.1 is needed by package gd-devel-2.0.33-9.4.el5_1.1.i386 (installed) Error: Missing Dependency: krb5-libs = 1.6.1-36.el5 is needed by package krb5-devel-1.6.1-36.el5.i386 (installed) Error: Missing Dependency: rpm = 4.4.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: rpm-libs = 4.4.2.3-18.el5 is needed by package rpm-devel-4.4.2.3-18.el5.i386 (installed) Error: Missing Dependency: e2fsprogs-libs = 1.39-23.el5 is needed by package e2fsprogs-devel-1.39-23.el5.i386 (installed) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The repolist is yum repolist all Loaded plugins: rhnplugin, security repo id repo name status rhel-debuginfo Red Hat Enterprise Linux 5Client - i386 - Deb disabled rhel-debuginfo-beta Red Hat Enterprise Linux 5Client Beta - i386 disabled rhel-i386-client-5 Red Hat Enterprise Linux Desktop (v. 5 for 32 enabled: 6,607 repolist: 6,607

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. The increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point everyone looking for further information to these posts:

    SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup
    SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2
    SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3
    SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries

    In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks: is there any other way, besides creating a covering index or an index with included columns, to remove this expensive key lookup?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
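    For readers who want a concrete starting point, here is a minimal hedged sketch of my own (the query shape and the index name are assumptions, not taken from the linked posts) showing how an included-column index covers a query on AdventureWorks and removes the key lookup:

        -- A query that filters on one column but returns others typically
        -- shows a Key Lookup when only a narrow index on CustomerID exists.
        SELECT SalesOrderID, OrderDate, TotalDue
        FROM Sales.SalesOrderHeader
        WHERE CustomerID = 11000;

        -- Covering the query: the filtered column is the index key, and the
        -- returned columns ride along as included columns, so the plan can be
        -- satisfied by the index alone and the lookup disappears.
        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_CustomerID_Covering
        ON Sales.SalesOrderHeader (CustomerID)
        INCLUDE (OrderDate, TotalDue);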

    Read the article

  • SQL SERVER – Solution – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it from Brad Schulz. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Challenge – Puzzle – Usage of FAST Hint. The question was: under what conditions is the FAST hint useful? In response to this puzzle, SQL Server expert Brad Schulz pointed me to his blog post where he explains how the FAST hint can be useful. I strongly recommend reading his blog post over here. With Brad's permission, I am reproducing the following queries here. He has come up with an example where the FAST hint improves performance.

        USE AdventureWorks
        GO
        DECLARE @DesiredDateAtMidnight DATETIME = '20010709'
        DECLARE @NextDateAtMidnight DATETIME = DATEADD(DAY,1,@DesiredDateAtMidnight)
        -- Query without FAST
        SELECT OrderID=h.SalesOrderID
            ,h.OrderDate
            ,h.TerritoryID
            ,TerritoryName=t.Name
            ,c.CardType
            ,c.CardNumber
            ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4)
            ,h.TotalDue
        FROM Sales.SalesOrderHeader h
        LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID
        LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID
        WHERE OrderDate>=@DesiredDateAtMidnight
        AND OrderDate<@NextDateAtMidnight
        ORDER BY h.SalesOrderID;

        -- Query with FAST(10)
        SELECT OrderID=h.SalesOrderID
            ,h.OrderDate
            ,h.TerritoryID
            ,TerritoryName=t.Name
            ,c.CardType
            ,c.CardNumber
            ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4)
            ,h.TotalDue
        FROM Sales.SalesOrderHeader h
        LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID
        LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID
        WHERE OrderDate>=@DesiredDateAtMidnight
        AND OrderDate<@NextDateAtMidnight
        ORDER BY h.SalesOrderID
        OPTION(FAST 10)

    Now when you check the execution plans for the two queries, you will find a visible difference: the query with FAST returns results at a much lower cost. Thank you Brad for the excellent post and for teaching us something. I request all of you to read the original blog post written by Brad for much more information.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each one. Let me know which of the following is your favorite article from memory lane.

    2007

    Spatial Database Definition and Research Documents
    Here is the definition from Wikipedia about a spatial database: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.

    Select Only Date Part From DateTime – Best Practice
    A very common question I receive is how to get only the date or time part from a datetime value. In this blog post I explain the same in very simple words.

    T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table
    I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.

    SQL SERVER – Cannot resolve collation conflict for equal to operation
    One of the very first errors I ever encountered in my career was resolving this conflict. I have blogged about it, and I have realized that many others like me are facing this error.

    LEN and DATALENGTH of NULL Simple Example
    Here is the question for you: what is the LEN of a NULL value? Well it is very easy – just read the blog.

    Recovery Models and Selection
    A very simple and easy explanation of the database backup recovery models and how to select the best option for you.

    Explanation SQL SERVER Hash Join
    A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).

    Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
    SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns

    NorthWind Database or AdventureWorks Database – Samples Databases
    In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.

    sp_HelpText for sp_HelpText – Puzzle
    A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.

    2008

    SQL SERVER – 2008 – Step By Step Installation Guide With Images
    When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was very large. I wrote this blog post in the spirit that it would help all the newbies to install SQL Server 2008 with the help of images. Still today this blog post has been the bible for people who are confused by SQL Server installation.

    Inline Variable Assignment
    I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I talked about this feature. I think this feature saves some time and makes the code more readable.

    Introduction to Policy Management – Enforcing Rules on SQL Server
    If our company policy is to create all stored procedures with the prefix 'usp', then developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.

    2009

    Performance Counters from System Views – By Kevin Mckenna
    Many of you are not aware of the fact that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.

    Customize Toolbar – Remove Debug Button from Toolbar
    I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this, it gets very infuriating for developers when they are developing in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.

    Effect of Normalization on Index and Performance
    A very interesting conversation which started from Twitter. If you want to read one link, this is the link I encourage you to read.

    SSMS Feature – Multi-server Queries
    Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create a report. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere.

    Query Optimizer Hint ROBUST PLAN – Question to You
    "ROBUST PLAN" is a kind of query hint which works quite differently than other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post.

    2010

    Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples:
    Fastest Way to Restore the Database
    Difference Between DATETIME and DATETIME2
    Difference Between DATETIME and DATETIME2 – WITH GETDATE

    Shrinking NDF and MDF Files – Readers' Opinion
    Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    2011

    Solution – Puzzle – SELECT * vs SELECT COUNT(*)
    This is a very interesting question and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples and oranges comparison?

    2012

    Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage
    In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples.

    Finding Size of a Columnstore Index Using DMVs
    One very common question I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with a columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size.

    Developer Training Resources and Summary Roundup
    Developer Training – Importance and Significance – Part 1
    In this part we discussed the importance of training in the real world. The most important and valuable resource of any company is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees, and it is the most important task.
    Developer Training – Employee Morals and Ethics – Part 2
    In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not receiving a chance for training, think about leaving the organization.
    Developer Training – Various Options for Developer Training – Part 4
    In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and I should have explored each of the training subjects in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.
    Developer Training – A Conclusive Summary – Part 5
    There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. That "change is the only constant" and "adapt and overcome" are the essential lessons of life. One cannot stop learning and resist change. In the IT industry the "ego of knowing it all" and the "resistance to change" are the most challenging issues.

    A Quick Look at Logging and Ideas around Logging
    Question: What is the first thing that comes to your mind when you hear the word "Logging"? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.

    Beginning Performance Tuning with SQL Server Execution Plan

    Solution of Puzzle – Swap Value of Column Without Case Statement
    Earlier this week I asked a question where I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENTILE_CONT(). Books Online gives the following definition of this function: Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P.

    If you are clear on the function from that definition, there is no need to read further. If you got lost, here it is in simple words: it is a lot like finding the median, generalized to any percentile value. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID)
                OVER (PARTITION BY SalesOrderID) AS MedianCont
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY SalesOrderID DESC
        GO

    The above query will give us the following result. You can see that I have used PERCENTILE_CONT(0.5) in the query, which is similar to finding the median. Let me explain the above diagram in a little more detail. The definition of the median is as follows: for an even number of elements, take the two middle values of the ordered list, add them, and divide by 2; for an odd number of elements, take the middle value of the ordered list. I hope this example gives a clear idea of how PERCENTILE_CONT() works.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
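    To make the interpolation visible, here is a tiny sketch of my own (not from the original post) comparing PERCENTILE_CONT with its discrete counterpart PERCENTILE_DISC over four constructed values:

        -- Four values: 10, 20, 30, 40. The median falls between 20 and 30.
        SELECT DISTINCT
            PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY val) OVER () AS MedianCont,  -- 25.0, interpolated between the two middle values
            PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY val) OVER () AS MedianDisc   -- 20, an actual value from the set
        FROM (VALUES (10), (20), (30), (40)) AS t(val);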

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method (i.e., not PWJ), what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done scanning both sides of the join.

    In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. By the way, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see two times the number of processes of the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort operations). The important piece here is that the PWJ plan will typically be faster and, in terms of the number of PX processes and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general. You can see her talk about explaining the explain plan and other optimizer topics over here:
    ODTUG in Washington DC, June 27 - July 1
    On the Optimizer blog
    At OpenWorld in San Francisco, September 19 - 23
    Happy joining, and hope to see you all at ODTUG and OOW...
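    As a rough sketch of the precondition for a full partition-wise join, the two tables must be equi-partitioned on the join key. The following Oracle example is my own illustration (the table names sales_pwj and customers_pwj are made up, not from the post):

        -- Both tables hash-partitioned the same way on the join column.
        CREATE TABLE sales_pwj (cust_id NUMBER, amount NUMBER)
          PARTITION BY HASH (cust_id) PARTITIONS 8;

        CREATE TABLE customers_pwj (cust_id NUMBER, name VARCHAR2(100))
          PARTITION BY HASH (cust_id) PARTITIONS 8;

        -- With matching partitioning and a join on the partition key, each
        -- parallel process can join one partition pair on its own, so no
        -- producer/consumer redistribution is needed.
        SELECT /*+ PARALLEL(4) */ c.name, SUM(s.amount)
        FROM   sales_pwj s JOIN customers_pwj c ON s.cust_id = c.cust_id
        GROUP  BY c.name;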

    Read the article

  • Oracle Text query parser

    - by Roger Ford
    Oracle Text provides a rich query syntax which enables powerful text searches. However, this syntax isn't intended for use by inexperienced end-users. If you provide a simple search box in your application, you probably want users to be able to type "Google-like" searches into the box, and have your application convert that into something that Oracle Text understands.

    For example, if your user types "windows nt networking" then you probably want to convert this into something like "windows ACCUM nt ACCUM networking". But beware - "NT" is a reserved word, and needs to be escaped. So let's escape all words: "{windows} ACCUM {nt} ACCUM {networking}". That's fine - until you start introducing wild cards. Then you must escape only non-wildcarded searches: "win% ACCUM {nt} ACCUM {networking}". There are quite a few other "gotchas" that you might encounter along the way.

    Then there's the issue of scoring. Given a query for "oracle text query syntax", it would be nice if we could score a full phrase match higher than a hit where all four words are present but not in a phrase. And then perhaps lower than that would be a document where three of the four terms are present. Progressive relaxation helps you with this, but you need to code the "progression" yourself in most cases.

    To help with this, I've developed a query parser which will take queries in Google-like syntax and convert them into Oracle Text queries. It's designed to be as flexible as possible, and will generate either simple queries or progressive relaxation queries. The input string will typically just be a string of words, such as "oracle text query syntax", but the grammar does allow for more complex expressions:
    word : score will be improved if word exists
    +word : word must exist
    -word : word CANNOT exist
    "phrase words" : words treated as phrase (may be preceded by + or -)
    field:(expression) : find expression (which allows +, - and phrase as above) within "field"

    So for example if I searched for:
    +"oracle text" query +syntax -ctxcat
    then the results would have to contain the phrase "oracle text" and the word syntax. Any documents mentioning ctxcat would be excluded from the results.

    All the instructions are at the top of the file (see "Downloads" at the bottom of this blog entry). Please download the file, read the instructions, then try it out by running "parser.pls" in either SQL*Plus or SQL Developer. I am also uploading a test file "test.sql". You can run this and/or modify it to run your own tests or run against your own text index. test.sql is designed to be run from SQL*Plus and may not produce useful output in SQL Developer (or it may, I haven't tried it).

    I'm putting the code up here for testing and comments. I don't consider it "production ready" at this point, but would welcome feedback. I'm particularly interested in comments such as:
    "The instructions are unclear - I couldn't figure out how to do XXX"
    "It didn't work in my environment" (please provide as many details as possible)
    "We can't use it in our application" (why not?)
    "It needs to support XXX feature"
    "It produced an invalid query output when I fed in XXXX"

    Downloads: parser.pls, test.sql
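    For illustration, here is a sketch of the kind of Oracle Text query the parser would emit for the input "windows nt networking". The docs table, doc_text column and the CONTEXT index behind them are my own assumed names, not from the post:

        -- User input: windows nt networking
        -- Emitted query, with the reserved word NT escaped in braces:
        SELECT doc_id, SCORE(1)
        FROM   docs
        WHERE  CONTAINS(doc_text, '{windows} ACCUM {nt} ACCUM {networking}', 1) > 0
        ORDER  BY SCORE(1) DESC;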

    Read the article

  • SQL SERVER – A Funny Cartoon on Index

    - by pinaldave
    Performance tuning has been my favorite subject and I have done it for many years now. Today I will list one of the most common conversations about indexes I have heard in my life. Every single time I am at a consultation for performance tuning, I hear the following conversation among various team members. I want to ask you: does this kind of conversation happen in your organization? Anyway, if you think an index solves all of your performance problems, I think that is not true. There are many other factors one has to consider along with indexes. For example, I consider the following topics ones that need to be understood for performance tuning:
    Logical Query Processing
    Efficient Join Techniques
    Query Tuning Considerations
    Avoiding Common Performance Tuning Issues
    Statistics and Best Practices
    TempDB Tuning
    Hardware Planning
    Understanding the Query Processor
    Using SQL Server 2005 and 2008 Updated Feature Sets
    CPU, Memory, I/O Bottlenecks
    Index Tuning (of course)
    Many more…

    Well, I wrote this post intending to keep it easy and light. I will discuss other performance tuning concepts in the future. Let me know what you think about the cartoon I made.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Humor, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
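    To see why an index by itself is not the whole story, here is a small sketch of my own (not from the post, and it assumes a nonclustered index exists on OrderDate in AdventureWorks): the same logical filter written two ways, where only the second form lets the optimizer seek the index.

        -- Wrapping the indexed column in a function makes the predicate
        -- non-SARGable, so the index cannot be used for a seek:
        SELECT SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE YEAR(OrderDate) = 2004;

        -- The equivalent range predicate keeps the column bare,
        -- so the same index can support an index seek:
        SELECT SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE OrderDate >= '20040101' AND OrderDate < '20050101';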

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves query performance. The cardinality estimation logic is responsible for the quality of query plans and plays a major role in the performance of any query. This logic had not been updated for quite a while, but in SQL Server 2014 it has been re-designed. The new logic incorporates various assumptions and algorithms for OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.

    Action Item
    Here are the blog posts I have previously written; you can read them over here:
    Simple Demo of New Cardinality Estimation Features of SQL Server 2014
    Pluralsight Course
    You can subscribe to my YouTube Channel for frequent updates.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Video
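    For readers who want to try this on their own server, the new estimator is tied to the database compatibility level. Below is a small hedged sketch of my own (the database name AdventureWorks2014 is an assumption, not from the post) showing how to switch between the two behaviors:

        -- Compatibility level 120 enables the new SQL Server 2014 cardinality
        -- estimator for the database; 110 keeps the legacy estimator.
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;

        -- A single query can opt back into the legacy estimator with the
        -- documented trace flag 9481:
        SELECT COUNT(*)
        FROM Sales.SalesOrderHeader
        OPTION (QUERYTRACEON 9481);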

    Read the article

  • Nesting Linq-to-Objects query within Linq-to-Entities query –what is happening under the covers?

    - by carewithl
    var numbers = new int[] { 1, 2, 3, 4, 5 }; var contacts = from c in context.Contacts where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault() select c; foreach (var item in contacts) Console.WriteLine(item.ContactID); Linq-to-Entities query is first translated into Linq expression tree, which is then converted by Object Services into command tree. And if Linq-to-Entities query nests Linq-to-Objects query, then this nested query also gets translated into an expression tree. a) I assume none of the operators of the nested Linq-to-Objects query actually get executed, but instead data provider for particular DB (or perhaps Object Services) knows how to transform the logic of Linq-to-Objects operators into appropriate SQL statements? b) Data provider knows how to create equivalent SQL statements only for some of the Linq-to-Objects operators? c) Similarly, data provider knows how to create equivalent SQL statements only for some of the non-Linq methods in the Net Framework class library? EDIT: I know only some Sql so I can't be completely sure, but reading Sql query generated for the above code it seems data provider didn't actually execute numbers.Max method, but instead just somehow figured out that numbers.Max should return the maximum value and then proceed to include in generated Sql query a call to TSQL's build-in MAX function. It also put all the values held by numbers array into a Sql query. SELECT CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1], [Extent1].[ContactID] AS [ContactID], [Extent1].[FirstName] AS [FirstName], [Extent1].[LastName] AS [LastName], [Extent1].[Title] AS [Title], [Extent1].[AddDate] AS [AddDate], [Extent1].[ModifiedDate] AS [ModifiedDate], [Extent1].[RowVersion] AS [RowVersion], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13] FROM [dbo].[Contact] AS [Extent1] LEFT OUTER JOIN (SELECT [Extent2].[ContactID] AS [ContactID], [Extent2].[BirthDate] AS [BirthDate], [Extent2].[HeightInches] AS [HeightInches], [Extent2].[WeightPounds] AS [WeightPounds], [Extent2].[DietaryRestrictions] AS [DietaryRestrictions], [Extent3].[CustomerTypeID] AS [CustomerTypeID], [Extent3].[InitialDate] AS [InitialDate], 
[Extent3].[PrimaryDesintation] AS [PrimaryDesintation], [Extent3].[SecondaryDestination] AS [SecondaryDestination], [Extent3].[PrimaryActivity] AS [PrimaryActivity], [Extent3].[SecondaryActivity] AS [SecondaryActivity], [Extent3].[Notes] AS [Notes], [Extent3].[RowVersion] AS [RowVersion], cast(1 as bit) AS [C1] FROM [dbo].[ContactPersonalInfo] AS [Extent2] INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID] LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll3].[C1] AS [C1] FROM (SELECT [UnionAll2].[C1] AS [C1] FROM (SELECT [UnionAll1].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]) AS [Limit1] ON 1 = 1 LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll7].[C1] AS [C1] FROM (SELECT [UnionAll6].[C1] AS [C1] FROM (SELECT [UnionAll5].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]) AS [Limit2] ON 1 = 1 CROSS JOIN (SELECT MAX([UnionAll12].[C1]) AS [A1] FROM (SELECT [UnionAll11].[C1] AS [C1] FROM (SELECT [UnionAll10].[C1] AS [C1] FROM (SELECT [UnionAll9].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]) AS [GroupBy1] WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END)) Based on this, is it possible that Linq2Entities provider indeed doesn't execute non-Linq and Linq-to-Object methods, but instead creates equivalent SQL statements for some of them ( and for others it throws an exception )? Thank you in advance

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first examine the point that there is no performance difference. Run the following two scripts together:

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT *
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT *
        FROM HumanResources.Department
        ORDER BY 3,2
        GO

    If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, that was not the point of this blog post; it is not good enough to stop here. We need to understand the advantages and disadvantages of both methods.

    Case 1: When not using * and the columns are re-ordered

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY 3,2
        GO

    Case 2: When someone changes the schema of the table, affecting column order

    I will let you recreate the example for this one. If the schema on your development server differs from the production server and you use ColumnNumber, you will get different results on the production server.

    Summary: When you first develop the query it may not be an issue, but as time passes, new columns are added to the SELECT statement or the original table is re-ordered. If you have used ColumnNumber, it is possible that your query will start giving you unexpected results and an incorrect ORDER BY. Note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
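    Since Case 2 is left as an exercise above, here is a small sketch of my own (not from the original post) showing how ORDER BY ColumnNumber silently changes meaning when the select list is later edited:

        -- Original query: ORDER BY 2 means "order by Name".
        SELECT DepartmentID, Name, GroupName
        FROM HumanResources.Department
        ORDER BY 2;

        -- Later someone inserts a column into the select list.
        -- ORDER BY 2 now silently orders by ModifiedDate instead of Name,
        -- while ORDER BY Name would have kept working unchanged.
        SELECT DepartmentID, ModifiedDate, Name, GroupName
        FROM HumanResources.Department
        ORDER BY 2;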

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6: a greatly enhanced user interface, parallel processing, and bulk artifact upgrading. The User Interface: First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows. Operator shape icons now include a description that identifies the function of each operator, instead of leaving you to guess at the function from the icon. Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions. Floating or docking property panels provide descriptive tool tips for each property and auto-resize when you adjust the canvas, without having to search Help or scroll around to reach the property. Progress and status indicators let you know when an operation is working. A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow. The inline tool bar offers quick access to pan, zoom, fit, and overview functions. Selecting multiple artifacts from a right-click context menu lets you manage your workspace more efficiently. Partitioning and Parallel Processing: Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator as a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure. Bulk Artifact Upgrading: Bulk Artifact Upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when each one was opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select which artifact or group of artifacts to upgrade all at once. As you can see, there are many new features and upgrade options that will make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope you will try expressor and share your experience with me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • How-to query af:quickQuery on page load ?

    - by frank.nimphius
    A quick query component doesn't execute its query on page load. Checking the "Query Automatically" checkbox in the ViewCriteria definition does not work here as it does for the af:query component or a list of values. To automatically query the af:quickQuery component, select the page's PageDef.xml file and expand the Executables node. Select the ImplicitViewCriteriaQuery entry and set its InitialQueryOverriden property to true.

    Read the article

  • #DAX Query Plan in SQL Server 2012 #Tabular

    - by Marco Russo (SQLBI)
    The SQL Server Profiler provides a lot of information about the internal behavior of DAX queries sent to a BISM Tabular model. As in MDX, in DAX there are a Formula Engine (FE) and a Storage Engine (SE). The SE is usually handled by Vertipaq (unless you are using DirectQuery mode), and the Vertipaq SE Query classes of events give you a SQL-like syntax that represents the query sent to the storage engine. Another interesting class of events is the DAX Query Plan, which contains a couple...(read more)

    Read the article

  • SQL SERVER MAXDOP Settings to Limit Query to Run on Specific CPU

    This is a very simple and well-known tip. The query hint MAXDOP – Maximum Degree Of Parallelism – can be set to restrict a query to a certain number of CPUs. Please note that this hint cannot restrict or dictate which CPU is to be used; rather, it restricts the usage of the number of CPUs in a single [...]
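    For reference, a minimal sketch of the hint in use against AdventureWorks (the table and the MAXDOP value are only illustrative):

        USE AdventureWorks
        GO
        -- Default behavior: the optimizer may parallelize, subject to server-level settings
        SELECT SalesOrderID, OrderDate, TotalDue
        FROM Sales.SalesOrderHeader
        ORDER BY TotalDue DESC
        GO
        -- Restrict this single query to one CPU (a serial plan); MAXDOP 2 would allow at most two, and so on
        SELECT SalesOrderID, OrderDate, TotalDue
        FROM Sales.SalesOrderHeader
        ORDER BY TotalDue DESC
        OPTION (MAXDOP 1)
        GO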

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters), and that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query at a single server (using partition affinity to ensure that all the data is in a single partition). If this cannot be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

    Read the article

  • How to select a MAX value from column in Query Builder in Kohana framework?

    - by Victor Czechov
    I need to INSERT data into a table, but before that query I must know the MAX value of the position column, so that I can then INSERT the data using the previously selected position+1. Is this possible with the query builder? Following my first comment, I tried this query: $p = DB::select(array(DB::expr('MAX(`position`)', 'p')))->from('supercategories')->execute(); echo $p; and got this error: ErrorException [ Notice ]: Undefined offset: 1 MODPATH\database\classes\kohana\database.php [ 505 ] 500 */ 501 public function quote_column($column) 502 { 503 if (is_array($column)) 504 { 505 list($column, $alias) = $column; 506 } 507 508 if ($column instanceof Database_Query) 509 { 510 // Create a sub-query

    Read the article

  • How can I choose different hints for different joins for a single table in a query hint?

    - by RenderIn
    Suppose I have the following query: select * from A, B, C, D where A.x = B.x and B.y = C.y and A.z = D.z I have indexes on A.x, B.x, B.y, C.y, and D.z; there is no index on A.z. How can I give this query an INDEX hint on A.x but a USE_HASH hint on A.z? It seems like hints only take the table name, not the specific join, so when a single table participates in multiple joins I can only specify a single strategy for all of them. Alternatively, suppose I'm using a LEADING or ORDERED hint on the above query. Both of these hints only take a table name as well, so how can I ensure that the A.x = B.x join takes place before the A.z = D.z one? I realize in this case I could list D first, but imagine D subsequently joins to E and that the D-E join is the last one I want in the entire query. A third configuration: suppose I want the A.x join to be the first of the entire query, and I want the A.z join to be the last one. How can I use a hint to have a single join from A take place, followed by the B-C join, and the A-D join last?
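    For context, a sketch (standard Oracle hint syntax, with a hypothetical index name idx_a_x) of the per-table hint forms the question is referring to; note that each hint names a table, not a specific join:

        SELECT /*+ LEADING(A B C D) INDEX(A idx_a_x) USE_HASH(D) */ *
        FROM A, B, C, D
        WHERE A.x = B.x
          AND B.y = C.y
          AND A.z = D.z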

    Read the article

  • will a mysql query run slower if one of the tables involved has no index defined??

    - by lock
    There's this already-populated database which came from another dev; I'm not sure what was going on in that dev's mind when he created the tables, but in one of our scripts there is this query involving 4 tables and it runs super slow: SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7 FROM a, b, c, d WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id AND a.col_8 = '$col_8' AND d.g_id = '$g_id' AND c.private = '1' NOTE: $col_8 and $g_id are variables from a form. It's only my theory that the slowness is due to tables b and c not having an index, although I'm guessing the dev didn't think it was necessary since those tables only record relations between a and d: b records that the data in a belongs to a certain user, and c records that the user belongs to a group in d. As you can see, there's not even an explicit JOIN or any other heavy query feature used, yet this query, which returns only around 100 rows, takes 2 minutes to execute. Anyway, my question is simply this post's title: will a MySQL query run slower if one of the tables involved has no index defined?
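    A minimal sketch (MySQL syntax) of the indexes that would typically be tried first for this query, assuming the table and column names shown above; b.id and c.id may already be primary keys, in which case those lines are unnecessary:

        ALTER TABLE b ADD INDEX idx_b_id (id);
        ALTER TABLE b ADD INDEX idx_b_c_id (c_id);
        ALTER TABLE c ADD INDEX idx_c_id_private (id, private);
        ALTER TABLE d ADD INDEX idx_d_c_id_g_id (c_id, g_id);
        ALTER TABLE a ADD INDEX idx_a_col_8 (col_8);

        -- EXPLAIN shows which indexes the optimizer actually uses for the query
        EXPLAIN SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7
        FROM a, b, c, d
        WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id
          AND a.col_8 = '$col_8' AND d.g_id = '$g_id' AND c.private = '1';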

    Read the article

  • What is the easiest way to find a sql query returns a result or not?

    - by bala3569
    Consider the following SQL Server query: DECLARE @Table TABLE( Wages FLOAT ) INSERT INTO @Table SELECT 20000 INSERT INTO @Table SELECT 15000 INSERT INTO @Table SELECT 10000 INSERT INTO @Table SELECT 45000 INSERT INTO @Table SELECT 50000 SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID FROM @Table ) sub WHERE RowID = 3 The result of the query would be 20000. That's fine. Now I have to find the result of this query: SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID FROM @Table ) sub WHERE RowID = 6 It will not return any result because there are only 5 rows in the table, so now my question is: what is the easiest way to find out whether a SQL query returns a result or not?
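    One common way to check this (a sketch that assumes it runs in the same batch as the @Table declaration above, and is not necessarily the "easiest" for every situation) is to inspect @@ROWCOUNT right after the query, or to test the condition with IF EXISTS:

        -- Option 1: run the query, then look at @@ROWCOUNT
        SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID FROM @Table ) sub WHERE RowID = 6
        IF @@ROWCOUNT = 0
            PRINT 'No rows returned'

        -- Option 2: test for existence before doing anything else
        IF EXISTS ( SELECT 1 FROM ( SELECT ROW_NUMBER() OVER(ORDER BY Wages DESC) RowID FROM @Table ) sub WHERE RowID = 6 )
            PRINT 'Query returns at least one row'
        ELSE
            PRINT 'Query returns no rows'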

    Read the article

  • Deploy only cube schema, without processing

    - by Ken Yao
    Is there a way to deploy only the cube schema, without processing the cube? It seems that in Visual Studio, when you deploy a cube, the default is "Deploy and Process". The problem is that processing takes so much time, and my main purpose is just to write some MDX script and see whether it works well against the cube structure. Processing the whole cube seems like overkill. So I ask.

    Read the article

  • Kohana Database Library - How to Execute a Query with the Same Parameters More Than Once?

    - by Noah Goodrich
    With the following bit of code: $builder = ORM::factory('branch')->where('institution_id', $this->institution->id)->orderby('name'); I need to first execute: $count = $builder->count_all(); Then I need to execute: $rs = $builder->find_all($limit, $offset); However, it appears that when I execute the first query, the stored query parameters are cleared so that a fresh query can be executed. Is there a way to avoid having the parameters cleared, or at least to copy them easily, without having to reach directly into the Database driver object that stores the query parameters and copy them out? We are using Kohana 2.3.4 and upgrading is not an option.

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >