Search Results

Search found 21433 results on 858 pages for 'query execution plans'.


  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
    In August 2011 we ran a contest where we gave away one book every day for a whole month. The contest was a great success: lots of people participated and lots of books were given away. I have received many questions asking whether we are doing something similar this month. Absolutely - but instead of running a month-long contest, we are doing something more interesting: we are giving away a USD 198 gift every day this week, the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit. One copy goes to a winner in India and one to a winner in the USA, for a total of two giveaways (each worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

    How to Win: Read the question, read the hints, and answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only). Two winners will be randomly selected and announced on August 20th.

    Question of the Day: Which of the following keywords will force the query to use indexes created on views?
    a) ENCRYPTION
    b) SCHEMABINDING
    c) NOEXPAND
    d) CHECK OPTION

    Query Hints: BIG HINT POST. The usual assumption is that a query against the table will use the index on the table and a query against the view will use the index on the view. That is a misconception; it does not happen this way. In fact, if you look at the image, you will find that both queries (table and view) use the index created on the table; the index created on the view is not used. The reason, as listed in Books Online: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query may be so simple that a query against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don't initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn't reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn't matter.

    Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4:
    - SQL Joes 2 Pros Development Series – Structured Error Handling
    - SQL Joes 2 Pros Development Series – SQL Server Error Messages
    - SQL Joes 2 Pros Development Series – Table-Valued Functions
    - SQL Joes 2 Pros Development Series – Table-Valued Stored Procedure Parameters
    - SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options
    - SQL Joes 2 Pros Development Series – Introduction to Views
    - SQL Joes 2 Pros Development Series – All about SQL Constraints

    Next Step: Answer the quiz in the contact form in the format above (the contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift; there is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce its winner along with the main contest winners.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
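    To make the hint concrete, here is a minimal sketch of NOEXPAND in use; the view and column names are hypothetical (any indexed view you have created will do):

        -- Force the optimizer to use the view's own index instead of expanding
        -- the view to its base tables (dbo.vSalesSummary is a hypothetical name).
        SELECT SalesOrderID, TotalQty
        FROM dbo.vSalesSummary WITH (NOEXPAND);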


  • SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (LEAD) or a previous row (LAG) in the same result set without the use of a self-join. This is difficult to explain in words, so I will attempt a small example. Instead of creating a new table, I will use the AdventureWorks sample database, as most developers use it for experiments. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us the following result. Looking at the result set, it is very clear that the LEAD function returns the value coming in the next row and the LAG function returns the value encountered in the previous row. If we had to generate the same result without these functions, we would have to use a self-join; we will see that in a future blog post.

    Let us explore these functions a bit more. They can access not only the immediately previous or next row, but any row before or after, using an offset. Let us run the following query, where the LEAD and LAG functions access the row with an offset of 2:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us the following result. You can see that the LEAD and LAG functions now have an interval of two rows when returning results. Because of that interval, the first two rows for LEAD and the last two rows for LAG return NULL. You can easily replace this NULL with any other default value by passing a third parameter to LEAD and LAG. Let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us the following result, where the NULLs are now replaced with the value 0. Just like any other analytic function, we can easily partition this function as well. Let us see the use of PARTITION BY in this clause:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               LEAD(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LeadValue,
               LAG(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LagValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us the following result, where the data is now partitioned by SalesOrderID and LEAD and LAG return the appropriate result within each window. As there are now smaller partitions in the query, you will see more NULLs.

    In a future blog post we will see how these functions compare to a SELF JOIN. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
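    As a preview of that comparison, here is a hedged sketch of the pre-2012 self-join approach these functions replace. It assumes, for simplicity, that SalesOrderDetailID values are consecutive; real data generally needs a ROW_NUMBER() step first:

        -- Pre-2012 emulation of LEAD/LAG via self-joins (a sketch, assuming
        -- consecutive SalesOrderDetailID values within the filtered set).
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               nxt.SalesOrderDetailID AS LeadValue,
               prv.SalesOrderDetailID AS LagValue
        FROM Sales.SalesOrderDetail AS s
        LEFT JOIN Sales.SalesOrderDetail AS nxt
               ON nxt.SalesOrderDetailID = s.SalesOrderDetailID + 1
        LEFT JOIN Sales.SalesOrderDetail AS prv
               ON prv.SalesOrderDetailID = s.SalesOrderDetailID - 1
        WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663);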


  • SQL SERVER – OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING

    - by pinaldave
    Yesterday I discussed the two analytic functions FIRST_VALUE and LAST_VALUE. After the blog post went up, I received a very interesting question: "Don't you think there is a bug in your first example, where FIRST_VALUE remains the same but LAST_VALUE changes on every line? I think LAST_VALUE should be the highest value in the window or result set."

    I find this question very interesting because this is a very commonly made mistake. No, there is no bug in the code; what we need is a bit more explanation. Let me attempt that first. Before you continue, I suggest you read yesterday's blog post, as this question relates to it. Now let us run the following query:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us the following result. As per the reader's question, the value of the LAST_VALUE function should always be 114 and not increase as the rows increase. Let me rewrite the above code with a bit of extra T-SQL syntax. Please pay special attention to the ROWS clause I have added:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Now check the result of the above query once again. The result of both queries is the same, because when an OVER clause has an ORDER BY but no explicit frame, the default frame runs from UNBOUNDED PRECEDING to the CURRENT ROW (strictly speaking, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW). If you want the maximum value of the window with the OVER clause, you need to change the frame to ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Now run the following query and pay special attention to the ROWS clause again:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    Here is the result set of the above query, which is what the questioner was asking for. So, in simple words, there is no bug; additional syntax is needed to get the desired answer. The same logic also applies when the PARTITION BY clause is used. Here is the query again, partitioned by SalesOrderID, with these new functions:

        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FstValue,
               LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO

    The above query gives us a windowed result set on SalesOrderID, as well as the FIRST and LAST value for each window. There is a lot to discuss about these two functions, and we have just explored the tip of the iceberg; in a future post I will dig into them further. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
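    A side note on the design choice: when only the extreme value per partition is needed, plain MIN/MAX with OVER give the same answer as FIRST_VALUE/LAST_VALUE with the full frame, without spelling out the ROWS clause. A sketch:

        -- Same FstValue/LstValue per window, using aggregates over the partition.
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
               MIN(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID) AS FstValue,
               MAX(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID) AS LstValue
        FROM Sales.SalesOrderDetail AS s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663);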


  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package:

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame:

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame:

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year = x$year[1], cv.count = cv)
        }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default ore.groupApply returns an ore.list containing the results of each function invocation; to get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required, which can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
          year  cv.count
        1 2000 0.3984848
        2 2001 0.6062178
        3 2002 0.2309401
        4 2003 0.5773503
        5 2004 0.3069680
        6 2005 0.3431743

    To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year:

        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year = x$year[1], mean.count = mean.count)
        }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

        R> head(res)
          year mean.count
        1 2000   7.666667
        2 2001  13.333333
        3 2002  15.000000
        4 2003   3.000000
        5 2004  12.333333
        6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame:

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))
        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year = x$year[1], count = x$count, total.count = total.count)
        }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

        > head(res)
          year count total.count
        1 2000     5          23
        2 2000     7          23
        3 2000    11          23
        4 2001    18          40
        5 2001     4          40
        6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns:

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count), cv = sigma/mu)
        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
        }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

        R> head(res)
          year count        mu    sigma        cv
        1 2000     5  7.666667 3.055050 0.3984848
        2 2000     7  7.666667 3.055050 0.3984848
        3 2000    11  7.666667 3.055050 0.3984848
        4 2001    18 13.333333 8.082904 0.6062178
        5 2001     4 13.333333 8.082904 0.6062178
        6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data:

        # Example 5
        baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows:

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
        }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
        res <- ore.sort(X, by = c("year", "team"))

        R> head(res)
          year team homeruns
        1 2001  ANA        4
        2 2001  ARI      155
        3 2001  ATL       63
        4 2001  BAL       58
        5 2001  BOS       77
        6 2001  CHA       63

    Our next example is derived from the ggplot function documentation and illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window, as depicted below:

        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))
        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element:

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))  # one index value per row
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[, 1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel = TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.


  • Launching php script through command line - keeping terminal window open after execution

    - by somethis
    Oh, my girlfriend really likes it when I launch php scripts! There's something special about them, she says ... Thus, I coded this script to run through the CLI (Command Line Interface) - so it's running locally, not on a web server. It launches just fine through right click > Open > Run in Terminal, but the window closes right after execution. Is there a way to keep the terminal window open? Of course I can launch it from an already open terminal window - which would stay open - but I'm looking for a one-click action. With bash scripts I use $SHELL, but that didn't work here (see code below). So far, the only thing I came up with is sleep(10);, which gives my girl 10 seconds to check the output. I'd rather close the terminal window manually, though.

        #!/usr/bin/php -q
        <?php
        echo "Hello World \n";
        # wait before closing terminal window
        sleep(10);
        # the following line doesn't work
        $SHELL;
        ?>

    (PHP 5.4.6-1ubuntu1.2 (cli) (built: Mar 11 2013 14:57:54) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies)


  • Execution plan warnings – All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions. Pretty much as soon as I hit 'post', I noticed something rather odd happening. This statement:

        select top(10) SalesOrderHeader.SalesOrderID, SalesOrderNumber
        from Sales.SalesOrderHeader
        join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID

    throws the "Type conversion may affect cardinality estimation" warning. I've done no such conversion in my statement, so why would that be? Well, SalesOrderNumber is a computed column, "(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))", so that's where the conversion is. Wait!!! Am I saying that every type conversion will throw the warning? Thankfully, no. It only appears for columns that are used in predicates, even if the predicate / join condition is fine, and the column is indexed (and/or, presumably, has statistics). Hopefully this won't lead to too many wild goose chases, but it is definitely something to bear in mind. If you want to see this fixed then upvote my connect item here.
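    If you want to hunt for affected queries rather than stumble on them, here is a hedged sketch that scans the plan cache for the PlanAffectingConvert warning in the showplan XML (it assumes SQL Server 2012's showplan schema and VIEW SERVER STATE permission):

        -- Find cached plans carrying a type-conversion warning (a sketch).
        WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
        SELECT TOP (20) st.text AS query_text, qp.query_plan
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE qp.query_plan.exist('//Warnings/PlanAffectingConvert') = 1;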


  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx

    Here I will summarize the 3 ways I know so far to execute a package while using variables set up in an SSIS Catalog environment.

    The first way: we have an SSIS project with a reference to an environment, and one of the project parameters uses a value set up in the environment called "Development". With this setup, you are limited to calling the packages by right-clicking on them in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. The following screenshot shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call the package through a SQL Agent job. This does require you to configure your project to already reference an environment and use its variable. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use the environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = FALSE
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;
            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @use32bitruntime=@Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT
                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder

                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @reference_id=@EnvironmentID,
                    @use32bitruntime=@Use32BitRunTime;
            END
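    The snippet above ends right after the execution is created; in SSISDB the created execution still has to be launched, which is typically done with catalog.start_execution. A minimal sketch of the step that would follow:

        -- Launch the execution created by catalog.create_execution above.
        EXEC [SSISDB].[catalog].[start_execution] @execution_id;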


  • Work Execution in EAM

    - by Annemarie Provisero
    ADVISOR WEBCAST: Work Execution in EAM
    PRODUCT FAMILY: Manufacturing Enterprise Asset Management
    July 5, 2011 at 8 am PT, 9 am MT, 11 am ET

    The purpose of this webcast is to discuss EAM Work Order Management. This one-hour session is ideal for Functional Users, System Administrators, Database Administrators, and Customers with a basic knowledge of EAM who raise or manage work orders and related processes. During this webcast, Zar will cover the various types of work orders and look at all the related activities associated with work orders, including setup, operations, tasks, work order transactions, relationships, and planning.

    TOPICS WILL INCLUDE:
    - Work Order Types (Routine, Planned Maintenance, Rebuild, Easy)
    - Work Order statuses and other important setups
    - Operations and Tasks
    - Relationships
    - Work Order Transactions
    - Work Order Planning

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.


  • Begin Viewing Query Results Before Query Ends

    - by Frank Developer
    OK, so say I have a table with 500K rows, and I run an ad-hoc query with no supporting index, which requires a full table scan. I would like to immediately view the first rows returned while the full table scan continues, and then scroll through the next results as they arrive. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe like: spawn/exec, declare scroll cursor, open, fetch, etc.?
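    For reference, here is a hedged, T-SQL-flavored sketch of the cursor primitives the question mentions; the table name is hypothetical, and whether early rows really stream before the scan finishes depends on the cursor type the engine chooses (a STATIC cursor, for instance, materializes the full result first):

        -- Scroll-cursor skeleton: consume early rows, then jump ahead.
        DECLARE c SCROLL CURSOR FOR
            SELECT id, name FROM big_table;   -- big_table is hypothetical
        OPEN c;
        FETCH NEXT FROM c;          -- first rows, possibly before the scan ends
        FETCH ABSOLUTE 100 FROM c;  -- scroll ahead in the look-ahead buffer
        CLOSE c;
        DEALLOCATE c;

    Cursors alone do not report scan progress; the "FOUND x OF y" display would have to be built by the client around them.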


  • Transitive SQL query on same table

    - by MiKu
    Hey, consider the following table and data...

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represents the log of an execution flow of some data across servers, e.g. some data flowed from 'others-server1' through a bunch of 'my-servers' and finally to the destination 'others-server2'.

    Questions:
    1) I need to give this log in a representable form to a client who doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp when the data entered my infrastructure and when it left, drilling down to the following info: in_timestamp (of 'others-server1' to 'my-server1'), out_timestamp (of 'my-server3' to 'others-server2'), name, and status. I want to write SQL for the same - can someone help? Note: there might not be 3 'my-servers' all the time; it differs from situation to situation, e.g. there might be 4 'my-servers' involved for, say, data2!
    2) Are there any other alternatives to SQL, such as stored procs, etc.?
    3) Optimizations? (The records are huge in number - around 5 million a day as of now - and we are supposed to show records that are up to a week old.)

    In advance, THANKS FOR THE HELP! :)
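    A hedged sketch of one way to collapse each chain to its endpoints, assuming internal servers can be recognized by a 'my-%' name pattern and each name identifies one chain (the table name flow_log is hypothetical; data with several chains per name would need to walk in_id/out_id, e.g. with a recursive CTE):

        -- Pair each chain's entry row with its exit row.
        SELECT fst.in_timestamp,
               lst.out_timestamp,
               fst.name,
               lst.status
        FROM flow_log AS fst
        JOIN flow_log AS lst ON lst.name = fst.name
        WHERE fst.in_server  NOT LIKE 'my-%'
          AND lst.out_server NOT LIKE 'my-%';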


  • SQL query mixing aggregated results and single values

    - by Paul Flowerdew
    I have a table with transactions. Each transaction has a transaction ID, an accounting period (AP), and a posting value (PV), as well as other fields. Some of the IDs are duplicated, usually because the transaction was done in error. To give an example, part of the table might look like:

        ID  PV   AP
        123 100  2
        123 -100 5

    In this case the transaction was added in AP2 and then removed in AP5. Another example would be:

        ID  PV   AP
        456 100  2
        456 -100 5
        456 100  8

    In the first example, the problem is that if I am analyzing what was spent in AP2, there is a transaction in there which actually shouldn't be taken into account, because it was taken out again in AP5. In the second example, the second two transactions shouldn't be taken into account because they cancel each other out.

    I want to label as many transactions as possible which shouldn't be taken into account as erroneous. To identify these transactions, I want to find the ones with duplicate IDs whose PVs sum to zero (like ID 123 above), or transactions where the PV of the earliest one is equal to sum(PV), as in the second example. This second condition is what is causing me grief. So far I have:

        SELECT *
        FROM table
        WHERE table.ID IN (SELECT table.ID
                           FROM table
                           GROUP BY table.ID
                           HAVING COUNT(*) > 1
                              AND (SUM(table.PV) = 0
                                   OR SUM(table.PV) = <PV of first transaction in each group>))
        ORDER BY table.ID;

    The bit in chevrons is what I'm trying to do and where I'm stuck. Can I do it like this, or is there some other method I can use in SQL to do this?

    Edit 1: By the way, I forgot to say that I'm using SQL Compact 3.5, in case it matters.

    Edit 2: I think the code snippet above is a bit misleading. I still want to mark out transactions with duplicate IDs where sum(PV) = 0, as in the first example. But where the PV of the earliest transaction = sum(PV), as in the second example, what I actually want is to keep the earliest transaction and mark out all the others with the same ID. Sorry if that caused confusion.

    Edit 3: I've been playing with Clodoaldo's solution and have made some progress, but still can't get quite what I want. I'm trying to get the transactions I know for certain to be erroneous. Suppose the following transactions are also in the table:

        ID  PV   AP
        789 100  2
        789 200  5
        789 -100 8

    In this example sum(PV) <> 0 and the earliest PV < sum(PV), so I don't want to mark any of these out. If I modify Clodoaldo's query as follows:

        select t.*
        from t
        left join (
            select id, min(ap) as ap, sum(pv) as sum_pv
            from t
            group by id
            having sum(pv) <> 0
        ) s on t.id = s.id and t.ap = s.ap and t.pv = s.sum_pv
        where s.id is null

    This gives the result:

        ID  PV   AP
        123 100  2
        123 -100 5
        456 -100 5
        456 100  8
        789 100  3
        789 200  5
        789 -100 8

    Whilst the first 4 transactions are OK (they would be marked out), the 789 transactions are also there, and I don't want them. But I can't figure out how to modify the query so that they're not included. Any ideas?
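    For the chevron placeholder specifically, here is a hedged sketch of how to get "the PV of the first transaction in each group" without window functions (SQL Compact has none), assuming one row per (ID, AP):

        -- Earliest-AP posting value per ID via a self-join on MIN(AP).
        SELECT t.ID, t.PV AS first_pv
        FROM [table] AS t
        JOIN (SELECT ID, MIN(AP) AS min_ap
              FROM [table]
              GROUP BY ID) AS f
          ON f.ID = t.ID AND t.AP = f.min_ap;

    Joining this derived result back to the grouped query gives the comparison value the placeholder stands for.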


  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters table. I am skipping the 3,500 latest files in the database, so as to sort of "trim" the table back to 3,500 + "X" amount of records in the filters table. The filters table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job.

    My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while($row = mysql_fetch_array($sql)){
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error());
            if(mysql_num_rows($getId) == 0){
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for($i=3500; $i<$x; $i++){
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output:

        0 files deleted.

    Database contents: images table: 10,000 rows; filters table: 63 rows. Amount of rows in the filters table that contain an images table id: 63. Execution time of the PHP script: 4 seconds +/- 0.5 second.

    Relevant DB structure:

        TABLE: images
            id
            url
            etc...

        TABLE: filter
            id
            img_id (CONTAINS ID FROM images table)
            etc...
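    As a side note, the row-deletion part can be done in a single hedged MySQL statement instead of a per-row loop (a sketch; it only removes database rows, so the unlink() of files on disk would still need the PHP loop):

        -- Delete images with no filter row, keeping the 3500 newest ids.
        -- The derived table works around MySQL's "can't specify target table"
        -- restriction on subqueries against the table being deleted from.
        DELETE images FROM images
        LEFT JOIN filter ON filter.img_id = images.id
        WHERE filter.id IS NULL
          AND images.id < (SELECT MIN(id)
                           FROM (SELECT id FROM images
                                 ORDER BY id DESC LIMIT 3500) AS newest);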


  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is a list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2006

    SQL SERVER – Cursor to Kill All Process in Database
    I indeed wrote this cursor, and when I look back I wonder how naive I was to write it. The reason for writing this cursor was to free my database of any existing connections so I could do database operations. It worked fine, but there could be a potentially big issue if an important transaction was killed by this process. There is another way to achieve the same thing: we can use ALTER syntax to take the database into single-user mode. Read more about that over here and here.

    2007

    Rules of Third Normal Form and Normalization Advantage – 3NF
    The rules of 3NF are mentioned here:
    - Make a separate table for each set of related attributes, and give each table a primary key.
    - If an attribute depends on only part of a multi-valued key, move it to a separate table.
    - If attributes do not contribute to a description of the key, move them to a separate table.

    Correct Syntax for Stored Procedure SP
    Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry: some write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same.

    2008

    Switch Between Result Pane and Query Pane – SQL Shortcut
    Many times when I am writing a query I have to scroll through the results displayed in the result set. Most developers use the mouse to switch between the query pane and the result pane, but a few developers are crazy about keyboard shortcuts: F6 is the key that can be used to switch between the query pane and the tabs of the result pane.

    Interesting Observation – Use of Index and Execution Plan
    Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the primary key. When all the rows and columns are selected, the primary key should be used to select data, as it provides optimal performance.

    2009

    Interesting Observation – TOP 100 PERCENT and ORDER BY
    If you pull up any application or system with more than 100 SQL Server views, I am very confident that in one or two places you will notice a view in which the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it either. In this article we take three perfect examples and demonstrate which clause to use when.

    Comma Separated Values (CSV) from Table Column
    A very common question: how do you create comma-separated values from a table in the database? The answer is also very common: use XML. Check out this article for quick learning on the subject.

    Azure Start Guide – Step by Step Installation Guide
    Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in it are not old. They are still valid, and many of the functions still work as described in the article. I believe this one article will put you on track to using Azure!

    Size of Index Table for Each Index – Solution
    Earlier I posted a small question on this blog and asked readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on a particular table: a query that returns an additional column in the listed query containing the size of the index. This article presents two of the best solutions to the puzzle.

    2010

    This week in 2010 was the week of puzzles, as I posted three interesting ones. Even today I notice pretty good interest in these puzzles. They are tricky, but they surely bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all the answers? I am confident that reading the following three blog posts will help you enhance your experience with T-SQL.
    - SQL SERVER – Challenge – Puzzle – Usage of FAST Hint
    - SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal
    - SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists

    2011

    DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012
    Have you ever faced a situation where something does not work, and when you try to fix it you enjoy fixing it and start to appreciate the breaking changes? Well, that is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now that the disclaimer is over, I suggest you read the original story - you will love it!

    Get Directory Structure using Extended Stored Procedure xp_dirtree
    Here is a question for you: why would you do something in SQL Server when you can do the same task much more easily at the command prompt? Well, the answer is that sometimes there are real use cases where we have to do such things. This is a similar example, where I demonstrate how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
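    Since the CSV article above only names the technique, here is the common FOR XML PATH pattern it alludes to (a sketch, not necessarily the article's exact code; Production.Product is an AdventureWorks table):

        -- Build one comma-separated list from a column.
        SELECT STUFF((SELECT ',' + p.Name
                      FROM Production.Product AS p
                      FOR XML PATH('')), 1, 1, '') AS csv;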


  • linq to xml query returning a list of all child elements

    - by Xience
    I have got an XML document which looks something like this:

        <Root>
          <Info>....</Info>
          <Info>....</Info>
          <response>....</response>
          <warning>....</warning>
          <Info>....</Info>
        </Root>

    How can I write a LINQ-to-XML query that returns an IEnumerable containing each child element - in this case all five child elements of <Root> - so that I can iterate over them? The order of the child elements is not fixed, and neither is the number of times each may appear. Thanks in advance.


  • SQL query to return data where both criteria exist in one table

    - by Ali
    Dear all, I have TWO tables of data with the following fields:

        table1 = (ITTAG, ITCODE, ITDESC, SUPCODE)
        table2 = (ACCODE, ACNAME, ROUTE, SALMAN)

    table2 is my customer master table that contains my customer data, such as customer code, customer name, and so on. Every route has a supervisor (table1.SUPCODE), and I need to show the supervisor name in my result; both the supervisor name and code exist in one table. table1 contains all names, distinguished by ITTAG; for example, a supervisor name has ITTAG = 'K' and a salesman name has ITTAG = 'S'.

        ITTAG  ITCODE  ITDESC      SUPCODE
        -----  ------  ----------  -------
        S      JT      JOHN TOMAS  TF
        K      WK      VIKI KOO    NULL

    Now this is the result which I want:

        ACCODE   ACNAME     ROUTE  SALEMANNAME  SUPERVISORNAME
        -------  ---------  -----  -----------  --------------
        IMC1010  ABC HOTEL  01     JOHN TOMAS   VIKI KOO

    I hope this information is sufficient to get the query. Thanks, Ali
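    A hedged sketch of the usual shape of such a query: join the names table twice, once for the salesman and once for his supervisor (column meanings are taken from the question; the sample data's codes don't quite line up, so treat this as a template):

        SELECT c.ACCODE, c.ACNAME, c.ROUTE,
               s.ITDESC   AS SALEMANNAME,
               sup.ITDESC AS SUPERVISORNAME
        FROM table2 AS c
        JOIN table1 AS s
          ON s.ITCODE = c.SALMAN AND s.ITTAG = 'S'
        LEFT JOIN table1 AS sup
          ON sup.ITCODE = s.SUPCODE AND sup.ITTAG = 'K';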


  • mySQL - query to combine two tables

    - by W.Gerick
    Hi there, I have two tables. The first one holds information about cities:

        Locations:
        locID | locationID | locationName | countryCode
        1     | 2922239    | Berlin       | de
        2     | 291074     | Paris        | fr
        3     | 295522     | Orlando      | us
        4     | 292345     | Tokyo        | jp

    There is a second table, which holds alternative names for locations. There might be NO alternative name for a location in the Locations table:

        AlternateNames:
        altNameID | locationID | alternateName
        1         | 2922239    | Berlino
        2         | 2922239    | Berlina
        3         | 291074     | Parisa
        4         | 291074     | Pariso
        5         | 295522     | Orlandola
        6         | 295522     | Orlandolo

    What I would like to get is the locationID, name, and countryCode of a location for a location-name search like "Berlin" or "Ber":

        | locationID | name   | countryCode |
        | 2922239    | Berlin | de          |

    However, if the user searches for "Berlino", I would like to get the alternateName back:

        | locationID | name    | countryCode |
        | 2922239    | Berlino | de          |

    The locationName has a higher priority than the alternateName if the search term matches both. I can't figure out how to build a query to do that. Since the name can come from one of the two tables, it seems quite difficult to me. Any help is really appreciated!
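    A hedged sketch of one way to express the priority rule (untested; the search term is inlined for clarity):

        -- Prefer the canonical locationName; fall back to a matching alternate.
        SELECT DISTINCT l.locationID,
               CASE WHEN l.locationName LIKE 'Berlino%'
                    THEN l.locationName
                    ELSE a.alternateName END AS name,
               l.countryCode
        FROM Locations AS l
        LEFT JOIN AlternateNames AS a
               ON a.locationID = l.locationID
              AND a.alternateName LIKE 'Berlino%'
        WHERE l.locationName LIKE 'Berlino%'
           OR a.alternateName LIKE 'Berlino%';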


  • SQL - re-arrange a table via query

    - by abelenky
    I have a poorly designed table that I inherited. It looks like:

        User  Field  Value
        ------------------
        1     name   Aaron
        1     email  [email protected]
        1     phone  800-555-4545
        2     name   Mike
        2     email  [email protected]
        2     phone  777-123-4567
        (etc, etc)

    I would love to extract this data via a query in the more sensible format:

        User  Name   Email            Phone
        -------------------------------------------
        1     Aaron  [email protected]  800-555-4545
        2     Mike   [email protected]  777-123-4567

    I'm a SQL novice, but have tried several queries with variations of GROUP BY, all without anything even close to success. Is there a SQL technique to make this easy?
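    This shape of problem (an entity-attribute-value table) is usually flattened with conditional aggregation; a hedged sketch, with user_fields standing in for the real table name:

        -- One row per User; each MAX(CASE ...) picks out one field's value.
        SELECT [User],
               MAX(CASE WHEN Field = 'name'  THEN Value END) AS Name,
               MAX(CASE WHEN Field = 'email' THEN Value END) AS Email,
               MAX(CASE WHEN Field = 'phone' THEN Value END) AS Phone
        FROM user_fields
        GROUP BY [User];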


  • Mysql Error in query statements

    - by Mark Estrada
    Hi all, I am trying to acquaint myself with MySQL syntax; I have only used MSSQL so far. I downloaded the MySQL Query Browser and installed MySQL version 5.1. I wanted to run this code in the result-set tab of MySQL, but I keep encountering the error below:

        You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        'declare iCtr int' at line 1

        declare iCtr int;
        set iCtr = 1;
        while iCtr < 1000
        begin
            insert into employee (emp_id,emp_first_name,emp_last_name,status_id)
            values (iCtr, 'firstName' + iCtr, 'lastName' + iCtr, 1)
            set iCtr = iCtr + 1;
        end

    I just wanted to populate my employee table, but I cannot get past the MySQL syntax. Any advice, please? Thanks
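    A hedged correction (names taken from the question): in MySQL, DECLARE and WHILE only work inside a stored program, loops use DO ... END WHILE rather than BEGIN ... END, and strings are concatenated with CONCAT() instead of '+':

        DELIMITER //
        CREATE PROCEDURE fill_employees()
        BEGIN
            DECLARE iCtr INT DEFAULT 1;
            WHILE iCtr < 1000 DO
                INSERT INTO employee (emp_id, emp_first_name, emp_last_name, status_id)
                VALUES (iCtr, CONCAT('firstName', iCtr), CONCAT('lastName', iCtr), 1);
                SET iCtr = iCtr + 1;
            END WHILE;
        END//
        DELIMITER ;
        CALL fill_employees();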


  • SQL Server Query solution cum Suggestion Required

    - by Nirmal
    Hello all... I have the following scenario in my SQL Server 2005 database.

    The zipcodes table has the following fields and values (just a sample):

        zipcode  latitude  longitude
        -------  --------  ---------
        65201    123.456   456.789
        65203    126.546   444.444

    and the place table has the following fields and values:

        id  name  zip    latitude  longitude
        --  ----  -----  --------  ---------
        1   abc   65201  NULL      NULL
        2   def   65202  NULL      NULL
        3   ghi   65203  NULL      NULL
        4   jkl   65204  NULL      NULL

    Now, my requirement is to compare the zip codes of the place table and fill in the latitude and longitude fields from the zipcodes table where available. Some of the zip codes have no entry in the zipcodes table; those should remain NULL. And the major issue is that I have more than 5,000,000 records in my db, so the query must handle that volume. I have tried some solutions, but unfortunately did not get the proper output. Any help would be appreciated...
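    A hedged sketch using SQL Server's UPDATE ... FROM join syntax (rows without a matching zip are simply untouched by the inner join, so their NULLs remain):

        UPDATE p
        SET    p.latitude  = z.latitude,
               p.longitude = z.longitude
        FROM   place AS p
        JOIN   zipcodes AS z
          ON   z.zipcode = p.zip;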


  • mysql query the latest date

    - by user295189
    I am running this query:

        SELECT sh.*, u.initials AS initials
        FROM database1.table1 AS sh
        JOIN database2.user AS u ON u.userID = sh.userid
        WHERE id = 123456
          AND dts = (SELECT MAX(dts) FROM database1.table1)
        ORDER BY sort_by, category

    In table1 I have records like this:

        dts                  status           category        sort_by
        2010-04-29 12:20:27  Civil Engineers  Occupation      1
        2010-04-28 12:20:27  Civil Engineers  Occupation      1
        2010-04-28 12:20:54  Married          Marital Status  2
        2010-04-28 12:21:15  Smoker           Tobbaco         3
        2010-04-27 12:20:27  Civil Engineers  Occupation      1
        2010-04-27 12:20:54  Married          Marital Status  2
        2010-04-27 12:21:15  Smoker           Tobbaco         3
        2010-04-26 12:20:27  Civil Engineers  Occupation      1
        2010-04-26 12:20:54  Married          Marital Status  2
        2010-04-26 12:21:15  Smoker           Tobbaco         3

    So, looking at my data, I am choosing the latest entry by category and sort_by. However, in some cases, such as on the 29th (2010-04-29 12:20:27), there is only one record. In that case I want to show the occupation from the latest date and the rest of the categories from their own latest dates. But currently it displays only one row. Thanks
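    The subquery above takes the single latest dts across the whole table, which is why only one row survives. A hedged sketch of the usual per-category fix, correlating the MAX to each row's category:

        SELECT sh.*, u.initials AS initials
        FROM database1.table1 AS sh
        JOIN database2.user AS u ON u.userID = sh.userid
        WHERE sh.id = 123456
          AND sh.dts = (SELECT MAX(s2.dts)
                        FROM database1.table1 AS s2
                        WHERE s2.category = sh.category)
        ORDER BY sh.sort_by, sh.category;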


  • UriBuilder incorrectly encoding query parameter values?

    - by Fred
    Let's consider the following code sample, where a path and a single parameter are encoded...

    Parameter name: "param"
    Parameter value: "foo/bar?aaa=bbb&ccc=ddd" (which happens to be a URL with query parameters)

        String test = UriBuilder.fromPath("https://dummy.com").
            queryParam("param", "foo/bar?aaa=bbb&ccc=ddd").
            build().toURL().toString();

    The encoded URL string returned is: "https://dummy.com?param=foo/bar?aaa%3Dbbb&ccc%3Dddd"

    Is this correct? Should not the character "&" (and maybe even "?") be encoded in the parameter value string? Would not the URL produced be interpreted as follows: a first parameter, name="param", value="foo/bar?aaa%3Dbbb", followed by a second parameter, name="ccc%3Dddd", without a value?


  • SQL Query - Count column values separately

    - by user575535
    I am having a hard time getting a query to work right. This is the DDL for my tables:

        CREATE TABLE Agency (
            id SERIAL not null,
            city VARCHAR(200) not null,
            PRIMARY KEY(id)
        );

        CREATE TABLE Customer (
            id SERIAL not null,
            fullname VARCHAR(200) not null,
            status VARCHAR(15) not null CHECK(status IN ('new','regular','gold')),
            agencyID INTEGER not null REFERENCES Agency(id),
            PRIMARY KEY(id)
        );

    Sample data from the tables:

        AGENCY
        id | city
        1  | 'London'
        2  | 'Moscow'
        3  | 'Beijing'

        CUSTOMER
        id | fullname         | status    | agencyid
        1  | 'Michael Smith'  | 'new'     | 1
        2  | 'John Doe'       | 'regular' | 1
        3  | 'Vlad Atanasov'  | 'new'     | 2
        4  | 'Vasili Karasev' | 'regular' | 2
        5  | 'Elena Miskova'  | 'gold'    | 2
        6  | 'Kim Yin Lu'     | 'new'     | 3
        7  | 'Hu Jintao'      | 'regular' | 3
        8  | 'Wen Jiabao'     | 'regular' | 3

    I want to produce the following output, but I need to count separately for ('new','regular','gold'):

        city      | new_customers | regular_customers | gold_customers
        'Moscow'  | 1             | 1                 | 1
        'Beijing' | 1             | 2                 | 0
        'London'  | 1             | 1                 | 0
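    A hedged sketch using conditional aggregation (a CASE with no ELSE yields NULL, and COUNT ignores NULLs):

        SELECT a.city,
               COUNT(CASE WHEN c.status = 'new'     THEN 1 END) AS new_customers,
               COUNT(CASE WHEN c.status = 'regular' THEN 1 END) AS regular_customers,
               COUNT(CASE WHEN c.status = 'gold'    THEN 1 END) AS gold_customers
        FROM Agency AS a
        JOIN Customer AS c ON c.agencyID = a.id
        GROUP BY a.city;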


  • Join Query returns empty result, unexpected result

    - by Abs
    Hello all, can anyone explain why this query returns an empty result?

        SELECT * FROM (`bookmarks`)
        JOIN `tags` ON `tags`.`bookmark_id` = `bookmarks`.`id`
        WHERE `tag` = 'clean' AND `tag` = 'simple'

    In my bookmarks table I have a bookmark with an id of 70, and in my tags table I have two tags, 'clean' and 'simple', both with a bookmark_id of 70. I would have thought a result would be returned. How can I remedy this so that the bookmark is returned when it has both the tag 'clean' and the tag 'simple'? Thanks all for any explanation and solution.
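    The WHERE clause is evaluated per joined row, and a single tag row can never equal two values at once, hence the empty result. A hedged sketch of the usual "relational division" fix:

        -- Match either tag, then require that both distinct tags were found.
        SELECT b.*
        FROM bookmarks AS b
        JOIN tags AS t ON t.bookmark_id = b.id
        WHERE t.tag IN ('clean', 'simple')
        GROUP BY b.id
        HAVING COUNT(DISTINCT t.tag) = 2;

    (Under a strict ONLY_FULL_GROUP_BY SQL mode, list the bookmark columns explicitly instead of b.*.)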


  • What is wrong with mysql query?

    - by bala3569
    I am using the following MySQL stored procedure:

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `allied`.`aboutus_delete`$$

        CREATE DEFINER=`allied`@`%` PROCEDURE `aboutus_delete`(
            IN p_Id int(11)
        )
        BEGIN
            if exists(select aboutUsId from aboutus where aboutUsId=p_id and isDeleted=0)
                update aboutus set isDeleted=1 where aboutUsId=p_id
            else
                select 'No record to delete'
        END$$

        DELIMITER ;

    But I get this error when I execute it:

        Error Code : 1064
        You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        'update aboutus set isDeleted=1 where aboutUsId=p_id else select 'No record to' at line 6
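    A hedged correction of the procedure body: MySQL's IF statement needs THEN / ELSE / END IF, and each statement inside it must end with a semicolon:

        DELIMITER $$
        CREATE PROCEDURE `aboutus_delete`(IN p_Id INT)
        BEGIN
            IF EXISTS(SELECT aboutUsId FROM aboutus
                      WHERE aboutUsId = p_Id AND isDeleted = 0) THEN
                UPDATE aboutus SET isDeleted = 1 WHERE aboutUsId = p_Id;
            ELSE
                SELECT 'No record to delete';
            END IF;
        END$$
        DELIMITER ;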

