Search Results

Search found 21433 results on 858 pages for 'query execution plans'.


  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need to search only a much smaller number of columns. This means major gains in search speed and hard drive usage. Additionally, the column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from a row store to a column store index. One has to understand the proper places to use row store or column store indexes. Let us understand in this article what is different about the columnstore type of index. Column store indexes are powered by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data in columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don’t have to learn new syntax to create them. You just need to specify the keyword “COLUMNSTORE” and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will be mainly used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches: in the case of row store indexes, multiple pages will contain multiple rows, with the columns spanning across multiple pages; in the case of column store indexes, each page contains data from a single column. As a result, only the columns needed to solve a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will be in memory and will not need to be retrieved again. Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a dataset which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.
USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data Table -- WARNING: This Query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100
Now let us do a quick performance test. I have kept STATISTICS IO ON for measuring how much IO the following queries take. In my test, first I will run a query which will use the regular index. We will note the IO usage of the query. After that we will create a columnstore index and measure the IO of the same.
-- Performance Test -- Comparing Regular Index with ColumnStore Index USE AdventureWorks GO SET STATISTICS IO ON GO -- Select Table with regular Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0. -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO -- Select Table with Columnstore Index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO
It is very clear from the results that the query performs extremely fast after creating the ColumnStore Index. The number of pages it has to read to run the query is drastically reduced, as the columns which are needed in the query are stored in the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the column store index performs way better than the regular index in this case. Let us clean up the database.
-- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO
In future posts we will see cases where the Columnstore index is not an appropriate solution, as well as a few other tricks and tips for the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
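Editor's note: since the post mentions that a table carrying a columnstore index becomes read-only, a commonly cited workaround (besides partition switching) is to disable the index, load the data, and rebuild it. A minimal sketch against the sample table above; treat it as an illustration rather than part of the original post:
-- Disable the columnstore index so the table becomes writable again
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE;
GO
-- Load or modify data while the index is disabled
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.* FROM Sales.SalesOrderDetail S1;
GO
-- Rebuild the columnstore index so queries can use it again
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD;
GO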

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time. When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required. As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:
-------------------------------------------------------------------------------
| Id  | Operation             | Name                   | Bytes | Cost |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                        |  108K |  939 |
|   1 | SORT ORDER BY         |                        |  108K |  939 |
|   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
|*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
|   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
|* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
|  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
|* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
|  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
|  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
|  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
|  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
| 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
-------------------------------------------------------------------------------
And the same query on 9i:
-------------------------------------------------------------------------------
| Id  | Operation             | Name                   | Bytes | Cost |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                        |   16P |  55G |
|   1 | SORT ORDER BY         |                        |   16P |  55G |
|   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
|   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
|   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
|   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
|*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
|   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
|  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
|* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
|* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
|* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
|* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
-------------------------------------------------------------------------------
Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:
-------------------------------------------------------------------------------
| Id  | Operation        | Name                   | Bytes | Cost |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                        | 7587K | 3704 |
|   1 | SORT ORDER BY    |                        | 7587K | 3704 |
|*  2 | HASH JOIN OUTER  |                        | 7587K |  822 |
|*  3 | HASH JOIN OUTER  |                        | 5262K |  616 |
|*  4 | HASH JOIN OUTER  |                        | 2980K |  465 |
|*  5 | HASH JOIN OUTER  |                        |  710K |  432 |
|*  6 | HASH JOIN OUTER  |                        |  398K |  302 |
|   7 | VIEW             | ALL_TABLES             |  342K |  276 |
|  29 | VIEW             | ALL_MVIEWS             |    51 |   20 |
|  50 | VIEW             | ALL_PART_TABLES        |  852K |  104 |
|  78 | VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
|  93 | VIEW             | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
| 106 | VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |
-------------------------------------------------------------------------------
That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input. To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed. All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
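Editor's note: the post does not show how the hash joins were forced on 9i. This is typically done with optimizer hints; a minimal, hypothetical sketch using two of the dictionary views discussed above (the actual queries used by Schema Compare are not shown in the post):
-- Hypothetical illustration: USE_HASH asks the optimizer to hash-join the
-- listed row sources instead of using nested loops.
SELECT /*+ USE_HASH(t pt) */
       t.owner, t.table_name, pt.partitioning_type
FROM   all_tables t,
       all_part_tables pt
WHERE  t.owner = pt.owner (+)
AND    t.table_name = pt.table_name (+)
ORDER BY t.owner, t.table_name;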

    Read the article

  • MySQL enters another value than the one given by PHP

    - by Tristan
    Hello, the big problem: MySQL does not store the information I told it to via PHP. Example (this request is an echo just before the query): INSERT INTO serveur (GSP_nom , IPserv, port, tickrate, membre, nomPays, finContrat, type, jeux, slot, ipClient, email) VALUES ( 'ckras', '88.191.88.57', '37060', '100' , '', 'Allemagne','20110519', '2', '4','99' ,'82.220.201.183','[email protected]'); But in MySQL I have: 403 ckras 88.191.88.57 32767 100 Allemagne 20110519 1 2010-04-25 00:51:47 2 4 99 82.220.201.183 [email protected] port: 37060 (right value) //// 32767 (MySQL's drug?) Any help would be appreciated; I'm worse than stuck and I'm ** off. PS: There is no trigger on the MySQL server as far as I know, and there is no control on the port, which means that nowhere do I modify the "port" value; this script works 80% of the time (it seems that as soon as a user enters a port >= 30000 it causes that bug). A user first reported this error to me today and the script has been running for 3 months. Thanks
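    Editor's note, offered as a hedged guess rather than a confirmed answer: 32767 is the maximum value of a signed SMALLINT, which would fit the observation that only large port numbers are affected. Assuming the table really is named serveur and the column port, one way to check the column type and, if needed, widen it:
    -- Hypothetical diagnostic: see how the port column is actually defined
    SHOW COLUMNS FROM serveur LIKE 'port';
    -- If it is SMALLINT (max 32767), widen it so values such as 37060 fit
    -- (MODIFY requires restating the full column definition you want to keep)
    ALTER TABLE serveur MODIFY port INT;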

    Read the article

  • Trouble with a query

    - by Mark Allison
    Hi there, I'm having trouble with a query in SQL Server 2008 on some forex trading data. I have a trades table and an orders table. A trade needs to comprise 2 or more orders. DDL schema and sample data below. What I want to do is write a query that shows the profit/loss in pips for each trade. A pip is 1/10,000th of a currency unit. So the difference between USD 1.3441 and 1.3442 is 1 pip in forex-speak. A trade usually has one entry order and multiple exit orders. So for example if I buy 3 lots of the currency pair GBP/USD at the exchange rate of 1.6100 and then sell 1 lot at 1.6150, 1 lot at 1.6200 and 1 lot at 1.6250 then the profit is (1.6150 - 1.6100) + (1.6200 - 1.6100) + (1.6250 - 1.6100), or 50 + 100 + 150 = 300 pips profit. The trade could also go the other way (Shorting). For example the currency pair can be sold first before it's bought back later at a cheaper price. I would like a query that returns the following: tradeId, currencyPair, profitInPips. It seems like a pretty straightforward query, but it's eluding me right now. Here's my DDL and sample data: CREATE TABLE [dbo].[trades]( [tradeId] [int] IDENTITY(1,1) NOT NULL, [currencyPair] [char](6) NOT NULL, CONSTRAINT [PK_trades] PRIMARY KEY CLUSTERED ( [tradeId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO SET IDENTITY_INSERT [dbo].[trades] ON INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (1, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (2, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (3, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (4, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (5, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (6, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (7, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (8, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (9, N'GBPUSD') INSERT [dbo].[trades] ([tradeId], [currencyPair]) VALUES (10, N'GBPUSD') SET IDENTITY_INSERT [dbo].[trades] OFF GO CREATE TABLE [dbo].[orders]( [orderId] [int] IDENTITY(1,1) NOT NULL, [tradeId] [int] NOT NULL, [amount] [decimal](18, 1) NOT NULL, [buySell] [char](1) NOT NULL, [rate] [decimal](18, 6) NOT NULL, [orderDateTime] [datetime] NOT NULL, CONSTRAINT [PK_orders] PRIMARY KEY CLUSTERED ( [orderId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO SET IDENTITY_INSERT [dbo].[orders] ON INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (1, 1, CAST(3.0 AS Decimal(18, 1)), N'S', CAST(1.606500 AS Decimal(18, 6)), CAST(0x00009CF40083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (2, 1, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.615500 AS Decimal(18, 6)), CAST(0x00009CF400A4CB80 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (3, 2, CAST(3.0 AS Decimal(18, 1)), N'S', CAST(1.608000 AS Decimal(18, 6)), CAST(0x00009CF500000000 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (4, 2, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.603000 AS Decimal(18, 6)),
CAST(0x00009CF50083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (5, 2, CAST(2.0 AS Decimal(18, 1)), N'B', CAST(1.605500 AS Decimal(18, 6)), CAST(0x00009CF50107AC00 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (6, 3, CAST(3.0 AS Decimal(18, 1)), N'S', CAST(1.595500 AS Decimal(18, 6)), CAST(0x00009CF70083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (7, 3, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.590500 AS Decimal(18, 6)), CAST(0x00009CF700C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (8, 3, CAST(2.0 AS Decimal(18, 1)), N'B', CAST(1.594500 AS Decimal(18, 6)), CAST(0x00009CF701499700 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (9, 4, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.611000 AS Decimal(18, 6)), CAST(0x00009CFB0083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (10, 4, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.616000 AS Decimal(18, 6)), CAST(0x00009CFB00A4CB80 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (11, 4, CAST(2.0 AS Decimal(18, 1)), N'S', CAST(1.611500 AS Decimal(18, 6)), CAST(0x00009CFB0107AC00 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (12, 5, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.613000 AS Decimal(18, 6)), CAST(0x00009CFC0083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (13, 5, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.618000 AS Decimal(18, 6)), CAST(0x00009CFC0107AC00 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (14, 5, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.623000 AS Decimal(18, 6)), CAST(0x00009CFC0083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (15, 5, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.628000 AS Decimal(18, 6)), CAST(0x00009CFD00C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (16, 6, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.632000 AS Decimal(18, 6)), CAST(0x00009D020083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (17, 6, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.637000 AS Decimal(18, 6)), CAST(0x00009D0200A4CB80 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (18, 6, CAST(2.0 AS Decimal(18, 1)), N'S', CAST(1.630000 AS Decimal(18, 6)), CAST(0x00009D0200C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (19, 7, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.634500 AS Decimal(18, 6)), CAST(0x00009D0201499700 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (20, 7, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.639500 AS Decimal(18, 6)), CAST(0x00009D0300000000 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (21, 7, 
CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.644500 AS Decimal(18, 6)), CAST(0x00009D030083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (22, 7, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.637500 AS Decimal(18, 6)), CAST(0x00009D0300C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (23, 8, CAST(3.0 AS Decimal(18, 1)), N'S', CAST(1.625000 AS Decimal(18, 6)), CAST(0x00009D0400C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (24, 8, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.620000 AS Decimal(18, 6)), CAST(0x00009D050083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (25, 8, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.615000 AS Decimal(18, 6)), CAST(0x00009D0500A4CB80 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (26, 8, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.623000 AS Decimal(18, 6)), CAST(0x00009D050107AC00 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (27, 9, CAST(3.0 AS Decimal(18, 1)), N'S', CAST(1.618000 AS Decimal(18, 6)), CAST(0x00009D0600C5C100 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (28, 9, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.613000 AS Decimal(18, 6)), CAST(0x00009D0600D63BC0 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (29, 9, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.608000 AS Decimal(18, 6)), CAST(0x00009D0600E6B680 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (30, 9, CAST(1.0 AS Decimal(18, 1)), N'B', CAST(1.613300 AS Decimal(18, 6)), CAST(0x00009D0601391C40 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (31, 10, CAST(3.0 AS Decimal(18, 1)), N'B', CAST(1.614500 AS Decimal(18, 6)), CAST(0x00009D090083D600 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (32, 10, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.619500 AS Decimal(18, 6)), CAST(0x00009D090107AC00 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (33, 10, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.624500 AS Decimal(18, 6)), CAST(0x00009D0901499700 AS DateTime)) INSERT [dbo].[orders] ([orderId], [tradeId], [amount], [buySell], [rate], [orderDateTime]) VALUES (34, 10, CAST(1.0 AS Decimal(18, 1)), N'S', CAST(1.619000 AS Decimal(18, 6)), CAST(0x00009D0A0083D600 AS DateTime)) SET IDENTITY_INSERT [dbo].[orders] OFF /****** Object: ForeignKey [FK_orders_trades] Script Date: 04/02/2010 15:05:31 ******/ ALTER TABLE [dbo].[orders] WITH CHECK ADD CONSTRAINT [FK_orders_trades] FOREIGN KEY([tradeId]) REFERENCES [dbo].[trades] ([tradeId]) GO ALTER TABLE [dbo].[orders] CHECK CONSTRAINT [FK_orders_trades] GO Thanks in advance for any help!
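Editor's note: a minimal sketch of one way the profit/loss query might look, assuming every trade in the sample data is fully closed (the bought and sold amounts balance out) and that a pip is 0.0001 as in the worked example above. Table and column names come from the DDL, but treat this as a starting point rather than a definitive answer:
-- Sold value minus bought value, converted to pips (1 pip = 0.0001)
SELECT t.tradeId,
       t.currencyPair,
       SUM(CASE o.buySell WHEN 'S' THEN o.rate * o.amount
                          ELSE -o.rate * o.amount END) / 0.0001 AS profitInPips
FROM dbo.trades t
JOIN dbo.orders o ON o.tradeId = t.tradeId
GROUP BY t.tradeId, t.currencyPair
ORDER BY t.tradeId;
For trade 1 in the sample data (sell 3.0 at 1.6065, buy 3.0 back at 1.6155) this yields -270 pips, i.e. a losing short.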

    Read the article

  • Temporary Tables in Stored Procedures

    - by Paul White
    Ask anyone what the primary advantage of temporary tables over table variables is, and the chances are they will say that temporary tables support statistics and table variables do not. This is true, of course; even the indexes that enforce PRIMARY KEY and UNIQUE constraints on table variables do not have populated statistics associated with them, and it is not possible to manually create statistics or non-constraint indexes on table variables. Intuitively, then, any query that has alternative execution...(read more)
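    A minimal illustration of the distinction described above (the object and column names here are made up for the example):
    -- Temporary table: extra indexes and statistics can be created explicitly
    CREATE TABLE #Orders (OrderID int PRIMARY KEY, CustomerID int);
    CREATE INDEX IX_Orders_Customer ON #Orders (CustomerID);
    CREATE STATISTICS Stats_Customer ON #Orders (CustomerID);
    -- Table variable: the PRIMARY KEY index exists, but it carries no populated
    -- statistics, and separate CREATE INDEX / CREATE STATISTICS statements
    -- cannot be issued against @Orders
    DECLARE @Orders table (OrderID int PRIMARY KEY, CustomerID int);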

    Read the article

  • ssrs: the report execution has expired or cannot be found

    - by Alex Bransky
    Today I got an exception in a report using SQL Server Reporting Services 2008 R2, but only when attempting to go to the last page of a large report:
The report execution sgjahs45wg5vkmi05lq4zaee has expired or cannot be found.
Digging into the logs I found this:
library!ReportServer_0-47!149c!12/06/2012-12:37:58:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database.
I knew it wasn't a network problem or timeout because I could repeat the problem at will. I checked the disk space and that seemed fine as well. The real issue was a lack of memory on the database server that had the ReportServer database. Restarting the SQL Server engine freed up plenty of RAM and the problem immediately went away.

    Read the article

  • ClearTrace Shows Execution History

    - by Bill Graziano
    The latest release of ClearTrace (Build 38) now shows the execution history of a particular statement. You’ll need to save the trace files to a trace group instead of just using the default.  That’s as easy as typing something into the trace group name when you upload the trace.  I usually put the server name in this field. Build 38 also re-enables support for statement level events.  If your trace includes RPC:StmtCompleted or SQL:StmtCompleted events those will be processed and saved.  In the results tab you can choose to view statement level or batch level events.  Please note that saving statement level events in a trace can generate HUGE trace files very quickly.

    Read the article

  • How can I schedule execution of a program?

    - by Bakhtiyor
    Let's say I have a small "Hello World" Java program compiled in my home directory. I can run it with java helloWorld from my home directory and it executes without any problem. Now I need to schedule this program to execute, let's say, 10 minutes from now. So I am executing the following commands in the console:
at now+10min
warning: commands will be executed using /bin/sh
at> java helloWorld
Press CTRL+D to finish
So it is scheduled properly, as I can see with the at -l command. But at the scheduled time nothing happens. Why? What is wrong with it? Because if, instead of scheduling the execution of my own program, I schedule execution of the gedit command, it opens at the specified time. But with my own program nothing happens. How can I change the situation?

    Read the article

  • Entity Framework: Connecting to a mdf user database file via localDB during script execution

    - by Marko Apfel
    Problem: If you run the “Generate database from model” wizard and execute the generated script, the destination database could be the wrong one (for instance the master database of the SQL Server). Solution: To use your own attachable .mdf user database, some connection information must be specified during script execution. Executing your script opens the dialog “Connect to Server”. Press “Options” and go to the second tab, “Connection Properties”. Select “Browse server” in the “Connect to database” dropdown box. Confirm the information dialog with “Yes”. In the following dialog you can choose your user database. Now the schema is created in the user database.

    Read the article

  • Chrome 10 makes it possible to run Web applications in the background, Google publishes an example

    Chrome 10 makes it possible to run Web applications in the background, even when the browser is closed; Google publishes an example. Update of 24/02/11 by Gordon Fowler. Google has just unveiled a new feature available in version 10 (in beta) of its Chrome browser. The feature, named "Background Pages", although it was not highlighted at the release of Chrome 10, is indeed there. It allows Web pages to run in the background in a way that is completely transparent to the user. Certain applications (described as "background applications") can thus continue to run...

    Read the article

  • Web Page Execution Internals

    - by octopusgrabbus
    My question is what is the subject area that covers web page execution/loading. I am looking to purchase a book by subject area that covers when things execute or load in a web page, whether it's straight html, html and Javascript, or a PHP page. Is that topic covered by a detailed html book, or should I expect to find information like that in a JavaScript or PHP book? I understand that PHP and Perl execute on the server and that Javascript is client side, and I know there is a lot of on-line documentation describing <html>, <head>, <body>, and so on. I'm just wondering what subject area a book would be in to cover all that, not a discussion of the best book or someone's favorite book, but the subject area.

    Read the article

  • Transparent PHP script execution using mod_rewrite

    - by tori3852
    I am looking for a solution to this problem: I need every HTTP request (method is irrelevant) in the Apache HTTP server to be served only after execution of a specific PHP script. This is needed because I need to gather some information about requests, etc. As far as I understand, this could be achieved using the mod_rewrite module. So far I have done this (in the .htaccess file):
RewriteEngine on
RewriteRule ^(.*)$ script.php [C]
script.php is executed, but I need the original request to be served after it. Thanks - any help is appreciated.

    Read the article

  • NHibernate query with Projections.Cast to DateTime

    - by stiank81
    I'm experimenting with using a string for storing different kinds of data types in a database. When I do queries I need to cast the strings to the right type in the query itself. I'm using .Net with NHibernate, and was glad to learn that there exists functionality for this. Consider the simple class: public class Foo { public string Text { get; set; } } I successfully use Projections.Cast to cast to numeric values, e.g. the following query correctly returns all Foos with an integer stored in the Text property between 1 and 10. var result = Session.CreateCriteria<Foo>() .Add(Restrictions.Between(Projections.Cast(NHibernateUtil.Int32, Projections.Property("Text")), 1, 10)) .List<Foo>(); Now if I try using this for DateTime I'm not able to make it work no matter what I try. Why?! var date = new DateTime(2010, 5, 21, 11, 30, 00); AddFooToDb(new Foo { Text = date.ToString() } ); // Will add it to the database... var result = Session .CreateCriteria<Foo>() .Add(Restrictions.Eq(Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date)) .List<Foo>();

    Read the article

  • MySql query optimization help

    - by rohitgu
    I have few queries and am not able to figure out how to optimize them, QUERY 1 select * from t_twitter_tracking where classified is null and tweetType='ENGLISH' order by id limit 500; QUERY 2 Select count(*) as cnt, DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30'),'%Y-%m-%d') as dat from t_twitter_tracking wrdTrk where wrdTrk.word like ('dell') and CONVERT_TZ(wrdTrk.createdOnGMTDate,'+00:00','+05:30') between '2010-12-12 00:00:00' and '2010-12-26 00:00:00' group by dat; Both these queries run on the same table, CREATE TABLE `t_twitter_tracking` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `word` VARCHAR(200) NOT NULL, `tweetId` BIGINT(100) NOT NULL, `twtText` VARCHAR(800) NULL DEFAULT NULL, `language` TEXT NULL, `links` TEXT NULL, `tweetType` VARCHAR(20) NULL DEFAULT NULL, `source` TEXT NULL, `sourceStripped` TEXT NULL, `isTruncated` VARCHAR(40) NULL DEFAULT NULL, `inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL, `inReplyToUserId` INT(11) NULL DEFAULT NULL, `rtUsrProfilePicUrl` TEXT NULL, `isFavorited` VARCHAR(40) NULL DEFAULT NULL, `inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL, `latitude` BIGINT(100) NOT NULL, `longitude` BIGINT(100) NOT NULL, `retweetedStatus` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToStatusId` BIGINT(100) NOT NULL, `statusInReplyToUserId` BIGINT(100) NOT NULL, `statusFavorited` VARCHAR(40) NULL DEFAULT NULL, `statusInReplyToScreenName` TEXT NULL, `screenName` TEXT NULL, `profilePicUrl` TEXT NULL, `twitterId` BIGINT(100) NOT NULL, `name` TEXT NULL, `location` VARCHAR(100) NULL DEFAULT NULL, `bio` TEXT NULL, `url` TEXT NULL COLLATE 'latin1_swedish_ci', `utcOffset` INT(11) NULL DEFAULT NULL, `timeZone` VARCHAR(100) NULL DEFAULT NULL, `frenCnt` BIGINT(20) NULL DEFAULT '0', `createdAt` DATETIME NULL DEFAULT NULL, `createdOnGMT` VARCHAR(40) NULL DEFAULT NULL, `createdOnServerTime` DATETIME NULL DEFAULT NULL, `follCnt` BIGINT(20) NULL DEFAULT '0', `favCnt` BIGINT(20) NULL DEFAULT '0', `totStatusCnt` BIGINT(20) NULL DEFAULT NULL, `usrCrtDate` VARCHAR(200) NULL DEFAULT NULL, `humanSentiment` VARCHAR(30) NULL DEFAULT NULL, `replied` BIT(1) NULL DEFAULT NULL, `replyMsg` TEXT NULL, `classified` INT(32) NULL DEFAULT NULL, `createdOnGMTDate` DATETIME NULL DEFAULT NULL, `locationDetail` TEXT NULL, `geonameid` INT(11) NULL DEFAULT NULL, `country` VARCHAR(255) NULL DEFAULT NULL, `continent` CHAR(2) NULL DEFAULT NULL, `placeLongitude` FLOAT NULL DEFAULT NULL, `placeLatitude` FLOAT NULL DEFAULT NULL, PRIMARY KEY (`id`), INDEX `id` (`id`, `word`), INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE, INDEX `word_index` (`word`) USING BTREE, INDEX `location_index` (`location`) USING BTREE, INDEX `classified_index` (`classified`) USING BTREE, INDEX `tweetType_index` (`tweetType`) USING BTREE, INDEX `getunclassified_index` (`classified`, `tweetType`) USING BTREE, INDEX `timeline_index` (`word`, `createdOnGMTDate`, `classified`) USING BTREE, INDEX `createdOnGMTDate_index` (`createdOnGMTDate`) USING BTREE, INDEX `locdetail_index` (`country`, `id`) USING BTREE, FULLTEXT INDEX `twtText_index` (`twtText`) ) COLLATE='utf8_general_ci' ENGINE=MyISAM ROW_FORMAT=DEFAULT AUTO_INCREMENT=12608048; The table has more than 10 million records. How can I optimize it?
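    Editor's note, a hedged suggestion rather than part of the original question: QUERY 2 wraps the indexed column createdOnGMTDate in CONVERT_TZ, which generally prevents MySQL from using timeline_index or createdOnGMTDate_index for the date range. Since the time-zone conversion can be applied to the constant boundaries instead, one possible rewrite would be:
    -- Sketch only: move the time-zone conversion onto the constants so the
    -- (word, createdOnGMTDate, classified) index can serve the range scan.
    SELECT COUNT(*) AS cnt,
           DATE_FORMAT(CONVERT_TZ(wrdTrk.createdOnGMTDate, '+00:00', '+05:30'), '%Y-%m-%d') AS dat
    FROM t_twitter_tracking wrdTrk
    WHERE wrdTrk.word = 'dell'   -- LIKE 'dell' has no wildcards, so it is effectively an equality match
      AND wrdTrk.createdOnGMTDate BETWEEN CONVERT_TZ('2010-12-12 00:00:00', '+05:30', '+00:00')
                                      AND CONVERT_TZ('2010-12-26 00:00:00', '+05:30', '+00:00')
    GROUP BY dat;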

    Read the article

  • Hibernate - query caching/second level cache does not work by value object containing subitems

    - by Zoltan Hamori
    Hi! I have been struggling with the following problem: I have a value object containing different panels. Each panel has a list of fields. Mapping: <class name="com.aviseurope.core.application.RACountryPanels" table="CTRY" schema="DBDEV1A" where="PEARL_CTRY='Y'" lazy="join"> <cache usage="read-only"/> <id name="ctryCode"> <column name="CTRY_CD_ID" sql-type="VARCHAR2(2)" not-null="true"/> </id> <bag name="panelPE" table="RA_COUNTRY_MAPPING" fetch="join" where="MANDATORY_FLAG!='N'"> <key column="COUNTRY_LOCATION_ID"/> <many-to-many class="com.aviseurope.core.application.RAFieldVO" column="RA_FIELD_MID" where="PANEL_ID='PE'"/> </bag> </class> I use the following criteria to get the value object: Session m_Session = HibernateUtil.currentSession(); m_Criteria = m_Session.createCriteria(RACountryPanels.class); m_Criteria.add(Expression.eq("ctryCode", p_Country)); m_Criteria.setCacheable(true); As I see the query cache contains only the main select like select * from CTRY where ctry_cd_id=? Both RACountryPanels and RAFieldVO are second level cached. If I check the 2nd level cache content I can see that it contains the RAFields and the RACountryPanels as well, and I can see the select .. from CTRY where ctry_cd_id=... in the query cache region as well. When I call the servlet it seems that it is using the cache, but the second time it does not. If I check the content of the cache using JMX, everything seems to be ok, but when I measure the object access time, it seems that it does not always use the cache. Cheers Zoltan

    Read the article

  • LDAP Query with sub result

    - by StefanE
    I have been banging my head for quite a while with this and can't get it to work. I have an LDAP query working in AD Users and Computers but don't know how to do it programmatically in C#. Here is my LDAP query that works fine in the AD tool: (memberOf=CN=AccRght,OU=Groups,OU=P,OU=Server,DC=mydomain,DC=com)(objectCategory=user)(objectClass=user)(l=City) I have used this code to get the user accounts that are members of CN=AccRght, but I'm not succeeding in limiting the results to users belonging to a specific city. public StringCollection GetGroupMembers(string strDomain, string strGroup) { StringCollection groupMemebers = new StringCollection(); try { DirectoryEntry ent = new DirectoryEntry("LDAP://DC=" + strDomain + ",DC=com"); DirectorySearcher srch = new DirectorySearcher("(CN=" + strGroup + ")"); SearchResultCollection coll = srch.FindAll(); foreach (SearchResult rs in coll) { ResultPropertyCollection resultPropColl = rs.Properties; foreach( Object memberColl in resultPropColl["member"]) { DirectoryEntry gpMemberEntry = new DirectoryEntry("LDAP://" + memberColl); System.DirectoryServices.PropertyCollection userProps = gpMemberEntry.Properties; object obVal = userProps["sAMAccountName"].Value; if (null != obVal) { groupMemebers.Add(obVal.ToString()); } } } } catch (Exception ex) { Console.Write(ex.Message); } return groupMemebers; } Thanks for any help!

    Read the article

  • Require help in Writing Query

    - by harigm
    The following has been put together to show what I am trying to do and what I want out of it. Can anyone help me write the query to get the results I want? Please check the following:
SELECT * FROM KPT WHERE PROPERTY_ID IN (SELECT PROPERTY_ID FROM khata_header WHERE DIV_ID = 3 and RECORD_STATUS = 0) and CHALLAN_NO > 42646
The above is the query I have written, and I have got the following result set:
ID     CHALLAN_NO   PROPERTY_ID   SITE_NO   TOTAL_AMOUNT
-----  -----------  ------------  --------  ------------
1242   42757        3103010141    296       595
1243   63743        3204190257    483       594
1244   63743        3204190257    483       594
1334   43395        3217010223    1088      576
1421   524210       3320050416    (null)    (null)
1422   524210       3320050416    (null)    (null)
1560   564355       3320021408    (null)    (null)
1870   516292       3320040420    (null)    (null)
1940   68357        3217100104    139       1153
1941   68357        3217100104    139       1153
2002   56256        3320100733    511       4430
2003   56256        3320100733    511       4430
2004   66488        3217040869    293       3094
2005   66488        3217040869    293       3094
2016   64571        3217040374    (null)    (null)
2036   523122       3320020352    (null)    (null)
2039   65682        3217040021    273       919
In my result set, I am getting the PROPERTY_ID repeated, since there are multiple entries. How can I know how many have been repeated, and what are those PROPERTY_IDs which have repeated more than 2 times? A little background about the tables: PROPERTY_ID is the FK in KPT, and PROPERTY_ID is the PK in KH. I am writing a subquery to get the result, but I am stuck and don't know how to get my results. Please help
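Editor's note: a hedged sketch of how the repeated PROPERTY_IDs could be counted by wrapping the existing filter in a GROUP BY ... HAVING; treat it as one possible approach rather than a definitive answer:
-- Count occurrences of each PROPERTY_ID within the same filtered set
SELECT PROPERTY_ID, COUNT(*) AS occurrences
FROM KPT
WHERE PROPERTY_ID IN (SELECT PROPERTY_ID FROM khata_header
                      WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
  AND CHALLAN_NO > 42646
GROUP BY PROPERTY_ID
HAVING COUNT(*) >= 2
ORDER BY COUNT(*) DESC;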

    Read the article

  • SQL query using information from 4 tables (not all directly linked)

    - by Yvonne
    I'm developing a simple classroom system, where teachers manage classes and their subjects. I have 2 levels of access in my teachers table, assigned by an integer (1 = admin, 2 = user)... Meaning that the headteacher is the admin :) A teacher (of level 1) can have many classes and a class can have many teachers (so I have a 'TeachersClasses' table). A class can have many subjects, and a teacher can have many subjects. Basically, I'm attempting a query to display the admin teacher's (level 1) subjects. However, only teachers with a level of 2 are directly related to a subject, which is set by the admin user. The headteacher can view all of their subjects via the classroom, but I cannot get all of the subjects to be displayed on one page; instead I can only get the subjects to appear under a specific classroom right now... This is what I have so far, which is returning nothing. (I'm guessing this may require an SQL clause more advanced than 'INNER JOIN', which is the only join type I am familiar with, and thought it would be enough!) $query = "SELECT subjects.subjectid, subjects.subjectname, subjects.subjectdetails, classroom.classid, classroom.classname FROM subjects INNER JOIN classroom ON subjects.subjectid = classroom.classid INNER JOIN teacherclasses ON classroom.classid = teacherclasses.classid INNER JOIN teachers ON teacherclasses.teacherid = teachers.teacherid WHERE teachers.teacherid = '".intval( $_SESSION['SESS_TEACHERID'] )."'"; In order for all subjects related to the headteacher's class to be displayed, I'm gathering that all of my tables will need to be called up here? Thanks for any help! Example output: subject name: maths // teacher: mr smith // classroom: DG99 x10 for all the subjects associated with the headteacher's classrooms :)

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below: The following query executes quickly (1 second): SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID but if we add a filter to the query, then it takes approximately 2 minutes to return: SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID WHERE SA.CHG_DATE > '19 Feb 2010' Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) For the FulltextMatch table valued function where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join) and 2) For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows) hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates being used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
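    As a concrete illustration of workaround (b) described above, the filtered query can be rewritten with a query-level join hint; a sketch only, since such hints apply to the whole statement and are normally a last resort:
    SELECT SA.*
    FROM cg.SEARCHSERVER_ACTYS AS SA
    JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
        ON T1.[Key] = SA.UNIQUE_ID
    WHERE SA.CHG_DATE > '19 Feb 2010'
    OPTION (HASH JOIN);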

    Read the article

  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working out this query now for a while and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or derived tables I'm using but my eyes are tired and not seeing the problem. Can any of you help? Thanks in advance. SELECT orders.billerID, orders.invoiceDate, orders.txnID, orders.bName, orders.bStreet1, orders.bStreet2, orders.bCity, orders.bState, orders.bZip, orders.bCountry, orders.sName, orders.sStreet1, orders.sStreet2, orders.sCity, orders.sState, orders.sZip, orders.sCountry, orders.paymentType, orders.invoiceNotes, orders.pFee, orders.shipping, orders.tax, orders.reasonCode, orders.txnType, orders.customerID, customers.firstName AS firstName, customers.lastName AS lastName, customers.businessName AS businessName, orderStatus.statusName AS orderStatus, IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal, IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax - IFNULL(payments.totalPayments, 0.00) AS orderBalance FROM orders LEFT JOIN customers ON orders.customerID = customers.id LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id LEFT JOIN ( SELECT orderItems.orderID, SUM(orderItems.itemPrice * orderItems.itemQuantity) as itemTotal FROM orderItems GROUP BY orderItems.orderID ) orderItems ON orderItems.orderID = orders.id LEFT JOIN ( SELECT payments.orderID, SUM(payments.amount) as totalPayments FROM payments GROUP BY payments.orderID ) payments ON payments.orderID = orders.id

    Read the article

  • Avoiding secondary selects or joins with Hibernate Criteria or HQL query

    - by Ben Benson
    I am having trouble optimizing Hibernate queries to avoid performing joins or secondary selects. When a Hibernate query is performed (criteria or hql), such as the following: return getSession().createQuery(("from GiftCard as card where card.recipientNotificationRequested=1").list(); ... and the where clause examines properties that do not require any joins with other tables... but Hibernate still performs a full join with other tables (or secondary selects depending on how I set the fetchMode). The object in question (GiftCard) has a couple ManyToOne associations that I would prefer to be lazily loaded in this case (but not necessarily all cases). I want a solution that I can control what is lazily loaded when I perform the query. Here's what the GiftCard Entity looks like: @Entity @Table(name = "giftCards") public class GiftCard implements Serializable { private static final long serialVersionUID = 1L; private String id_; private User buyer_; private boolean isRecipientNotificationRequested_; @Id public String getId() { return this.id_; } public void setId(String id) { this.id_ = id; } @ManyToOne @JoinColumn(name = "buyerUserId") @NotFound(action = NotFoundAction.IGNORE) public User getBuyer() { return this.buyer_; } public void setBuyer(User buyer) { this.buyer_ = buyer; } @Column(name="isRecipientNotificationRequested", nullable=false, columnDefinition="tinyint") public boolean isRecipientNotificationRequested() { return this.isRecipientNotificationRequested_; } public void setRecipientNotificationRequested(boolean isRecipientNotificationRequested) { this.isRecipientNotificationRequested_ = isRecipientNotificationRequested; } }

    Read the article

  • Can't get this SPARQL query to work

    - by Jason
    Okay, I'm just learning to use SPARQL to query data from dbpedia.org and I'm using dbpedia's http://dbpedia.org/snorql/ to run my queries in. I am trying to get a list of MusicalArtists based on searching for the same string over three fields like so: SELECT ?subject ?artistRdfsLabel ?artistFoafName ?artistDbpedia2Name WHERE { ?subject rdf:type <http://dbpedia.org/ontology/MusicalArtist> . OPTIONAL { ?subject rdfs:label ?artistRdfsLabel . } OPTIONAL { ?subject foaf:name ?artistFoafName . } OPTIONAL { ?subject dbpedia2:name ?artistDbpedia2Name . } FILTER ( str(?artistRdfsLabel) = "Stevie Nicks" || str(?artistFoafName) = "Stevie Nicks" || str(?artistDbpedia2Name) = "Stevie Nicks" ) } LIMIT 10 This works because "Stevie Nicks" has all three fields (rdfs:label, foaf:name, dbpedia2:name). But when I try to query by another MusicalArtist that doesn't have all three ("Depeche Mode" for example) I get no results. I have tried various things like BIND(COALESCE(?field,...,...) AS ?artistName) to filter by ?artistName and I also tried UNION but nothing seems to work. Can someone point out the error of my SPARQL ways? :) Thanks! Jason

    Read the article

  • Yahoo Query Language Problem

    - by Damiano
    Hello everybody! Today, I've started with Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query: select * from yahoo.finance.quoteslist where symbol='@^GSPC' I ALWAYS get 51 results! It's impossible, take a look at: http://it.finance.yahoo.com/q/cp?s=^GSPC There are 500 results! I also tried some paging parameters. select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC' (to get from 50 to 80) select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC' (to get the first 100 results) select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50 but ALWAYS the last stock is: <quote symbol="BBY"> <Symbol>BBY</Symbol> <LastTradePriceOnly>41.03</LastTradePriceOnly> <LastTradeDate>5/7/2010</LastTradeDate> <LastTradeTime>4:00pm</LastTradeTime> <Change>-0.48</Change> <Open>41.35</Open> <DaysHigh>42.35</DaysHigh> <DaysLow>39.60</DaysLow> <Volume>14129531</Volume> </quote> Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it on the Yahoo YQL console)

    Read the article

  • Query to MySQL from c# returns System.Byte[]

    - by karthik
    I am using the below SP to return the value of a generated INSERT statement, and it works fine when executed in the Query Browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I try to get the value from the MySQL Query Browser, it gives me the return value as: 'insert into admindb.accounts values("54321","2","karthik2","karthik2","1");' I guess the problem is with the single quotes of the returned value. Is that so? DELIMITER $$ DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`( in_db varchar(20), in_table varchar(20), in_ColumnName varchar(20), in_ColumnValue varchar(20) ) BEGIN declare Whrs varchar(500); declare Sels varchar(500); declare Inserts varchar(2000); declare tablename varchar(20); declare ColName varchar(20); set tablename=in_table; # Comma separated column names - used for Select select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels from information_schema.columns where table_schema=in_db and table_name=tablename; # Comma separated column names - used for Group By select group_concat('`',column_name,'`') INTO @Whrs from information_schema.columns where table_schema=in_db and table_name=tablename; #Main Select Statement for fetching comma separated table values set @Inserts=concat("select concat('insert into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') as MyColumn from ", in_db,".",tablename, " where ", in_ColumnName, " = " , in_ColumnValue, " group by ",@Whrs, ";"); PREPARE Inserts FROM @Inserts; EXECUTE Inserts; END $$ DELIMITER ;
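    Editor's note, a hedged aside rather than part of the original question: a string coming back to .NET as System.Byte[] is often a sign that MySQL is returning the value as a BLOB, which can happen with GROUP_CONCAT/CONCAT results depending on operand types and the group_concat_max_len setting. One way to test that theory is to wrap the concatenated value in CAST(... AS CHAR); the inner expression below is deliberately simplified compared with the procedure above, the point is only the CAST wrapper:
    -- Sketch: force a character result so the client sees a string, not a BLOB
    SELECT CAST(GROUP_CONCAT(CONCAT('"', IFNULL(column_name, ''), '"')) AS CHAR)
    INTO @Sels
    FROM information_schema.columns
    WHERE table_schema = in_db AND table_name = tablename;
    The same CAST(... AS CHAR) could be applied to the final concatenated column selected by the prepared statement.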

    Read the article

  • Query with many CASE statements - optimization

    - by Nemanja Vujacic
    Hi guys, I have one very dirty query that surely can be optimized, because there are so many CASE statements in it! SELECT (CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END) as Deal_Id, max(CASE pa.KplusTable_Id WHEN 1 THEN sp.Trans_Id WHEN 2 THEN fw.Trans_Id WHEN 3 THEN s.Trans_Id WHEN 4 THEN id.Trans_Id END) as TransId_CurrentMax INTO #MaxRazlicitOdNull FROM #PotencijalniAktuelni pa LEFT JOIN kplus_sp sp (nolock) on sp.sp_id=pa.Deal_Id AND pa.KplusTable_Id=1 LEFT JOIN kplus_fw fw (nolock) on fw.fw_id=pa.Deal_Id AND pa.KplusTable_Id=2 LEFT JOIN dev_sw s (nolock) on s.sw_Id=pa.Deal_Id AND pa.KplusTable_Id=3 LEFT JOIN kplus_ia id (nolock) on id.ia_id=pa.Deal_Id AND pa.KplusTable_Id=4 WHERE isnull(CASE pa.KplusTable_Id WHEN 1 THEN sp.BROJ_TIKETA WHEN 2 THEN fw.BROJ_TIKETA WHEN 3 THEN s.tiket WHEN 4 THEN id.BROJ_TIKETA END, '')<>'' GROUP BY CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id WHEN 3 THEN s.sw_Id WHEN 4 THEN id.ia_id END Because I have the same condition a couple of times, do you have an idea how to optimize the query and make it simpler and better? All suggestions are welcome! TnX in advance! Nemanja
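    Editor's note: one hedged way to remove the repetition (assuming SQL Server 2005 or later, since the query uses temporary tables and ISNULL) is to evaluate each CASE expression once in a CROSS APPLY and then refer to the aliases; a sketch, not a tested rewrite:
    SELECT x.Deal_Id,
           MAX(x.Trans_Id) AS TransId_CurrentMax
    INTO #MaxRazlicitOdNull
    FROM #PotencijalniAktuelni pa
    LEFT JOIN kplus_sp sp (nolock) ON sp.sp_id = pa.Deal_Id AND pa.KplusTable_Id = 1
    LEFT JOIN kplus_fw fw (nolock) ON fw.fw_id = pa.Deal_Id AND pa.KplusTable_Id = 2
    LEFT JOIN dev_sw s (nolock) ON s.sw_Id = pa.Deal_Id AND pa.KplusTable_Id = 3
    LEFT JOIN kplus_ia id (nolock) ON id.ia_id = pa.Deal_Id AND pa.KplusTable_Id = 4
    CROSS APPLY (SELECT
        CASE pa.KplusTable_Id WHEN 1 THEN sp.sp_id WHEN 2 THEN fw.fw_id
                              WHEN 3 THEN s.sw_Id  WHEN 4 THEN id.ia_id END,
        CASE pa.KplusTable_Id WHEN 1 THEN sp.Trans_Id WHEN 2 THEN fw.Trans_Id
                              WHEN 3 THEN s.Trans_Id  WHEN 4 THEN id.Trans_Id END,
        CASE pa.KplusTable_Id WHEN 1 THEN sp.BROJ_TIKETA WHEN 2 THEN fw.BROJ_TIKETA
                              WHEN 3 THEN s.tiket        WHEN 4 THEN id.BROJ_TIKETA END
        ) AS x (Deal_Id, Trans_Id, Broj_Tiketa)
    WHERE ISNULL(x.Broj_Tiketa, '') <> ''
    GROUP BY x.Deal_Id;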

    Read the article
