Search Results

Search found 43978 results on 1760 pages for 'select case'.


  • How to select an element by Class instead of ID in ASP.NET?

    - by firedrawndagger
    I have a few scattered elements which I am grouping together using a class, like so - . In my code-behind, using C#, I want to hide these elements with something like instructions.Visible = false;. However, I realize I can only do this in code-behind if I use an ID, but that will result in invalid HTML/CSS since you can't have multiple elements with the same ID... Alternatively, is there another way to group the controls, if not by class?

    Read the article

  • Zend_Db_Select: regrouping conditions in where clause

    - by pvledoux
    Hi, I would like to do something like this: $select = $myTbl->select() ->from('download_log') ->joinLeft(...... etc........ ->joinLeft(...... etc........ ->joinLeft(...... etc........); //Filter all configured bots (Google, Yahoo, etc.) if(isset($this->_config->statistics->bots)){ $bots = explode(',',$this->_config->statistics->bots); foreach ($bots as $bot){ $select = $select->orWhere("user_agent NOT LIKE '%$bot%'"); } } $select = $select->where("download_log.download_log_ts BETWEEN '".$start_date." 00:00:00' AND '".$end_date." 23:59:59'"); But the generated query is not correct, because the orWhere clauses are not grouped together inside a single AND clause. I would like to know if it is possible to group those OR clauses inside a pair of parentheses. My current alternative is the following: //Filter all configured bots (Google, Yahoo, etc.) if(isset($this->_config->statistics->bots)){ $bots = explode(',',$this->_config->statistics->bots); foreach ($bots as $bot){ $stmt .= "user_agent NOT LIKE '%$bot%' OR "; } $stmt = substr($stmt,0,strlen($stmt)-3); //remove the last OR $select = $select->where("($stmt)"); } Thanks!

    Read the article

  • Advantage Database Server ORDER BY behaviour

    - by ie
    I'm using ADS v10 beta. I'm trying to number an ordered result set. 1) ORDER BY in nested queries. I need to use a nested SELECT for some calculations: SELECT Name, Value, ROWNUM() FROM (SELECT * FROM MainTable WHERE Value > 0 ORDER BY Value) a And I'm getting: Expected lexical element not found: ) There was a problem parsing the table names after the FROM keyword in your SELECT statement. Everything works well when the ORDER BY is removed. However, I found a sample in the Help that looks like my query (just more complex): SELECT * FROM (SELECT TOP 10 empid, fullname FROM branch1 ORDER BY empid) a UNION SELECT empid, fullname FROM branch2 ORDER BY empid 2) ORDER BY + ROWNUM(). I used the nested query in the example above to number the ordered rows. Is there any chance to avoid the nested query? In SQL Server I can do something like this: SELECT Name, Value, ROW_NUMBER() OVER(ORDER BY Value) FROM MainTable WHERE Value > 1 ORDER BY Value Please advise. Thanks.

    Read the article

  • How to SELECT DISTINCT Info with TOP 1 Info and an Order By FROM the Top 1 Info

    - by Erin Taylor
    I have 2 tables that look like: CustomerInfo(CustomerID, CustomerName) CustomerReviews(ReviewID, CustomerID, Review, Score) I want to search reviews for a string and return CustomerInfo.CustomerID and CustomerInfo.CustomerName. However, I only want to show distinct CustomerID and CustomerName along with just one of their CustomerReviews.Reviews and CustomerReviews.Score. I also want to order by the CustomerReviews.Score. I can't figure out how to do this, since a customer can leave multiple reviews, but I only want a list of customers with their highest scored review. Any ideas?
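
    One common approach here, not from the original post but a minimal sketch using the table and column names given in the question, is to rank each customer's reviews and keep only the top-scoring one per customer:

    -- Hypothetical sketch (SQL Server 2005+): one row per matching customer,
    -- carrying that customer's highest-scored review. The search string is a placeholder.
    WITH RankedReviews AS
    (
        SELECT  ci.CustomerID,
                ci.CustomerName,
                cr.Review,
                cr.Score,
                ROW_NUMBER() OVER (PARTITION BY ci.CustomerID
                                   ORDER BY cr.Score DESC) AS rn
        FROM    CustomerInfo AS ci
        JOIN    CustomerReviews AS cr ON cr.CustomerID = ci.CustomerID
        WHERE   cr.Review LIKE '%search string%'
    )
    SELECT  CustomerID, CustomerName, Review, Score
    FROM    RankedReviews
    WHERE   rn = 1
    ORDER BY Score DESC;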

    Read the article

  • Ruby on Rails: Simple way to select all records of a nested model?

    - by Josh Pinter
    Just curious, I spent an embarrassing amount of time trying to get an array of all the records in a nested model. I just want to make sure there is not a better way. Here is the setup: I have three models that are nested under each other (Facilities, Tags, Inspections), producing code like this for routes.rb: map.resources :facilities do |facilities| facilities.resources :tags, :has_many => :inspections end I wanted to get all of the inspections for a facility, and here is what my code ended up being: def facility_inspections @facility = Facility.find(params[:facility_id]) @inspections = [] @facility.tags.each do |tag| tag.inspections.each do |inspection| @inspections << inspection end end end It works, but is this the best way to do this? I think it's cumbersome. Thanks in advance. Josh

    Read the article

  • create jquery array to use as options

    - by SoulieBaby
    Hi all, I'm sure this is really simple, but I can't seem to get it working. I have a "time" select list, which has a number as "rel" attached to each option. If the user changes the time select, I want a new list of options to display depending on what is selected. If that makes sense? Here's my first select: <select name="time" id="time"> <option value="7:00am" rel="10">7:00am</option> <option value="12:30pm" rel="16">12:30pm</option> </select> If the user selects 7:00am, I want a new option list (using jquery) to give options from 1 - 10. Like this: <select name="quantity" id="quantity"> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> ............................ <option value="10">10</option> </select> Here's what I have so far... <script type="text/javascript" language="javascript"> jQuery("#time").change(function(){ var positions = jQuery("#time :selected").attr("rel"); //this grabs the rel from time //this is where it should create a list of options to append(??) to the select list.. jQuery("#showQuantity").show(); //this shows the hidden field for quantity }); </script> I hope it makes sense, but I'm stuck on it. Thank you in advance :)

    Read the article

  • jQuery: Quick question. How to select string variable?

    - by user317563
    Hello world, EDIT: I would like to avoid doing something like this: var str = 'Hello'; if ( str == 'Hello') { alert(str); } I would rather do: var str = 'Hello'; $(str).filter(':contains("Hello")').each(function(){ alert(this) }); I've tried a lot of things: $(str).text().method1().method2().method3(); $(str).val().method1().method2().method3(); $(str).contents().method1().method2().method3(); Nothing worked. Is it possible to do this? Thank you for your time. Kind regards, Marius

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Select a word using keyboard special case

    - by wataka
    I know the Ctrl+Shift+Arrow_Key shortcut to select a word. But suppose this is the case: "Hello!, my na|me is Peter" where the cursor is where the pipe is. I'm looking for some keyboard shortcut that selects the word "name". The only way I found was: Ctrl+Right, Left, Ctrl+Shift+Left. But it isn't practical. Any hint? Edit: I'm on Windows 7, and I'd like a generic solution if one exists (not software dependent).

    Read the article

  • OpenOffice - make autocorrect keep the original letter case

    - by houbysoft
    I use OpenOffice to write in about 5 languages, using the US keyboard only. I therefore make extensive use of the autocorrect feature to add accents and the like automatically. The problem is that OpenOffice insists on ignoring the letter case I use, and instead it always uses that which I used when setting up the autocorrect. For example, now when I type, in French, "D'apres", it gets converted to "d’après" instead of "D’après". Is there a way to tell OpenOffice not to change the letter case?

    Read the article

  • Mounting case-sensitive shared folder in VirtualBox

    - by rhettg
    I have an OSX host with an Ubuntu VM trying to mount a shared folder. I'm using the options: sudo mount -t vboxsf -o uid=1000,gid=100 pg ~/pg-host The folder mounts fine; however, it appears that the mounted directory is case-insensitive, even though my OSX drive is formatted case-sensitive. Are there any options to control this behavior?

    Read the article

  • Cufon text z-index (IE6 and IE7 stylish select box bug)

    - by Adrian
    Hello, I'm replacing some text with Cufon on my page and I'm using the jQuery stylish select box plugin (http://www.scottdarby.com/plugins/stylish-select/0.4/) to style select boxes. The problem is that, in IE6 and IE7, the select box goes under the cufonized text (the select box is actually an <ul>). Has anyone had this issue before? Thanks, Adrian

    Read the article

  • Multiple select statement in stored procedure

    - by GigaPr
    Hi, I have a stored procedure that has to retrieve data from multiple tables, something like: SELECT [AppointmentId] ,[ContactId] ,[Date] ,[BookedBy] ,[Details] ,[Status] ,[Time] ,[Type] ,[JobId] ,[AppointmentFor] ,(Select PersonFirstName from Person where Person_Id = [AppointmentFor]) As UserFirstName ,(Select PersonLastName from Person where Person_Id = [AppointmentFor]) As UserLastName ,(Select PersonFirstName from Person where Person_Id = [ContactId]) As ContactFirstName ,(Select PersonLastName from Person where Person_Id = [ContactId]) As ContactLastName FROM [dbo].[Appointments] My question is: is there any more efficient way to do this, or is this the right approach? I am working on SQL Server 2008. Thanks
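
    One option to consider, sketched here with the table and column names from the question rather than taken from any answer, is to replace the four correlated subqueries with two joins against Person, one per role:

    -- Hypothetical sketch: join Person twice (user and contact) instead of
    -- running four correlated subqueries per row. LEFT JOINs keep appointments
    -- that have no matching person.
    SELECT  a.[AppointmentId], a.[ContactId], a.[Date], a.[BookedBy], a.[Details],
            a.[Status], a.[Time], a.[Type], a.[JobId], a.[AppointmentFor],
            pu.PersonFirstName AS UserFirstName,
            pu.PersonLastName  AS UserLastName,
            pc.PersonFirstName AS ContactFirstName,
            pc.PersonLastName  AS ContactLastName
    FROM    [dbo].[Appointments] AS a
    LEFT JOIN Person AS pu ON pu.Person_Id = a.[AppointmentFor]
    LEFT JOIN Person AS pc ON pc.Person_Id = a.[ContactId];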

    Read the article

  • Incorrect syntax near the keyword 'select' while executing query

    - by sam
    I am getting "Incorrect syntax near the keyword 'select'" after executing the following code: declare @c int SELECT @c = COUNT(*) FROM (select id, max(date_stored) from table B INNER JOIN table P ON B.id = P.id where id = 3) select @c I want to select the total number of records having the max stored date in the database. Can anyone please tell me what I am doing wrong?
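
    For reference: in SQL Server, a derived table must have an alias, and a non-aggregated column in an aggregate query needs a GROUP BY. A minimal corrected sketch (assuming the tables really are named B and P, since the question's FROM clause is ambiguous) could look like:

    -- Hypothetical sketch: alias the derived table and group by id so the
    -- inner query is valid; the outer COUNT(*) then counts the grouped rows.
    DECLARE @c INT;

    SELECT @c = COUNT(*)
    FROM
    (
        SELECT B.id, MAX(P.date_stored) AS max_date
        FROM B
        INNER JOIN P ON B.id = P.id
        WHERE B.id = 3
        GROUP BY B.id
    ) AS x;

    SELECT @c;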

    Read the article

  • Display a variable when a select box option is selected

    - by user782104
    For example, the select box <select><option selected="" value="">Please Select</option><option value='txt'>Text</option><option value='int'>Numbers</option><option value='bool' >Boolean</option></select> has a corresponding array $messageList=array ( 'txt'=>'text message', 'int'=>'int message','bool'=>'bool message'); What I would like to achieve is to display the corresponding message when an option is selected. How can I do that?

    Read the article

  • Cannot resolve collation conflict in Union select

    - by phenevo
    Hi, I've got two queries. The first doesn't work: select hotels.TargetCode as TargetCode from hotels union all select DuplicatedObjects.duplicatetargetCode as TargetCode from DuplicatedObjects where DuplicatedObjects.objectType=4 because I get the error: Cannot resolve collation conflict for column 1 in SELECT statement. The second works: select hotels.Code from hotels where hotels.targetcode is not null union all select DuplicatedObjects.duplicatetargetCode as Code from DuplicatedObjects where DuplicatedObjects.objectType=4 Structure: Hotels.Code - PK nvarchar(40) Hotels.TargetCode - nvarchar(100) DuplicatedObjects.duplicatetargetCode - PK nvarchar(100)
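
    The error means the two columns being combined were created with different collations. A common way to make this kind of UNION compile, shown here only as a sketch and not taken from any answer, is to force both sides to a common collation, for example the database default:

    -- Hypothetical sketch: apply an explicit COLLATE so both columns of the
    -- UNION agree on a collation.
    SELECT hotels.TargetCode COLLATE DATABASE_DEFAULT AS TargetCode
    FROM hotels
    UNION ALL
    SELECT DuplicatedObjects.duplicatetargetCode COLLATE DATABASE_DEFAULT AS TargetCode
    FROM DuplicatedObjects
    WHERE DuplicatedObjects.objectType = 4;

    The second query presumably works because Hotels.Code happens to share a collation with duplicatetargetCode, while Hotels.TargetCode does not.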

    Read the article

  • observing multiple select menus with prototype

    - by snaken
    Hi, I want to observe multiple select menus and respond to their changes using Prototype, but only the first menu seems to be observed. This is my code: $('product_options').select('select').invoke("observe","change",optchange); If there are - for example - 3 selects within product_options then it only observes the first. I thought it might be because of invoke, so I then tried this: $('product_options').select('select').each(function(sel){ $(sel).observe("change",optchange); }); Still doesn't work though; any ideas what's wrong?

    Read the article

  • SQL SERVER – SELECT INTO with FileGroup or Partition is Not Possible

    - by Pinal Dave
    The other day, I received an email from a user, and before I answered the question I had to check the answer online. Here is the question - I want to create a new table based on an old table, but when I execute the following script it gives me an error. Is there anything I am missing in my syntax? SELECT * INTO NewTableName ON MyFileGroup FROM MyOldTableName I faintly remembered that this was not possible in earlier versions of SQL Server, but I was not sure if this feature had been added in recent versions or not. I quickly tried a few syntaxes and referred to the online documentation, and learned that it is still not possible in the latest version of SQL Server. The alternative is to just go ahead and change the default filegroup for new tables with the following script. Though, I do not like changing the default filegroup for new tables: it is possible that, once I have changed the default filegroup, some other table created behind the scenes by an automated system or by a colleague will also be created on the new filegroup. ALTER DATABASE DatabaseName MODIFY FILEGROUP NameofFileGroup DEFAULT The reason this feature is not supported is that SELECT INTO is a minimally logged operation. I seriously hope that some day in the future this feature gets added in. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Filegroup
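
    If changing the database default filegroup is undesirable, another standard workaround, sketched here with placeholder column definitions rather than anything from the original post, is to create the destination table explicitly on the target filegroup and then load it with INSERT ... SELECT (note that a plain INSERT ... SELECT is fully logged unless the requirements for minimal logging, such as a TABLOCK hint, are met):

    -- Hypothetical sketch: define the table on the desired filegroup up front,
    -- then copy the rows into it. Column definitions are placeholders.
    CREATE TABLE dbo.NewTableName
    (
        ID   INT           NOT NULL,
        Name NVARCHAR(100) NULL
    ) ON MyFileGroup;

    INSERT INTO dbo.NewTableName (ID, Name)
    SELECT ID, Name
    FROM dbo.MyOldTableName;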

    Read the article

  • Impossible to select folders and files with mouse (Ubuntu 12.04)

    - by François
    First-time post for me here (after being a regular reader for two years though), so thank you all for the quality of the replies and help provided. My problem is apparently very simple, but a tricky one. I just installed Ubuntu 12.04(1) along with the Gnome3-shell environment on my new desktop PC, an Acer Aspire X3995 (see config below). Everything works (more or less) so far (I still have problems with sound and disabled 2-finger gestures with my screen -- which I will have to deal with via xconfig settings, I think), but the main problem is that I cannot select files/folders with my USB mouse. When I try to double-click on them, nothing happens (sometimes one folder or file is selected but then unselected again). Note that navigation works perfectly from the USB keyboard and from the touch-screen (I am using a 23" wide touch-screen Acer Monitor T231Hbmid). Also, the mouse works perfectly with other menu navigation, with the only difference that the text of certain menus is selected as if I were holding the left click on them. So I assume the problem is only related to the mouse. Needless to say, the usual basic hardware checks have been performed (unplugging, powering off, etc.). My level is simply "advanced user", meaning that if you provide me with intelligible input I should find my way, but please don't expect too much technical/specific knowledge... :) Please let me know if you need more information on this bug. Now, fingers crossed... and thanks in advance! Ciao, François Config of Acer Aspire X3995: Ubuntu 12.04 / Gnome3-shell environment / Intel Core i5 3450 / nVidia GeForce 605, 1Gb. Screen: Acer Monitor TFT 23" wide T231Hbmid

    Read the article

  • Need theoretical help, how to comprehend an if-else dependency net

    - by macbie
    I am facing the following issue: I'm writing a program that manages some properties; some of them are general and some are specific. Each property is a key/value pair. For example: if a general property is given and a specific property with exactly the same key and value already exists, then the general property will replace the specific one in the register. If there are two identical general properties, both will remain in the register. And so on; it is like a net of dependencies. In my case I can handle it intuitively and foresee all cases, but only because the system is not too vast. What if it were? I have met such problems a few times in many different programs and languages (e.g. working with C semaphores) and my question is: how should I approach this kind of problem? Is this connected with finite state machines, graph theory or something similar? How can I be sure that I have considered the whole system and each possible case? Could you recommend some resources (books, sites) to learn from?

    Read the article

  • Allowing client to select data to return via REST interface

    - by CMP
    I have a REST service that is essentially a proxy to a variety of other services. So if I call GET /users/{id} it will get their user profile, as well as order history, contact info, etc., all from various services, and aggregate them into one nice object. My problem is that each call to a different service has the potential to add time to the original request, so we would rather not get ALL the data ALL of the time if a particular client does not care about all of the pieces. A solution I have arrived at is to do something like this: GET /users/{id}?includeOrders=true&includeX=true&includeY=true... That works, and it allows me to do only what I need to, but it is cumbersome. We have added enough different data sources that there are too many parameters for that style to be useful. I could do something similar with a single integer and a bitmask or something, but that only makes it harder to read, and it does not feel very RESTful. I could break it down into multiple calls so they would need to call /users/{id}/orders and /users/{id}/profile separately, but that sort of defeats the purpose of an aggregating proxy, whose purpose is to make clients' jobs easier. Are there any good patterns that can help me return just enough data for each client, without making it too difficult for them to filter and select what they want?

    Read the article

  • Visual Studio Code Metrics and the Maintainability index of switch case

    - by pee2002
    Hi there! As a person who loves to follow best practices: if I run code metrics (right-click on the project name in Solution Explorer and select "Calculate Code Metrics" - Visual Studio 2010) on: public static string GetFormFactor(int number) { string formFactor = string.Empty; switch (number) { case 1: formFactor = "Other"; break; case 2: formFactor = "SIP"; break; case 3: formFactor = "DIP"; break; case 4: formFactor = "ZIP"; break; case 5: formFactor = "SOJ"; break; } return formFactor; } it gives me a Maintainability Index of 61 (of course this is insignificant if you have only this, but if you have a utility-like class whose philosophy is doing stuff like that, your utility class will have a much worse maintainability index). What's the solution for this?

    Read the article

  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lower case or all upper case then case sensitivity could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function to the column so that it'll match the corresponding lower-case string literal. This prevents index usage in every DBMS. So I am wondering why anyone would use such an option.
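
    As a small T-SQL illustration of the trade-off described above (a sketch with made-up table and column names, not from the original question):

    -- Hypothetical sketch: with a case-sensitive collation on Name, an
    -- exact-case comparison can still use an index on Name...
    SELECT CustomerID
    FROM dbo.Customers
    WHERE Name = 'Smith';          -- matches 'Smith' only

    -- ...but a case-insensitive lookup has to wrap the column in a function,
    -- which generally prevents a straightforward index seek on Name.
    SELECT CustomerID
    FROM dbo.Customers
    WHERE LOWER(Name) = 'smith';   -- matches 'SMITH', 'smith', 'Smith', ...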

    Read the article
