Search Results

Search found 13099 results on 524 pages for 'front row'.


  • DataGrid row and MVVM

    - by Titan
    Hi, I have a WPF DataGrid with many rows. Each row has some specific behaviors: changing the selection in the column 1 combo filters the column 2 combo, whatever is selected in row 1's column 1 combo cannot be selected in row 2's column 1 combo, and so on. So I am thinking of having a view model for the main DataGrid, and another for each row, so that I can handle each row's change events effectively. Is that a good MVVM implementation? The question is: how do I create "each row" as a user-control view within the DataGrid? I want to implement something like this:

        <TreeView Padding="0,4,12,0">
            <controls:CommandTreeViewItem Header="Sales Orders"
                                          Command="{Binding SelectViewModelCommand}"
                                          CommandParameter="Sales Orders"/>
        </TreeView>

    ...except with a DataGrid instead of a TreeView, and a DataGrid row instead of controls:CommandTreeViewItem. Thanks in advance.
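
    One workable shape for this, sketched under assumptions (the class and property names below are invented; only the row-filtering requirement comes from the question): give the main view model a collection of per-row view models and bind the DataGrid's ItemsSource to it, so the grid creates one row per RowViewModel and a DataGridTemplateColumn supplies the "user control" look of each row.

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        // Main view model: owns one RowViewModel per DataGrid row.
        public class MainViewModel
        {
            public ObservableCollection<RowViewModel> Rows { get; private set; }
            public MainViewModel() { Rows = new ObservableCollection<RowViewModel>(); }
        }

        // Per-row view model: each row handles its own change events here.
        public class RowViewModel : INotifyPropertyChanged
        {
            private string selectedOption;

            // Choices for the column 1 combo; the parent view model can
            // rebuild this list to exclude values picked in other rows.
            public ObservableCollection<string> Column1Options { get; private set; }

            public RowViewModel() { Column1Options = new ObservableCollection<string>(); }

            public string SelectedOption
            {
                get { return selectedOption; }
                set
                {
                    selectedOption = value;
                    OnPropertyChanged("SelectedOption");
                    // Re-filter this row's column 2 options here.
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    In XAML, ItemsSource="{Binding Rows}" plus a DataGridTemplateColumn whose CellTemplate contains a ComboBox bound to Column1Options/SelectedOption gives each row its own view over its own view model.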


  • mysqli returns only one row instead of multiple rows

    - by Tristan
    Hello, I'm totally new to mysqli; I took some generated code and adapted it for my needs:

        public function getServeurByName($string) {
            $stmt = mysqli_prepare($this->connection,
                "SELECT DISTINCT * FROM $this->tablename WHERE GSP_nom=?");
            $this->throwExceptionOnError();

            mysqli_stmt_bind_param($stmt, 's', $string);
            $this->throwExceptionOnError();

            mysqli_stmt_execute($stmt);
            $this->throwExceptionOnError();

            mysqli_stmt_bind_result($stmt, $row->idServ, $row->timestamp,
                ...........
                $row->email);

            if (mysqli_stmt_fetch($stmt)) {
                $row->timestamp = new DateTime($row->timestamp);
                return $row;
            } else {
                return null;
            }
        }

    The problem: the example I took the template from returns only one row instead of all the records. How do I fix that, please?
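
    For what it's worth, the usual cause of this symptom is that mysqli_stmt_fetch() advances only one row per call, and the code above calls it once. A minimal sketch of the fetch section rewritten to loop, binding into plain variables and copying each row out (the bound variables are overwritten on every fetch; the elided columns follow the same pattern as the question's):

        mysqli_stmt_bind_result($stmt, $idServ, $timestamp, /* ... */ $email);

        $rows = array();
        while (mysqli_stmt_fetch($stmt)) {
            $r = new stdClass();
            $r->idServ    = $idServ;
            $r->timestamp = new DateTime($timestamp);
            /* ... copy the remaining bound columns the same way ... */
            $r->email     = $email;
            $rows[] = $r;
        }
        return count($rows) > 0 ? $rows : null;

    Note the caller then receives an array of row objects rather than a single one.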


  • ASP.Net - Gridview row selection and effects

    - by Clint
    I'm using the following code in an attempt to let the user select a GridView row by clicking anywhere on the row, plus mouse-over and mouse-out effects. The code doesn't seem to be applied on RowDataBound, and I can't break into the event (it is wired up). The control is in a user control that lives in a content page, which has a master page.

        protected void gvOrderTypes_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            GridView gvOrdTypes = (GridView)sender;

            // Check the item being bound is actually a DataRow; if it is,
            // wire up the required html events and attach the relevant JavaScripts.
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                e.Row.Attributes["onmouseover"] = "javascript:setMouseOverColor(this);";
                e.Row.Attributes["onmouseout"] = "javascript:setMouseOutColor(this);";
                e.Row.Attributes["onclick"] =
                    Page.ClientScript.GetPostBackClientHyperlink(gvOrdTypes, "Select$" + e.Row.RowIndex);
            }
        }
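
    Two things may help here. First, GridView.RowDataBound only fires while the grid is actually being data-bound; if the grid is rebuilt from ViewState on a postback without a DataBind() call, the breakpoint will never be hit. Second, in case the client-side functions are the missing piece, here is a minimal hypothetical implementation (the function names come from the code above; the colors are placeholders):

        function setMouseOverColor(row) {
            row.originalColor = row.style.backgroundColor; // remember the current color
            row.style.backgroundColor = '#ffffcc';         // placeholder highlight color
            row.style.cursor = 'pointer';
        }

        function setMouseOutColor(row) {
            row.style.backgroundColor = row.originalColor; // restore on mouse-out
        }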


  • Weird Animation when inserting a row in UITableView

    - by svveet
    I have a table view which holds a list of data. When the user taps "Edit", I add an extra row at the bottom saying "add new":

        - (void)setEditing:(BOOL)editing animated:(BOOL)animated
        {
            [super setEditing:editing animated:animated];
            NSArray *paths = [NSArray arrayWithObject:
                [NSIndexPath indexPathForRow:1 inSection:0]];
            if (editing) {
                [[self tableView] insertRowsAtIndexPaths:paths
                                        withRowAnimation:UITableViewRowAnimationTop];
            } else {
                [[self tableView] deleteRowsAtIndexPaths:paths
                                        withRowAnimation:UITableViewRowAnimationTop];
            }
        }

    The UITableView animates the change as expected, but the weird thing is that the row before the row I just added has a different animation from all the others. All rows perform a "slide in" animation, but that second-to-last one does a "fade in" animation, and I didn't set any animation on the second-to-last row (or any other row). If I don't add the new row, the rows slide in as normal when I switch to editing mode. I can't find an answer; I checked the Contacts app on my phone, and it doesn't show this weird animation when it adds a new row in editing mode. Any help would be appreciated. Thanks.
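
    One thing worth trying (a hedged sketch, not a confirmed fix): batch the row change inside beginUpdates/endUpdates so UIKit animates the insertion and the surrounding re-layout as a single transaction, which sometimes stops neighbouring rows from falling back to a fade:

        [super setEditing:editing animated:animated];
        NSArray *paths = [NSArray arrayWithObject:
            [NSIndexPath indexPathForRow:1 inSection:0]];
        [self.tableView beginUpdates];  // animate everything as one batch
        if (editing) {
            [self.tableView insertRowsAtIndexPaths:paths
                                  withRowAnimation:UITableViewRowAnimationTop];
        } else {
            [self.tableView deleteRowsAtIndexPaths:paths
                                  withRowAnimation:UITableViewRowAnimationTop];
        }
        [self.tableView endUpdates];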


  • Horizontally spacing a row of images evenly

    - by Tesla
    I have a few rows of images like so:

        <div class="row">
            <img src="image.jpg" alt="">
            <img src="image.jpg" alt="">
            <img src="image.jpg" alt="">
            <img src="image.jpg" alt="">
            <img src="image.jpg" alt="">
        </div>

    Each image has a different width, and there is also a different number of images in each row (4-6). I want to space the images evenly in the row, which has a fixed width of 960px. I could do this by calculating the total empty space in each row and then dividing it among the images as margins, but I was hoping there was something simpler that I could apply to every row instead of having to calculate and code a separate rule for each one.
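
    A minimal CSS-only sketch, assuming the .row markup above. In flexbox-capable browsers, space-between distributes the leftover width evenly between the images:

        .row {
            width: 960px;
            display: flex;                  /* lay the images out on one line */
            justify-content: space-between; /* share the empty space evenly   */
        }

    An older, widely compatible alternative is the text-justify trick (it relies on the images being inline content separated by whitespace):

        .row {
            width: 960px;
            text-align: justify;
        }
        .row:after {
            /* an invisible 100%-wide inline block forces the single
               line of images to justify across the full row width */
            content: "";
            display: inline-block;
            width: 100%;
        }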


  • Apache2, FastCGI, PHP-FPM, APC on virtualmin panel with nginx front end reverse proxy

    - by Ünsal Korkmaz
    My dream setup: PHP 5.3.6 + MySQL 5.5.10 on Apache2, FastCGI, PHP-FPM, and APC, with nginx 1.0 as a front-end reverse proxy, and Virtualmin GPL as a free server-management panel, all on a fresh CentOS 5.6 install. I used this to install Virtualmin:

        wget http://software.virtualmin.com/gpl/scripts/install.sh
        chmod +x install.sh
        ./install.sh

    After setup, I see PHP is version 5.1 and MySQL is 5.0, and the system does not support PHP-FPM (only the fcgid wrapper). I made the following changes:

        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1.0-6.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh ius-release*.rpm epel-release*.rpm
        yum install yum-plugin-replace
        yum remove mysql.i386
        yum replace mysql --replace-with mysql55
        service mysqld restart
        chkconfig mysqld on
        mysql_upgrade --password=1234
        yum replace php --replace-with php53u
        yum install php53u-fpm php53u-pecl-apc
        service httpd restart
        chkconfig php-fpm on
        service php-fpm start

    I am not sure why Virtualmin installs both the i386 and the 64-bit versions of MySQL together, but I needed to remove one of them to use yum replace. So now I have PHP 5.3.6 + MySQL 5.5.10 with PHP-FPM and APC installed. But Virtualmin does not support PHP-FPM + FastCGI, and it is still running on fcgid. I am an ultra-newbie at server management, so I couldn't find a workaround beyond this point. I want to switch the fcgid wrapper to PHP-FPM + FastCGI, at least for one virtual server. And once that part is fixed, I want to set up nginx 1.0 as a front-end reverse proxy to serve static files and pass PHP files to Apache. http://nginxcp.com/ is what I want, but it's for cPanel.
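
    For the nginx piece, a minimal sketch of a reverse-proxy server block, assuming Apache has been rebound to port 8080 first (Listen 8080 in httpd.conf) and using placeholder domain and paths:

        server {
            listen 80;
            server_name example.com;          # placeholder domain
            root /home/example/public_html;   # placeholder docroot

            # Serve common static files straight from nginx.
            location ~* \.(css|js|jpe?g|png|gif|ico)$ {
                expires 30d;
            }

            # Pass everything else (PHP and dynamic URLs) through to Apache.
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    Virtualmin's generated Apache virtual hosts won't know about this by themselves, so existing virtual-server configs may need the same port change applied.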


  • SharePoint solution package not deploying to all front-ends

    - by Alex
    I have a WSP that contains a web part. It's being built using WSPBuilder. Most of the time, the WSP deploys perfectly. However, in two of our test environments (and sadly, in production, too) the WSP doesn't deploy properly to all the web front ends. The assemblies make it into the GAC, and the .webpart files get provisioned. The problem is that a tool part that the web part relies on for configuration simply fails to appear. I've determined that every time this has happened, it has been isolated to a single web front end. I've been able to resolve the issue by doing an stsadm -o deploysolution to re-deploy the solution, and in one instance it was resolved by the end user deactivating/reactivating the feature. Unfortunately, though, this has made it impossible to determine if the control isn't being deployed properly, or if it's some other issue. Any thoughts on this? Could it be a problem with the WSP, or is it likely to be environmental?
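
    When it does happen, re-running the deployment timer jobs is worth trying before a full redeploy; a hedged sketch of the commands involved (the WSP name and URL are placeholders):

        stsadm -o deploysolution -name MyWebPart.wsp -immediate -allowgacdeployment -url http://portal
        stsadm -o execadmsvcjobs

    The second command forces any pending administrative (deployment) jobs to execute on the server where it is run, which can help reveal whether one front end is simply failing to pick up its deployment job.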


  • hp DL380 G4 won't boot with disk plugged into front USB

    - by Kev
    We outgrew a few older external USB backup drives and purchased WD My Passport 1 TB USB 3.0 drives to replace them. When one is plugged into the front of our G4, it blinks forever after the BIOS (which is current, BTW) and never boots, even though the USB disks are not "bootable" per se. Our old drives did not exhibit this behaviour (so I don't think it's the type of issue I've read about with other servers). The old drives were USB 2.0, but this shouldn't make a difference, AFAICT - the specs say all of the G4's USB ports are the same (2.0) anyway, so I'm not sure how one port would handle a USB 3.0 device better than another. If we plug the new drives into one of the back ports, the machine boots fine. What's the cause? My concern is that the front USB port, and possibly the motherboard, might be starting to die. (We are experiencing other strange issues with them, or were initially, like intermittent file-permission errors despite wide-open ACLs on these local drives, but some Server Fault users have me convinced those may be coincidental software/security issues.)


  • Plugging a device into the front USB *sometimes* restarts the computer

    - by Mark A. Nicolosi
    I've got a strange problem: very occasionally (maybe once a month), when I plug something into the front USB on my computer, the computer suddenly restarts. This also sometimes happens when I touch the front USB ports. It has been going on for a few years, and a lot of the components in my PC have changed in that time. I thought it was my home wiring, but I moved last year and it still happened. I thought maybe it was the motherboard, but that was upgraded 9 months ago and it still happens. I thought it was my case, but I changed that recently and it still happens. I thought maybe it was my PSU, but I upgraded that yesterday and it still happens. I'm pretty sure this is an electrostatic thing, but I thought modern computers had protection against this sort of thing. Maybe I should move my case off the floor (carpet) and stop wearing socks all the time. Edit: Just to clarify, this is a computer that I built. The components have been upgraded over the years, so it's not really the same computer anymore. This doesn't happen very often, but it is annoying, because I don't know what the cause is. Does anyone have any ideas?


  • Why can't I fade out this table row in IE using jQuery?

    - by George Edison
    I can't get a table row to fade out in IE. It works in Chrome, but in IE the row just becomes really 'light' and stays on the screen. I tried IE8 with and without compatibility mode.

        <html>
        <head>
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript">
                function hideIt() {
                    $('#hideme').fadeTo("slow", 0.0);
                }
            </script>
        </head>
        <body>
            <table>
                <tr id='hideme'>
                    <td>Hide me!</td>
                </tr>
            </table>
            <button onclick='hideIt();'>Hide</button>
        </body>
        </html>

    Is there a workaround/solution for a smooth fade?
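
    A common workaround is to fade the row's cells rather than the row itself (IE's opacity filter is honoured on table cells but not, in practice, on table rows), then hide the row once the fade completes. A minimal sketch against the markup above:

        function hideIt() {
            // Fade the cells; when done, remove the row from the layout.
            $('#hideme td').fadeTo("slow", 0.0, function () {
                $('#hideme').hide();
            });
        }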


  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record, or a badly formatted/empty row, in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines), I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient, I mean the shortest time to solution for the fellow developers.

    OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes such an input file is produced by some sort of out-of-control, 3rd-party solution and comes in with garbage characters and/or even malformed/mis-formatted rows. In the imaginary sample file I used, several rows have no data and there is an occasional garbage character (a stray '1' on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source.

    Let's outline our course of action to start off:

      1. Use SSIS 2005 to create a DFT;
      2. The DFT will use a Flat File Source to read our input [bad] flat file;
      3. Use a Conditional Split to process the bad input file; and finally
      4. Dump the resulting data to a new [clean] file.

    Well, only four steps - let's see if it is too much work.

    Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT).

    Steps 2 and 3: I added a data viewer just to see what I was getting; alas, the data issues were surprisingly not visible in it. The real key to the approach is to configure the Conditional Split Transformation properly - specifically, its SSIS expression:

        LEN([After CS Column 0]) > 1

    The point is to employ the right Boolean expression (the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name to "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial - consuming the output of our Conditional Split.

    Step 4 (the last step): Add a Flat File Destination, or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. Finally, run the package (F5) and examine the produced output file: it looks great!


  • Row index of pickerview is not displaying correctly.

    - by iSharreth
    I used images in a picker view, but the row number is not displaying correctly. Can anyone please help? I used the code below:

        - (UIView *)pickerView:(UIPickerView *)pickerView
                    viewForRow:(NSInteger)row
                  forComponent:(NSInteger)component
                   reusingView:(UIView *)view
        {
            NSLog(@"Row : %i", row);
            return [customPickerArray objectAtIndex:row];
        }


  • Change row/column span programmatically (TableLayoutPanel)

    - by alex
    I have a TableLayoutPanel, 2x2 (2 columns, 2 rows). For example, I added a button, button1, in row 1, column 2. button1 has its Dock property set to Fill. The VS designer lets me set the column/row span properties of button1. I want to be able to change the row span of button1 programmatically, so it can fill the whole second column (rows 1 and 2), and then set it back. How?
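
    A minimal sketch, assuming the designer-generated names from the question (tableLayoutPanel1, button1); TableLayoutPanel exposes the span values through Get/SetRowSpan and Get/SetColumnSpan:

        // Make button1 span both rows of its column...
        tableLayoutPanel1.SetRowSpan(button1, 2);

        // ...and later restore the original single-row span.
        tableLayoutPanel1.SetRowSpan(button1, 1);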


  • How to find the row ID from datagridview, given a row value?

    - by user361108
    Hi, can anyone help me, please? Given a selected value (which I stored in a separate array), I need to find the number of the row in the datagridview that contains it: http://serv8.enterupload.com/i/00246/aewglahmvpr5.jpg In other words: if I know a column value for a given row in a datagridview (e.g. for this row, FirstName == 'Bud'), how can I get the row ID?
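
    A minimal sketch of one way to do the lookup, assuming a grid named dataGridView1 and the column/value from the example above:

        // Scan the rows for the first one whose FirstName cell equals "Bud".
        int rowIndex = -1;
        foreach (DataGridViewRow row in dataGridView1.Rows)
        {
            if (row.IsNewRow) continue; // skip the empty "new row" placeholder
            if (Convert.ToString(row.Cells["FirstName"].Value) == "Bud")
            {
                rowIndex = row.Index;
                break;
            }
        }
        // rowIndex now holds the row's index, or -1 if no match was found.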


  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today's entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we'll need some example data. In fact, we are going to use three tables today, each with the same two-column structure. Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL
            DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestMAX
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestTEXT
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id),
        );

        -- =============
        -- Load TestCHAR (about 3s)
        -- =============
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000)
                n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;

        -- ============
        -- Load TestMAX (about 3s)
        -- ============
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- =============
        -- Load TestTEXT (about 5s)
        -- =============
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- ==========
        -- Space used
        -- ==========
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

        CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150)
            T.id,
            T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let's go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TC.id,
            TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let's take a closer look at the statistics and the query plan this generates. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query's use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot 'cheat' and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I'll cover that in a future post.

    CHAR(3999) Performance Summary:
      - 5 seconds elapsed time
      - 243MB memory grant
      - 195MB tempdb usage
      - 192MB estimated sort set
      - 25,043 logical reads
      - Sort Warning

    Test 2 – VARCHAR(MAX)

    We'll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TM.id,
            TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don't draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
      - 8 seconds elapsed time
      - 245MB memory grant
      - 391MB tempdb usage
      - 193MB estimated sort set
      - 25,043 logical reads
      - Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TT.id,
            TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;
        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it's not hard to understand why:

    TEXT Performance Summary:
      - 0.5 seconds elapsed time
      - 9MB memory grant
      - 5MB tempdb usage
      - 5MB estimated sort set
      - 207 logical reads
      - 596 LOB logical reads
      - Sort warning

    SQL Server's memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn't matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don't really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn't SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The TEXT type, by contrast, is stored off-row by default, unless the 'text in row' table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the 'large value types out of row' table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let's look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id),
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC
            ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

        CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            MO.id,
            MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
      - 0.3 seconds elapsed time
      - 245MB memory grant
      - 0MB tempdb usage
      - 193MB estimated sort set
      - 207 logical reads
      - 446 LOB logical reads
      - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn't spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That's 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

        SET STATISTICS IO ON;

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestCHAR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id = ANY (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAX
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestTEXT
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAXOOR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

        Table 'TestCHAR'   logical reads 25511  lob logical reads   0
        Table 'TestMAX'    logical reads 25511  lob logical reads   0
        Table 'TestTEXT'   logical reads   412  lob logical reads 597
        Table 'TestMAXOOR' logical reads   413  lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

        Table 'TestCHAR'   logical reads 835  lob logical reads   0
        Table 'TestMAX'    logical reads 835  lob logical reads   0
        Table 'TestTEXT'   logical reads 686  lob logical reads 597
        Table 'TestMAXOOR' logical reads 686  lob logical reads 448

    With the new index, all four queries use the same query plan.

    Performance Summary:
      - 0.3 seconds elapsed time
      - 6MB memory grant
      - 0MB tempdb usage
      - 1MB sort set
      - 835 logical reads (CHAR, MAX)
      - 686 logical reads (TEXT, MAXOOR)
      - 597 LOB logical reads (TEXT)
      - 448 LOB logical reads (MAXOOR)
      - No sort warning

    I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: many developers might not have tuned our starting example query at all. 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either.

    The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

    By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did, because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch...

    This post is dedicated to the people of Christchurch, New Zealand.

    © 2011 Paul White
    email: @[email protected]
    twitter: @SQL_Kiwi


  • How can I find touch typing lessons for middle-row words only?

    - by user1838032
    I am learning touch typing and want to practice step by step. Is there a site with the option to select keys and then get a lesson for only those selected keys? I mean, I select the keys on the keyboard and the system prepares a lesson with random combinations of only those keys. Currently I want to practice the keys asdf gh jkl; but I am not able to find practice drills for that whole row only.


  • Create an Asp.net Gridview with Checkbox in each row

    - by ybbest
    One of the frequent requirements for an ASP.NET GridView is to add a checkbox in each row, plus a header checkbox to select all the items, like the GridView below. This can be easily achieved using jQuery. You can find the complete source code here.

        $(document).ready(function () {
            $('input[name$="CDSelectAll"]').click(function () {
                if ($(this).attr("checked")) {
                    $('input[name$="CDSelect"]').attr('checked', 'checked');
                } else {
                    $('input[name$="CDSelect"]').removeAttr('checked');
                }
            });
        });
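
    For reference, matching GridView markup might look like the following; this is an assumption reconstructed from the control names the script targets (CDSelectAll, CDSelect), not part of the original post:

        <asp:TemplateField>
            <HeaderTemplate>
                <asp:CheckBox ID="CDSelectAll" runat="server" />
            </HeaderTemplate>
            <ItemTemplate>
                <asp:CheckBox ID="CDSelect" runat="server" />
            </ItemTemplate>
        </asp:TemplateField>

    The name$= ("ends with") selectors work because ASP.NET renders each checkbox's name attribute with the server ID as its final segment.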


  • Use alpha or opacity on a table row using CSS [migrated]

    - by mserin
    I have a CSS stylesheet for a webpage. The webpage has a table with a background color of white (set on the rows, not the table). I would like to set the opacity or alpha of that background to 50%. I have tried many variations with no luck. A typical row in the HTML file is:

        <tr>
            <td>&nbsp;</td>
            <td>Twitter</td>
        </tr>

    The CSS for table rows (which works perfectly) is:

        tr {
            font-family: Arial, Helvetica, sans-serif;
            background: rgb(255, 255, 255);
        }

    To get the alpha, I tried:

        tr {
            font-family: Arial, Helvetica, sans-serif;
            background-color: rgba(255, 255, 255, 0.5);
        }

    I have also tried background-color-opacity: 0.5. Any other suggestions?
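
    For what it's worth, the rgba() attempt above is the standard way to do this; 'background-color-opacity' is not a CSS property, and the separate 'opacity' property would fade the row's text along with its background. A minimal sketch with an opaque fallback for browsers that predate rgba():

        tr {
            font-family: Arial, Helvetica, sans-serif;
            background-color: rgb(255, 255, 255);       /* fallback: opaque white    */
            background-color: rgba(255, 255, 255, 0.5); /* 50% alpha where supported */
        }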


  • SSIS Design Pattern: Producing a Footer Row

    - by andyleonard
    The following is an excerpt from SSIS Design Patterns (now available in the UK!), Chapter 7, Flat File Source Patterns. The only planned appearance of all five authors presenting on SSIS Design Patterns is the day-long pre-conference session at the PASS Summit 2012. Register today. Let's look at producing a footer row and adding it to the data file. For this pattern, we will leverage project and package parameters. We will also leverage the Parent-Child pattern, which will be...


  • It's Time to Chart Your Course with Oracle HCM Applications - Featuring Row Henson

    - by jay.richey
    Total human capital costs average nearly 70% of operating expenses. There's never been more pressure on HR professionals to deliver mission-critical programs to retain rising stars, develop core performers, and cut costs from workforce operations. Join Row Henson, Oracle HCM Fellow, and Scott Ewart, Senior Director of Product Marketing, to find out:

      - What real-world strategies HR practitioners and experts are prioritizing today to optimize their investment in people
      - Why nearly two-thirds of PeopleSoft and E-Business Suite HCM customers have chosen to upgrade to the latest releases, and where Oracle's strategic product roadmaps are headed
      - How Fusion HCM will introduce a new standard for work and innovation - not only for the HR professional, but for every single employee and manager in your business

    Date: January 20th, 2011 at 9:00 PST / 12:00 EST. Register now!


  • Returning multiple results from one row [migrated]

    - by krock
    I have a set of MySQL data similar to the following:

        | id | type       | start      | end        |
        |----|------------|------------|------------|
        | 1  | event      | 2011-11-01 | 2012-01-02 |
        | 2  | showing    | 2012-11-04 | 2012-11-04 |
        | 3  | conference | 2012-12-01 | 2012-12-04 |
        | 4  | event      | 2012-01-01 | 2012-01-01 |

    I want to retrieve events within a certain date range, but I also want to return individual results for each row that has a time span of more than one day. What's the best way to achieve this? Using PHP is a possibility, but I wanted to avoid doing this because I will need to take pagination into account, etc. Any ideas would be greatly appreciated.
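
    One SQL-side approach is to expand each multi-day row into one row per day using a helper table of integers. A minimal sketch, assuming "individual results" means one row per day covered, the question's columns, a table named events, and a pre-populated helper table numbers(n) holding 0, 1, 2, ... (both table names are placeholders):

        SELECT e.id,
               e.type,
               DATE_ADD(e.start, INTERVAL nb.n DAY) AS day
        FROM   events  AS e
        JOIN   numbers AS nb
               ON nb.n <= DATEDIFF(e.end, e.start)   -- one join row per day spanned
        WHERE  DATE_ADD(e.start, INTERVAL nb.n DAY)
               BETWEEN '2012-01-01' AND '2012-12-31' -- example date range
        ORDER  BY e.id, day;

    Single-day rows (start = end) come out once; multi-day rows come out once per day, and pagination can then stay entirely in MySQL.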


  • Cursor-Killing: Accessing Data in the Next Row

    Cursors are considered by many to be the bane of good T-SQL. What are the best ways to avoid iterative T-SQL and to write queries that look and perform beautifully? This first part in an ongoing cursor-killing series handles inter-row analysis.
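
    As a flavour of the idea, a hedged sketch of cursor-free "next row" access using the LEAD window function available in SQL Server 2012 and later (the table and column names are invented for illustration):

        SELECT r.ReadingTime,
               r.MeterValue,
               LEAD(r.MeterValue) OVER (ORDER BY r.ReadingTime) AS NextValue
        FROM   dbo.MeterReadings AS r;  -- hypothetical table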


  • Row Versioning Concurrency in SQL Server

    The optimistic concurrency model assumes that several concurrent transactions can usually complete without interfering with each other, and therefore do not require draconian locking on the resources they access. SQL Server 2005 and later implement a form of this model called row versioning concurrency. It works by remembering the value of the data at the start of the transaction and checking that no other transaction has modified it before committing. If this optimism is justified for the pattern of activity within a database, it can improve performance by greatly reducing blocking. Kalen Delaney explains how it works in SQL Server.
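
    For reference, row versioning is switched on per database; a minimal sketch, assuming a database named MyDatabase (the READ_COMMITTED_SNAPSHOT change requires no other active connections in the database):

        -- Enable full snapshot isolation (opt-in per transaction).
        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- Make the default READ COMMITTED level use row versions.
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;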


  • How to configure Apache to run in front of IIS (.net+php)

    - by Chris
    Hi. So basically I have IIS 6 running a .NET application on port 80. What I want is to insert PHP code into this application. I found a forum suggestion to run Apache+PHP in front of the IIS server so that the .NET code is interpreted by IIS and the result is then passed to Apache, which interprets the PHP code. Is this possible? If yes, how do I configure the Apache server? Thanks guys :)
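
    The routing part is possible. A minimal sketch of the Apache side, assuming IIS has been rebound to port 8080 and mod_rewrite plus mod_proxy are loaded (the port and names are placeholders):

        Listen 80
        RewriteEngine On

        # Serve *.php locally with whatever PHP handler is installed;
        # proxy every other request through to IIS on port 8080.
        RewriteCond %{REQUEST_URI} !\.php$
        RewriteRule ^/(.*)$ http://localhost:8080/$1 [P,L]
        ProxyPassReverse / http://localhost:8080/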

