Search Results

Search found 235 results on 10 pages for 'leaf'.

Page 7 of 10

  • XML databinding to TreeView (or Tab control), bind attribute based on different attribute

    - by Number8
    Hello, I have some xml: <Test> <thing location="home" status="good"/> <thing location="work" status="bad"/> <thing location="mountains" status="good"/> </Test> The leaves on the TreeView are the values of the status attribute; the nodes will be the value of the location attribute.
    +--bad
    |     +--work
    +--good
          +--home
          +--mountains
    Currently, I populate the TreeView (or Tab control) manually, iterating through the xml, adding the nodes to the appropriate leaf. Can this be done via databinding? I'm guessing a Converter will be involved... Thanks for any advice.

    Read the article

  • django admin how to limit selectbox values

    - by SledgehammerPL
    model: class Store(models.Model): name = models.CharField(max_length = 20) class Admin: pass def __unicode__(self): return self.name class Stock(Store): products = models.ManyToManyField(Product) class Admin: pass def __unicode__(self): return self.name class Product(models.Model): name = models.CharField(max_length = 128, unique = True) parent = models.ForeignKey('self', null = True, blank = True, related_name='children') (...) def __unicode__(self): return self.name mptt.register(Product, order_insertion_by = ['name']) admin.py: from bar.drinkstore.models import Store, Stock from django.contrib import admin admin.site.register(Store) admin.site.register(Stock) Now when I look at the admin site I can select any product from the list. But I'd like to have a limited choice - only leaves. In the mptt class there's a function: is_leaf_node() -- returns True if the model instance is a leaf node (it has no children), False otherwise. But I have no idea how to connect it. I'm trying to make a subclass in admin.py: from bar.drinkstore.models import Store, Stock from django.contrib import admin admin.site.register(Store) class StockAdmin(admin.ModelAdmin): def queryset(self, request): qs = super(StockAdmin, self).queryset(request).filter(ihavenoideawhatfilter) admin.site.register(Stock, StockAdmin) but I'm not sure if it's the right way, or what filter to set.
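
    A common way to restrict what the admin offers in that widget (rather than filtering the changelist, as the queryset() override would) is to override formfield_for_manytomany on the ModelAdmin. A sketch, assuming the Product model and the related_name='children' from the models above:

      # admin.py -- sketch only, not tested against this exact project
      from django.contrib import admin
      from bar.drinkstore.models import Store, Stock, Product

      class StockAdmin(admin.ModelAdmin):
          def formfield_for_manytomany(self, db_field, request, **kwargs):
              if db_field.name == "products":
                  # leaf products only: no other product points at them as its parent
                  # (with django-mptt you could equally filter on rght=F('lft') + 1)
                  kwargs["queryset"] = Product.objects.filter(children__isnull=True)
              return super(StockAdmin, self).formfield_for_manytomany(db_field, request, **kwargs)

      admin.site.register(Store)
      admin.site.register(Stock, StockAdmin)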

    Read the article

  • What is the "official" place for community support for the Mere Mortals .NET framework?

    - by Ryan Hayes
    My team is using the Mere Mortals .NET framework from Oak Leaf. Being used to working with primarily open source software, I found it excruciatingly painful to find ANY community support for MM.NET. When I asked if there was any, the only place I was given to look for support was Universal Thread, which is a site that requires a membership for search and archived questions. It seems like a third-party, pay-for site should not be the primary source of support for anything like this, especially MM.NET, which costs $700 per developer. It doesn't seem to me like an entire community around MM.NET would choose to all pay on top of the license just to use a forum. If not Universal Thread, then what is the "official" place to find support for the Mere Mortals .NET framework?

    Read the article

  • Calculating depth and descendants of tree

    - by yuudachi
    Can you guys help me with the algorithm to do these things? I have preorder, inorder, and postorder implemented, and I am given the hint to traverse the tree with one of these orders. I am using dotty to label (or "visit") the nodes. Depth is the number of edges from the root to the bottom leaf, so every time I move, I add +1 to the depth? Something like that? No idea about the algorithm for descendants. They are asking about the number of nodes a specific node has under itself. These are normal trees btw.
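
    Both numbers fall out of a single recursive, post-order pass; a minimal Python sketch, assuming each node exposes a children list (empty for a leaf):

      # Sketch only: Node is assumed to have a .children list of child nodes.
      def depth(node):
          # depth = number of edges on the longest root-to-leaf path
          if not node.children:
              return 0
          return 1 + max(depth(child) for child in node.children)

      def descendants(node):
          # each child contributes itself (the +1) plus its own descendants
          return sum(1 + descendants(child) for child in node.children)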

    Read the article

  • postgresql syntax while exists loop

    - by veilig
    I'm working on a function from Joe Celko's book, Trees and Hierarchies in SQL for Smarties. I'm trying to delete a subtree from an adjacency list, but part of my function is not working yet. WHILE EXISTS -- mark leaf nodes (SELECT * FROM OrgChart WHERE boss_emp_nbr = -99999 AND emp_nbr > -99999) LOOP -- get list of next level subordinates DELETE FROM WorkingTable; INSERT INTO WorkingTable SELECT emp_nbr FROM OrgChart WHERE boss_emp_nbr = -99999; -- mark next level of subordinates UPDATE OrgChart SET emp_nbr = -99999 WHERE boss_emp_nbr IN (SELECT emp_nbr FROM WorkingTable); END LOOP; My question: is WHILE EXISTS correct for use with postgresql? I appear to be stumbling and getting caught in an infinite loop in this part. Perhaps there is a more correct syntax I am unaware of.

    Read the article

  • Find all paths from root to leaves of tree in Scheme

    - by grifaton
    Given a tree, I want to find the paths from the root to each leaf. So, for this tree:
          D
         /
        B
       / \
      A   E
       \
        C-F-G
    has the following paths from root (A) to leaves (D, E, G): (A B D), (A B E), (A C F G) If I represent the tree above as (A (B D E) (C (F G))) then the function g does the trick: (define (g tree) (cond ((empty? tree) '()) ((pair? tree) (map (lambda (path) (if (pair? path) (cons (car tree) path) (cons (car tree) (list path)))) (map2 g (cdr tree)))) (else (list tree)))) (define (map2 fn lst) (if (empty? lst) '() (append (fn (car lst)) (map2 fn (cdr lst))))) But this looks all wrong. I've not had to do this kind of thinking for a while, but I feel there should be a neater way of doing it. Any ideas for a better solution (in any language) would be appreciated.
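
    Since any language is fair game, here is one compact way to phrase it in Python, using nested tuples for the same (A (B D E) (C (F G))) representation:

      # Sketch: a leaf is a bare value, an internal node is (label, child, child, ...)
      def paths(tree):
          if not isinstance(tree, tuple):      # a leaf
              return [[tree]]
          root, children = tree[0], tree[1:]
          return [[root] + p for child in children for p in paths(child)]

      print(paths(('A', ('B', 'D', 'E'), ('C', ('F', 'G')))))
      # [['A', 'B', 'D'], ['A', 'B', 'E'], ['A', 'C', 'F', 'G']]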

    Read the article

  • What is a performant way to 'tree-walk' through my Entity Framework data?

    - by Greg
    Hi, I have an Entity Framework design with a few tables that define a "graph". So there can be a large chain of relationships between objects in the few tables via the concept of parent/child relationships. What is a performant way to tree-walk through my Entity Framework data? That is, I assume I wouldn't want to load the full set of all NODES and RELATIONSHIPS from the database for the purpose of walking the tree, where the end result may only be identifying leaf nodes? Or would this be OK with the way lazy loading may work at the column/parameter level? Otherwise, how could I load just the skeleton of the objects and then, when I need to refer to any attributes, have them lazy load?
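
    One option is to project just the keys into memory and walk the tree there, loading full entities only for the rows you end up needing; a sketch, with the entity and property names made up for illustration:

      // Sketch: pull only Id/ParentId (no wide columns), then find leaves in memory.
      var edges = ctx.Nodes
                     .Select(n => new { n.Id, n.ParentId })
                     .ToList();

      var parents = new HashSet<int>(edges.Where(e => e.ParentId.HasValue)
                                          .Select(e => e.ParentId.Value));

      var leafIds = edges.Where(e => !parents.Contains(e.Id))
                         .Select(e => e.Id)
                         .ToList();          // fetch full entities for these ids as needed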

    Read the article

  • JXTreeTable and BorderHighlighter Drawing Border on All Rows

    - by Kevin Rubin
    I'm using a BorderHighlighter on my JXTreeTable to put a border above each of the table cells on non-leaf rows to give a more clear visual separator for users. The problem is that when I expand the hierarchical column, all cells in the hierarchical column, for all rows, include the Border from the Highlighter. The other columns are displaying just fine. My BorderHighlighter is defined like this: Highlighter topHighlighter = new BorderHighlighter(new HighlightPredicate() { @Override public boolean isHighlighted(Component component, ComponentAdapter adapter) { TreePath path = treeTable.getPathForRow(adapter.row); TreeTableModel model = treeTable.getTreeTableModel(); Boolean isParent = !model.isLeaf(path.getLastPathComponent()); return isParent; } }, BorderFactory.createMatteBorder(2, 0, 0, 0, Color.RED)); I'm using SwingX 1.6.5.

    Read the article

  • CheckedState inconsistent in TreeView using Find method

    - by M Murphy
    Hello, I'm using a TreeView control in a .NET 3.5 C# project and I'm noticing inconsistencies in the Checked property when I use the Find method of the TreeView control. I'll check some leaves (nodes with no children) and then click a button. Inside the button click I'm using the Find method of the TreeView control to locate the node and interrogate the value of the Checked property. On screen the leaf will be checked, but according to the Checked property of the node returned from the Find method it's not checked. Has anyone else run into this? Thanks
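
    One thing worth checking: TreeNodeCollection.Find matches on each node's Name (the key given when the node was added), not on its Text, and it returns every match, so the node coming back may not be the one that is visibly checked. A quick diagnostic sketch, with treeView1 and nodeKey as stand-ins:

      // Sketch: list every node Find returns for the key and its Checked state.
      TreeNode[] matches = treeView1.Nodes.Find(nodeKey, true);   // true = search all children
      foreach (TreeNode node in matches)
      {
          Console.WriteLine("{0}: Checked = {1}", node.FullPath, node.Checked);
      }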

    Read the article

  • Right rotate of tree in Haskell: how does it work?

    - by Roman
    I don't know Haskell syntax, but I know some FP concepts (like algebraic data types, pattern matching, higher-order functions etc.). Can someone please explain what this code means: data Tree a = Leaf a | Fork a (Tree a) (Tree a) rotateR tree = case tree of Fork q (Fork p a b) c -> Fork p a (Fork q b c) As I understand it, the first line is something like a Tree-type declaration (but I don't understand it exactly). The second line includes pattern matching (and I don't understand why we need to use pattern matching here). And the third line does something absolutely unreadable for a non-Haskell developer. I've found a definition of fork as fork (f,g) x = (f x, g x) but I can't move any further.
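
    For reference, here is a self-contained sketch of the same declaration with an ordinary type variable, plus a catch-all equation so trees that don't match the pattern are returned unchanged. Note that Fork here is just a data constructor of the tree type, unrelated to the tupling function fork (f,g) x = (f x, g x):

      data Tree a = Leaf a | Fork a (Tree a) (Tree a)
        deriving Show

      rotateR :: Tree a -> Tree a
      rotateR (Fork q (Fork p a b) c) = Fork p a (Fork q b c)   -- left child becomes the new root
      rotateR t                       = t                       -- anything else is left alone

      main :: IO ()
      main = print (rotateR (Fork 2 (Fork 1 (Leaf 0) (Leaf 10)) (Leaf 30)))
      -- Fork 1 (Leaf 0) (Fork 2 (Leaf 10) (Leaf 30))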

    Read the article

  • Find directories not containing a specific directory

    - by Morgan ARR Allen
    Been searching around for a bit and cannot find a solution for this one. I guess I'm looking for a leaf-directory by name. In this example I'd like to get a list of directories called 'modules' that do NOT have a subdirectory called 'modules':
    modules/package1/modules/spackage1
    modules/package1/modules/spackage2
    modules/package1/modules/spackage3/modules
    modules/package1/modules/spackage3/modules/spackage1
    modules/package2/modules/
    The list I desire would contain
    modules/package1/modules/spackage3/modules/
    modules/package2/modules/
    All the directories named 'modules' that do not have a subdirectory called 'modules'. I started by trying something like this, with no luck: find . -name modules \! -exec sh -c 'find -name modules' \; -exec works on exit code, okay, let's pass the count as the exit code: find . -name modules -exec sh -c 'exit $(find {} -name modules|grep -n ""|tail -n1|cut -d: -f1)' \; This should take the count of each subdirectory called modules and exit with it. No such luck.
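
    One way to express the test directly is to negate an inner find that looks for a deeper 'modules' directory; a sketch (-quit is GNU find, drop it on other platforms):

      # For each directory named 'modules', print it only when an inner find
      # discovers no directory named 'modules' anywhere beneath it.
      find . -type d -name modules \
        \! -exec sh -c 'find "$1" -mindepth 1 -type d -name modules -print -quit | grep -q .' sh {} \; \
        -print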

    Read the article

  • Implementation of a distance matrix of a binary tree that is given in the adjacency-list representation

    - by Denise Giubilei
    Given this problem I need an O(n²) implementation of this algorithm: "Let v be an arbitrary leaf in a binary tree T, and w be the only vertex connected to v. If we remove v, we are left with a smaller tree T'. The distance between v and any vertex u in T' is 1 plus the distance between w and u." This is a problem and solution from one of Manber's exercises (Exercise 5.12 from U. Manber, Algorithms: A Creative Approach, Addison-Wesley (1989)). The thing is that I can't handle the adjacency-list representation properly, so that the implementation of this algorithm actually runs in O(n²). Any ideas on how the implementation should look? Thanks.
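
    A fairly direct transcription of that recurrence into Python, treating the adjacency list as a dict of neighbour sets (a sketch only: it mutates its argument and recurses once per vertex, so pass a copy and raise the recursion limit for large trees). Finding a leaf and filling in v's row and column are both O(n) per removal, giving O(n²) overall:

      def distance_matrix(adj):
          # adj: {vertex: set(neighbours)} for a tree; returns {u: {v: dist}}
          nodes = list(adj)
          if len(nodes) == 1:
              return {nodes[0]: {nodes[0]: 0}}
          v = next(n for n in nodes if len(adj[n]) == 1)   # an arbitrary leaf
          w = next(iter(adj[v]))                           # its only neighbour
          adj[w].discard(v)                                # remove v, leaving T'
          dist = distance_matrix({n: nbrs for n, nbrs in adj.items() if n != v})
          dist[v] = {u: 1 + dist[w][u] for u in dist}      # dist(v, u) = 1 + dist(w, u)
          for u in dist[v]:
              dist[u][v] = dist[v][u]                      # keep the matrix symmetric
          dist[v][v] = 0
          return dist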

    Read the article

  • R matrix handling expressions aren't evaluated in a function or a "for" loop - Column extract doesn't seem to work

    - by Sal Leggio
    I have an R matrix named ddd. When I enter this, everything works fine: i <- 1 shapiro.test(ddd[,y]) ad.test(ddd[,y]) stem(ddd[,y]) print(y) The calls to Shapiro-Wilk, Anderson-Darling, and stem all work, and extract the same column. If I put this code in a "for" loop, the calls to Shapiro-Wilk and Anderson-Darling stop working, while the stem & leaf call and the print call continue to work. for (y in 7:10) { shapiro.test(ddd[,y]) ad.test(ddd[,y]) stem(ddd[,y]) print(y) } The decimal point is 1 digit(s) to the right of the | 0 | 0 0 | 899999 1 | 0 [1] 7 The same thing happens if I try to write a function. SW & AD do not work. The other calls do. D <- function (y) { + shapiro.test(ddd[,y]) + ad.test(ddd[,y]) + stem(ddd[,y]) + print(y) } D(9) The decimal point is at the | 9 | 000 9 | 10 | 00000 [1] 9 Why don't all the calls behave the same way?
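
    The tests do run; their return values just aren't auto-printed inside a for loop or a function the way they are at the top level (stem() writes its output itself, which is why it still appears). Wrapping the calls in print() makes them all show; a sketch of the loop:

      for (y in 7:10) {
        print(shapiro.test(ddd[, y]))   # wrap in print() so the htest result is shown
        print(ad.test(ddd[, y]))        # ad.test() assumed to come from the nortest package
        stem(ddd[, y])                  # stem() prints to the console on its own
        print(y)
      }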

    Read the article

  • How to hash and check for equality of objects with circular references

    - by mfya
    I have a cyclic graph-like structure that is represented by Node objects. A Node is either a scalar value (leaf) or a list of n >= 1 Nodes (inner node). Because of the possible circular references, I cannot simply use a recursive HashCode() function that combines the HashCode() of all child nodes: it would end up in infinite recursion. While the HashCode() part seems at least to be doable by flagging and ignoring already-visited nodes, I'm having some trouble thinking of a working and efficient algorithm for Equals(). To my surprise I did not find any useful information about this, but I'm sure many smart people have thought about good ways to solve these problems...right? Example (python): A = [ 1, 2, None ]; A[2] = A B = [ 1, 2, None ]; B[2] = B A is equal to B, because it represents exactly the same graph. BTW, this question is not targeted at any specific language, but implementing hashCode() and equals() for the described Node object in Java would be a good practical example.
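
    One common approach is to compare pairs of nodes and treat a pair as equal whenever a comparison between them is already in progress, which is what lets the cycles terminate (a bisimulation-style check). A Python sketch for the list-or-scalar representation in the example; a hashCode() along the same lines would simply skip back-edges to nodes it has already visited:

      def graphs_equal(a, b, assumed=None):
          # Two nodes are assumed equal while a comparison between them is in flight.
          if assumed is None:
              assumed = set()
          if (id(a), id(b)) in assumed:
              return True
          a_node, b_node = isinstance(a, list), isinstance(b, list)
          if not a_node or not b_node:
              return not a_node and not b_node and a == b    # scalar leaves
          if len(a) != len(b):
              return False
          assumed.add((id(a), id(b)))
          return all(graphs_equal(x, y, assumed) for x, y in zip(a, b))

      A = [1, 2, None]; A[2] = A
      B = [1, 2, None]; B[2] = B
      print(graphs_equal(A, B))   # True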

    Read the article

  • Missing something with Reader monad - passing the damn thing around everywhere

    - by Richard Huxton
    I'm learning Haskell, getting to grips with the syntax, and have a rough grasp of what monads etc. are about, but I'm clearly missing something. In main I can read my config file and supply it as runReader (somefunc) myEnv just fine. But somefunc doesn't need access to the myEnv the reader supplies, nor do the next couple of functions in the chain. The function that needs something from myEnv is a tiny leaf function. So - how do I get access to the environment in a function without tagging all the intervening functions as (Reader Env)? That can't be right, because otherwise you'd just pass myEnv around in the first place. And passing unused parameters through multiple levels of functions is just ugly (isn't it?). There are plenty of examples I can find on the net, but they all seem to have only one level between runReader and accessing the environment.
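
    The usual answer is that the intervening functions do carry the Reader Env type, but only in their signatures; their bodies never mention the environment, so nothing is passed around by hand, and only the leaf calls asks. A minimal sketch, with Env and its field made up for illustration:

      import Control.Monad.Reader

      data Env = Env { dbName :: String }     -- hypothetical config record

      leafFn :: Reader Env String
      leafFn = do
        db <- asks dbName                     -- only the leaf inspects the environment
        return ("connecting to " ++ db)

      middleFn :: Reader Env String           -- no explicit Env argument, just the type
      middleFn = leafFn

      main :: IO ()
      main = putStrLn (runReader middleFn (Env "mydb"))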

    Read the article

  • detect where a redirect wants to go with jQuery

    - by Markus
    Hi there, I have a page with a flash plugin. If I'm on that page, I want to post a message if the user wants to leave it without saving: if(/containers/.test(document.location.href)){ window.onbeforeunload = function(){ return exit_text; }; } This works fine, except when I save the content in the flash plugin. In that case I don't want to show that message. How can I detect the url within the onbeforeunload method? That way I can decide when to post the message! Thanks Markus
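
    Browsers don't expose the destination URL inside onbeforeunload, so the usual workaround is to set a flag when the Flash plugin reports a successful save and suppress the prompt in that case; a sketch, with onFlashSaved standing in for whatever callback the plugin exposes (via ExternalInterface, for example):

      var contentSaved = false;

      function onFlashSaved() {            // hypothetical: call this from the plugin's save callback
        contentSaved = true;
      }

      if (/containers/.test(document.location.href)) {
        window.onbeforeunload = function () {
          if (!contentSaved) {
            return exit_text;              // exit_text defined elsewhere, as in the question
          }
          // returning nothing (undefined) suppresses the confirmation dialog
        };
      }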

    Read the article

  • how to read csv file in jquery using codeigniter framework

    - by webghost
    suppose this is my csv file:
    empId,lastName,firstName,middleName,street1,street2,city,state,zip,gender,birthDate,ssn,empStatus,joinDate,workStation,location,custom1,workState,salary,payFrequency,FITWStatus,FITWExemptions,DD1Routing,DD1Account,DD1Amount,DD1AmountCode,DD1Checking,DD2Routing,DD2Account,DD2Amount,DD2AmountCode,DD2Checking
    1,Dela Cruz,Juano,Santos,,,,,,1,,,Part Time Internship,, asd Division, Makati,one, asd,150,Bi Weekly,Not Applicable,100,,,,,,1234,9876,100,SAVINGS,BLANK
    3,Palogan,Ralph,,,,,,,1,11-Mar-11,,Full Time Contract,2-Mar-11, sdf Department, pasay,, ,,,Not Applicable,,,,,,,,,,,
    5,San,Goku,,,,hidden leaf,,,1,11-Mar-11,,,,,,,,,,Not Applicable,0,,,,,,,,,,
    This is my form: <label>Choose File:</label><font color="#FF0000">*</font> <input type="file" name="file" id="file" /> <input type="button" id="importButton" value="Import" name="importButton" /> How do I read the data in the csv and store it in a mysql database (codeigniter)? Any example code on how to do it?

    Read the article

  • Python - Why ever use SHA1 when SHA512 is more secure?

    - by orokusaki
    I don't mean for this to be a debate, but I'm trying to understand the technical rationale behind why so many apps use SHA1, when SHA512 is more secure. Perhaps it's simply for backwards compatibility. Besides the obvious larger size (128 chars vs 40), or slight speed differences, is there any other reason why folks use the former? Also, SHA-1 I believe was first cracked by a VCR's processor years ago. Has anyone cracked 512 yet (perhaps with a leaf blower), or is it still safe to use without salting?

    Read the article

  • Next Identity Key LINQ + SQL Server

    - by user569347
    To represent our course tree structure in our Linq Dataclasses we have 2 columns that could potentially be the same as the PK. My problem is that if I want to Insert a new record and populate 2 other columns with the PK that was generated, there is no way I can get the next identity and avoid conflicts with other administrators who might be doing the same insert at the same time. Case: A Leaf node has right_id and left_id = itself (prereq_id). dbo.pre_req: prereq_id left_id right_id op_id course_id is_head is_coreq is_enforced parent_course_id And I basically want to do this: pre_req rec = new pre_req { left_id = prereq_id, right_id = prereq_id, op_id = 3, course_id = query.course_id, is_head = true, is_coreq = false, parent_course_id = curCourse.course_id }; db.courses.InsertOnSubmit(rec); try { db.SubmitChanges(); } Any way to solve my dilemma? Thanks!

    Read the article

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who also blogs on entertainment technology topics at iwantsky.com. Gone are the days when you need to squint to be able to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf on Animal Planet can be an exhilarating experience as you get to see not only the slightest movement but also the demarcation line between the insect's head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and the tinkering hands of the scientists that have designed the modern world's HDTV. What is HDTV and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap. From the grainy black and whites, TV viewing moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comfort of their own home. With HDTV the viewer is allowed to have a better viewing range. With Standard (SD) TV, the viewer has to be at a distance that is from 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images as it is possible to sit at a distance that is 1.5 to 3 times the size of the screen without noticing any image pixelation. Although HDTV appears to be a fairly new innovation, this system has actually existed in various forms for years. Development of the HDTV was started in Europe as early as the 1940s. However, NTSC and PAL/SECAM, the two analog TV standards, became dominant and popular worldwide. The analog TV was replaced by the digital TV platform in the 1990s. Even during the analog era, attempts had been made to develop HDTV. Japan came out with the MUSE system; however, due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition. AT&T, ATRC, MIT and Zenith combined forces. In 1993, a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the broadcast system of HDTV was developed. In 1995, the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article, titled Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server Instance. This function takes four input parameters which are (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let’s have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function. SELECT database_id, object_id, index_id, partition_number, leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count, nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count, range_scan_count, forwarded_fetch_count, row_lock_count, row_lock_wait_count, page_lock_count, page_lock_wait_count, Index_lock_promotion_attempt_count, index_lock_promotion_count, page_compression_attempt_count, page_compression_success_count FROM sys.dm_db_index_operational_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL) The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id so my result set only shows those records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels. The nonleaf levels would refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as "ghosted" and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned from a forwarding row pointer. The row_lock_count and row_lock_wait_count represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock respectively. The page_lock_count and page_lock_wait_count represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock respectively. The index_lock_promotion_attempt_count represents the number of times the database engine has attempted to promote a lock to the index level. The index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly, the page_compression_attempt_count and page_compression_success_count represent how many times a page was attempted to be compressed and how many times the attempt was successful. As you can see there is a ton of information returned from this DMV. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided you with good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis on the operational stats of your indexes, this is not only a good place to start, but more than likely the best place. 
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals to SQL Server and how the engine works, SQL Server only works with objects that are in memory (buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache it. Caching takes place so that the page can be reused again, which prevents the need for expensive physical reads. To better illustrate this DMV, let's query it against our AdventureWorks2012 database and view the result set. SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012') The first column returned from this result set is the database_id column which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screenshot above you see I have 3 distinct types of pages in my buffer pool: Index, Data, and IAM pages. Index pages are pages that are used to build the Root and Intermediate levels of a B-Tree. A Data page would represent the actual leaf pages of a clustered index which contain the actual data for the table. Without getting into too much detail, an IAM page is an Index Allocation Map page, which tracks GAM (Global Allocation Map) pages, which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes column tells you how much of a given data page is still available; remember, pages are 8K in size. The is_modified column signifies whether or not a page has been changed since it was read into memory, i.e. a dirty page. The numa_node column represents the Nonuniform memory access node for the buffer. Lastly, the read_microsec column tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what types of pages are currently in your buffer pool. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article
