Search Results

Search found 3518 results on 141 pages for 'smooth operator'.

Page 39/141 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • How to achieve reliable Gigabit Ethernet Link with my Acer Aspire Revo R3610?

    - by The Operator
    I want to stream HD movies over my wired Gigabit LAN from my PC to my Acer Aspire Revo R3610. It's connected with a 3ft Cat5e patch cable to my Netgear GS605v2 Switch. The PC acting as File Server is connected at 1Gbps to the Switch. Network driver options are set to defaults, including automatic speed/duplex negotiation on both machines. The Revo will not connect to my Network Switch at 1Gbps - the OS reports that it reverts to 100Mbps either shortly after connection or immediately upon connection. Through a process of elimination (trying different drivers, patch cables, ports on the switch, and other 1Gbps-capable devices connected to the Network switch which successfully achieve 1Gbps links and performance) I have drawn the conclusion there is either a Hardware or Software (Driver) issue with the Revo itself. I have performed tests using Windows 7 and Ubuntu 9.10. Can anyone offer insight on Gigabit Ethernet with the Revo?

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not the be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that the a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query. Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. 
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. The details about whether or not a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like). Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
At some point I’ll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley

    Read the article

  • How does a template class inherit another template class?

    - by hkBattousai
    I have a "SquareMatrix" template class which inherits "Matrix" template class, like below: SquareMatrix.h: #ifndef SQUAREMATRIX_H #define SQUAREMATRIX_H #include "Matrix.h" template <class T> class SquareMatrix : public Matrix<T> { public: T GetDeterminant(); }; template <class T> // line 49 T SquareMatrix<T>::GetDeterminant() { T t = 0; // Error: Identifier "T" is undefined // line 52 return t; // Error: Expected a declaration // line 53 } // Error: Expected a declaration // line 54 #endif I commented out all other lines, the files contents are exactly as above. I receive these error messages: LINE 49: IntelliSense: expected a declaration LINE 52: IntelliSense: expected a declaration LINE 53: IntelliSense: expected a declaration LINE 54: error C2039: 'GetDeterminant' : is not a member of 'SquareMatrix' LINE 54: IntelliSense: expected a declaration So, what is the correct way of inheriting a template class? And what is wrong with this code? The "Matrix" class: template <class T> class Matrix { public: Matrix(uint64_t unNumRows = 0, uint64_t unNumCols = 0); void GetDimensions(uint64_t & unNumRows, uint64_t & unNumCols) const; std::pair<uint64_t, uint64_t> GetDimensions() const; void SetDimensions(uint64_t unNumRows, uint64_t unNumCols); void SetDimensions(std::pair<uint64_t, uint64_t> Dimensions); uint64_t GetRowSize(); uint64_t GetColSize(); void SetElement(T dbElement, uint64_t unRow, uint64_t unCol); T & GetElement(uint64_t unRow, uint64_t unCol); //Matrix operator=(const Matrix & rhs); // Compiler generate this automatically Matrix operator+(const Matrix & rhs) const; Matrix operator-(const Matrix & rhs) const; Matrix operator*(const Matrix & rhs) const; Matrix & operator+=(const Matrix & rhs); Matrix & operator-=(const Matrix & rhs); Matrix & operator*=(const Matrix & rhs); T& operator()(uint64_t unRow, uint64_t unCol); const T& operator()(uint64_t unRow, uint64_t unCol) const; static Matrix Transpose (const Matrix & matrix); static Matrix Multiply (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Add (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Subtract (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Negate (const Matrix & matrix); // TO DO: static bool IsNull(const Matrix & matrix); static bool IsSquare(const Matrix & matrix); static bool IsFullRowRank(const Matrix & matrix); static bool IsFullColRank(const Matrix & matrix); // TO DO: static uint64_t GetRowRank(const Matrix & matrix); static uint64_t GetColRank(const Matrix & matrix); protected: std::vector<T> TheMatrix; uint64_t m_unRowSize; uint64_t m_unColSize; bool DoesElementExist(uint64_t unRow, uint64_t unCol); };
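
    For reference, the derivation syntax and the out-of-class member definition used above are the standard way for one class template to inherit from another, so the errors are probably coming from something outside this snippet (the contents of Matrix.h, for example). Below is a stripped-down sketch of just the pattern, with simplified hypothetical members borrowed from the question's Matrix; it also shows the related gotcha that members of a dependent base class must be reached through this-> (or a using-declaration):

    #include <cstdint>

    template <class T>
    class Matrix {
    public:
        Matrix(std::uint64_t rows = 0, std::uint64_t cols = 0)
            : m_unRowSize(rows), m_unColSize(cols) {}
    protected:
        std::uint64_t m_unRowSize;
        std::uint64_t m_unColSize;
    };

    template <class T>
    class SquareMatrix : public Matrix<T> {   // inherit from the base template
    public:
        T GetDeterminant();
    };

    template <class T>
    T SquareMatrix<T>::GetDeterminant()       // out-of-class member definition
    {
        // Members of the dependent base need this-> (or a using-declaration).
        std::uint64_t n = this->m_unRowSize;
        (void)n;                              // placeholder logic only
        return T();
    }

    int main()
    {
        SquareMatrix<double> m;
        m.GetDeterminant();
        return 0;
    }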

    Read the article

  • MySQL select by column value. Separate operator for columns.

    - by andy
    Hi all, I have a MySQL table like this:

    +-----+---------+-----------+-----------------+-------+
    | id  | item_id | item_type | field_name      | data  |
    +-----+---------+-----------+-----------------+-------+
    | 258 | 54      | page      | field_interests | 1     |
    | 257 | 54      | page      | field_interests | 0     |
    | 256 | 54      | page      | field_author    | value |
    +-----+---------+-----------+-----------------+-------+

    And I need to build a query like this:

    SELECT * FROM table
    WHERE `field_name`='field_author' AND `field_author.data` LIKE '%jo%'
      AND `field_name`='field_interests' AND `field_interests.data`='0'
      AND `field_name`='field_interests' AND `field_interests.data`='1'

    This is a sample query; MySQL can't do queries like that. I mean that SELECT * FROM table WHERE name='jonn' AND name='marry' will return 0 rows. Can anybody help me? Thanks.

    Read the article

  • Is it possible to create a new T-SQL Operator using CLR Code in SQL Server?

    - by Eoin Campbell
    I have a very simple CLR function for doing regex matching:

    public static SqlBoolean RegExMatch(SqlString input, SqlString pattern)
    {
        if (input.IsNull || pattern.IsNull)
            return SqlBoolean.False;
        return Regex.IsMatch(input.Value, pattern.Value, RegexOptions.IgnoreCase);
    }

    It allows me to write a SQL statement like:

    SELECT * FROM dbo.table1
    WHERE dbo.RegexMatch(column1, '[0-9][A-Z]') = 1 -- match entries in col1 like 1A, 2B etc...

    I'm just thinking it would be nice to reformulate that query so it could be called like:

    SELECT * FROM dbo.table1 WHERE column1 REGEXLIKE '[0-9][A-Z]'

    Is it possible to create new comparison operators using CLR code? (I'm guessing from my brief glance around the web that the answer is NO, but no harm asking.)

    Read the article

  • I (think) I want to use a Bitwise Operator to check the useraccountcontrol property!

    - by Jim
    Hello, here's some code:

    DirectorySearcher searcher = new DirectorySearcher();
    searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + lstUsers.SelectedItem.Text + "))";
    SearchResult result = searcher.FindOne();

    Within result.Properties["useraccountcontrol"] will be an item which will give me a value depending on the state of the account. For instance, a value of 66050 means I'm dealing with: a normal account, where the password does not expire, which has been disabled. Explanation here. What's the most concise way of finding out if my value "contains" the AccountDisable flag (which is 2)? Thanks in advance!

    Read the article

  • How can I overcome this error C2679: binary '>>' : no operator found which takes a right-hand operand

    - by hussein abdullah
    #include <iostream>
    using std::cout;
    using std::cin;
    using std::endl;
    #include <cstring>

    void initialize(char[], int*);
    void input(const char[], int&);
    void print(const char*, const int);
    void growOlder(const char[], int*);
    bool comparePeople(const char*, const int*, const char*, const int*);

    int main()
    {
        char name1[25];
        char name2[25];
        int age1;
        int age2;

        initialize(name1, &age1);
        initialize(name2, &age2);
        print(name1, age1);
        print(name2, age2);
        input(name1, age1);
        input(name2, age2);
        print(name1, age1);
        print(name2, age2);
        growOlder(name2, &age2);

        if (comparePeople(name1, &age1, name2, &age2))
            cout << "Both People have the same name and age " << endl;

        return 0;
    }

    void input(const char name[], int &age)
    {
        cout << "Enter a name :";
        cin >> name;
        cout << "Enter an age:";
        cin >> age;
        cout << endl;
    }

    void initialize(char name[], int *age)
    {
        name[0] = '\0';
        *age = 0;
    }

    void print(const char name[], const int age)
    {
        cout << "The Value stored in variable name is :" << name << endl
             << "The Value stored in variable age is :" << age << endl << endl;
    }

    void growOlder(const char name[], int *age)
    {
        cout << name << " has grown one year older\n\n";
        *age++;
    }

    bool comparePeople(const char *name1, const int *age1, const char *name2, const int *age2)
    {
        return (*age1 == *age2 && !strcmp(name1, name2));
    }
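
    The specific C2679 here is usually triggered by the line cin >> name inside input(): the parameter is declared const char name[], and there is no operator>> overload that can extract into a const buffer (the same applies to any const right-hand operand). A minimal sketch of the idea, trimmed to the one function and with the const dropped (hypothetical demo, not the full program):

    #include <iostream>

    void input(char name[], int& age)      // buffer is not const here
    {
        std::cout << "Enter a name :";
        std::cin >> name;                  // operator>> can write into char*, not const char*
        std::cout << "Enter an age:";
        std::cin >> age;
        std::cout << std::endl;
    }

    int main()
    {
        char name[25];
        int age = 0;
        input(name, age);
        std::cout << name << " is " << age << std::endl;
        return 0;
    }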

    Read the article

  • operator "new" returning a non-local heap pointer for only one class?

    - by KaluSingh Gabbar
    Language: C++. Platform: Windows Server 2003. I have an EXE calling a DLL, and when I allocate (new) the memory for class A (which is in the DLL) it returns me a non-local heap pointer. I try to new other classes which are in the DLL and "new" returns a valid heap pointer for them; it's only class A which is not being allocated properly. I am on Windows and validating the heap with this function call: _CrtIsValidHeapPointer((const void *) pPtr). I am seriously confused why this only happens with new-ing class A and no other class. (All Native Code)
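
    For what it is worth, _CrtIsValidHeapPointer only recognises pointers allocated by the heap of its own CRT instance, so two things worth checking are whether class A (unlike the other classes) overloads operator new, and whether the DLL and the EXE link against different CRT instances (one static, one dynamic, or different versions). A common way to avoid cross-module heap confusion is to let the DLL both create and destroy its own objects; a rough sketch with hypothetical names, collapsed into a single file for illustration (in the real project CreateA/DestroyA would be exported from the DLL):

    #include <cstdio>

    class A {
    public:
        int value;
    };

    // DLL side: allocation and deallocation both happen on the DLL's CRT heap.
    A*   CreateA()        { return new A(); }
    void DestroyA(A* p)   { delete p; }

    int main()            // stands in for the EXE
    {
        A* p = CreateA();
        p->value = 42;
        std::printf("%d\n", p->value);
        DestroyA(p);      // never 'delete p' in the EXE if the CRTs may differ
        return 0;
    }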

    Read the article

  • Why is there a sizeof... operator in C++0x?

    - by Motti
    I saw that @GMan implemented a version of sizeof... for variadic templates which (as far as I can tell) is equivalent to the built-in sizeof.... Doesn't this go against the design principle of not adding anything to the core language if it can be implemented as a library function [citation needed]?
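
    For context, the library emulation alluded to above is only a few lines; here is a sketch of one way it can be written (hypothetical count helper), checked against the built-in operator. Part of the usual argument for having the core-language form is that it works directly on the pack without any recursive template instantiations:

    #include <cstddef>

    // Count a parameter pack by recursion, in ordinary library code.
    template <typename... Ts> struct count;

    template <> struct count<> {
        static const std::size_t value = 0;
    };

    template <typename T, typename... Ts> struct count<T, Ts...> {
        static const std::size_t value = 1 + count<Ts...>::value;
    };

    template <typename... Ts>
    void check()
    {
        static_assert(count<Ts...>::value == sizeof...(Ts),
                      "emulation agrees with the built-in operator");
    }

    int main()
    {
        check<int, char, double>();
        return 0;
    }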

    Read the article

  • "OR" Operator must be placed at end of previous line? (unexpected tOROP)

    - by akonsu
    I am running Ruby 1.9. This is valid syntax (the || operators end each line):

    items = (data['DELETE'] || data['delete'] ||
             data['GET'] || data['get'] ||
             data['POST'] || data['post'])

    But this, with the || operators starting the continuation line, gives me an error:

    items = (data['DELETE'] || data['delete']
             || data['GET'] || data['get'] || data['POST'] || data['post'])

    t.rb:8: syntax error, unexpected tOROP, expecting ')'
             || data['GET'] || data['get'] |...
             ^

    Why?!

    Read the article

  • How to make negate_unary work with any type?

    - by Chan
    Hi, Following this question: How to negate a predicate function using operator ! in C++? I want to create an operator ! can work with any functor that inherited from unary_function. I tried: template<typename T> inline std::unary_negate<T> operator !( const T& pred ) { return std::not1( pred ); } The compiler complained: Error 5 error C2955: 'std::unary_function' : use of class template requires template argument list c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 223 1 Graphic Error 7 error C2451: conditional expression of type 'std::unary_negate<_Fn1>' is illegal c:\program files\microsoft visual studio 10.0\vc\include\ostream 529 1 Graphic Error 3 error C2146: syntax error : missing ',' before identifier 'argument_type' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 4 error C2065: 'argument_type' : undeclared identifier c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 2 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 222 1 Graphic Error 6 error C2039: 'argument_type' : is not a member of 'std::basic_ostream<_Elem,_Traits>::sentry' c:\program files\microsoft visual studio 10.0\vc\include\xfunctional 230 1 Graphic Any idea? Update Follow "templatetypedef" solution, I got new error: Error 3 error C2831: 'operator !' cannot have default parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic Error 2 error C2808: unary 'operator !' has too many formal parameters c:\visual studio 2010 projects\graphic\graphic\main.cpp 39 1 Graphic Error 4 error C2675: unary '!' : 'is_prime' does not define this operator or a conversion to a type acceptable to the predefined operator c:\visual studio 2010 projects\graphic\graphic\main.cpp 52 1 Graphic Update 1 Complete code: #include <iostream> #include <functional> #include <utility> #include <cmath> #include <algorithm> #include <iterator> #include <string> #include <boost/assign.hpp> #include <boost/assign/std/vector.hpp> #include <boost/assign/std/map.hpp> #include <boost/assign/std/set.hpp> #include <boost/assign/std/list.hpp> #include <boost/assign/std/stack.hpp> #include <boost/assign/std/deque.hpp> struct is_prime : std::unary_function<int, bool> { bool operator()( int n ) const { if( n < 2 ) return 0; if( n == 2 || n == 3 ) return 1; if( n % 2 == 0 || n % 3 == 0 ) return 0; int upper_bound = std::sqrt( static_cast<double>( n ) ); for( int pf = 5, step = 2; pf <= upper_bound; ) { if( n % pf == 0 ) return 0; pf += step; step = 6 - step; } return 1; } }; /* template<typename T> inline std::unary_negate<T> operator !( const T& pred, typename T::argument_type* dummy = 0 ) { return std::not1<T>( pred ); } */ inline std::unary_negate<is_prime> operator !( const is_prime& pred ) { return std::not1( pred ); } template<typename T> inline void print_con( const T& con, const std::string& ms = "", const std::string& sep = ", " ) { std::cout << ms << '\n'; std::copy( con.begin(), con.end(), std::ostream_iterator<typename T::value_type>( std::cout, sep.c_str() ) ); std::cout << "\n\n"; } int main() { using namespace boost::assign; std::vector<int> nums; nums += 1, 3, 5, 7, 9; nums.erase( remove_if( nums.begin(), nums.end(), !is_prime() ), nums.end() ); print_con( nums, "After remove all primes" ); } Thanks, Chan Nguyen
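
    Since an operator cannot take extra defaulted parameters (hence the C2831 in the update), one workaround is to move the constraint into the return type, so the template only takes part in overload resolution for adaptable unary functors and no longer collides with unrelated types such as stream sentries. A sketch along those lines, assuming the C++0x <type_traits> additions (enable_if, is_base_of) that the VC++ 2010 headers include; is_even is just a stand-in predicate:

    #include <functional>
    #include <iostream>
    #include <type_traits>

    // Enabled only for types derived from std::unary_function<arg, result>.
    template <typename Pred>
    typename std::enable_if<
        std::is_base_of<
            std::unary_function<typename Pred::argument_type,
                                typename Pred::result_type>,
            Pred>::value,
        std::unary_negate<Pred>
    >::type
    operator!(const Pred& pred)
    {
        return std::not1(pred);
    }

    struct is_even : std::unary_function<int, bool> {
        bool operator()(int n) const { return n % 2 == 0; }
    };

    int main()
    {
        std::cout << std::boolalpha << (!is_even())(3) << '\n';   // prints true
        return 0;
    }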

    Read the article

  • Calling different functions depending on the template parameter in C++

    - by Noman Javed
    I want to have something like this:

    class A {
    public:
        Array& operator()() { . . . }
    };

    class B {
    public:
        Element& operator[](int i) { ... }
    };

    template<class T>
    class execute {
    public:
        output_type = operator()(T& t)
        {
            if(T == A)
                Array out = T()();
            else {
                Array res;
                for(int i=0 ; i < length; ++i)
                    a[i] = t[i];
            }
        }
    };

    There are two issues here:
    1. A meta-function replacing the if-else in the execute operator()
    2. The return type of the execute operator()

    Thanks in anticipation, Noman
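
    For what it is worth, one way to express the "if (T == A)" branch is to let the compiler choose the behaviour instead, either by specialising execute or by overloading on the concrete type. A sketch with simplified, hypothetical types, using std::vector<int> in place of Array; if the two cases genuinely need different return types, each specialisation can declare its own:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct A {                           // produces a whole array in one call
        std::vector<int> operator()() const { return std::vector<int>(3, 7); }
    };

    struct B {                           // element-wise access only
        int operator[](std::size_t i) const { return static_cast<int>(i) * 10; }
    };

    // Primary template: copy element by element (the loop branch).
    template <class T>
    struct execute {
        std::vector<int> operator()(T& t, std::size_t length) const {
            std::vector<int> out(length);
            for (std::size_t i = 0; i < length; ++i)
                out[i] = t[i];
            return out;
        }
    };

    // Full specialisation for A: just call its operator() (the other branch).
    template <>
    struct execute<A> {
        std::vector<int> operator()(A& a, std::size_t) const { return a(); }
    };

    int main()
    {
        A a; B b;
        std::vector<int> r1 = execute<A>()(a, 0);
        std::vector<int> r2 = execute<B>()(b, 5);
        std::cout << r1.size() << ' ' << r2[4] << '\n';   // prints "3 40"
        return 0;
    }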

    Read the article

  • Why does the right-shift operator produce a zero instead of a one?

    - by mrt181
    Hi, I am teaching myself Java and am working through the exercises in Thinking in Java. On page 116, exercise 11, you should right-shift an integer through all its binary positions and display each position with Integer.toBinaryString.

    public static void main(String[] args) {
        int i = 8;
        System.out.println(Integer.toBinaryString(i));
        int maxIterations = Integer.toBinaryString(i).length();
        int j;
        for (j = 1; j < maxIterations; j++) {
            i >>= 1;
            System.out.println(Integer.toBinaryString(i));
        }
    }

    In the solution guide the output looks like this:

    1000
    1100
    1110
    1111

    When I run this code I get this:

    1000
    100
    10
    1

    What is going on here? Are the digits cut off? I am using jdk1.6.0_20 64-bit. The book uses jdk1.5 32-bit.

    Read the article

  • insert ... select with divide operator in select errors?

    - by Mark
    Hi, the following query

    CREATE TABLE IF NOT EXISTS XY (
      x INT NOT NULL ,
      y FLOAT NULL ,
      PRIMARY KEY(x)
    )

    INSERT INTO XY (x,y) (select 1 as x ,(1/7) as y);

    errors with

    Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO XY (x,y) (select 1 as x ,(1/7) as y)' at line 7 Line 1, column 1

    any ideas?

    Read the article

  • Is there a .NET class that represents operator types?

    - by user323774
    I would like to do the following:

    *OperatorType* o = *OperatorType*.GreaterThan;
    int i = 50;
    int increment = -1;
    int l = 0;
    for(i; i o l; i = i + increment)
    {
        //code
    }

    This concept can be kludged in JavaScript using an eval()... but the idea is to have a loop that can go forward or backward based on values set at runtime. Is this possible? Thanks

    Read the article

  • Why does the assignment operator return a value and not a reference?

    - by Nick Lowman
    I saw the example below explained on this site and thought both answers would be 20, not the 10 that is returned. He wrote that both the comma and the assignment return a value, not a reference. I don't quite understand what that means. I understand it in relation to passing variables into functions or methods, i.e. primitive types are passed in by value and objects by reference, but I'm not sure how it applies in this case. I also understand about context and the value of 'this' (after help from stackoverflow), but I thought in both cases I would still be invoking it as a method, foo.bar(), which would mean foo is the context; yet it seems both result in a plain function call bar(). Why is that, and what does it all mean?

    var x = 10;
    var foo = {
      x: 20,
      bar: function () { return this.x; }
    };

    (foo.bar = foo.bar)();  // returns 10
    (foo.bar, foo.bar)();   // returns 10

    Read the article
