Search Results

Search found 4593 results on 184 pages for 'operator equal'.


  • Calling Base Class Functions with Inherited Type

    - by Kein Mitleid
    I can't describe exactly what I mean, but I want to use base class functions with an inherited type. For example, I declare "Coord3D operator + (Coord3D);" in one class, but when I use it with Vector3D operands I want it to return Vector3D instead of Coord3D. With the code below, I add two Vector3D's and get a Coord3D back, according to typeid().name(). How do I reorganize my classes so that I get a Vector3D in return?

        #include <iostream>
        #include <typeinfo>
        using namespace std;

        class Coord3D
        {
        public:
            float x, y, z;
            Coord3D (float = 0.0f, float = 0.0f, float = 0.0f);
            Coord3D operator + (Coord3D &);
        };

        Coord3D::Coord3D (float a, float b, float c)
        {
            x = a; y = b; z = c;
        }

        Coord3D Coord3D::operator+ (Coord3D &param)
        {
            Coord3D temp;
            temp.x = x + param.x;
            temp.y = y + param.y;
            temp.z = z + param.z;
            return temp;
        }

        class Vector3D : public Coord3D
        {
        public:
            Vector3D (float a = 0.0f, float b = 0.0f, float c = 0.0f) : Coord3D (a, b, c) {};
        };

        int main ()
        {
            Vector3D a (3, 4, 5);
            Vector3D b (6, 7, 8);
            cout << typeid(a + b).name();
            return 0;
        }
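
    One common way to get this behaviour is the curiously recurring template pattern (CRTP): the base class is templated on the derived type, so the operator can construct and return that type. A minimal sketch, assuming the base exists only to be derived from (Coord3DBase is an illustrative name, not from the question):

        #include <iostream>
        #include <typeinfo>

        // The base is parameterized on the concrete type it should hand back.
        template <typename Derived>
        class Coord3DBase
        {
        public:
            float x, y, z;
            Coord3DBase(float a = 0.0f, float b = 0.0f, float c = 0.0f) : x(a), y(b), z(c) {}

            // Build the sum as the derived type, so Vector3D + Vector3D -> Vector3D.
            Derived operator+(const Coord3DBase &rhs) const
            {
                return Derived(x + rhs.x, y + rhs.y, z + rhs.z);
            }
        };

        class Vector3D : public Coord3DBase<Vector3D>
        {
        public:
            Vector3D(float a = 0.0f, float b = 0.0f, float c = 0.0f)
                : Coord3DBase<Vector3D>(a, b, c) {}
        };

        int main()
        {
            Vector3D a(3, 4, 5), b(6, 7, 8);
            std::cout << typeid(a + b).name() << '\n';   // now reports Vector3D
            return 0;
        }

    The trade-off is that Coord3D stops being a single concrete class; each derived type gets its own instantiation of the base.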

    Read the article

  • "end()" iterator for back inserters?

    - by Thanatos
    For iterators such as those returned from std::back_inserter(), is there something that can be used as an "end" iterator? This seems a little nonsensical at first, but I have an API which is:

        template<typename InputIterator, typename OutputIterator>
        void foo(
            InputIterator input_begin,
            InputIterator input_end,
            OutputIterator output_begin,
            OutputIterator output_end
        );

    foo performs some operation on the input sequence, generating an output sequence (whose length is known to foo, but may or may not be equal to the input sequence's length). Taking the output_end parameter is the odd part: std::copy doesn't do this, for example, and assumes you're not going to pass it garbage. foo does it to provide range checking: if you pass a range too small, it throws an exception, in the name of defensive programming (instead of potentially overwriting random bits in memory). Now, say I want to pass foo a back inserter, specifically one from a std::vector, which has no limit outside of memory constraints. I still need an "end" iterator - in this case, something that will never compare equal. (Or, if I had a std::vector but with a restriction on length, perhaps it might sometimes compare equal?) How do I go about doing this? I do have the ability to change foo's API - is it better to not check the range, and instead provide an alternate means to get the required output range? (Which would be needed anyway for raw arrays, but not required for back inserters into a vector.) This would seem less robust, but I'm struggling to make the "robust" approach (above) work.
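
    One way out is to make the bound a property of the output iterator itself rather than a second "end" iterator: the wrapper below forwards writes and throws once its limit is exhausted, and the back-inserter case is just the same wrapper with a huge limit. This is a minimal sketch, not a standard component (checked_output_iterator is my name):

        #include <cstddef>
        #include <iterator>
        #include <stdexcept>

        template <typename OutputIterator>
        class checked_output_iterator
        {
        public:
            // Output-iterator trait boilerplate.
            typedef std::output_iterator_tag iterator_category;
            typedef void value_type;
            typedef void difference_type;
            typedef void pointer;
            typedef void reference;

            checked_output_iterator(OutputIterator out, std::size_t limit)
                : out_(out), remaining_(limit) {}

            template <typename T>
            checked_output_iterator &operator=(const T &value)
            {
                if (remaining_ == 0)
                    throw std::out_of_range("output range exhausted");
                *out_++ = value;   // forward the write, advance the wrapped iterator
                --remaining_;
                return *this;
            }

            // Dereference and increment are no-ops; operator= does the work,
            // exactly as std::ostream_iterator arranges it.
            checked_output_iterator &operator*() { return *this; }
            checked_output_iterator &operator++() { return *this; }
            checked_output_iterator &operator++(int) { return *this; }

        private:
            OutputIterator out_;
            std::size_t remaining_;
        };

    foo's signature then shrinks to (input_begin, input_end, output): a raw array gets a wrapper whose limit is the array size, and a vector back inserter gets one with a limit of size_t(-1), so the range check survives wherever a real range exists.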

    Read the article

  • Why does `is_base_of` work with private inheritance?

    - by Alexey Malistov
    Why does the following code work?

        typedef char (&yes)[1];
        typedef char (&no)[2];

        template <typename B, typename D>
        struct Host
        {
            operator B*() const;
            operator D*();
        };

        template <typename B, typename D>
        struct is_base_of
        {
            template <typename T>
            static yes check(D*, T);
            static no check(B*, int);

            static const bool value = sizeof(check(Host<B,D>(), int())) == sizeof(yes);
        };

        // Test sample
        class B {};
        class D : private B {};

        // Expression is true.
        int test[is_base_of<B,D>::value && !is_base_of<D,B>::value];

    Note that B is a private base. Note that operator B*() is const. How does this work? Why is static yes check(D*, T); a better match than static no check(B*, int);?
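
    Roughly what happens: when D really does derive from B, both overloads need a user-defined conversion from the Host temporary, and both prefer the non-const operator D*(); the template check then wins because it takes the resulting D* unchanged, while the non-template would need a further derived-to-base pointer adjustment (accessibility is not considered at this stage, which is why a private base still counts). When D does not derive from B, the two conversion sequences are indistinguishable and the non-template check(B*, int) is preferred. For what it's worth, the standard trait behaves the same way; a quick sketch of the equivalent test with it:

        #include <type_traits>

        class B {};
        class D : private B {};

        // std::is_base_of also ignores access, like the hand-rolled version above.
        static_assert(std::is_base_of<B, D>::value, "D derives (privately) from B");
        static_assert(!std::is_base_of<D, B>::value, "B does not derive from D");

        int main() { return 0; }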

    Read the article

  • Parsing: How to make error recovery in grammars like "a* b*"?

    - by Lavir the Whiolet
    Say we have a grammar like this:

        Program ::= a* b*

    where "*" is considered to be greedy. I usually implement the "*" operator naively: try to apply the expression under "*" to the input one more time. If it applies successfully, we are still inside the current "*" expression, so try again. Otherwise we have reached the next grammar expression: put the characters consumed by the failed attempt back into the input and proceed with the next expression. But if there are errors in the input in either the "a*" or the "b*" part, such a parser will "think" that both "a*" and "b*" have finished at the position of the error ("let's try 'a'... Fail! OK, it looks like we have to proceed to 'b*'. Let's try 'b'... Fail! OK, it looks like the string should have ended..."). For example, for the string "daaaabbbbbbc" it will "say": "The string must end at position 1, delete superfluous characters: daaaabbbbbbc". In short, a greedy "*" operator becomes lazy if there are errors in the input. How do I make the "*" operator recover from errors nicely?
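
    One standard answer is panic-mode recovery: a "*" loop only ends when it sees something in the follow set of its expression, and anything else is reported and skipped, which keeps the operator greedy across errors. A minimal sketch for this particular grammar (hypothetical code, not from the question):

        #include <iostream>
        #include <string>

        // Parse Program ::= a* b*, recovering from bad characters by skipping them.
        void parse(const std::string &input)
        {
            std::size_t i = 0;
            // a-section: only a 'b' (or end of input) may terminate it.
            while (i < input.size() && input[i] != 'b')
            {
                if (input[i] == 'a')
                    ++i;
                else
                {
                    std::cout << "error: unexpected '" << input[i] << "' at " << i << ", skipping\n";
                    ++i;   // resynchronise by dropping the bad character
                }
            }
            // b-section: runs to end of input, skipping anything that is not 'b'.
            while (i < input.size())
            {
                if (input[i] == 'b')
                    ++i;
                else
                {
                    std::cout << "error: unexpected '" << input[i] << "' at " << i << ", skipping\n";
                    ++i;
                }
            }
        }

        int main()
        {
            parse("daaaabbbbbbc");   // reports 'd' at 0 and 'c' at 11, but stays greedy
            return 0;
        }

    The essential point is that leaving a "*" loop is driven by the follow set of the expression, not by the first failed match.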

    Read the article

  • C++ Returning Pointers/References

    - by m00st
    I have a fairly good understanding of the dereferencing operator, the address-of operator, and pointers in general. I however get confused when I see stuff such as this:

        int* returnA()
        {
            int *j = &a;
            return j;
        }

        int* returnB()
        {
            return &b;
        }

        int& returnC()
        {
            return c;
        }

        int& returnC2()
        {
            int *d = &c;
            return *d;
        }

    In returnA() I'm asking to return a pointer; just to clarify, this works because j is a pointer? In returnB() I'm asking to return a pointer; since a pointer points to an address, the reason returnB() works is because I'm returning &b? In returnC() I'm asking for a reference to an int to be returned. When I return c, is the & operator automatically "applied" to c? In returnC2() I'm asking again for a reference to an int to be returned. Does *d work because pointers point to an address? Assume a, b, c are initialized as integers. Can someone validate whether I am correct in all four of my questions?
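
    All four are valid provided a, b and c outlive the calls (globals, here). The classic contrast is returning the address of a local, which dangles. A small annotated sketch, assuming file-scope definitions:

        int a = 1, b = 2, c = 3;   // file scope, so returned pointers/references stay valid

        int* returnA() { int *j = &a; return j; }   // ok: j is a copy of a's address
        int* returnB() { return &b; }               // ok: same thing without the temporary
        int& returnC() { return c; }                // ok: binds a reference to c; no & needed
        int& returnC2() { int *d = &c; return *d; } // ok: *d is the object c itself

        // int* broken() { int local = 4; return &local; }  // would dangle: local dies at return

        int main()
        {
            *returnA() = 10;             // writes through the pointer into a
            returnC() = 30;              // writes through the reference into c
            return a + *returnB() + c;   // 10 + 2 + 30 = 42
        }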

    Read the article

  • code is not compiling

    - by user323422
        template <class Type, int Size = 3>
        class cStack
        {
            Type *m_array;
            int m_Top;
            int m_Size;
        public:
            cStack();
            friend std::ostream& operator << (std::ostream &, const cStack<Type,Size> &);
        };

        template <class Type, int Size>
        std::ostream& operator << (std::ostream &os, const cStack<Type,Size> &s)
        {
            for (int i = 0; i <= s.GetTop(); i++)
            {
                os << s.m_array[i];
            }
            return os;
        }

    On compiling, it shows the following error:

        error LNK2019: unresolved external symbol "class std::basic_ostream<char,struct std::char_traits<char> > & __cdecl operator<<(class std::basic_ostream<char,struct std::char_traits<char> > &,class cStack<int,3> const &)" (??6@YAAAV?$basic_ostream@DU?$char_traits@D@std@@@std@@AAV01@ABV?$cStack@H$02@@@Z) referenced in function _main
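
    The friend declaration inside the class names a non-template operator<<, so the template defined afterwards never satisfies it, and the linker finds no definition for the non-template function the compiler promised. One fix is to define the friend in place; this sketch fills in a constructor and the missing helpers so it compiles (GetTop, Push and the member initialization are assumptions, not from the question):

        #include <iostream>

        template <class Type, int Size = 3>
        class cStack
        {
            Type *m_array;
            int m_Top;
            int m_Size;
        public:
            cStack() : m_array(new Type[Size]), m_Top(-1), m_Size(Size) {}
            ~cStack() { delete[] m_array; }

            int GetTop() const { return m_Top; }
            void Push(const Type &v) { m_array[++m_Top] = v; }

            // Defined in place: every instantiation gets a matching friend definition.
            friend std::ostream& operator<<(std::ostream &os, const cStack<Type,Size> &s)
            {
                for (int i = 0; i <= s.GetTop(); i++)
                    os << s.m_array[i] << ' ';
                return os;
            }
        };

        int main()
        {
            cStack<int> s;
            s.Push(1); s.Push(2); s.Push(3);
            std::cout << s << '\n';   // prints: 1 2 3
            return 0;
        }

    The alternative is to forward-declare the operator template and befriend its specialization with operator<< <>; the "Friends, templates, overloading <<" question below runs into exactly that warning.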

    Read the article

  • Looping through NSArray while subtracting values from each object?

    - by Julian
    I have NSNumber objects stored in an NSMutableArray, and I am attempting to perform a calculation with each object in the array. What I want to do is take a higher number variable and keep subtracting a smaller number from it in increments until its value is less than or equal to the value in the array. For example: the NSMutableArray object is equal to 2.50, and I have an outside variable of 25 that is not in the array. I want to subtract 0.25 from the variable repeatedly until I reach less than or equal to 2.50. I also need a guard so that if the subtraction does not divide evenly and would go below the array value, it falls back to the original array value of 2.50. Lastly, for each iteration, I want to print the values as they count down. I was going to provide code, but I don't want to make this more confusing than it has to be. So my output would be:

        VALUE IS: 24.75
        VALUE IS: 24.50
        VALUE IS: 24.25
        …
        VALUE IS: 2.50
        END
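
    The loop itself is language-agnostic: subtract while another full step would stay above the target, then clamp to the target. A sketch of that logic (written in C++ for brevity; an Objective-C version would pull the target out of the NSNumber with doubleValue):

        #include <cstdio>

        // Count down from `start` in steps of `step`, clamping to `target`.
        void countDown(double start, double step, double target)
        {
            double value = start;
            while (value - step > target)
            {
                value -= step;
                std::printf("VALUE IS: %.2f\n", value);
            }
            std::printf("VALUE IS: %.2f\nEND\n", target);   // final line clamps to the target
        }

        int main()
        {
            countDown(25.0, 0.25, 2.50);   // 24.75, 24.50, ..., 2.75, 2.50, END
            return 0;
        }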

    Read the article

  • c++ overloading delete, retrieve size

    - by user300713
    Hi, I am currently writing a small custom memory allocator in C++, and want to use it together with operator overloading of new/delete. My memory allocator basically checks whether the requested memory is over a certain threshold; if so, it uses malloc to allocate the requested chunk, otherwise the memory is provided by some fixed-pool allocators. That generally works, but my deallocation function looks like this:

        void MemoryManager::deallocate(void *_ptr, size_t _size)
        {
            if (_size > heapThreshold)
                deallocHeap(_ptr);
            else
                deallocFixedPool(_ptr, _size);
        }

    So I need to provide the size of the chunk pointed to, in order to deallocate from the right place. Now the problem is that the delete keyword does not provide any hint about the size of the deleted chunk, so I would need something like this:

        void operator delete(void *_ptr, size_t _size)
        {
            MemoryManager::deallocate(_ptr, _size);
        }

    But as far as I can see, there is no way to determine the size inside the delete operator. If I want to keep things the way they are right now, would I have to save the size of the memory chunks myself? Any ideas on how to solve this are welcome! Thanks!
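
    For classes that are routed through the pools, a member (class-level) operator delete can receive the size: the form operator delete(void*, size_t) is a usual deallocation function, and the compiler passes sizeof the class to it. A sketch against the question's MemoryManager (the allocate counterpart is an assumption here):

        #include <cstddef>

        struct MemoryManager
        {
            static void *allocate(std::size_t size);              // assumed counterpart
            static void deallocate(void *ptr, std::size_t size);  // from the question
        };

        struct Pooled
        {
            static void *operator new(std::size_t size)
            {
                return MemoryManager::allocate(size);
            }

            // The compiler supplies `size` for this member form, so nothing
            // extra needs to be stored per allocation.
            static void operator delete(void *ptr, std::size_t size)
            {
                MemoryManager::deallocate(ptr, size);
            }
        };

    At global scope there was no sized form before C++14 (which added the global operator delete(void*, std::size_t) overload), so the classic fallback is to prepend a small header holding the size to each block in allocate and read it back in deallocate.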

    Read the article

  • Friends, templates, overloading <<

    - by Crystal
    I'm trying to use friend functions to overload << and templates, to get familiar with templates. I do not know what these compile errors mean:

        Point.cpp:11: error: shadows template parm 'class T'
        Point.cpp:12: error: declaration of 'const Point<T>& T'

    for this file:

        #include "Point.h"

        template <class T>
        Point<T>::Point() : xCoordinate(0), yCoordinate(0) {}

        template <class T>
        Point<T>::Point(T xCoordinate, T yCoordinate)
            : xCoordinate(xCoordinate), yCoordinate(yCoordinate) {}

        template <class T>
        std::ostream &operator<<(std::ostream &out, const Point<T> &T)
        {
            std::cout << "(" << T.xCoordinate << ", " << T.yCoordinate << ")";
            return out;
        }

    My header looks like:

        #ifndef POINT_H
        #define POINT_H
        #include <iostream>

        template <class T>
        class Point
        {
        public:
            Point();
            Point(T xCoordinate, T yCoordinate);
            friend std::ostream &operator<<(std::ostream &out, const Point<T> &T);
        private:
            T xCoordinate;
            T yCoordinate;
        };
        #endif

    My header also gives the warning:

        Point.h:12: warning: friend declaration 'std::ostream& operator<<(std::ostream&, const Point<T>&)' declares a non-template function

    I was also unsure why. Any thoughts? Thanks.
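
    Two distinct problems show up here. The errors come from naming the parameter T, which shadows the template parameter; the warning comes from the friend declaration naming a non-template operator<< that the later template never defines. A sketch of one conventional fix: rename the parameter, forward-declare the operator template, and befriend its specialization:

        #ifndef POINT_H
        #define POINT_H
        #include <iostream>

        template <class T> class Point;     // forward declarations so the friend
        template <class T>                  // declaration can name the template
        std::ostream &operator<<(std::ostream &out, const Point<T> &p);

        template <class T>
        class Point
        {
        public:
            Point() : xCoordinate(0), yCoordinate(0) {}
            Point(T x, T y) : xCoordinate(x), yCoordinate(y) {}

            // The <> marks this as the specialization of the operator template,
            // not a brand-new non-template function.
            friend std::ostream &operator<< <>(std::ostream &out, const Point<T> &p);

        private:
            T xCoordinate;
            T yCoordinate;
        };

        template <class T>
        std::ostream &operator<<(std::ostream &out, const Point<T> &p)
        {
            out << "(" << p.xCoordinate << ", " << p.yCoordinate << ")";
            return out;
        }
        #endif

    Note the body writes to out rather than std::cout, so the operator also works for file and string streams.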

    Read the article

  • Templates, Function Pointers and C++0x

    - by user328543
    One of my personal experiments to understand some of the C++0x features: I'm trying to pass a function pointer to a template function to execute. Eventually the execution is supposed to happen in a different thread. But with all the different types of functions, I can't get the templates to work:

        #include <functional>

        int foo(void) { return 2; }

        class bar
        {
        public:
            int operator() (void) { return 4; };
            int something(int a) { return a; };
        };

        template <class C>
        int func(C&& c)
        {
            //typedef typename std::result_of< C() >::type result_type;
            typedef typename std::conditional<
                std::is_pointer< C >::value,
                std::result_of< C() >::type,
                std::conditional<
                    std::is_object< C >::value,
                    std::result_of< typename C::operator() >::type,
                    void >
                >::type result_type;

            result_type result = c();
            return result;
        }

        int main(int argc, char* argv[])
        {
            // call with a function pointer
            func(foo);

            // call with a member function
            bar b;
            func(b);

            // call with a bind expression
            func(std::bind(&bar::something, b, 42));

            // call with a lambda expression
            func( [](void)->int { return 12; } );

            return 0;
        }

    The result_of template alone doesn't seem to be able to find the operator() in class bar, and the clunky conditional I created doesn't compile. Any ideas? Will I have additional problems with const functions?
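
    The type dispatch isn't needed: decltype(c()) names the result uniformly for function pointers, functors, bind expressions and lambdas, because it sees exactly the operator() the call itself would pick (const or not). A sketch of the same experiment with a trailing return type:

        #include <functional>
        #include <iostream>

        int foo() { return 2; }

        class bar
        {
        public:
            int operator()() { return 4; }
            int something(int a) { return a; }
        };

        // One template covers all four kinds of callable.
        template <class C>
        auto func(C&& c) -> decltype(c())
        {
            return c();
        }

        int main()
        {
            std::cout << func(foo) << '\n';                               // 2
            bar b;
            std::cout << func(b) << '\n';                                 // 4
            std::cout << func(std::bind(&bar::something, b, 42)) << '\n'; // 42
            std::cout << func([]() -> int { return 12; }) << '\n';        // 12
            return 0;
        }

    In C++11's <type_traits>, std::result_of<C()>::type gives the same answer for all four cases as well (for a functor it wants the class type itself, not &C::operator()), so the commented-out first line was closer than the conditional that replaced it.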

    Read the article

  • Checked and Unchecked operators don't seem to be working when...

    - by flockofcode
    1) Is the unchecked operator in effect only when the expression inside the unchecked context uses an explicit cast (such as byte b1 = unchecked((byte)2000);) and when the conversion to the particular type could otherwise happen implicitly? I'm assuming this since the following expression throws a compile-time error:

        byte b1 = unchecked(2000); // compile-time error

    2) a) Do the checked and unchecked operators work only when the resulting value of an expression or conversion is of an integral type? I'm assuming this since in the first example (where a double is converted to an integral type) the checked operator works as expected:

        double m = double.MaxValue;
        b = checked((byte)m); // reports an exception

    while in the second example (where a double is converted to a float) the checked operator doesn't seem to be working, since it doesn't throw an exception:

        double m = double.MaxValue;
        float f = checked((float)m); // no exception thrown

    b) Why don't the two operators also work with expressions where the type of the resulting value is a floating-point type?

    3) The next quote is from Microsoft's site: "The unchecked keyword is used to control the overflow-checking context for integral-type arithmetic operations and conversions." I'm not sure I understand what exactly expressions and conversions such as unchecked((byte)(100+200)); have in common with integral types. Thank you

    Read the article

  • F# How to tokenise user input: separating numbers, units, words?

    - by David White
    I am fairly new to F#, but have spent the last few weeks reading reference materials. I wish to process a user-supplied input string, identifying and separating its constituent elements. For example, for this input:

        XYZ Hotel: 6 nights at 220EUR / night plus 17.5% tax

    the output should resemble something like a list of tuples:

        [ ("XYZ", Word); ("Hotel:", Word); ("6", Number); ("nights", Word);
          ("at", Operator); ("220", Number); ("EUR", CurrencyCode); ("/", Operator);
          ("night", Word); ("plus", Operator); ("17.5", Number); ("%", PerCent);
          ("tax", Word) ]

    Since I'm dealing with user input, it could be anything. Thus, expecting users to comply with a grammar is out of the question. I want to identify the numbers (which could be integers, floats, negative...), the units of measure (optional, but possibly including SI or imperial physical units, currency codes, and counts such as "night/s" in my example), mathematical operators (as math symbols or as words, including "at", "per", "of", "discount", etc.), and all other words. I have the impression that I should use active pattern matching -- is that correct? -- but I'm not exactly sure how to start. Any pointers to appropriate reference material or similar examples would be great.

    Read the article

  • short-cutting equality checking in F#?

    - by John Clements
    In F#, the equality operator (=) is generally extensional rather than intensional. That's great! Unfortunately, it appears to me that F# does not use pointer equality to short-cut these extensional comparisons. For instance, this code:

        type Z = MT | NMT of Z ref

        // create a Z:
        let a = ref MT
        // make it point to itself:
        a := NMT a
        // check to see whether it's equal to itself:
        printf "a = a: %A\n" (a = a)

    ... gives me a big fat segmentation fault[*], despite the fact that 'a' and 'a' both evaluate to the same reference. That's not so great. Other functional languages (e.g. PLT Scheme) get this right, using pointer comparisons conservatively to return 'true' when it can be determined with a pointer comparison alone. So: I'll accept the fact that F#'s equality operator doesn't use short-cutting; is there some way to perform an intensional (pointer-based) equality check? The (==) operator is not defined on my types, and I'd love it if someone could tell me that it's available somehow. Or tell me that I'm wrong in my analysis of the situation: I'd love that, too...

    [*] That would probably be a stack overflow on Windows; there are things about Mono that I'm not that fond of...

    Read the article

  • Another boost error

    - by user1676605
    On this code I get the enormous error shown below:

        static void ParseTheCommandLine(int argc, char *argv[])
        {
            int count;
            int seqNumber;

            namespace po = boost::program_options;
            std::string appName = boost::filesystem::basename(argv[0]);

            po::options_description desc("Generic options");
            desc.add_options()
                ("version,v", "print version string")
                ("help", "produce help message")
                ("sequence-number", po::value<int>(&seqNumber)->default_value(0), "sequence number")
                ("pem-file", po::value< vector<string> >(), "pem file")
                ;

            po::positional_options_description p;
            p.add("pem-file", -1);

            po::variables_map vm;
            po::store(po::command_line_parser(argc, argv).
                      options(desc).positional(p).run(), vm);
            po::notify(vm);

            if (vm.count("pem file"))
            {
                cout << "Pem files are: " << vm["pem-file"].as< vector<string> >() << "\n";
            }

            cout << "Sequence number is " << seqNumber << "\n";
            exit(1);

    The error:

        ../../../FIXMarketDataCommandLineParameters/FIXMarketDataCommandLineParameters.hpp|98|error: no match for 'operator<<' in 'std::operator<< [with _Traits = std::char_traits](((std::basic_ostream &)(& std::cout)), ((const char*)"Pem files are: ")) << ((const boost::program_options::variable_value*)vm.boost::program_options::variables_map::operator[](((const std::string&)(& std::basic_string, std::allocator (((const char*)"pem-file"), ((const std::allocator&)((const std::allocator*)(& std::allocator()))))))))-boost::program_options::variable_value::as with T = std::vector, std::allocator , std::allocator, std::allocator '|
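
    The error is just the compiler failing to find an operator<< for std::vector<std::string>: the standard library doesn't provide one, so the elements have to be printed by hand. A sketch of that fix (and note the lookup key must be "pem-file", matching the option name; vm.count("pem file") with a space can never be non-zero):

        #include <iostream>
        #include <string>
        #include <vector>

        // No operator<< exists for std::vector, so print the elements directly.
        static void printFiles(const std::vector<std::string> &files)
        {
            std::cout << "Pem files are: ";
            for (std::size_t i = 0; i < files.size(); ++i)
                std::cout << files[i] << ' ';
            std::cout << '\n';
        }

        // At the call site, after po::notify(vm):
        //
        //     if (vm.count("pem-file"))
        //         printFiles(vm["pem-file"].as< std::vector<std::string> >());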

    Read the article

  • Trouble recording unique regex output to array in perl

    - by Structure
    The goal of the following code sample is to read the contents of $target and assign all unique regex search results to an array. I have confirmed my regex statement works, so I am simplifying it so as not to focus on it. When I execute the script I get a list of all the regex results; however, the results are not unique, which leads me to believe that my manipulation of the array or my if (grep { $_ eq $1 } @array) check is causing the problem(s).

        #!/usr/bin/env perl

        $target = "string to search";
        $inc = 0;
        $once = 1;

        while ($target =~ m/(regex)/g) {     # While a regex result is returned
            if ($once) {
                @array[$inc] = $1;           # Set the first regex result as @array[0]
                $once = 0;                   # Make sure this branch runs only once
            }
            else {
                if (grep { $_ eq $1 } @array) {
                    # The result is already in the array; do nothing
                }
                else {
                    @array[$inc] = $1;       # Store the result in the next unused position
                    $inc++;                  # Increment to the next unused array position
                }
            }
        }

        print @array;
        exit 0;
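
    For what it's worth, the duplicate can slip in through the bookkeeping alone: the first branch stores into slot 0 but never increments $inc, so the next unique match also writes to slot 0. The idiomatic Perl fix is a %seen hash, push @array, $1 unless $seen{$1}++;, which needs no special first-iteration case. The same scan-and-dedup shape, sketched in C++ for comparison:

        #include <iostream>
        #include <set>
        #include <string>
        #include <vector>

        int main()
        {
            const char *results[] = { "foo", "bar", "foo", "baz", "bar" };

            std::vector<std::string> unique;   // keeps first-seen order, like the array
            std::set<std::string> seen;        // plays the role of Perl's %seen hash

            for (std::size_t i = 0; i < sizeof(results) / sizeof(results[0]); ++i)
            {
                // insert().second is true only the first time a value appears
                if (seen.insert(results[i]).second)
                    unique.push_back(results[i]);
            }

            for (std::size_t i = 0; i < unique.size(); ++i)
                std::cout << unique[i] << '\n';   // foo bar baz
            return 0;
        }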

    Read the article

  • Syntax Problems of if Statement (php)

    - by MxmastaMills
    I need a little help with an if statement in PHP. I'm trying to set a variable called $offset according to the page that I am loading in WordPress. Here's the variable:

        $offset = ($paged * 6);

    What it does: on the first page, which is http://example.com/blog, $offset is set to 0, because $paged refers to the number appended to the URL. The second page, for example, is http://example.com/blog/2/, which sets $offset to 12. The problem is, I need the second page to define $offset as 6, the third page to define it as 12, and so on. I tried using:

        $offset = ($paged * 6 - 6);

    which works except on the first page, where it defines $offset as -6. So I wanted to create an if statement that says: if $paged is equal to 0, then $offset is equal to 0; else $offset is equal to ($paged * 6 - 6). I struggle with syntax, even though I understand what needs to be done here. Any help would be greatly appreciated. Thanks!
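
    The branch collapses into a clamp: offset = max(0, ($paged - 1) * 6) yields 0 for the first page (where $paged is 0) and 6, 12, ... for pages 2, 3, ..., and PHP's max() handles it in one line. The arithmetic, sketched here in C++ for concreteness:

        #include <algorithm>
        #include <cstddef>
        #include <iostream>

        int main()
        {
            const int pages[] = { 0, 2, 3, 4 };   // the $paged values WordPress would supply

            for (std::size_t i = 0; i < sizeof(pages) / sizeof(pages[0]); ++i)
            {
                // max(0, ...) absorbs the first-page case where the product is -6.
                int offset = std::max(0, (pages[i] - 1) * 6);
                std::cout << "paged " << pages[i] << " -> offset " << offset << '\n';
            }
            return 0;   // prints offsets 0, 6, 12, 18
        }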

    Read the article

  • Why is this removing all elements from my LinkedList?

    - by Brian
    Why is my remove method removing every element from my doubly linked list? If I take out the if/else statements, then I can successfully remove middle elements, but elements at the head or tail of the list still remain. However, once I added the if/else statements to take care of elements at the head and tail, the method started removing every element in my list. What am I doing wrong?

        public void remove(int n) {
            LinkEntry<E> remove_this = new LinkEntry<E>();

            // if nothing comes before remove_this, set the head to the element after it
            if (remove_this.previous == null)
                head = remove_this.next;
            // otherwise point the element before remove_this at the element after it
            else
                remove_this.previous.next = remove_this.next;

            // if nothing comes after remove_this, set the tail to the element before it
            if (remove_this.next == null)
                tail = remove_this.previous;
            // otherwise point the next element's previous at the element before remove_this
            else
                remove_this.next.previous = remove_this.previous;

            // if remove_this is located in the middle of the list, enter this loop
            // until it is found, then remove it, closing the gap afterwards
            int i = 0;
            for (remove_this = head; remove_this != null; remove_this = remove_this.next) {
                // if i == n, stop and delete 'remove_this' from the list
                if (i == n) {
                    remove_this.previous.next = remove_this.next;
                    remove_this.next.previous = remove_this.previous;
                    break;
                }
                // if i != n, keep iterating through the list
                i++;
            }
        }
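
    The head/tail checks run before the search, on a freshly constructed node whose previous and next are both null, so the very first if sets head to null and the whole list is cut loose. The fix is to walk to node n first and only then unlink it, applying the boundary checks to the node actually found. The reordered logic, sketched in C++ (the Java version is line-for-line the same apart from the explicit delete):

        #include <cstddef>

        template <typename E>
        struct LinkEntry
        {
            E value;
            LinkEntry *previous;
            LinkEntry *next;
        };

        template <typename E>
        struct LinkedList
        {
            LinkEntry<E> *head;
            LinkEntry<E> *tail;

            void remove(int n)
            {
                // 1. Find the n-th node first.
                LinkEntry<E> *node = head;
                for (int i = 0; node != NULL && i < n; ++i)
                    node = node->next;
                if (node == NULL)
                    return;                    // n is past the end; nothing to unlink

                // 2. Only now fix up the neighbours of the real node.
                if (node->previous == NULL)
                    head = node->next;         // removing the head
                else
                    node->previous->next = node->next;

                if (node->next == NULL)
                    tail = node->previous;     // removing the tail
                else
                    node->next->previous = node->previous;

                delete node;
            }
        };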

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today's entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we'll need some example data. In fact, we are going to use three tables today, each of which is structured the same way. Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestMAX
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestTEXT
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id),
        );

        -- Load TestCHAR (about 3s)
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000)
                n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;

        -- Load TestMAX (about 3s)
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- Load TestTEXT (about 5s)
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- Space used
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

        CHECKPOINT;

    That takes around 15 seconds to run, and reports the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table. The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150)
            T.id,
            T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let's go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TC.id,
            TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let's take a closer look at the statistics and query plan generated from this. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query's use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.

    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot 'cheat' and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I'll cover that in a future post.

    CHAR(3999) Performance Summary:
    - 5 seconds elapsed time
    - 243MB memory grant
    - 195MB tempdb usage
    - 192MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 2 – VARCHAR(MAX)

    We'll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TM.id,
            TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don't draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    - 8 seconds elapsed time
    - 245MB memory grant
    - 391MB tempdb usage
    - 193MB estimated sort set
    - 25,043 logical reads
    - Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TT.id,
            TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it's not hard to understand why:

    TEXT Performance Summary:
    - 0.5 seconds elapsed time
    - 9MB memory grant
    - 5MB tempdb usage
    - 5MB estimated sort set
    - 207 logical reads
    - 596 LOB logical reads
    - Sort warning

    SQL Server's memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn't matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don't really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn't SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row, unless the 'text in row' table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the 'large value types out of row' table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let's look at that option now. This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id),
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC
            ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';
        CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            MO.id,
            MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
    - 0.3 seconds elapsed time
    - 245MB memory grant
    - 0MB tempdb usage
    - 193MB estimated sort set
    - 207 logical reads
    - 446 LOB logical reads
    - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn't spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory. Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That's 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

        SET STATISTICS IO ON;

        WITH TestTable AS (SELECT * FROM dbo.TestCHAR),
        TopKeys AS (SELECT TOP (150) id FROM TestTable ORDER BY NEWID())
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id = ANY (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestMAX),
        TopKeys AS (SELECT TOP (150) id FROM TestTable ORDER BY NEWID())
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestTEXT),
        TopKeys AS (SELECT TOP (150) id FROM TestTable ORDER BY NEWID())
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS (SELECT * FROM dbo.TestMAXOOR),
        TopKeys AS (SELECT TOP (150) id FROM TestTable ORDER BY NEWID())
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

        Table 'TestCHAR'.   logical reads 25511, lob logical reads 0
        Table 'TestMAX'.    logical reads 25511, lob logical reads 0
        Table 'TestTEXT'.   logical reads 412,   lob logical reads 597
        Table 'TestMAXOOR'. logical reads 413,   lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.

        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

        Table 'TestCHAR'.   logical reads 835, lob logical reads 0
        Table 'TestMAX'.    logical reads 835, lob logical reads 0
        Table 'TestTEXT'.   logical reads 686, lob logical reads 597
        Table 'TestMAXOOR'. logical reads 686, lob logical reads 448

    With the new index, all four queries use the same query plan.

    Performance Summary:
    - 0.3 seconds elapsed time
    - 6MB memory grant
    - 0MB tempdb usage
    - 1MB sort set
    - 835 logical reads (CHAR, MAX)
    - 686 logical reads (TEXT, MAXOOR)
    - 597 LOB logical reads (TEXT)
    - 448 LOB logical reads (MAXOOR)
    - No sort warning

    I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either.

    The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

    By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

    This post is dedicated to the people of Christchurch, New Zealand.

    © 2011 Paul White
    twitter: @SQL_Kiwi

    Read the article

  • Apple Mail authentication failure to Apache James while Thunderbird connects

    - by dacracot
    I have an Apache James 2.3.2 email server running on RHEL 5. I have been connecting to it successfully for months using Thunderbird (currently version 12.0.1). I am attempting to connect to the same account using Apple's Mail 6.5. On the first dialog, to add an account to Apple's Mail, it asks for full name, email address, and password. It then asks for an incoming mail server. I put account type equal to POP, the incoming mail server equal to the host in my email address, and my username and password. It comes back with the error: "Logging in to the POP server "" failed. Make sure the user name and password you entered are correct, then click Continue. If the information isn't correct, you cannot receive messages." While the dialogs are different in Thunderbird, I believe that I am giving it exactly the same parameters, and succeeding with authentication.

    Read the article

  • BES 5.0 and MAPI calls to exchange system

    - by nysingh
    We have been using BES 4.1(5) for a while now, and it has been a resource hog on Exchange due to its high number of MAPI calls. I have heard that BES 5.0 is even worse. The comparison I heard is that BES 4.1 makes MAPI calls equal to 5 Outlook clients per BB user, while BES 5.0 makes MAPI calls equal to 10 Outlook clients per BB user. Can someone confirm whether that is true? Is BES 5.0 really that bad in MAPI calls and for Exchange performance? Thanks.

    Read the article

  • MSDeploy - possible to call setAcl on multiple destinations in one go?

    - by growse
    I'm building a nice little continuous integration environment for our development team, based on TeamCity. It's working rather nicely, as it can build a mix of .NET and PHP projects and push them to our internal and external platforms. I'm primarily using MSDeploy to push everything to the internal platform, as that's all IIS based. However, there are a number of builds where I need to set directory permissions on the destination directory. I can use the setAcl operator just fine, but it only seems to take a single destination as an argument. Therefore, if I need to alter the permissions on 5 destination directories, I need to call MSDeploy 5 times, which seems like a lot of overhead. Is there a sensible way around this? Reading the documentation, I don't think MSDeploy takes more than a single argument for the setAcl operator, but I could be wrong. Is there a better way for a build server to set multiple directory permissions in one go?

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    Hi all. We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?

    Read the article

  • Dynamically reference a Named Table Column via cell content in Excel

    - by rcphq
    How do I reference an Excel table column dynamically in Excel 2007? That is, I want to reference a named column of a named table, where which column it is varies with the value of a cell. I have a table in Excel (let's call it Table1). I want to reference one of its columns dynamically from a value in another cell (A1), so that when I change A1, the formula that counts Table1[DynamicallyReferencedColumnName] gets updated to the new reference. I tried using:

        =COUNT(Table1[INDIRECT("$A$1")])

    but Excel says the formula contains an error. Example: if A1 = names, then the formula would equal COUNT(Table1[names]); if A1 = lastname, then COUNT(Table1[lastname]).
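
    INDIRECT needs to wrap the whole structured reference as text rather than sit inside the brackets. A sketch of the usual formula, assuming the text in A1 exactly matches a column header in Table1:

        =COUNT(INDIRECT("Table1[" & A1 & "]"))

    With names in A1 this evaluates as COUNT(Table1[names]). Note that COUNT only counts numeric values; for text columns, COUNTA is usually the better fit.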

    Read the article

  • Criteria strings, how many different criteria can be entered to retrieve specific data?

    - by Janet
    For our membership database we are currently using an old DOS program, "Arclist". The program is old, but the one feature we desperately need in a database program is the ability to enter multiple criteria at one time, for more of a "one time" extraction of the data meeting all the various criteria entered, in what I call a "criteria string". An example may be extracting only those records with zip codes matching (67893, 54235, 54323, 54201, 54302, 54303, 54301, 67894, 67895). Another set of criteria might omit records equal to one value in one field while also extracting records matching criteria in another field. So we would want records "not equal to" a value in one field, but whose information equals the requested information in another field.

    Read the article
