Search Results


  • C++ assignment operators dynamic arrays

    - by user2905445
    First off, I know the multiplying part is wrong, but I have some questions about the code. 1. When I overload operator+ I print out the matrix using cout << *this and right after that I return *this, yet when I do a+b on matrix a and matrix b it doesn't give me the same output; this is very confusing. 2. When I make matrix c down in my main I can't use my default constructor for some reason, because when I go to assign to it using my overloaded assignment operator it gives me an error saying "expression must be a modifiable value", even though the constructor that sets the row and column numbers, called with (0,0), should behave the same as my default constructor. 3. My operator= function uses the copy constructor to make a new matrix from the values on the right-hand side of the equals sign, and when I print out c it doesn't give me anything. Any help would be great. This is my homework for an algorithms class; I still need to write the algorithm for multiplying the matrices, but I need to solve these issues first and I'm having a lot of trouble. Please help.

        //Programmer: Eric Oudin
        //Date: 10/21/2013
        //Description: Working with matricies
        #include <iostream>
        using namespace std;

        class matrixType {
        public:
            friend ostream& operator<<(ostream&, const matrixType&);
            const matrixType& operator*(const matrixType&);
            matrixType& operator+(const matrixType&);
            matrixType& operator-(const matrixType&);
            const matrixType& operator=(const matrixType&);
            void fillMatrix();
            matrixType();
            matrixType(int, int);
            matrixType(const matrixType&);
            ~matrixType();
        private:
            int **matrix;
            int rowSize;
            int columnSize;
        };

        ostream& operator<<(ostream& osObject, const matrixType& matrix) {
            osObject << endl;
            for (int i = 0; i < matrix.rowSize; i++) {
                for (int j = 0; j < matrix.columnSize; j++) {
                    osObject << matrix.matrix[i][j] << ", ";
                }
                osObject << endl;
            }
            return osObject;
        }

        const matrixType& matrixType::operator=(const matrixType& matrixRight) {
            matrixType temp(matrixRight);
            cout << temp;
            return temp;
        }

        const matrixType& matrixType::operator*(const matrixType& matrixRight) {
            matrixType temp(rowSize*matrixRight.columnSize, columnSize*matrixRight.rowSize);
            if (rowSize == matrixRight.columnSize) {
                for (int i = 0; i < rowSize; i++) {
                    for (int j = 0; j < columnSize; j++) {
                        temp.matrix[i][j] = matrix[i][j] * matrixRight.matrix[i][j];
                    }
                }
            } else {
                cout << "Cannot multiply matricies that have different size rows from the others columns." << endl;
            }
            return temp;
        }

        matrixType& matrixType::operator+(const matrixType& matrixRight) {
            if (rowSize == matrixRight.rowSize && columnSize == matrixRight.columnSize) {
                for (int i = 0; i < rowSize; i++) {
                    for (int j = 0; j < columnSize; j++) {
                        matrix[i][j] += matrixRight.matrix[i][j];
                    }
                }
            } else {
                cout << "Cannot add matricies that are different sizes." << endl;
            }
            cout << *this;
            return *this;
        }

        matrixType& matrixType::operator-(const matrixType& matrixRight) {
            matrixType temp(rowSize, columnSize);
            if (rowSize == matrixRight.rowSize && columnSize == matrixRight.columnSize) {
                for (int i = 0; i < rowSize; i++) {
                    for (int j = 0; j < columnSize; j++) {
                        matrix[i][j] -= matrixRight.matrix[i][j];
                    }
                }
            } else {
                cout << "Cannot subtract matricies that are different sizes." << endl;
            }
            return *this;
        }

        void matrixType::fillMatrix() {
            for (int i = 0; i < rowSize; i++) {
                for (int j = 0; j < columnSize; j++) {
                    cout << "Enter the matix number at (" << i << "," << j << "):";
                    cin >> matrix[i][j];
                }
            }
        }

        matrixType::matrixType() {
            rowSize = 0;
            columnSize = 0;
            matrix = new int*[rowSize];
            for (int i = 0; i < rowSize; i++) {
                matrix[i] = new int[columnSize];
            }
        }

        matrixType::matrixType(int setRows, int setColumns) {
            rowSize = setRows;
            columnSize = setColumns;
            matrix = new int*[rowSize];
            for (int i = 0; i < rowSize; i++) {
                matrix[i] = new int[columnSize];
            }
        }

        matrixType::matrixType(const matrixType& otherMatrix) {
            rowSize = otherMatrix.rowSize;
            columnSize = otherMatrix.columnSize;
            matrix = new int*[rowSize];
            for (int i = 0; i < rowSize; i++) {
                for (int j = 0; j < columnSize; j++) {
                    matrix[i] = new int[columnSize];
                    matrix[i][j] = otherMatrix.matrix[i][j];
                }
            }
        }

        matrixType::~matrixType() {
            delete [] matrix;
        }

        int main() {
            matrixType a(2,2);
            matrixType b(2,2);
            matrixType c(0,0);
            cout << "fill matrix a:" << endl;
            a.fillMatrix();
            cout << "fill matrix b:" << endl;
            b.fillMatrix();
            cout << a;
            cout << b;
            c = a+b;
            cout << "matrix a + matrix b =" << c;
            system("PAUSE");
            return 0;
        }
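    A likely explanation for anyone landing here with the same symptoms. The error in question 2 is consistent with the "most vexing parse": if c was declared as matrixType c(); the compiler treats that as a function declaration, so c = a+b fails with "expression must be a modifiable value"; declaring it as matrixType c; invokes the default constructor. For question 3, this operator= builds a local temp, prints it, and returns a reference to it without ever modifying *this, so c is never assigned and the returned reference dangles. A minimal sketch of a conventional copy-assignment operator for this class (keeping the declared prototype and assuming the same int** representation) might look like:

        const matrixType& matrixType::operator=(const matrixType& matrixRight) {
            if (this != &matrixRight) {            // guard against self-assignment
                for (int i = 0; i < rowSize; i++)  // release the old row arrays
                    delete [] matrix[i];
                delete [] matrix;                  // then the outer array

                rowSize = matrixRight.rowSize;     // copy the dimensions
                columnSize = matrixRight.columnSize;
                matrix = new int*[rowSize];        // deep-copy the cells
                for (int i = 0; i < rowSize; i++) {
                    matrix[i] = new int[columnSize];
                    for (int j = 0; j < columnSize; j++)
                        matrix[i][j] = matrixRight.matrix[i][j];
                }
            }
            return *this;                          // return the assigned-to object, not a temporary
        }

    The destructor would need the matching change (delete each row before deleting the outer array), and note that operator+ as written modifies the left operand rather than returning a fresh sum, which would explain why a prints differently after a+b.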

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it’s the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn’t matter how many times you ask, you will get the data however they decide to provide it. So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example: Invalid date values Inappropriate characters Wrong data Values that exceed a pre-set threshold While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape. Tip 1:     Build reusable data objects with embedded cleansing rules One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type for example.   The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied. Tip 2:     Unlock the power of regular expressions in Semantic Types Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint.   A regular expression is used to identify patterns within data.   The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.   How can I account for this using regular expressions? A very simple regular expression that would look for any 4 sets of 3 digits separated by a period would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ Alternatively, the following would be the exact check for truly valid IP addresses as we had defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$ .  In expressor, we would enter this regular expression as a constraint like this: Here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow. Tip 3:     Email pattern expressions that might come in handy In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.  
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses: This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/) This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26 ) Tip 4:     Reject “dirty data” for analysis or further processing Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator.  Then attach a Write File operator to the reject port of the Read File operator as such: Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference would be that the schema needs to be derived from the upstream operator as shown below: Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful towards identifying why the record was rejected.  This makes diagnosing errors much easier! Tip 5:    Use a Filter or Transform after the initial cleansing to finish the job Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user defined algorithm or transformation.  It also supports the use of conditional logic and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table. I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
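    A quick way to see why the article distinguishes the two IP patterns: the short expression accepts any four groups of one to three digits, so it passes 888.777.0.123 even though that is not a valid address. A small C++ sketch (std::regex is used here purely for illustration; expressor evaluates these constraints itself) makes the difference visible:

        #include <iostream>
        #include <regex>

        int main() {
            // The "quick" pattern from the article: four groups of 1-3 digits.
            std::regex quick(R"(^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$)");
            // The strict pattern limits each octet to 0-255.
            std::regex strict(R"(^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.)"
                              R"((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.)"
                              R"((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.)"
                              R"((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$)");

            for (const char* ip : {"192.168.23.123", "888.777.0.123"}) {
                std::cout << ip
                          << " quick="  << std::regex_match(ip, quick)
                          << " strict=" << std::regex_match(ip, strict) << '\n';
            }
            // Prints: 192.168.23.123 quick=1 strict=1
            //         888.777.0.123 quick=1 strict=0
        }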

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
    You look for the numbers on the data flow, and watch the operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.

    Read the article

  • How do I run the sphere-slicer.pl perl command to make a photo into a sphere?

    - by Mahdi Zenali
    I was looking for a program to slice pictures so they can be pasted onto a globe (sphere), and I found ip-slicer on this website: http://www.bruno.postle.net/2001/ip-slicer/ The problem I have is that I don't know where I should enter the command. For example, after running the program and entering the line "sphere-slicer.pl 16 1000 input.jpg" I get this error:

        Number found where operator expected at - line 72, near "pl 16"
        (Do you need to predeclare pl?)
        Number found where operator expected at - line 72, near "16 1000"
        (Missing operator before 1000?)
        Bareword found where operator expected at - line 72, near "1000 input"
        (Missing operator before input?)

    This program is written in Perl.

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages?

    String comparison

    Since the question seems a little bit too simple, I will explain myself a little more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I have used - where 0 evaluates to true and all the other [integer] values to false? That remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of three-way string comparison; I will take C's strcmp as an example: any programmer trying C as his first language may be tempted to write the following code:

        if (strcmp(str1, str2)) { // Do something... }

    Since strcmp returns 0, which evaluates to false, when the strings are equal, what the beginning programmer tried to do fails miserably, and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its simplest expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time.

    Moreover, let's introduce a new type, sign, that just takes the values -1, 0 and 1. That can be pretty handy. Imagine there were a spaceship operator in C++ and we wanted it for std::string (well, there is already the compare function, but a spaceship operator is more fun). The declaration would currently be the following one:

        sign operator<=>(const std::string& lhs, const std::string& rhs);

    Had 0 been evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way:

        sign operator==(const std::string& lhs, const std::string& rhs);

    This operator== would have handled three-way comparison at once, and could still be used to perform the following check, while still being able to check which string is lexicographically superior to the other when needed:

        if (str1 == str2) { // Do something... }

    Old error handling

    We now have exceptions, so this part only applies to the old languages where no such thing exists (C for example). If we look at C's standard library (and the POSIX one too), we can see for sure that many functions return 0 when successful and some other integer otherwise. I have sadly seen some people do this kind of thing:

        #define TRUE 0
        // ...
        if (some_function() == TRUE) {
            // Here, TRUE would mean success...
            // Do something
        }

    If we think about how we think in programming, we often have the following reasoning pattern:

        Do something
        Did it work?
        Yes -> That's ok, one case to handle
        No  -> Why? Many cases to handle

    If we think about it again, it would have made sense to map the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to distinguish the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense.

    Conclusion

    My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of?

    Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)
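    As a concrete companion to the strcmp example above, a minimal C++ snippet showing the trap and the check the beginner actually meant:

        #include <cstring>
        #include <iostream>

        int main() {
            const char* str1 = "apple";
            const char* str2 = "apple";

            // strcmp returns 0 for equal strings, so this branch is NOT taken:
            if (strcmp(str1, str2))
                std::cout << "different\n";   // reached only when the strings differ

            // The check the beginner meant to write:
            if (strcmp(str1, str2) == 0)
                std::cout << "equal\n";
        }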

    Read the article

  • can I access a struct inside of a struct without using the dot operator?

    - by yan bellavance
    I have 2 structures that have 90% of their fields in common. I want to group those fields in a structure, but I do not want to use the dot operator to access them. The reason is that I have already written code against the first structure and have just created the second one. Before:

        struct {
            int a;
            int b;
            int c;
            object1 name;
        } str1;

        struct {
            int a;
            int b;
            int c;
            object2 name;
        } str2;

    Now I would create a third struct:

        struct {
            int a;
            int b;
            int c;
        } str3;

    and would change str1 and str2 to this:

        struct {
            str3 str;
            object1 name;
        } str1;

        struct {
            str3 str;
            object2 name;
        } str2;

    Finally, I would like to be able to access a, b and c by doing:

        str1 myStruct;
        myStruct.a;
        myStruct.b;
        myStruct.c;

    and not:

        myStruct.str.a;
        myStruct.str.b;
        myStruct.str.c;

    Is there a way to do such a thing? The reason for doing this is that I want to keep the integrity of the data if changes to the struct were to occur, not repeat myself, not have to change my existing code, and not have fields nested too deeply.

    RESOLVED: Thanks for all your answers. The final way of doing it, so that I could also use auto-completion, was the following:

        struct str11 {
            int a;
            int b;
            int c;
        };

        typedef struct str22 : public str11 {
            QString name;
        } hi;
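    For readers skimming past the resolution: inheritance works here because a derived struct contains the base's fields as its own direct members, so no intermediate field name is needed. A self-contained sketch of the same idea (swapping Qt's QString for std::string so it compiles anywhere):

        #include <string>

        struct Common {                               // the shared 90% of the fields
            int a, b, c;
        };

        struct Str1 : Common { std::string name; };   // was 'object1 name'
        struct Str2 : Common { double weight = 0; };  // was 'object2 name'

        int main() {
            Str1 s;
            s.a = 1;            // base members are reached without a nested dot chain
            s.b = 2;
            s.c = 3;
            s.name = "first";
            return s.a + s.b + s.c;
        }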

    Read the article

  • Php plugin to replace '->' with '.' as the member access operator? Or even better: alternative syntax

    - by Gigi
    Present-day usable solution: Note that if you use an IDE or an advanced editor, you can make a code template, or record a macro that inserts '->' when you press Ctrl and '.' or something. Netbeans has macros, and I have recorded a macro for this, and I like it a lot :) (just click the red circle toolbar button (start record macro), then type -> into the editor (that's all the macro will do, insert the arrow into the editor), then click the gray square (stop record macro) and assign the 'Ctrl dot' shortcut to it, or whatever shortcut you like).

    The PHP plugin: The PHP plugin would also have to have a different string concatenation operator than the dot. Maybe a double dot? Yeah... why not. All it has to do is set an activation tag so that it doesn't replace/interpret '.' as '->' for old scripts and scripts that don't intend to use this. Something like this:

        <?php+ $obj.i = 5 ?>

    (notice the modified '<?php' tag to '<?php+'). This way it wouldn't break old code. (And you can just add the '<?php+' code template to your editor and then type 'php tab' (for Netbeans) and it would insert '<?php+'.)

    With the alternative syntax method you could even have old and new syntax cohabiting on the same page like this (I am illustrating this to show the great compatibility of this method, not because you would want to do this):

        <?php+ $obj.i = 5; ?>
        <?php $obj->str = 'a' . 'b'; ?>

    You could change the tag to something more explanatory, in case somebody who doesn't know about the plugin reads the script and thinks it's a syntax error:

        <?php-dot.com $obj.i = 5; ?>

    This is easy because most editors have code templates, so it's easy to assign a shortcut to it. And whoever doesn't want the dot replacement doesn't have to use it. These are NOT ultimate solutions, they are ONLY examples to show that solutions exist, and that arguments against replacing '->' with '.' are only excuses. (Just admit you like the arrow, it's ok :) With this potential method, nobody who doesn't want to use it would have to use it, and it wouldn't break old code. And if other problems (ahem... excuses) arise, they could be fixed too. So who can, and who will, do such a thing?

    Read the article

  • Do you use logical negation operator (!) in "if" statement or check on "== false"

    - by Taras Terebkov
    Hello everyone, I just want to conduct a little survey about the code style developers prefer. For me, there are two ways to write an "if" in languages such as Java, C#, C++, etc.

    (1) Logical negation operator:

        public void foo() {
            if (!SessionManager.getInstance().hasActiveSession()) {
                . . . . .
            }
        }

    (2) Check against "false":

        public void foo() {
            if (SessionManager.getInstance().hasActiveSession() == false) {
                . . . . .
            }
        }

    I have always believed that the first way is much worse than the second one, because usually you don't "read" the code, but "recognize" it in one brief look, and the exclamation symbol slips from your mind, just sitting somewhere on the bottom of your unconscious. Only while reading the "if" block below do you understand that the logic is the opposite - no session in the "if". On the other hand, with the second way of writing, the eye immediately catches the words "SessionManager", "hasActiveSession" and "false".

    Also, for me, the situation with "true" is different. In code like

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession == true) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    I find "true" superfluous. Why repeat the sentence two times? The following is shorter and quicker to catch:

        class SessionManager {
            private bool hasSession;

            public void foo() {
                if (hasSession) {
                    . . . . .
                } else {
                    . . . . .
                }
            }
        }

    What do YOU think, guys?

    Read the article

  • Django filters - Using an AllValuesFilter (with a LinkWidget) on a ManyToManyField

    - by magnetix
    This is my first Stack Overflow question, so please let me know if I do anything wrong. I wish to create an AllValues filter on a ManyToMany field using the wonderful django-filters application. Basically, I want to create a filter that looks like it does in the Admin, so I also want to use the LinkWidget. Unfortunately, I get an error (Invalid field name: 'operator') if I try this the standard way:

        # Models
        class Organisation(models.Model):
            name = models.CharField()
            ...

        class Sign(models.Model):
            name = models.CharField()
            operator = models.ManyToManyField('Organisation', blank=True)
            ...

        # Filter
        class SignFilter(LinkOrderFilterSet):
            operator = django_filters.AllValuesFilter(widget=django_filters.widgets.LinkWidget)

            class Meta:
                model = Sign
                fields = ['operator']

    I got around this by creating my own filter with the many-to-many relationship hard-coded:

        # Filter
        class MyFilter(django_filters.ChoiceFilter):
            @property
            def field(self):
                cd = {}
                for row in self.model.objects.all():
                    orgs = row.operator.select_related().values()
                    for org in orgs:
                        cd[org['id']] = org['name']
                choices = zip(cd.keys(), cd.values())
                list.sort(choices, key=lambda x: (x[1], x[0]))
                self.extra['choices'] = choices
                return super(AllValuesFilter, self).field

        class SignFilter(LinkOrderFilterSet):
            operator = MyFilter(widget=django_filters.widgets.LinkWidget)

    I am new to Python and Django. Can someone think of a more generic/elegant way of doing this?

    Read the article

  • Is this overly clever or unsafe?

    - by Liberalkid
    I was working on some code recently and decided to work on my operator overloading in C++, because I've never really implemented it before. So I overloaded the comparison operators for my matrix class using a compare function that returns 0 if LHS is less than RHS, 1 if LHS is greater than RHS, and 2 if they are equal. Then I exploited the properties of logical not on integers in C++ to get each of my comparisons in one line:

        inline bool Matrix::operator<(Matrix &RHS)  { return !(compare(*this, RHS)); }
        inline bool Matrix::operator>(Matrix &RHS)  { return !(compare(*this, RHS) - 1); }
        inline bool Matrix::operator>=(Matrix &RHS) { return compare(*this, RHS); }
        inline bool Matrix::operator<=(Matrix &RHS) { return compare(*this, RHS) - 1; }
        inline bool Matrix::operator!=(Matrix &RHS) { return compare(*this, RHS) - 2; }
        inline bool Matrix::operator==(Matrix &RHS) { return !(compare(*this, RHS) - 2); }

    Obviously I should be passing RHS as const; I'm just probably not going to use this matrix class again, and I didn't feel like writing another function that wasn't a reference to get the array index values solely for the comparison operation.
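    To see the idiom in isolation: since compare returns 0 (less), 1 (greater) or 2 (equal), subtracting a constant and applying ! turns each outcome into a bool, because !(x) is true exactly when x is zero. A standalone sketch with a hypothetical compare on plain ints:

        #include <iostream>

        // Hypothetical three-way compare using the question's convention:
        // 0 = less than, 1 = greater than, 2 = equal.
        int compare(int lhs, int rhs) {
            if (lhs < rhs) return 0;
            if (lhs > rhs) return 1;
            return 2;
        }

        int main() {
            int a = 3, b = 7;
            std::cout << std::boolalpha
                      << !(compare(a, b))     << '\n'   // a < b  -> true
                      << !(compare(a, b) - 1) << '\n'   // a > b  -> false
                      << !(compare(a, b) - 2) << '\n';  // a == b -> false
        }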

    Read the article

  • Other ternary operators besides ternary conditional (?:)

    - by Malcolm
    The "ternary operator" expression is now almost equivalent to the ternary conditional operator: condition ? trueExpression : falseExpression; However, "ternary operator" only means that it takes three arguments. I'm just curious, are there any languages with any other built-in ternary operators besides conditional operator and which ones?

    Read the article

  • Problem separating C++ code in header, inline functions and code.

    - by YuppieNetworking
    Hello all, I have the simplest code that I want to separate into three files:

    Header file: class and struct declarations. No implementations at all.
    Inline functions file: implementations of the inline methods from the header.
    Code file: normal C++ code for the more complicated implementations.

    When I was about to implement an operator[] method, I couldn't manage to compile it. Here is a minimal example that shows the same problem:

    Header (myclass.h):

        #ifndef _MYCLASS_H_
        #define _MYCLASS_H_

        class MyClass {
        public:
            MyClass(const int n);
            virtual ~MyClass();
            double& operator[](const int i);
            double operator[](const int i) const;
            void someBigMethod();
        private:
            double* arr;
        };

        #endif /* _MYCLASS_H_ */

    Inline functions (myclass-inl.h):

        #include "myclass.h"

        inline double& MyClass::operator[](const int i) {
            return arr[i];
        }

        inline double MyClass::operator[](const int i) const {
            return arr[i];
        }

    Code (myclass.cpp):

        #include "myclass.h"
        #include "myclass-inl.h"
        #include <iostream>

        inline MyClass::MyClass(const int n) {
            arr = new double[n];
        }

        inline MyClass::~MyClass() {
            delete[] arr;
        }

        void MyClass::someBigMethod() {
            std::cout << "Hello big method that is not inlined" << std::endl;
        }

    And finally, a main to test it all:

        #include "myclass.h"
        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[]) {
            MyClass m(123);
            double x = m[1];
            m[1] = 1234;
            cout << "m[1]=" << m[1] << endl;
            x = x + 1;
            return 0;
        }

        void nothing() {
            cout << "hello world" << endl;
        }

    When I compile it, it says:

        main.cpp:(.text+0x1b): undefined reference to 'MyClass::MyClass(int)'
        main.cpp:(.text+0x2f): undefined reference to 'MyClass::operator[](int)'
        main.cpp:(.text+0x49): undefined reference to 'MyClass::operator[](int)'
        main.cpp:(.text+0x65): undefined reference to 'MyClass::operator[](int)'

    However, when I move the main method to the MyClass.cpp file, it works. Could you guys help me spot the problem? Thank you.
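    The symptoms line up with the rule that an inline function's definition must be visible in every translation unit that uses it: main.cpp includes only myclass.h, so it never sees the operator[] definitions from myclass-inl.h, and the constructor and destructor are marked inline inside myclass.cpp, which keeps their definitions private to that translation unit. A sketch of the layout that typically links cleanly (same files, two small changes):

        // myclass.h - at the bottom, just before #endif, pull in the inline file
        // so every includer of the header also sees the inline definitions.
        // The include guard prevents any circularity with myclass-inl.h.
        #include "myclass-inl.h"

        // myclass.cpp - drop the 'inline' keyword from definitions that stay here,
        // so they get external linkage and main.cpp can find them.
        MyClass::MyClass(const int n) {
            arr = new double[n];
        }

        MyClass::~MyClass() {
            delete[] arr;
        }

    The alternative is to have main.cpp include myclass-inl.h directly, but routing it through the header means callers only ever need one include.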

    Read the article

  • ASP.NET: Validate text box contains integer greater than or equal to zero?

    - by User
    If I want to validate that a text box contains an integer greater than or equal to zero, do I need to use TWO asp:CompareValidator controls: one with a DataTypeCheck operator and one with a GreaterThanEqual operator? Or is the DataTypeCheck operator redundant? Can I just use a single validator with the GreaterThanEqual operator (and the type set to Integer)?

    Read the article

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
    I'm trying to implement some functions to manipulate a linked list. The implementation is a template (typename T) and the class is 'List', which includes a 'head' pointer and also a struct:

        struct Node {        // the node in a linked list
            T* data;         // pointer to actual data, operations in T
            Node* next;      // pointer to a Node
        };

    Since it is a template, and 'T' can be any data type, how do I go about checking the data of a list to see if it matches the data passed into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer:

        bool retrieve(T target, T*& ptr);  // This is the prototype we need to use for the project

    "bool retrieve: similar to remove, but the item is not removed from the list. If there are duplicates in the list, the first one encountered is retrieved. The second parameter is unreliable if the return value is false." E.g.,

        Employee target("duck", "donald");
        success = company1.retrieve(target, oneEmployee);
        if (success) {
            cout << "Found in list: " << *oneEmployee << endl;
        }

    And the function is called like this:

        company4.retrieve(emp3, oneEmployee)

    so that when you cout *oneEmployee, you get the data of that pointer (in this case the data is of type Employee). (Also, this is assuming all data types have the appropriate overloaded operators.)

    I hope this makes sense so far, but my issue is in comparing the data in the parameter with the data while going through the list. (The data types that we use all include overloads for the equality operators, so oneData == twoData is valid.) This is what I have so far:

        template <typename T>
        bool List<T>::retrieve(T target, T*& ptr) {
            List<T>::Node* dummyPtr = head;  // point dummy pointer to what the list's head points to
            for (;;) {
                if (*dummyPtr->data == target) {
                    // EDIT: it now compiles, but it breaks here and I get an Access Violation error.
                    ptr = dummyPtr->data;        // set the parameter pointer to the dummy pointer
                    return true;                 // return true
                } else {
                    dummyPtr = dummyPtr->next;   // else, move to the next data node
                }
            }
            return false;
        }

    Here is the implementation of the Employee class:

        //-------------------------- constructor -----------------------------------
        Employee::Employee(string last, string first, int id, int sal) {
            idNumber = (id >= 0 && id <= MAXID ? id : -1);
            salary = (sal >= 0 ? sal : -1);
            lastName = last;
            firstName = first;
        }

        //-------------------------- destructor ------------------------------------
        // Needed so that memory for strings is properly deallocated
        Employee::~Employee() {
        }

        //---------------------- copy constructor -----------------------------------
        Employee::Employee(const Employee& E) {
            lastName = E.lastName;
            firstName = E.firstName;
            idNumber = E.idNumber;
            salary = E.salary;
        }

        //-------------------------- operator= ---------------------------------------
        Employee& Employee::operator=(const Employee& E) {
            if (&E != this) {
                idNumber = E.idNumber;
                salary = E.salary;
                lastName = E.lastName;
                firstName = E.firstName;
            }
            return *this;
        }

        //----------------------------- setData ------------------------------------
        // set data from file
        bool Employee::setData(ifstream& inFile) {
            inFile >> lastName >> firstName >> idNumber >> salary;
            return idNumber >= 0 && idNumber <= MAXID && salary >= 0;
        }

        //------------------------------- < ----------------------------------------
        // < defined by value of name
        bool Employee::operator<(const Employee& E) const {
            return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName);
        }

        //------------------------------- <= ----------------------------------------
        bool Employee::operator<=(const Employee& E) const {
            return *this < E || *this == E;
        }

        //------------------------------- > ----------------------------------------
        // > defined by value of name
        bool Employee::operator>(const Employee& E) const {
            return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName);
        }

        //------------------------------- >= ----------------------------------------
        bool Employee::operator>=(const Employee& E) const {
            return *this > E || *this == E;
        }

        //----------------- operator == (equality) ----------------
        // if the names of the calling and passed objects are equal,
        // return true, otherwise false
        bool Employee::operator==(const Employee& E) const {
            return lastName == E.lastName && firstName == E.firstName;
        }

        //----------------- operator != (inequality) ----------------
        // return opposite value of operator==
        bool Employee::operator!=(const Employee& E) const {
            return !(*this == E);
        }

        //------------------------------- << ---------------------------------------
        // display Employee object
        ostream& operator<<(ostream& output, const Employee& E) {
            output << setw(4) << E.idNumber << setw(7) << E.salary
                   << " " << E.lastName << " " << E.firstName << endl;
            return output;
        }

    I will include a check for a NULL pointer, but I just want to get this working and will test it on a list that includes the data I am checking. Thanks to whoever can help; as usual, this is for a course, so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
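    For what it's worth, the Access Violation is consistent with the loop walking off the end of the list: for (;;) never tests dummyPtr against NULL, so when the target is absent (or the list is empty) the code eventually dereferences a null or garbage pointer. A sketch of the same traversal with the guards in place, matching the required prototype:

        template <typename T>
        bool List<T>::retrieve(T target, T*& ptr) {
            Node* dummyPtr = head;                 // start at the head of the list
            while (dummyPtr != NULL) {             // stop at the end instead of running past it
                if (dummyPtr->data != NULL && *dummyPtr->data == target) {
                    ptr = dummyPtr->data;          // hand back a pointer to the stored data
                    return true;
                }
                dummyPtr = dummyPtr->next;         // otherwise move to the next node
            }
            return false;                          // not found; ptr is unreliable, as specified
        }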

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance.

    Measuring Seeks and Scans

    I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers.

    In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats.

    Index Usage Stats

    The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us:

    user_seeks – the number of times an Index Seek operator appears in an executed plan
    user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan
    user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan

    An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post.

    Index Operational Stats

    The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are:

    range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure
    singleton_lookup_count – the number of singleton lookups in a heap or index structure

    This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count.

    The Test Rig

    To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • How to achieve reliable Gigabit Ethernet Link with my Acer Aspire Revo R3610?

    - by The Operator
    I want to stream HD movies over my wired Gigabit LAN from my PC to my Acer Aspire Revo R3610. It's connected with a 3ft Cat5e patch cable to my Netgear GS605v2 Switch. The PC acting as File Server is connected at 1Gbps to the Switch. Network driver options are set to defaults, including automatic speed/duplex negotiation on both machines. The Revo will not connect to my Network Switch at 1Gbps - the OS reports that it reverts to 100Mbps either shortly after connection or immediately upon connection. Through a process of elimination (trying different drivers, patch cables, ports on the switch, and other 1Gbps-capable devices connected to the Network switch which successfully achieve 1Gbps links and performance) I have drawn the conclusion there is either a Hardware or Software (Driver) issue with the Revo itself. I have performed tests using Windows 7 and Ubuntu 9.10. Can anyone offer insight on Gigabit Ethernet with the Revo?

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count Spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it's a larger spool for the top.

    A Table or Index Spool is either Eager or Lazy in nature. Eager and Lazy are logical operators, which talk more about the behaviour than the physical operation. If I'm sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. "Lazy" might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it's needed (Lazy). Window Spools are both physical and logical. They're eager on a per-window basis, but lazy between windows.

    And when is a spool needed? The way I see it, spools are needed for two reasons:

    1 – When data is going to be needed AGAIN.
    2 – When data needs to be kept away from the original source.

    If you're someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I'm updating my contact list, and some of my changes move data to later in the book. If I'm not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don't, by using a copy of the data. This problem is known as the Halloween Effect (not because it's spooky, but because it was discovered in late October one year).

    As I'm sure you can imagine, the kind of spool you'd need to protect against the Halloween Effect would be eager, because if you're only handling one row at a time, then you're not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I'm forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn't stop the index being maintained, but it does mean that the index is protected from the changes that are being done.
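    The query in question isn't shown in this copy of the post; a contrived sketch of the same idea might be the following – hedged, since the index hint assumes AdventureWorks' AK_Product_Name, the unique index on Production.Product.Name:

        -- Force an index keyed on Name, then change Name through it.
        -- The optimizer typically protects against the Halloween Effect by
        -- adding an Eager Spool: every qualifying row is read before any
        -- row is updated.
        UPDATE p
        SET    Name = UPPER(p.Name)
        FROM   Production.Product AS p WITH (INDEX (AK_Product_Name));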
    There are plenty of times, though, when you need data repeatedly. Consider the counting query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not) is to ask that a Table Spool be populated. That's the Table Spool operator on the top row of the plan. That spool can produce the same set of rows repeatedly, which is the behaviour we see in the bottom half of the plan: there, a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

    Table v Index

    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we're creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. That all depends on how the data is going to be used in other parts of the plan. If you've ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it's worthwhile. It's worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool depends more on whether a Seek predicate is used than on the underlying structure.

    Recursive CTE

    I've already shown you an example of spooling when the OVER clause is used. You might see spools being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there's no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE's logic. Annoyingly, this doesn't happen. Consider a query which really looks like it's using the same data twice: I'm creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn't use a spool for the set described by the CTE, but it doesn't. On the other hand, if we don't pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it's nice to give the Query Optimizer hints about how to execute queries, it usually doesn't do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we're asking for in a recursive CTE. We're telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It's very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn't quite fit, but that's what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, unlike AdventureWorks2012.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there's only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we've reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)
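    Again the original plan image is missing; a recursive CTE of the kind being described might look like this sketch, assuming the older AdventureWorks schema the post mentions (HumanResources.Employee with EmployeeID and ManagerID columns):

        -- The anchor query seeds the spool with the top-level row; each
        -- recursive step reads rows back off the spool and feeds new ones
        -- onto it, until no more rows are produced.
        WITH OrgChart AS
        (
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM   HumanResources.Employee
            WHERE  ManagerID IS NULL              -- the top of the tree
            UNION ALL
            SELECT e.EmployeeID, e.ManagerID, o.Lvl + 1
            FROM   HumanResources.Employee AS e
            JOIN   OrgChart AS o
                   ON e.ManagerID = o.EmployeeID  -- pulls from the spool
        )
        SELECT EmployeeID, Lvl
        FROM   OrgChart
        OPTION (MAXRECURSION 0);                  -- removes the Assert operator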
    Spools are useful. Don't lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you're essentially doing the same – don't get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query.

    I hope you're enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? At some point I'll write a summary post – once I have, you should find a comment below pointing at it.

    @rob_farley

    Read the article

  • How does a template class inherit another template class?

    - by hkBattousai
    I have a "SquareMatrix" template class which inherits "Matrix" template class, like below: SquareMatrix.h: #ifndef SQUAREMATRIX_H #define SQUAREMATRIX_H #include "Matrix.h" template <class T> class SquareMatrix : public Matrix<T> { public: T GetDeterminant(); }; template <class T> // line 49 T SquareMatrix<T>::GetDeterminant() { T t = 0; // Error: Identifier "T" is undefined // line 52 return t; // Error: Expected a declaration // line 53 } // Error: Expected a declaration // line 54 #endif I commented out all other lines, the files contents are exactly as above. I receive these error messages: LINE 49: IntelliSense: expected a declaration LINE 52: IntelliSense: expected a declaration LINE 53: IntelliSense: expected a declaration LINE 54: error C2039: 'GetDeterminant' : is not a member of 'SquareMatrix' LINE 54: IntelliSense: expected a declaration So, what is the correct way of inheriting a template class? And what is wrong with this code? The "Matrix" class: template <class T> class Matrix { public: Matrix(uint64_t unNumRows = 0, uint64_t unNumCols = 0); void GetDimensions(uint64_t & unNumRows, uint64_t & unNumCols) const; std::pair<uint64_t, uint64_t> GetDimensions() const; void SetDimensions(uint64_t unNumRows, uint64_t unNumCols); void SetDimensions(std::pair<uint64_t, uint64_t> Dimensions); uint64_t GetRowSize(); uint64_t GetColSize(); void SetElement(T dbElement, uint64_t unRow, uint64_t unCol); T & GetElement(uint64_t unRow, uint64_t unCol); //Matrix operator=(const Matrix & rhs); // Compiler generate this automatically Matrix operator+(const Matrix & rhs) const; Matrix operator-(const Matrix & rhs) const; Matrix operator*(const Matrix & rhs) const; Matrix & operator+=(const Matrix & rhs); Matrix & operator-=(const Matrix & rhs); Matrix & operator*=(const Matrix & rhs); T& operator()(uint64_t unRow, uint64_t unCol); const T& operator()(uint64_t unRow, uint64_t unCol) const; static Matrix Transpose (const Matrix & matrix); static Matrix Multiply (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Add (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Subtract (const Matrix & LeftMatrix, const Matrix & RightMatrix); static Matrix Negate (const Matrix & matrix); // TO DO: static bool IsNull(const Matrix & matrix); static bool IsSquare(const Matrix & matrix); static bool IsFullRowRank(const Matrix & matrix); static bool IsFullColRank(const Matrix & matrix); // TO DO: static uint64_t GetRowRank(const Matrix & matrix); static uint64_t GetColRank(const Matrix & matrix); protected: std::vector<T> TheMatrix; uint64_t m_unRowSize; uint64_t m_unColSize; bool DoesElementExist(uint64_t unRow, uint64_t unCol); };

    Read the article

  • MySql select by column value. Separate operator for columns.

    - by andy
    Hi all, I have a MySQL table like this:

        +-----+---------+-----------+-----------------+-------+
        | id  | item_id | item_type | field_name      | data  |
        +-----+---------+-----------+-----------------+-------+
        | 258 | 54      | page      | field_interests | 1     |
        | 257 | 54      | page      | field_interests | 0     |
        | 256 | 54      | page      | field_author    | value |
        +-----+---------+-----------+-----------------+-------+

    And I need to build a query like this:

        SELECT * FROM table
        WHERE `field_name`='field_author'    AND `field_author.data`    LIKE '%jo%'
          AND `field_name`='field_interests' AND `field_interests.data`='0'
          AND `field_name`='field_interests' AND `field_interests.data`='1'

    This is a sample query; MySQL can't do queries like that. I mean that SELECT * FROM table WHERE name='jonn' AND name='marry' will return 0 rows. Can anybody help me? Thanks.
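    For what it's worth, the usual workaround for this kind of attribute-value filter is to match each condition on its own row and then group by item. A hedged sketch, assuming the table is called item_fields (the post doesn't name it):

        -- Each SUM counts the rows for an item that satisfy one condition;
        -- in MySQL a boolean expression evaluates to 0 or 1, so > 0 means
        -- "at least one row matched". All three conditions must hold.
        SELECT item_id
        FROM   item_fields          -- hypothetical name for the table above
        GROUP  BY item_id
        HAVING SUM(field_name = 'field_author'    AND data LIKE '%jo%') > 0
           AND SUM(field_name = 'field_interests' AND data = '0')       > 0
           AND SUM(field_name = 'field_interests' AND data = '1')       > 0;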

    Read the article

  • Is it possible to create a new T-SQL Operator using CLR Code in SQL Server?

    - by Eoin Campbell
    I have a very simple CLR function for doing Regex matching:

        public static SqlBoolean RegExMatch(SqlString input, SqlString pattern)
        {
            if (input.IsNull || pattern.IsNull)
                return SqlBoolean.False;
            return Regex.IsMatch(input.Value, pattern.Value, RegexOptions.IgnoreCase);
        }

    It allows me to write a SQL statement like:

        SELECT * FROM dbo.table1
        WHERE dbo.RegexMatch(column1, '[0-9][A-Z]') = 1
        -- match entries in col1 like 1A, 2B etc...

    I'm just thinking it would be nice to reformulate that query so it could be called like:

        SELECT * FROM dbo.table1
        WHERE column1 REGEXLIKE '[0-9][A-Z]'

    Is it possible to create new comparison operators using CLR code? (I'm guessing from my brief glance around the web that the answer is NO, but no harm asking.)

    Read the article

  • I (think) I want to use a Bitwise Operator to check the useraccountcontrol property!

    - by Jim
    Hello, here's some code:

        DirectorySearcher searcher = new DirectorySearcher();
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + lstUsers.SelectedItem.Text + "))";
        SearchResult result = searcher.FindOne();

    Within result.Properties["useraccountcontrol"] will be an item which will give me a value depending on the state of the account. For instance, a value of 66050 means I'm dealing with: a normal account, where the password does not expire, which has been disabled. Explanation here. What's the most concise way of finding out if my value "contains" the AccountDisable flag (which is 2)? Thanks in advance!

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >