Search Results

Search found 37748 results on 1510 pages for 'scalar function'.


  • can function return 0 as reference

    - by helloWorld
    I have this snippet of code: Account& Company::findAccount(int id){ for(list<Account>::const_iterator i = listOfAccounts.begin(); i != listOfAccounts.end(); ++i){ if(i->nID == id){ return *i; } } return 0; } Is this the right way to return 0 if I didn't find an appropriate account? I ask because I receive an error: no match for 'operator!' in '!((Company*)this)->Company::findAccount(id)' I use it this way: if(!(findAccount(id))){ throw "hey"; } Thanks in advance.
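    A C++ reference must refer to an actual object, so a function returning Account& has no natural way to signal "not found", and the reported error presumably appears because Account defines neither operator! nor a conversion to bool for the if(!(...)) test. A minimal sketch of one common fix (returning a pointer so that "not found" becomes nullptr), assuming, as the question implies, that Account has an nID member and Company stores a std::list<Account>:

        #include <list>
        #include <stdexcept>

        struct Account { int nID; };

        class Company {
            std::list<Account> listOfAccounts;
        public:
            // A pointer can be null, so "not found" has a natural representation;
            // a reference cannot express that.
            Account* findAccount(int id) {
                for (Account& acct : listOfAccounts) {
                    if (acct.nID == id) return &acct;
                }
                return nullptr;
            }

            void use(int id) {
                if (!findAccount(id)) {              // nullptr converts to false
                    throw std::runtime_error("hey");
                }
            }
        };

    Another option is to keep the reference return type and throw from findAccount itself when the id is missing. Note also that returning a non-const reference through a const_iterator, as in the original snippet, would not compile; the sketch iterates with a non-const reference instead.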

    Read the article

  • Operator & and * at function prototype in class

    - by Puyover
    I'm having a problem with a class like this: class Sprite { ... bool checkCollision(Sprite &spr); ... }; With that class I can do this: ball.checkCollision(bar1); But if I change the class to this: class Sprite { ... bool checkCollision(Sprite* spr); ... }; I have to do this: ball.checkCollision(&bar1); So, what's the difference? Is one way better than the other? Thank you.
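    The difference is only in how the argument is passed: a reference parameter takes the object directly and can never be null, while a pointer parameter requires the caller to take the address with & and may legitimately be null. A minimal sketch showing the two forms side by side (the collision logic here is a placeholder, not part of the question):

        #include <iostream>

        class Sprite {
        public:
            // Reference parameter: caller passes the object itself; cannot be null.
            bool checkCollision(const Sprite& spr) const {
                return this == &spr;                      // placeholder logic
            }

            // Pointer parameter: caller passes an address; should be checked for null.
            bool checkCollision(const Sprite* spr) const {
                return spr != nullptr && this == spr;     // placeholder logic
            }
        };

        int main() {
            Sprite ball, bar1;
            std::cout << ball.checkCollision(bar1)  << '\n';  // reference overload
            std::cout << ball.checkCollision(&bar1) << '\n';  // pointer overload
        }

    For a required argument like this one, a (usually const) reference is generally the more idiomatic choice; a pointer signals that "no sprite" is a meaningful value the function must handle.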

    Read the article

  • php in background exec() function

    - by albertopriore
    Hi! I made this script to test running PHP in the background: foreach($tests as $test) { exec("php test.php ".$test["id"]); } I wanted to run PHP in the background as suggested in "php process background", "How to add large number of event notification reminder via Google Calendar API using PHP?" and "php execute a background process". But the script does not run any faster than when everything was in one script, without the separate test.php. What am I doing wrong? Thanks in advance!

    Read the article

  • custom function is not getting called

    - by nectar
    Here is my code: $child1 = create_childid()."01"; $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`) VALUES ('$child1', '1', '$newid', '$myroot', '0');"; mysql_query($sqltree); echo $newid; update_level(); $child2 = create_childid()."02"; $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`) VALUES ('$child2', '1', '$newid', '$myroot', '0');"; mysql_query($sqltree); update_level(); $child3 = create_childid()."03"; $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`) VALUES ('$child3', '1', '$newid', '$myroot', '0');"; mysql_query($sqltree); update_level(); $child4 = create_childid()."04"; $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`) VALUES ('$child4', '1', '$newid', '$myroot', '0');"; mysql_query($sqltree); update_level(); $child5 = create_childid()."05"; $sqltree = "INSERT INTO tbltree (`userId`, `level`, `superId`, `rootId`, `childcount`) VALUES ('$child5', '1', '$newid', '$myroot', '0');"; mysql_query($sqltree); update_level(); Problem: update_level() appears to execute only once. Why?

    Read the article

  • How to assign variable dynamically to php list function

    - by ravisoni
    I want to generate a list() assignment based on how many items are in an array. I count the items, loop over them, and build a string $var containing "$a1,$a2,..." which I then pass to list($var). When I try to access $a1 I get the error "Undefined variable: a1". Is there any other way to do this? Here is my code: $arr = array('1','2','3'); $listsize = count($arr); $var=''; for($i=1;$i<=$listsize;$i++){ $var.='$a'.$i; if($i!=$listsize){ $var.=','; } } list($var) = $arr; echo $a1;

    Read the article

  • Excel function advanced filter

    - by Adam
    I have a list of salespeople and a list of their sales revenues in two separate columns. How do I use an advanced filter (or another sorting method) to find the maximum of the sales revenue column and then have the formula output the corresponding salesperson?

    Read the article

  • Can you explain how this ip_to_string function works?

    - by user198729
    #define IPTOSBUFFERS 12 char *iptos(u_long in) { static char output[IPTOSBUFFERS][3*4+3+1]; static short which; u_char *p; p = (u_char *)&in; which = (which + 1 == IPTOSBUFFERS ? 0 : which + 1); _snprintf_s(output[which], sizeof(output[which]), sizeof(output[which]),"%d.%d.%d.%d", p[0], p[1], p[2], p[3]); return output[which]; } Is there something I'm missing to understand it?
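    The function formats the four bytes of a 32-bit IPv4 address, as stored in memory, into dotted-decimal text. The only subtlety is the static two-dimensional array: it keeps twelve fixed-size buffers and rotates through them on each call, so up to twelve returned strings remain valid at the same time without the caller having to free anything (3*4+3+1 is room for four 3-digit octets, three dots, and the terminating NUL). A slightly simplified, portable rendering of the same idea follows; std::snprintf stands in for the Windows-specific _snprintf_s, and the printed result depends on the host byte order:

        #include <cstdio>
        #include <cstdint>

        const int IPTOSBUFFERS = 12;

        const char* iptos(std::uint32_t in) {
            // Twelve rotating buffers: each call returns a different one,
            // so several results can be held by callers at the same time.
            static char output[IPTOSBUFFERS][3 * 4 + 3 + 1];
            static int which = 0;
            which = (which + 1 == IPTOSBUFFERS) ? 0 : which + 1;

            // Treat the 32-bit value as four raw bytes, in memory order.
            const unsigned char* p = reinterpret_cast<const unsigned char*>(&in);
            std::snprintf(output[which], sizeof(output[which]),
                          "%d.%d.%d.%d", p[0], p[1], p[2], p[3]);
            return output[which];
        }

        int main() {
            // On a little-endian machine this prints 127.0.0.1
            std::printf("%s\n", iptos(0x0100007F));
        }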

    Read the article

  • How Can I Implement This Function?

    - by hoora
    I'm a beginner and I want to write Java code in Eclipse. The program takes two LinkedLists of integers (for example, a and b) and builds a LinkedList (for example, d) in which every element is the sum of the corresponding elements from a and b. However, I can't add the two elements from a and b because they are Objects. Example: a = [3,4,6,7,8], b = [4,3,7,5,3,2,1], d = [7,7,13,12,11,2,1]
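    The shape of the algorithm is: walk both lists with two iterators, add pairs while both lists still have elements, then append the unused tail of the longer list. A sketch of that shape (written in C++ here to keep the added examples in one language; in Java the Integer elements unbox automatically when added with +, so the same structure works with two Iterator<Integer> objects):

        #include <iostream>
        #include <list>

        // Element-wise sum of two lists of possibly different lengths:
        // add pairs while both lists have elements, then copy the tail
        // of the longer list unchanged.
        std::list<int> sumLists(const std::list<int>& a, const std::list<int>& b) {
            std::list<int> d;
            auto ia = a.begin();
            auto ib = b.begin();
            while (ia != a.end() && ib != b.end()) {
                d.push_back(*ia + *ib);
                ++ia;
                ++ib;
            }
            d.insert(d.end(), ia, a.end());  // leftover of a, if any
            d.insert(d.end(), ib, b.end());  // leftover of b, if any
            return d;
        }

        int main() {
            std::list<int> a{3, 4, 6, 7, 8};
            std::list<int> b{4, 3, 7, 5, 3, 2, 1};
            for (int v : sumLists(a, b)) std::cout << v << ' ';  // 7 7 13 12 11 2 1
            std::cout << '\n';
        }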

    Read the article

  • JQuery html function on a td

    - by Abs
    Hello all, I am trying to place some HTML within a td that has an id, but this doesn't seem to work. Can jQuery not put HTML in a td using its id as a selector? I do this: $('#total_match').html("<b>Test</b>"); Here is the td: <td id="total_match"></td> No errors occur and nothing appears in the td - I viewed the source to confirm. I appreciate any help.

    Read the article

  • JQuery Date() Function Not Working

    - by bdaniels
    Does anyone know why this doesn't work? var lastReceivedBeginDate = new Date($("input[name='lastReceivedFromYear']").val(), $("input[name='lastReceivedFromMonth']").val(), $("input[name='lastReceivedFromDay']").val(), $("input[name='lastReceivedFromHour']").val(), $("input[name='lastReceivedFromMinute']").val(), $("input[name='lastReceivedFromSecond']").val()); Thanks

    Read the article

  • private virtual function in derived class

    - by user1706047
    class base { public: virtual void doSomething() = 0; }; class derived : public base { private: virtual void doSomething(){ cout << "Derived fn" << endl; } }; Now if I do the following: base *b = new derived; b->doSomething(); // it calls the derived-class function even though that function is private. Question: 1. It is able to call the derived-class function even though it is private. How is that possible? Now if I change the inheritance access specifier from public to protected/private, I get a compilation error: "'type cast' : conversion from 'Derived *' to 'base *' exists, but is inaccessible". Notes: I am aware of the concepts of the inheritance access specifiers, so in the second case, because the inheritance is private/protected, the base is inaccessible. But the first question still confuses me. Any input will be highly appreciated.
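    The short answer is that access control is checked against the static type used in the call, while virtual dispatch happens at run time: b has static type base*, base::doSomething is public, so the call compiles, and the virtual mechanism then invokes derived's override regardless of its access specifier. A minimal sketch illustrating both sides (the virtual destructor is added here for clean deletion through the base pointer; it is not part of the question):

        #include <iostream>

        class base {
        public:
            virtual ~base() = default;
            virtual void doSomething() = 0;
        };

        class derived : public base {
        private:
            // Overriding with a more restrictive access specifier is legal;
            // access is checked on the static type of the expression, not the
            // dynamic type of the object.
            void doSomething() override { std::cout << "Derived fn\n"; }
        };

        int main() {
            base* b = new derived;
            b->doSomething();     // OK: base::doSomething is public; prints "Derived fn"
            // derived d;
            // d.doSomething();   // error: doSomething is private within derived
            delete b;
        }

    When the inheritance itself is made protected or private, the implicit derived*-to-base* conversion becomes inaccessible outside the class, which is why the second variant fails to compile.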

    Read the article

  • Friendly way to override `const`-overloaded member function?

    - by xtofl
    Given a base class class A { int i; public: int& f(){ return i;} const int& f() const { return i;} }; And a sub class class ConstA : private A { public: const int& f() const { return A::f(); } }; Is there a wrist-friendly way to access the ConstA::f method on a non-const variable? ConstA ca; int i = ca.f(); // compile error: int& A::f() is not accessible since A is privately inherited int j = static_cast<const ConstA&>(ca).f(); // this works, but it hurts a little... Or is it so ugly since hiding A::f generally is a bad idea, violating the Liskov Substitution Principle: any subclass of A must at least be capable of all A's functionality? void set( A& a, int i ) { a.f() = i; } class ConstA2 : public A { private: int& f(){ return A::f(); } }; ConstA2 ca2; set( ca2, 1 ); (Note: this question popped up while thinking about this question)
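    If the goal is simply a less painful way to force the const overload on a non-const object, C++17 provides std::as_const in <utility>, which returns a const reference to its argument and reads better than the hand-written static_cast. A minimal sketch on the A class from the question (this addresses only the call-site ergonomics, not the design question about hiding A::f):

        #include <utility>   // std::as_const (C++17)

        class A {
            int i = 0;
        public:
            int& f() { return i; }
            const int& f() const { return i; }
        };

        int main() {
            A a;
            int j = static_cast<const A&>(a).f();  // forces the const overload, but hurts a little
            int k = std::as_const(a).f();          // same effect, easier on the wrist
            return j + k;
        }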

    Read the article

  • Call to undefined function apache_request_headers()

    - by Rob
    I've just switched my scripts to a different server. On the previous server this worked flawlessly, and now that I've switched them to a different server, I can't understand the problem. I'm not sure it would help, but here's the relevant code. $headers = apache_request_headers(); PHP Version is: PHP 5.3.2 Any help would be appreciated.

    Read the article

  • Remove seconds from the TIME function?

    - by user1876032
    Okay, so I have a database that uses the time functions for a list of events and then displays the data as: echo "<b>Time Frame:"; echo "$time_start"; echo "&nbsp;-&nbsp;"; echo "$time_end"; That displays the time as 11:00:00, but I want it to be 11:00 am. Is there any way to make the time display as a standard "11:00"? And if the MySQL database stores it in 24-hour time (e.g. 13:00), can I make it display 1:00 pm? I have tried many things. Please help. http://pastebin.com/kaTZzGrx

    Read the article

  • Function Procedure in Java

    - by Lhea Bernardino
    Create a program that asks the user to enter 3 numbers using a for loop, then tests the numbers and displays them in ascending order. Sample Input: 5 2 7 Sample Output: 2 5 7 I'm stuck on the testing part. I don't know how to compare the numbers, since a single variable inside the for loop holds each number. Here is my sample code: import javax.swing.JOptionPane; public class ascending { public static void main(String args[]) { for(int x = 0; x < 3; x++) { String Snum = JOptionPane.showInputDialog("Enter a number"); int num = Integer.parseInt(Snum); } <Here comes the part where I test the 3 numbers entered by the user and display them in ascending order. I don't know where to start. :'( >
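    The usual fix is to keep all three values instead of overwriting a single variable on each pass: store them in an array (or three separate variables), sort, and print. A sketch of that shape (written in C++ to keep the added examples in one language, reading from standard input rather than JOptionPane; in Java the same structure works with an int[3] array and Arrays.sort):

        #include <algorithm>
        #include <iostream>

        int main() {
            int nums[3];
            // Read the three numbers inside the loop, keeping each one.
            for (int x = 0; x < 3; ++x) {
                std::cin >> nums[x];
            }
            // Sort ascending, then print.
            std::sort(nums, nums + 3);
            std::cout << nums[0] << ' ' << nums[1] << ' ' << nums[2] << '\n';
        }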

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample omitted here). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This ‘fully-optimized’ plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let’s look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows) but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above.

    The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows). The problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause), and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in show plan so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), the execution plan changes as well. I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

    APPLY

        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

    FROM Clause

        SELECT q1.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT COUNT_BIG(*)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                    GROUP BY ()
                )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name,
                cnt =
                (
                    SELECT SUM(1)
                    FROM Production.TransactionHistory AS th
                    WHERE th.ProductID = p.ProductID
                )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White. Twitter: @SQL_Kiwi. Email: [email protected]

    Read the article
