Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • NP-complete problem in Prolog

    - by Ashley
    I saw this ECLiPSe solution to the problem mentioned in this XKCD comic. I tried to convert this to pure Prolog:

        go :-
            Total = 1505,
            Prices = [215, 275, 335, 355, 420, 580],
            length(Prices, N),
            length(Amounts, N),
            totalCost(Prices, Amounts, 0, Total),
            writeln(Total).

        totalCost([], [], TotalSoFar, TotalSoFar).
        totalCost([P|Prices], [A|Amounts], TotalSoFar, EndTotal) :-
            between(0, 10, A),
            Cost is P*A,
            TotalSoFar1 is TotalSoFar + Cost,
            totalCost(Prices, Amounts, TotalSoFar1, EndTotal).

    I don't think that this is the best / most declarative solution that one can come up with. Does anyone have any suggestions for improvement? Thanks in advance!
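
    A more declarative formulation is possible with finite-domain constraints; here is a minimal sketch using SWI-Prolog's library(clpfd) (library availability is an assumption, and the 0..10 bound is carried over from the code above):

        :- use_module(library(clpfd)).

        go :-
            Prices = [215, 275, 335, 355, 420, 580],
            length(Prices, N),
            length(Amounts, N),
            Amounts ins 0..10,                           % how many of each item
            scalar_product(Prices, Amounts, #=, 1505),   % order must total exactly 15.05
            label(Amounts),
            writeln(Amounts).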

    Read the article

  • What is the best way to reduce code and loop through a hierarchical commission script?

    - by JM4
    I have a script which currently "works" but is nearly 3600 lines of code and makes well over 50 database calls within a single script. From my experience, there is no way to really "loop" the script and minimize it, because each call to the database is a subquery of the ones before, based on referral IDs. Perhaps I can give a very simple example of what I am trying to accomplish and see if anybody has experience with something similar. In my example, there are three tables:

        Table 1 - Sellers

        ID | Comm_level | Parent
        ---+------------+-------
        1  | 4          | NULL
        2  | 3          | 1
        3  | 2          | 1
        4  | 2          | 2
        5  | 2          | 2
        6  | 1          | 3

    ID is the id of one of our sales agents, comm_level determines what his commission percentage is for each product he sells, and parent is the ID of the agent who recruited that particular agent. In the example above, 1 is the top agent; he recruited two agents, 2 and 3. 2 recruited two agents, 4 and 5. 3 recruited one agent, 6. NOTE: an agent can NEVER recruit anybody equal to or higher than their own level.

        Table 2 - Commissions

        Level | Item 1 | Item 2 | Item 3
        ------+--------+--------+-------
        4     | .5     | .4     | .3
        3     | .45    | .35    | .25
        2     | .4     | .3     | .2
        1     | .35    | .25    | .15

    This table lays out the commission percentage for each agent based on their comm_level (an agent at level 4 receives 50% on every item 1 sold, 40% on every item 2, 30% on every item 3, and so on).

        Table 3 - Items Sold

        ID | Item
        ---+-------
        4  | item_1
        4  | item_2
        1  | item_1
        2  | item_3
        6  | item_2
        1  | item_3

    This table pairs each item sold with the seller who sold it. When generating the commission report, calculating individual values is very simple. Calculating commission on sub-sellers, however, is very difficult. In this example, seller ID 1 gets a piece of every single item sold. Each agent's commission percentage applies in full to his own sales; for sales further down the tree, he earns the difference between his percentage and that of the agent directly below him on the chain. For example, when seller ID 6 sold one of item_2 above, the tree for commissions looks like the following:

        ID 6 - 25% of cost(item_2)
        ID 3 -  5% of cost(item_2)  (30% is his commission, minus the 25% of seller ID 6)
        ID 1 - 10% of cost(item_2)  (40% is his commission, minus the 30% of seller ID 3)

    This must be calculated for every agent in the system from the top down (hence the DB calls within while loops throughout my enormous script). Anybody have a good suggestion or samples they may have used in the past?
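
    If the database supports recursive CTEs (MySQL 8+, PostgreSQL, SQL Server), the whole cascade can be computed in one statement instead of nested per-level queries. A rough sketch, where Sellers and ItemsSold stand in for the example tables above, and Commissions(Level, Item, Pct) assumes a normalized commission table rather than the pivoted layout shown:

        -- each sale starts at its seller, then climbs the Parent chain;
        -- every step remembers the commission level of the agent below
        WITH RECURSIVE chain (item, payee, payee_level, below_level) AS (
            SELECT sold.Item, s.ID, s.Comm_level, 0
            FROM ItemsSold AS sold
            JOIN Sellers AS s ON s.ID = sold.ID
            UNION ALL
            SELECT c.item, p.ID, p.Comm_level, c.payee_level
            FROM chain AS c
            JOIN Sellers AS child ON child.ID = c.payee
            JOIN Sellers AS p     ON p.ID = child.Parent
        )
        SELECT c.payee,
               c.item,
               SUM(own.Pct - COALESCE(below.Pct, 0)) AS pct_of_cost
        FROM chain AS c
        JOIN Commissions AS own
          ON own.Level = c.payee_level AND own.Item = c.item
        LEFT JOIN Commissions AS below
          ON below.Level = c.below_level AND below.Item = c.item
        GROUP BY c.payee, c.item;

    For seller ID 6's sale of item_2, this yields 25% for agent 6, 5% for agent 3, and 10% for agent 1, matching the tree above.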

    Read the article

  • jquery - clone nth row of a table?

    - by John
    I'm trying to use jQuery to clone a table row every time someone presses the add-row button. Can anyone tell me what's wrong with my code? I'm using HTML + the Smarty templating language in my view. Here's what my template file looks like:

        <table>
          <tr>
            <td>Description</td>
            <td>Unit</td>
            <td>Qty</td>
            <td>Total</td>
            <td></td>
          </tr>
          <tbody id="entries">
          {foreach from=$arrItem item=i name=inv}
            <tr>
              <td>
                <input type="hidden" name="invoice_item_id[]" value="{$i.invoice_item_id}"/>
                <input type="hidden" name="assignment_id[]" value="{$i.assignment_id}" />
                <input type="text" name="description[]" value="{$i.description}"/>
              </td>
              <td><input type="text" class="unit_cost" name="unit_cost[]" value="{$i.unit_cost}"/></td>
              <td><input type="text" class="qty" name="qty[]" value="{$i.qty}"/></td>
              <td><input type="text" class="cost" name="cost[]" value="{$i.cost}"/></td>
              <td><a href="javascript:void(0);" class="delete-invoice-item">delete</a></td>
            </tr>
          {/foreach}
          </tbody>
          <tfoot>
            <tr><td colspan="5"><input type="button" id="add-row" value="add row" /></td></tr>
          </tfoot>
        </table>

    Here's my jQuery call, which I know gets fired because I checked with an alert() statement. So the problem is with me not knowing how jQuery works:

        $('#add-row').live('click', function() {
            $('#entries tr:nth-child(0)').clone().appendTo('#entries');
        });

    So what am I doing wrong?
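
    For what it's worth, :nth-child() is one-based in CSS and jQuery, so tr:nth-child(0) selects nothing and there is nothing to clone. A sketch of one possible fix (also switching live() to on(), since live() was deprecated in jQuery 1.7 and removed in 1.9):

        $(document).on('click', '#add-row', function () {
            // grab the first entry row and append a copy to the tbody
            $('#entries tr:first').clone().appendTo('#entries');
        });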

    Read the article

  • How can I send a jQuery AJAX HTTP request with PHP?

    - by testkhan
    I have the following form:

        <form action="http://mydomain.com/get.php" method="post">
          <input type="hidden" value="thisisvalue" name="hdnvalue">
          <input type="text" name="cost" id="cost"><br><br>
          <textarea id="msg" name="message"></textarea><br><br>
          <input type="submit" value="Send" id="send">
        </form>

    The get.php file responds with 1 or 0, i.e. 1 means received and 0 means not received. Now I want to send the form to http://mydomain.com/get.php via an HTTP request with jQuery AJAX. How can I do that and get the returned value?
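
    A minimal sketch using jQuery's $.post, assuming the form above and that get.php is on the same origin (cross-domain XHR is blocked by the browser):

        $('form').submit(function (e) {
            e.preventDefault();   // keep the browser from doing the normal POST
            $.post('http://mydomain.com/get.php', $(this).serialize(), function (data) {
                if (data == '1') {
                    alert('received');
                } else {
                    alert('not received');
                }
            });
        });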

    Read the article

  • Tests that are 2-3 times bigger than the code under test

    - by HeavyWave
    Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing, I usually have 2-3 lines in the unit test, which ultimately leads to tons of time being spent just typing the tests in (mock, mock and mock some more). Where are the time savings? Do you ever avoid tests for code that is more or less trivial? Most of my methods are less than 10 lines long, and testing each one of them takes a lot of time, to the point where, as you see, I start questioning writing most of the tests in the first place. I am not advocating skipping unit testing; I like it. I just want to see what factors people consider before writing tests. They come at a cost (in terms of time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?

    Read the article

  • JavaScript: Can I declare a variable by querying which function is called? (Newbie)

    - by belle3WA
    I'm working with an existing JavaScript-powered cart module that I am trying to modify. I do not know JS, and for various reasons I need to work with what is already in place. The text that appears for my quantity box is defined within an existing function:

        function writeitems() {
            var i;
            for (i=0; i<items.length; i++) {
                var item=items[i];
                var placeholder=document.getElementById("itembuttons" + i);
                var s="<p>";
                // options, if any
                if (item.options) {
                    s=s+"<select id='options"+i+"'>";
                    var j;
                    for (j=0; j<item.options.length; j++) {
                        s=s+"<option value='"+item.options[j].name+"'>"+item.options[j].name+"</option>";
                    }
                    s=s+"</select>&nbsp;&nbsp;&nbsp;";
                }
                // add to cart
                s=s+method+"Quantity: <input id='quantity"+i+"' value='1' size='3'/> ";
                s=s+"<input type='submit' value='Add to Cart' onclick='addtocart("+i+"); return false;'/></p>";
                placeholder.innerHTML=s;
            }
            refreshcart(false);
        }

    I have two different types of quantity input boxes; one (donations) needs to be prefaced with a dollar sign, and one (items) should be blank. I've taken the existing additem function, copied it, and renamed it, so that there are two identical functions, one for items and one for donations. The additem function is below:

        function additem(name,cost,quantityincrement) {
            if (!quantityincrement) quantityincrement=1;
            var index=items.length;
            items[index]=new Object;
            items[index].name=name;
            items[index].cost=cost;
            items[index].quantityincrement=quantityincrement;
            document.write("<span id='itembuttons" + index + "'></span>");
            return index;
        }

    Is there a way to declare a global variable based on which function (additem or adddonation) is called, so that I can use it in the writeitems function to display or hide the dollar sign as needed? Or is there a better solution? I can't use HTML in the body of the cart page because of the way it is currently coded, so I'm depending on the JS to take care of it. Any help for a newbie is welcome. Thanks!
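
    Rather than a global, one option is to tag each item with its own flag when it is added, and let writeitems read it back. A rough sketch (the isDonation parameter and the prefix variable are illustrative additions, not part of the original cart module):

        function additem(name, cost, quantityincrement, isDonation) {
            if (!quantityincrement) quantityincrement = 1;
            var index = items.length;
            items[index] = new Object;
            items[index].name = name;
            items[index].cost = cost;
            items[index].quantityincrement = quantityincrement;
            items[index].isDonation = !!isDonation;   // remember which kind of row this is
            document.write("<span id='itembuttons" + index + "'></span>");
            return index;
        }

        // then, inside the writeitems() loop, pick the prefix per item:
        var prefix = item.isDonation ? "$" : "";
        s = s + method + "Quantity: " + prefix + "<input id='quantity" + i + "' value='1' size='3'/> ";

    A single global would also work, but it breaks as soon as one page mixes donation rows and item rows, since writeitems runs over all items at once.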

    Read the article

  • Delete from an empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    This takes approximately 40 seconds to complete.

        SELECT * FROM item

    This takes 4 seconds. The execution plan of the SELECT shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would take so long, and how to fix it, would be greatly appreciated!
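
    For context, the statistics above show the problem: 5252 physical reads to return zero rows. A full scan reads every block up to the table's high-water mark, and DELETE does not lower that mark, so an "empty" table can still occupy thousands of blocks. A sketch of the usual remedies, assuming the remaining data really is disposable and nothing references the table:

        -- deallocates the blocks and resets the high-water mark (DDL, cannot be rolled back)
        TRUNCATE TABLE midas_item;

        -- or, on 10g+ with an ASSM tablespace, shrink the segment in place
        ALTER TABLE midas_item ENABLE ROW MOVEMENT;
        ALTER TABLE midas_item SHRINK SPACE;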

    Read the article

  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, and have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split between the batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation? EDIT: The question is not about which is more efficient, but why the query execution plan shows both queries as having the same cost.

    Read the article

  • Calculating sum of activity

    - by Maddy
    I have a table with the following kind of information:

        activity | cost | order | date | other information
        ---------+------+-------+------+------------------
        10       | 1    | 100   | --   |
        20       | 2    | 100   |      |
        10       | 1    | 100   |      |
        30       | 4    | 100   |      |
        40       | 4    | 100   |      |
        20       | 2    | 100   |      |
        40       | 4    | 100   |      |
        20       | 2    | 100   |      |
        10       | 1    | 101   |      |
        10       | 1    | 101   |      |
        20       | 1    | 101   |      |

    My requirement is to get the summed cost of all distinct activities on a work order. For example, for order 100: 1 + 2 + 4 + 4 = 11, that is, 1 (for activity 10) + 2 (for activity 20) + 4 (for activity 30) + 4 (for activity 40). I tried with GROUP BY, but it takes a lot of time to calculate; there are 100,000+ records in the warehouse. Is there a more efficient way?

        SELECT SUM(MIN(cost))
        FROM COST_WAREHOUSE a
        WHERE order = 100
        GROUP BY (order, ACTIVITY)
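
    A sketch of an equivalent derived-table form, plus the covering index that usually makes the difference (the column name order_no stands in for "order", which is a reserved word and would need quoting; the index name is illustrative):

        SELECT SUM(activity_cost)
        FROM (
            SELECT MIN(cost) AS activity_cost
            FROM cost_warehouse
            WHERE order_no = 100
            GROUP BY activity
        ) t;

        -- lets the database answer the query from the index alone,
        -- without touching the table
        CREATE INDEX ix_cw_order_activity_cost
            ON cost_warehouse (order_no, activity, cost);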

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten.

    This 'fully-optimized' plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be - surely it would make sense to push the Filter down past the join too? To answer that, let's look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we'll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan: the results are the same (23 rows), but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be 'manually pushed' by specifying it in a HAVING clause in the "q1" sub-query instead of in the WHERE clause as written above.

    The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation - it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive.

    Moving on to a second example, the following query specification results from phrasing the requirement as "list the products where there exist fewer than ten correlated rows in the history table":

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows): the problem is that it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause), and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let's look at product 709, which happens to be one for which no history rows exist:

        -- Scalar aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709;

        -- Vector aggregate
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = 709
        GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case. The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID, and the Group By is optimized away.

    In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in showplan output, so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could 'fix' it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) BETWEEN 1 AND 9
        );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient now - it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let's try adding a redundant GROUP BY instead of changing the HAVING clause:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY th.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Not only do we now get correct results (23 rows), but the plan is once again the good one. I like to compare that plan to quantum physics: if you don't find it shocking, you haven't understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What's more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it's rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always 'EXISTS' regardless of which ProductID is logically being evaluated).

    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

        -- WHERE clause
        SELECT p.Name
        FROM Production.Product AS p
        WHERE
        (
            SELECT COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
        ) < 10;

        -- APPLY
        SELECT p.Name
        FROM Production.Product AS p
        CROSS APPLY
        (
            SELECT NULL
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            GROUP BY ()
            HAVING COUNT_BIG(*) < 10
        ) AS ca (dummy);

        -- FROM clause
        SELECT q1.Name
        FROM
        (
            SELECT p.Name,
                   cnt =
                   (
                       SELECT COUNT_BIG(*)
                       FROM Production.TransactionHistory AS th
                       WHERE th.ProductID = p.ProductID
                       GROUP BY ()
                   )
            FROM Production.Product AS p
        ) AS q1
        WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate...you should be able to work out why :)

        SELECT q.Name
        FROM
        (
            SELECT p.Name,
                   cnt =
                   (
                       SELECT SUM(1)
                       FROM Production.TransactionHistory AS th
                       WHERE th.ProductID = p.ProductID
                   )
            FROM Production.Product AS p
        ) AS q
        WHERE q.cnt < 10;

    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which 'mode' an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White
    Twitter: @SQL_Kiwi
    email: [email protected]

    Read the article

  • Using Nortel Netdirect on Windows 7 64bit

    - by Matt Lewis
    Does anyone know how to use Nortel NetDirect (version 7.1.3.0) with Windows 7 64-bit (Home Premium)? There are several ways available to me for connecting, all of which work for me on a 32-bit XP machine:

    - Nortel Contivity VPN client (v6_02.022). The installer appears to be 16-bit, so I can't even install it on a 64-bit machine.
    - Web-based SSL via IE.
    - Web-based SSL via Firefox.

    The Web-based SSL process is supposed to load NetDirect and start it up, establishing the VPN connection. Using Firefox, I'm able to authenticate with my smartcard, but when it tries to download the applet, the process stops with a message box saying that it couldn't download the zip file. If I run Firefox in Vista compatibility mode, it gets a little farther and manages to start NetDirect, but then exits after notifying me that the NetDirect adapter was not installed. Using IE, I'm able to authenticate with my smartcard, then the Java applet starts, but dies with the following sent to the Java console:

        load: class NetDirect not found.
        java.lang.ClassNotFoundException: NetDirect
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: unknown_ca
            at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Unknown Source)
            at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.recvAlert(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
            at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
            at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at java.net.HttpURLConnection.getResponseCode(Unknown Source)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source)
            at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            ... 7 more
        Exception: java.lang.ClassNotFoundException: NetDirect

    I've tried installing certificates using Java's keytool, but that didn't change the outcome.
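
    For reference, the unknown_ca alert at the root of that stack trace means the JRE did not trust the certificate chain presented by the VPN gateway, so the certificate needs to go into the JRE's own trust store rather than a per-user keystore. A sketch of the import (the alias, the certificate file name, and the keystore path are illustrative and depend on the Java install):

        keytool -importcert -trustcacerts -alias nortel-gateway-ca ^
            -file gateway-ca.cer ^
            -keystore "%JAVA_HOME%\lib\security\cacerts" -storepass changeit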

    Read the article

  • syspolicy_purge_history generates failed logins

    - by jbrown414
    I have a development server with 3 instances: Default, A, and B. It is a physical server, not clustered. Whenever the syspolicy_purge_history job runs at 2 am, I get failed login alerts. Looking at the job steps, all complete successfully; it appears the failed logins occur at some point during the step "Erase Phantom System Health Records".

    syspolicy_purge_history on instance B works OK. syspolicy_purge_history on the Default instance seems to want to connect to instance B, resulting in:

        Error: 18456, Severity: 14, State: 11.
        Login failed for user 'Machinename\sqlsvc-B'. Reason: Token-based server access
        validation failed with an infrastructure error. Check for previous errors.
        [CLIENT: <local machine>]

    No errors are reported by PowerShell. syspolicy_purge_history on the A instance seems to want to connect to the Default instance, producing the same 18456/state 11 error for user 'Machinename\sqlsvc-Default', and then tries to connect to the B instance, producing the same error for 'Machinename\sqlsvc-B' again. No errors are reported by PowerShell.

    I tried the steps posted here, hoping they would fix it: http://support.microsoft.com/kb/955726. But again, this is not a virtual server, nor is it in a cluster. Do you have any suggestions? Thanks.

    Read the article

  • wuinstall doesn't work with winrs

    - by wizard
    I've been having issues with psexec, so I've been migrating to winrs (part of the WinRM system). It's a very nice remoting tool which is proving to be more reliable than psexec. WUInstall is used to install available Windows updates. The two, however, don't play well together. I'm working on a variety of Windows servers: 2003, 2008, and 2008 R2. WUInstall behaves the same across all hosts, and behaves as expected if executed locally by the same user. The command:

        winrs -r:server wuinstall /download

    produces:

        WUInstall.exe Version 1.1
        Copyright by hs2n Informationstechnologie GmbH 2009
        Visit: http://www.xeox.com, http://www.hs2n.at for new versions
        Searching for updates ...
        Criteria: IsInstalled=0 and Type='Software'
        Result Code: Succeeded
        7 Updates found, listing all:
        Security Update for Windows Server 2008 R2 x64 Edition (KB2544893)
        Security Update for .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2518869)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2539635)
        Security Update for Microsoft .NET Framework 3.5.1 on Windows 7 and Windows Server 2008 R2 SP1 for x64-based Systems (KB2572077)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2588516)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2620704)
        Security Update for Windows Server 2008 R2 x64 Edition (KB2617657)
        Downloading updates ...
        Error occured: CreateUpdateDownloader failed!
        Result CODE: 0x80070005
        Return code: 1

    Googling "0x80070005" finds "unspecified error", which isn't helpful. Thoughts? Is there a better way?
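
    For what it's worth, 0x80070005 is E_ACCESSDENIED (access denied, not "unspecified error"), and the Windows Update Agent is known to refuse work when driven under the kind of remote/network logon token a remote shell provides. One common workaround is to have winrs schedule the run as a local SYSTEM task instead; a rough sketch (task name, path, and start time are illustrative):

        winrs -r:server schtasks /create /tn wu_now /ru SYSTEM /sc once /st 23:59 ^
            /tr "C:\tools\wuinstall.exe /install"
        winrs -r:server schtasks /run /tn wu_now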

    Read the article

  • Screen scraping software that will traverse pages

    - by nilbus
    We're creating a mashup site that pulls information from many sources all over the web. Many of these sites don't provide RSS feeds or APIs to access the information they provide, which leaves us with screen scraping as our method for collecting the data. There are many scraping tools out there, written in various scripting languages, that require you to write your scraping scripts in the language the scraper itself was written in: Scrapy, scrAPI, and scrubyt are a few written in Ruby and Python. There are other web-based tools I've seen, like Dapper, that create XML or RSS feeds based on a webpage. Dapper has a beautiful web-based interface that requires no scripting skills to use. It would be a great tool if it were able to traverse multiple pages to gather data from hundreds of pages of results. We need something that will scrape information from paginated web sites, much like scrubyt, but with a user interface that a non-programmer could use. We'll script up our own solution if we need to, probably using scrubyt, but if there's a better solution out there, we want to use it. Does anything like this exist?

    Read the article

  • Enabling syntax highlighting for LESS in Programmer's Notepad?

    - by Cody Gray
    When I don't feel like firing up the Visual Studio behemoth, or when I don't have it installed, I always turn to Programmer's Notepad. It's an amazingly light and fast little text editor, with the special advantage that it is completely platform-native and conforms to standard UI conventions. Therefore, please do not suggest that I consider using other text editors. I've already considered and rejected them because they do not use native UI controls. I like Programmer's Notepad, thank you very much. Unfortunately, I've recently begun to learn, use, and love LESS for all of my CSS coding needs, and it appears that Programmer's Notepad is not bundled with a syntax highlighting scheme for LESS. Does anyone know if there is—by chance and good fortune—one already available somewhere on the web that some kind soul has tediously prepared? If not, how can I go about writing one of my own? Is there a way to build on the existing CSS scheme? It's also possible that any code coloring scheme designed for Scintilla-based editors will work, as Programmer's Notepad is based on the Scintilla control. If you know of a LESS highlighting scheme for Scintilla-based editors, and how to use that with Programmer's Notepad, please suggest that as well.

    Read the article

  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice with Fibre Channel, so a step-by-step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL 5.4 on board and two QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. As FC switches there are two Brocades built into the BladeCenter chassis, and finally there is an EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL 3 instead of RHEL 5.4. Below is my output from several useful commands:

        # lspci | grep Fibre
        06:01.0 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)
        06:01.1 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)

        # lsmod | grep qla
        qla2xxx            1084741  0
        scsi_transport_fc    37577  1 qla2xxx
        scsi_mod            141717 10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod

        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: LSILOGIC Model: 1030 IM IM    Rev: 1000
          Type:   Direct-Access                 ANSI SCSI revision: 02
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type:   Direct-Access                 ANSI SCSI revision: 04
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type:   Direct-Access                 ANSI SCSI revision: 04

    Note that only the local controller and disks show up; no LUNs from the CX-300 are visible. I followed the instructions from this site (editing /etc/multipath.conf), but I failed: after multipath -ll, the output was empty. Do you have any suggestions about discovering FC-connected LUNs in such a configuration?
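
    A sketch of the usual first checks, before tuning multipath.conf any further (the host numbers and the blacklist pattern are illustrative and depend on the system): multipath can only assemble devices the SCSI layer already knows about, so the HBAs have to see the LUNs first.

        # rescan the FC HBAs without rebooting
        echo "- - -" > /sys/class/scsi_host/host1/scan
        echo "- - -" > /sys/class/scsi_host/host2/scan
        cat /proc/scsi/scsi        # CX-300 LUNs should appear with Vendor: DGC

        # /etc/multipath.conf - a minimal starting point
        defaults {
            user_friendly_names yes
        }
        blacklist {
            devnode "^sda"         # local disk; adjust to your system
        }

        # then
        service multipathd restart
        multipath -v2
        multipath -ll

    If nothing shows up even after a rescan, the problem is usually switch zoning or LUN masking (the storage group on the Clariion must include the blades' HBA WWNs) rather than the multipath layer.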

    Read the article

  • How can I access a webDAV folder as a UNC share?

    - by Amar
    First of all, I am just getting to know webDAV, so I appreciate your patience. I have a virtual directory on IIS 6 (Windows 2003) that is based on a network share on a file server different from the web server: something like www.mysite.com/myreports, where myreports is based on \\myfileserver\reports. I have been asked by the network folks not to use the UNC path like that, for security reasons, and to explore using webDAV instead. What I have been told is that webDAV can give me a UNC path without opening the ports a file-server UNC path requires. Then I can use the new UNC path to map my virtual directory, and my ASP.NET code will not require any change.

    Help: being new to webDAV, I did some research on the web. I can now create a web folder on the file server. I put IIS on the file server as well. I can browse the content as http://fileserver/mywebDAVreports, which is based on the reports folder. But I don't know how to get a UNC path for this web folder so I can map my virtual directory on the web server. I appreciate any help on this. Regards, Amar
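
    For what it's worth, the Windows WebDAV redirector (the WebClient service) accepts a UNC-style syntax for web folders; a sketch using the names from the question (whether IIS 6's virtual-directory UNC setting will accept such a path is a separate question worth testing):

        net use Z: \\fileserver\mywebDAVreports
        rem explicitly over a specific port, or over SSL:
        net use Z: \\fileserver@80\mywebDAVreports
        net use Z: \\fileserver@SSL\mywebDAVreports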

    Read the article

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking this, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, but also a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing user-based control to reflect current permission access in the Samba shares.
    - The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?) What else is out there? Are you in a similar situation? What do you use?

    Read the article

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS and access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, fxp, or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites.

    What I have been trying to find is a utility that I can run on my server from the web, preferably PHP based, that would be like a file manager but a bit different: two panels. LH side: the local server, pretty much like a standard FM application. RH side: the ability to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, power up ncftp, and work that way, but something easier to use would be mucho niceness. Thanks in advance!
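
    In case rolling a small script is acceptable, a minimal sketch of the server-to-client push in plain PHP using the built-in FTP functions (hostname, credentials, and paths are placeholders):

        <?php
        // push one local file from this server to a client's site over FTP
        $conn = ftp_connect('client-ftp.example.com') or die("connect failed\n");
        ftp_login($conn, 'username', 'password') or die("login failed\n");
        ftp_pasv($conn, true);                    // passive mode survives most firewalls
        $ok = ftp_put($conn,
                      '/public_html/addon/widget.php',  // remote path on the client
                      '/var/www/dist/widget.php',       // local path on my server
                      FTP_BINARY);
        echo $ok ? "uploaded\n" : "upload failed\n";
        ftp_close($conn);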

    Read the article

  • Location of index.html on CentOS 6

    - by user2118559
    Based on this tutorial, http://www.servermom.com/how-to-add-new-site-into-your-apache-based-centos-server/454/, I set up an Apache-based CentOS server. I use putty.exe as my editor. At the very bottom of /etc/httpd/conf/httpd.conf (vi /etc/httpd/conf/httpd.conf) I added:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/fikitipis.com/public_html
            ServerName www.fikitipis.com
            ServerAlias fikitipis.com
            ErrorLog /var/www/fikitipis.com/error.log
            CustomLog /var/www/fikitipis.com/requests.log common
        </VirtualHost>

    So I expect the index to be at /var/www/fikitipis.com/public_html. When I type the server's IP address in a browser, I see "Apache 2 Test Page powered by CentOS", "You may now add content to the directory /var/www/html/", and so on. Then:

        [root@vps ~]# ls /var/www/
        cgi-bin  domain.com  error  fikitipis.com  html  icons

    Checking the contents of the directories, ls /var/www/domain.com/public_html, ls /var/www/fikitipis.com/public_html, and ls /var/www/html/ are all empty. Where is index.html?

    I did touch /var/www/fikitipis.com/public_html/index1.html, then vi /var/www/fikitipis.com/public_html/index1.html, pressed a, wrote some text in the file, then Escape and Shift+ZZ. Then I browsed to http://111.111.11.111/index1.html and saw what I had written. So far everything seems to work.
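
    For context: there is no index.html out of the box on CentOS; the test page is produced by a rule in /etc/httpd/conf.d/welcome.conf that fires only when a directory has no index file. A sketch of creating one for the vhost above (the file content is illustrative):

        echo '<html><body>It works</body></html>' > /var/www/fikitipis.com/public_html/index.html
        cat /etc/httpd/conf.d/welcome.conf   # the stock test-page rule lives here
        service httpd reload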

    Read the article

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image for VM use. The common knowledge is that partition-based images are faster than file-based images because of decreased overhead. That makes sense, but I've never seen any actual numbers. My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So, perhaps throwing out the performance justification, is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or perhaps is there some reason to go the other way around? Perhaps there's an advantage in one of the virtual disk file formats that you don't get with raw partition images?

    Read the article
