Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.


  • Ceská obchodní banka, a.s. Upgrades to Oracle Database 11g On Time, On Budget and without Disrupting Business Operations

    - by jgelhaus
    You want the new features of the latest release, but upgrading a database is one of those things DBAs can "lose sleep" over. Ceská obchodní banka, a.s. ("CSOB") needed to upgrade the production systems in the Czech Republic and Slovakia that supported 90 key applications for its retail, corporate, internet, and ATM services from Oracle Database 9i to Oracle Database 11g, with a simultaneous migration from Alpha/OpenVMS-based hardware to a Power7/AIX system. Oracle Consulting helped complete the upgrade on schedule and within budget, while meeting tight restrictions on downtime. Knowledge transfer from Oracle Consulting to the bank's IT team has improved self-sufficiency in support and maintenance, and the technical and advisory services of Oracle Consulting Expert Services continue to optimize performance and availability while lowering the cost of ownership. Read how CSOB maximized the value of its investment in Oracle Database technology with an upgrade to Oracle Database 11g.

    Read the article

  • Do you have a plan for your digital assets after you die?

    - by pablo
    After reading this question, I was reminded of a news article about websites that manage your online identity after you pass away. Have you planned what to do with your digital assets once you're gone? I'd imagine that your online footprint is as important as anything of material value you leave behind. I mean, how different is the open-source project you created from the money and savings you had? How would you like your identity to be managed after you pass away? Would you prefer to go "off the grid"? It's a sensitive topic, and I have never met anyone who has prepared for it.

    Read the article

  • How should I plan the inheritance structure for my game?

    - by Eric Thoma
    I am trying to write a platform shooter in C++ with a really good class structure for robustness. The game itself is secondary; it is the learning process of writing it that is primary. I am implementing an inheritance tree for all of the objects in my game, but I find myself unsure about some decisions. One specific issue that is bugging me is this: I have an Actor that is simply defined as anything in the game world. Under Actor is Character. Both of these classes are abstract. Under Character is the Philosopher, who is the main character that the user commands. Also under Character is NPC, which uses an AI module with stock routines for friendly, enemy and (maybe) neutral alignments. So under NPC I want to have three subclasses: FriendlyNPC, EnemyNPC and NeutralNPC. These classes are not abstract, and will often be subclassed in order to make different types of NPCs, like Engineer, Scientist and the most evil Programmer. Still, if I want to implement a generic NPC named Kevin, it would be nice to be able to put him in without making a new class for him. I could just instantiate a FriendlyNPC and pass some values for the AI machine and for the dialogue; that would be ideal. But what if Kevin is the one benevolent Programmer in the whole world? Now we must make a class for him (but what should it be called?). Now we have a character that should inherit from Programmer (as Kevin has all the same abilities but just uses the friendly AI functions) but also should inherit from FriendlyNPC. Programmer and FriendlyNPC branched away from each other on the inheritance tree, so inheriting from both of them would create conflicts, because some of the same functions have been implemented in different ways in the two of them. 1) Is there a better way to order these classes to avoid these conflicts? Having three subclasses (Friendly, Enemy and Neutral) of each type of NPC (Engineer, Scientist, and Programmer) would amount to a huge number of classes. I would share specific implementation details, but I am writing the game slowly, piece by piece, and so I haven't implemented anything past Character yet. 2) Is there a place where I can learn these programming paradigms? I am already trying to take advantage of some good design patterns, like MVC architecture and Mediator objects. The whole point of this project is to write something in good style. It is difficult to tell what should become a subclass and what should become a state (i.e. a Friendly boolean vs. a Friendly class). Having many states slows down code with if statements and makes classes long and unwieldy. On the other hand, having a class for everything isn't practical. 3) Are there good rules of thumb or resources to learn more about this? 4) Finally, where does templating come into this? How should I fit templates into my class structure? Honestly, I have never actually taken advantage of templating, but I hear that it increases modularity, which means good code.
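    One common way out of the Kevin problem described above is composition: keep the profession (Engineer, Scientist, Programmer) in the inheritance tree, but make the friendly/enemy/neutral behaviour a component passed into the NPC rather than another branch of subclasses. A minimal, hypothetical C++ sketch of that idea (the class and member names are illustrative, not taken from the question):

    ```cpp
    #include <memory>
    #include <string>

    // Hypothetical AI strategy interface; concrete behaviours stand in for
    // the FriendlyNPC / EnemyNPC / NeutralNPC subclasses.
    struct AIBehavior {
        virtual ~AIBehavior() = default;
        virtual void update() = 0;
    };

    struct FriendlyAI : AIBehavior { void update() override { /* assist the player */ } };
    struct EnemyAI    : AIBehavior { void update() override { /* attack the player */ } };

    // Profession stays in the class hierarchy; alignment is injected.
    class NPC /* : public Character */ {
    public:
        NPC(std::string name, std::unique_ptr<AIBehavior> ai)
            : name_(std::move(name)), ai_(std::move(ai)) {}
        virtual ~NPC() = default;
        void think() { ai_->update(); }  // delegate to whichever behaviour was supplied
    private:
        std::string name_;
        std::unique_ptr<AIBehavior> ai_;
    };

    class Programmer : public NPC {
    public:
        using NPC::NPC;  // a Programmer can be friendly or hostile depending on the AI passed in
    };

    int main() {
        // "Kevin", the one benevolent Programmer, needs no extra class:
        Programmer kevin("Kevin", std::make_unique<FriendlyAI>());
        kevin.think();
        return 0;
    }
    ```

    With this shape the Friendly/Enemy/Neutral axis never multiplies the profession classes, which also speaks to question 1) above.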

    Read the article

  • Do any mates have the same plan as me? Focus on tech whole life, no wife, no kids [closed]

    - by Anders Lind
    I am about 30 years old, a C++ programmer and kernel hacker living on the east coast of the US. Day by day, night by night, I am in front of my monitor, typing code on my HHKB and scratching ideas in my notebook. In my spare time I sometimes play piano and go to a classical concert about once a month. Basically I am having a happy life. One concern is that I don't have a girlfriend. I don't have a wife or kids. My parents are starting to worry about this. Occasionally they ask about my status; they won't tell me to do anything, but I can see their worry. So, my question is: is my life normal? How many of you think the same as me? (I only know that rms is single, has no kids, and has a happy life. But I am way worse than him; compared to him, I am nothing. If I were as successful as him, I wouldn't ask this question here.)

    Read the article

  • Are there any additional considerations to make when designing a site structure if you plan to use persistent connection technologies?

    - by Psytronic
    As the title states, I'm thinking of making a simple card-game-based website, using persistent connection technology (something like SignalR) for the actual game part of it. I've never planned a site to use this technology, and I'm wondering, for those who have, whether there are any additional things that need to be taken into consideration for the site structure. I'm planning on using the ASP.NET MVC framework for the whole thing, and starting off with a simple game (e.g. card-based Rock/Paper/Scissors) as a proof of concept (to see if I can get it working the way I picture it in my head).

    Read the article

  • What software do you use to help plan your team work, and why?

    - by Alex Feinman
    Planning is very difficult. We are not naturally good at estimating our own future, and many cognitive biases exacerbate the problem. Group planning is even harder. Incomplete information, inconsistent views of a situation, and communication problems compound the difficulty. Agile methods provide one framework for organizing group planning--making planning visible to everyone (user stories), breaking it into smaller chunks (sprints), and providing retrospective analysis so you get better at planning. But finding good tools to support these practices is proving tricky. What software tools do you use to achieve these goals? Why are you using that tool? What successes have you had with a particular tool?

    Read the article

  • How to handle character/item states as binary flags with concrete operations in C++?

    - by Piperoman
    I have the following problem. An item can have a lot of states: NORMAL = 0000000, DRY = 0000001, HOT = 0000010, BURNING = 0000100, WET = 0001000, COLD = 0010000, FROZEN = 0100000, POISONED = 1000000. An item can have several states at the same time, but not all combinations are allowed: it is impossible to be DRY and WET at the same time; if you apply COLD to a WET item, it turns into FROZEN; if you apply HOT to a WET item, it turns into NORMAL; an item can be BURNING and POISONED; etc. I have tried assigning binary flags to the states and using AND to combine them, checking beforehand whether the combination is possible or whether it should change into another state. Is there a known pattern to solve this problem efficiently without an interminable switch that checks every state against every new state? It is relatively easy to check 2 different states, but once a third state exists it is no longer trivial.
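    A minimal sketch of one common approach, assuming nothing beyond the states listed above: keep each state as a bit in an unsigned integer and encode the interactions as a small table of rules (effect applied, state present, resulting state), so applying an effect becomes a loop over the rules instead of a nested switch. The names and the particular rules below are illustrative:

    ```cpp
    #include <cstdint>
    #include <vector>

    enum State : std::uint8_t {
        NORMAL   = 0,
        DRY      = 1 << 0,
        HOT      = 1 << 1,
        BURNING  = 1 << 2,
        WET      = 1 << 3,
        COLD     = 1 << 4,
        FROZEN   = 1 << 5,
        POISONED = 1 << 6,
    };

    // When 'applied' hits an item that currently has 'present',
    // remove 'present' and set 'result' instead.
    struct Rule { std::uint8_t applied, present, result; };

    const std::vector<Rule> rules = {
        { COLD, WET, FROZEN },   // cooling a wet item freezes it
        { HOT,  WET, NORMAL },   // heating a wet item dries it back to normal
        { WET,  DRY, WET    },   // wetting a dry item makes it wet, not both
    };

    std::uint8_t apply(std::uint8_t current, std::uint8_t effect) {
        for (const Rule& r : rules) {
            if (effect == r.applied && (current & r.present)) {
                return static_cast<std::uint8_t>((current & ~r.present) | r.result);
            }
        }
        return current | effect;  // no special interaction: just set the flag
    }

    // Example: apply(WET | POISONED, COLD) yields FROZEN | POISONED,
    // while POISONED is untouched because it has no rule against COLD.
    ```

    Adding a new interaction then means adding one row to the table rather than another branch in a switch.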

    Read the article

  • Why am I not able to create a backup plan for TFS?

    - by noocyte
    I am trying to create a backup plan using the TFS Power Tools but I keep running into an error. I have checked that the account has Full Control on the share; I can edit, create and delete files there. From the log:
    [Info @07:15:00.403] Starting creating backup test validation
    [Error @07:15:00.700] Microsoft.SqlServer.Management.Smo.FailedOperationException: Backup failed for Server 'WMSI003714N\SqlExpress'. ---> Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. ---> System.Data.SqlClient.SqlException: Cannot open backup device '\\wmsi003714n\sql dump\Tfs_Configuration_20100910091500.bak'. Operating system error 5(failed to retrieve text for this error. Reason: 1815). BACKUP DATABASE is terminating abnormally.
    at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
    at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
    --- End of inner exception stack trace ---
    at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)
    at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType)
    at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries)
    at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries)
    at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
    --- End of inner exception stack trace ---
    at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv)
    at Microsoft.TeamFoundation.PowerTools.Admin.Helpers.BackupFactory.TestBackupCreation(String path)
    [Error @07:15:00.731] !Verify Error!: Account GROUPINFRA\SA-NO-TeamService failed to create backups using path \\wmsi003714n\sql dump
    [Info @07:15:00.731] "Verify: Grant Backup Plan Permissions\Root\VerifyDummyBackupCreation(VerifyTestBackupCreatedSuccessfully): Exiting Verification with state Completed and result Error"
    Any ideas?

    Read the article

  • How can you plan long range resources and budgets when using Agile methodology?

    - by Mystere Man
    Agile does not encourage a lot of up-front design. This is good from a requirements management and software development standpoint, and allows the project to adapt to changing business needs. However, how do you do any long-range planning of resources if you don't really know what you're going to build when you start? Sure, you have a conceptual model of what you're going to build, but you don't have any measurable detail from which to gauge how many resources you will need to complete the project, or how much it will cost. Does anyone have any suggestions on how to go about long-range planning in an agile environment?

    Read the article

  • 1 year to learn as much as possible - How would you plan this time?

    - by user1189880
    I have been messing around with web development and programming in general for a couple of years now, working in web development agencies and the like. I have now decided that I want to move to more general programming, do it permanently and as a career, and have set myself a goal of 1 year to learn as much as I can before I go out and find a 'proper' job as a programmer. Do any programmers out there have opinions on how this time should be split and what the most important things to focus on will be over the year? The languages I will be focusing my learning on are C, PHP, Python and Go, all of which I have varying degrees of familiarity with. The ultimate goal here is to gain as good a foundation as possible and to be at a good enough level to interview successfully for a decent company.

    Read the article

  • Are there specific benefits to using XNA for 2D development if you don't plan on releasing on Xbox/Windows Phone?

    - by ssb
    I've been using XNA for a while to tinker with 2D game development, but I can't help but feel constrained by the content pipeline when targeting PC only. Things like no vector fonts or direct use of graphics files make it a pain while other frameworks do these things with no problem. I like XNA because it's robust and has a lot of support, but what are the specific benefits that I'd get developing exclusively for PC, if there are any at all?

    Read the article

  • Is it a good plan to use 2D physics for a 3D racing game?

    - by user3195897
    I am working on a 3D racing game using SDL and OpenGL. I thought it would be easier to use a 2D physics engine, since I really don't need the third dimension: there will be no flying cars or jumps, the cars will just be stuck to the floor. So I would use 2D colliders and the like to simulate collisions in a plane, but render the actual game from a 3D perspective. So the real question is: is it possible, is it a dumb idea, and what else could I do?
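    For what it's worth, if the cars really are glued to a flat track, the split usually comes down to a single mapping at render time: simulate in the 2D ground plane and only lift the result into 3D when building the model matrix. A small, hypothetical C++ sketch of that mapping (the types below are made-up stand-ins for whatever 2D physics engine and renderer end up being used):

    ```cpp
    // Hypothetical 2D body, standing in for a real physics engine body
    // (e.g. what Box2D would give you back each step).
    struct CarBody2D {
        float x = 0.0f, y = 0.0f;   // position in the ground plane
        float heading = 0.0f;       // rotation around the "up" axis, in radians
    };

    struct Transform3D {
        float px, py, pz;           // world position fed to the renderer
        float yaw;                  // rotation around the vertical axis
    };

    // The only 3D-specific work: 2D (x, y) becomes world (x, z),
    // the height is a constant track height, and the 2D rotation becomes yaw.
    Transform3D toRenderTransform(const CarBody2D& body, float trackHeight = 0.0f) {
        return Transform3D{ body.x, trackHeight, body.y, body.heading };
    }

    int main() {
        CarBody2D car{ 3.0f, 5.0f, 1.57f };          // state after a physics step
        Transform3D t = toRenderTransform(car);      // build the render transform
        (void)t;                                     // hand t to the OpenGL model matrix
        return 0;
    }
    ```

    As long as the track stays planar this works; banked corners, ramps or height changes are the point where a 3D physics engine starts to earn its keep.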

    Read the article

  • Do you gain any operations when you constrain a generic type using where T : struct?

    - by Fiona Holder
    This may be a bit of an abstract question, so apologies in advance. I am looking into generics in .NET, and was wondering about the where T : struct constraint. I understand that this allows you to restrict the type used to be a value type. My question is, without any type constraint, you can do a limited number of operations on T. Do you gain the ability to use any additional operations when you specify where T : struct, or is the only value in restricting the types you can pass in?

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • DDD: Aggregate Roots

    - by Mosh
    Hello, I need help with finding my aggregate root and boundary. I have 3 entities: Plan, PlannedRole and PlannedTraining. Each Plan can include many PlannedRoles and PlannedTrainings. Solution 1: At first I thought Plan was the aggregate root, because PlannedRole and PlannedTraining do not make sense out of the context of a Plan. They are always within a plan. Also, we have a business rule that says each Plan can have a maximum of 3 PlannedRoles and 5 PlannedTrainings, so I thought that by nominating the Plan as the aggregate root I could enforce this invariant. However, we have a Search page where the user searches for Plans. The results show a few properties of the Plan itself (and none of its PlannedRoles or PlannedTrainings). I thought that if I had to load the entire aggregate, it would add a lot of overhead. There are nearly 3000 plans and each may have a few children. Loading all these objects together and then ignoring the PlannedRoles and PlannedTrainings on the search page doesn't make sense to me. Solution 2: I just realized the users want 2 more search pages where they can search for PlannedRoles or PlannedTrainings. That made me realize they are trying to access these objects independently and "out of" the context of a Plan. So I thought I was wrong about my initial design, and that is how I came up with this solution: have 3 aggregates here, one for each entity. This approach enables me to search for each entity independently and also resolves the performance issue in solution 1. However, using this approach I cannot enforce the invariant I mentioned earlier. There is also another invariant that states a Plan can be changed only if it is in a certain status, so I shouldn't be able to add any PlannedRoles or PlannedTrainings to a Plan that is not in that status. Again, I can't enforce this invariant with the second approach. Any advice would be greatly appreciated. Cheers, Mosh
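    A side note on the invariants above: they are easiest to enforce when the children are only reachable through the root, which is exactly what is given up once PlannedRole and PlannedTraining become separate aggregates. A rough sketch of the first solution's rule enforcement, in C++ only because some language is needed (the names and the exact status rule are illustrative):

    ```cpp
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct PlannedRole { std::string name; };

    enum class PlanStatus { Draft, Approved };

    // Plan as the aggregate root: children are only modified through it,
    // so both invariants live in one place.
    class Plan {
    public:
        void addPlannedRole(PlannedRole role) {
            if (status_ != PlanStatus::Draft)
                throw std::logic_error("plan may only be changed while in Draft status");
            if (roles_.size() >= 3)
                throw std::logic_error("a plan may have at most 3 planned roles");
            roles_.push_back(std::move(role));
        }
        const std::vector<PlannedRole>& plannedRoles() const { return roles_; }

    private:
        PlanStatus status_ = PlanStatus::Draft;
        std::vector<PlannedRole> roles_;
    };
    ```

    With three separate aggregates, those checks have to move into a domain service or be re-validated at save time, which is the trade-off the question is weighing.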

    Read the article

  • iPhone surfing via USB... without wifi or data plan?

    - by Philipp Lenssen
    I bought an iPhone in China, where it is manufactured without Wifi. (I would have to switch carriers to sign up for a data plan as my current Chinese carrier doesn't support surfing either... if possible I want to avoid getting yet another card though.) Can I somehow surf with the iPhone Safari while USB-connected to my net-enabled laptop? Thanks!

    Read the article

  • HTC Desire Data Plan Only in Australia. Possible?

    - by James
    I am moving to Australia, and want to get an HTC Desire and ideally pay for a data plan only, using skype for my calls. Is this possible with one of the providers down there and what will it ultimately cost including skype and all fees per month? Are there cheaper alternatives?

    Read the article

  • When do Intel/AMD plan to use new CPU sockets? [closed]

    - by psihodelia
    It is very expensive to always use the most modern hardware, especially buying a new mainboard when only a new CPU is desired. It would be much better if one knew whether and when the major CPU producers plan to change CPU sockets. Do you know when the sockets are next planned to change? I am particularly interested in not buying an Intel i7 CPU if a new CPU with incompatible pins will be released soon.

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subsets of data from a large table (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer handles them well in every situation, or whether I'm better off un-nesting such computations in my larger queries. Does the query plan optimizer un-nest table-valued functions when it makes sense? If it doesn't, what do you recommend to avoid the code duplication that manual un-nesting would cause? If it does, how do you identify that from the execution plan? Code sample: create table dbo.customers ( [key] uniqueidentifier , constraint pk_dbo_customers primary key ([key]) ) go /* assume large amount of data */ create table dbo.point_of_sales ( [key] uniqueidentifier , customer_key uniqueidentifier , constraint pk_dbo_point_of_sales primary key ([key]) ) go create table dbo.product_ranges ( [key] uniqueidentifier , constraint pk_dbo_product_ranges primary key ([key]) ) go create table dbo.products ( [key] uniqueidentifier , product_range_key uniqueidentifier , release_date datetime , constraint pk_dbo_products primary key ([key]) , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key]) ) go /* assume large amount of data */ create table dbo.sales_history ( [key] uniqueidentifier , product_key uniqueidentifier , point_of_sale_key uniqueidentifier , accounting_date datetime , amount money , quantity int , constraint pk_dbo_sales_history primary key ([key]) , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key]) , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key]) ) go create function dbo.f_sales_history_..snip.._date_range ( @accountingdatelowerbound datetime, @accountingdateupperbound datetime ) returns table as return ( select pos.customer_key , sh.product_key , sum(sh.amount) amount , sum(sh.quantity) quantity from dbo.point_of_sales pos inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key] where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound group by pos.customer_key , sh.product_key ) go -- TODO: insert some data -- this is a table containing a selection of product ranges declare @selectedproductranges table([key] uniqueidentifier) -- this is a table containing a selection of customers declare @selectedcustomers table([key] uniqueidentifier) declare @low datetime , @up datetime -- TODO: set top query parameters select saleshistory.customer_key , saleshistory.product_key , saleshistory.amount , saleshistory.quantity from dbo.products p inner join @selectedproductranges productrangeselection on p.product_range_key = productrangeselection.[key] inner join @selectedcustomers customerselection on 1 = 1 inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory on saleshistory.product_key = p.[key] and saleshistory.customer_key = customerselection.[key] I hope the sample makes sense. Many thanks for your help!

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on par

    - by Ovidiu Pacurar
    Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with a date column smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day, in my case) the execution plan is altered and the query goes from 1 second to 1 minute of execution time. A proof-of-concept table - 1K products, 2M rows: CREATE TABLE [dbo].[POC]( [Date] [datetime] NOT NULL, [idProduct] [int] NOT NULL, [Quantity] [int] NOT NULL ) ON [PRIMARY] The inline table-valued function: CREATE FUNCTION tdf (@p_date datetime) RETURNS TABLE AS RETURN ( SELECT idProduct, SUM(Quantity) AS TotalQuantity, max(Date) as LastDate FROM POC WHERE (Date < @p_date) GROUP BY idProduct ) The scalar-valued function: CREATE FUNCTION [dbo].[EndOfDay] (@date datetime) RETURNS datetime AS BEGIN DECLARE @res datetime SET @res=dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi,- datepart(mi,@date), dateadd(hh, -datepart(hh, @date), @date)))))) RETURN @res END Query 1 - Working great: SELECT * FROM [dbo].[tdf] (getdate()) The end of the execution plan: Stream Aggregate Cost 13% <--- Clustered Index Scan Cost 86% Query 2 - Not so great: SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate())) The end of the execution plan: Stream Aggregate Cost 4% <--- Filter Cost 12% <--- Clustered Index Scan Cost 86%

    Read the article

  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object. class Classroom { List<Student> _students; } class Student { Classroom _classroom; } If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following fetch plan: fetchPlan.addField(Classroom.class, "_students"); this will result in two queries (get the classrooms and then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan in order for the _classroom field to get populated, by doing fetchPlan.addField(Student.class, "_classroom"), this will result in X number of additional queries, where X is the number of students in each classroom. Can anyone explain how to fix this? KODO already has the original Classroom objects at the point that it's executing the queries to retrieve the Classroom objects and set them in each Student object's _classroom field. So I would expect KODO to simply set those objects in the _classroom field of each Student object accordingly and not go back to the database. Once again, the documentation is sorely lacking for Kodo/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently. Note: EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off, and there is no difference in the resultant queries.

    Read the article
