Search Results

Search found 9034 results on 362 pages for 'plan guide'.


  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010," but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter; hence the importance of the question. Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains? Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc--"5/5/2010" in this case. Please speak to this as well. Sample proc and execution (if that helps to understand): CREATE PROCEDURE GetFooData @StartDate datetime, @EndDate datetime AS SELECT * FROM Foo WHERE LogDate >= @StartDate AND LogDate < @EndDate Here's a sample execution using DateTime.Now: EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now Here's a sample execution using DateTime.Today.AddDays(1): EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1) The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.
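
    For illustration, a minimal C# sketch of the proposed workaround (assuming the proc is called through ADO.NET; the class and method names here are invented for the example):

      using System;
      using System.Data;
      using System.Data.SqlClient;

      static class FooReport
      {
          // Pass a date-only, exclusive upper bound so the parameter value is
          // stable for the whole day (the proc already uses LogDate < @EndDate).
          public static DataTable GetFooData(string connectionString, DateTime startDate)
          {
              DateTime endDateExclusive = DateTime.Today.AddDays(1); // midnight tomorrow

              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand("GetFooData", conn))
              {
                  cmd.CommandType = CommandType.StoredProcedure;
                  cmd.Parameters.Add("@StartDate", SqlDbType.DateTime).Value = startDate;
                  cmd.Parameters.Add("@EndDate", SqlDbType.DateTime).Value = endDateExclusive;

                  var table = new DataTable();
                  new SqlDataAdapter(cmd).Fill(table); // Fill opens the connection itself
                  return table;
              }
          }
      }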

    Read the article

  • Step-by-step guide of Mercurial for iPhone projects?

    - by Horace Ho
    I am looking for a step-by-step Mercurial guide for iPhone projects. Please assume: hg is already installed; the audience is comfortable with command-line operations; everything is on OS X. As a newbie, I am particularly interested in: what files should be excluded; how to exclude those files; guidelines/suggestions for naming builds; any relevant experience to share. As a starting point, a typical ignore file is sketched below. Please do not discuss how hg is better/worse than any other SCS. This is a how-to question, not a why question. Thanks!
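
    Not authoritative, but a typical ignore file for an Xcode-era iPhone project (saved as .hgignore in the repo root; entries are common suggestions, so adjust to taste) looks something like this:

      syntax: glob

      # Build output
      build/

      # Per-user Xcode state (Xcode 3 project files)
      *.mode1v3
      *.mode2v3
      *.pbxuser
      *.perspectivev3

      # OS X metadata
      .DS_Store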

    Read the article

  • Is there a definitive guide for special folders in windows?

    - by the-locster
    Is there a definitive guide for special folders in Windows? An internet search yielded just a few crumbs of information, e.g. the Wikipedia Special Folders article and the Windows 7 Client Software Logo Program. What I'm looking for is an explanation of each folder, its intended purpose, usage scenarios, and the motivation for its existence (e.g. what does Local App Settings provide that App Settings doesn't?). A matrix/table of requirements/uses against folders would be handy, I think.
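
    Not a full guide, but as a small illustration of the roaming-versus-local distinction the question raises, the standard .NET Environment.SpecialFolder values resolve those folders at runtime:

      using System;

      static class SpecialFolderDemo
      {
          static void Main()
          {
              // Roaming AppData: follows the user between machines on a domain profile.
              Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData));

              // Local AppData: machine-bound data (caches, large files) that should not roam.
              Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData));

              // Settings shared by all users of the machine.
              Console.WriteLine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData));
          }
      }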

    Read the article

  • Does anyone know of a good guide to configure GC in Java?

    - by evilpenguin
    I'm having trouble with a JVM running an app whose heap usage looks like a comb: it constantly jumps between 1.5 GB and 3 GB, and the troughs slowly drift higher. I'm using the G1 GC algorithm but have no idea how to configure it. I do not have access to the code of the app I'm running and, needless to say, it's a rather large app. So, bottom line: does anyone know of a good guide to configuring GC in Java?
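
    Not the guide itself, but a hedged example of the kind of G1 baseline such guides usually start from (the flag names are real HotSpot options; the sizes are placeholders, not recommendations):

      # Sketch only: fix the heap size to stop resize churn, give G1 a pause goal,
      # and log GC activity so any further tuning is based on evidence.
      java -XX:+UseG1GC -Xms3g -Xmx3g -XX:MaxGCPauseMillis=200 \
           -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log \
           -jar yourapp.jar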

    Read the article

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence Based Scheduling (for the uninitiated, Joel explains) for a while now, and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered by some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled-item for "last minute hacks" or "integration tasks," or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project--not because your estimations were off (you could be the best estimator in the world, for those tasks that comprised your initial work plan), but because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess): it could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis--and the latter is what's important. It's what your boss typically cares about: the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning were perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions? Edit: Some more thoughts after reading a few answers: If it comes down to choosing which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and picking 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case hour-sized estimation step? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators who do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning as well? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic/pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, major bug crises)--and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology: we need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.

    Read the article

  • Android: passing an ArrayList back to the parent activity

    - by Nicklas O
    Hi there. I've been searching for a simple example of this with no luck. In my Android application I have two activities: 1. the main activity, which is launched at startup; 2. a second activity, which is launched by pressing a button on the main activity. When the second activity is finished (by pressing a button), I want it to send an ArrayList of type MyObject back to the main activity and close itself, so the main activity can do whatever it wants with the list. How would I go about achieving this? I have been trying a few things, but it crashes my application when I start the second activity. When the user presses the button to launch the second activity: Intent i = new Intent(MainActivity.this, secondactivity.class); startActivityForResult(i, 1); The array which is bundled back after pressing a button on the second activity: Intent intent = getIntent(); Bundle b = new Bundle(); b.putParcelableArrayList("myarraylist", mylist); intent.putExtras(b); setResult(RESULT_OK, intent); finish(); And finally a listener on the main activity (although I'm not 100% sure when this code runs...): protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(resultCode==RESULT_OK && requestCode==1){ Bundle extras = data.getExtras(); final ArrayList<MyObject> mylist = extras.getParcelableArrayList("myarraylist"); Toast.makeText(MainActivity.this, mylist.get(0).getName(), Toast.LENGTH_SHORT).show(); } } Any ideas where I am going wrong? The onActivityResult() seems to be crashing my application. EDIT: This is my class MyObject; it's called Plan and has a name and an id: import android.os.Parcel; import android.os.Parcelable; public class Plan implements Parcelable{ private String name; private String id; public Plan(){ } public Plan(String name, String id){ this.name = name; this.id = id; } public String getName(){ return name; } public void setName(String name){ this.name = name; } public String getId(){ return id; } public void setId(String id){ this.id = id; } public String toString(){ return "Plan ID: " + id + " Plan Name: " + name; } @Override public int describeContents() { // TODO Auto-generated method stub return 0; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeString(id); dest.writeString(name); } public static final Parcelable.Creator<Plan> CREATOR = new Parcelable.Creator<Plan>() { public Plan createFromParcel(Parcel in) { return new Plan(); } @Override public Plan[] newArray(int size) { // TODO Auto-generated method stub return new Plan[size]; } }; } This is my logcat:

      E/AndroidRuntime( 293): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.daniel.android.groupproject/com.me.android.project.secondactivity}: java.lang.NullPointerException
      E/AndroidRuntime( 293):   at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2417)
      E/AndroidRuntime( 293):   at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
      E/AndroidRuntime( 293):   at android.app.ActivityThread.access$2200(ActivityThread.java:119)
      E/AndroidRuntime( 293):   at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
      E/AndroidRuntime( 293):   at android.os.Handler.dispatchMessage(Handler.java:99)
      E/AndroidRuntime( 293):   at android.os.Looper.loop(Looper.java:123)
      E/AndroidRuntime( 293):   at android.app.ActivityThread.main(ActivityThread.java:4363)
      E/AndroidRuntime( 293):   at java.lang.reflect.Method.invokeNative(Native Method)
      E/AndroidRuntime( 293):   at java.lang.reflect.Method.invoke(Method.java:521)
      E/AndroidRuntime( 293):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
      E/AndroidRuntime( 293):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
      E/AndroidRuntime( 293):   at dalvik.system.NativeStart.main(Native Method)
      E/AndroidRuntime( 293): Caused by: java.lang.NullPointerException
      E/AndroidRuntime( 293):   at com.daniel.android.groupproject.login.<init>(login.java:51)
      E/AndroidRuntime( 293):   at java.lang.Class.newInstanceImpl(Native Method)
      E/AndroidRuntime( 293):   at java.lang.Class.newInstance(Class.java:1479)
      E/AndroidRuntime( 293):   at android.app.Instrumentation.newActivity(Instrumentation.java:1021)
      E/AndroidRuntime( 293):   at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2409)
      E/AndroidRuntime( 293):   ... 11 more
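
    An editorial observation, hedged: the logcat's root cause is a NullPointerException in a field initializer or constructor of the second activity (login.java:51), which no parceling change will fix. Separately, the posted CREATOR builds an empty Plan and never reads the parcel back, so the list would arrive with null fields. The conventional pattern is:

      // Sketch of the usual Parcelable round-trip; the read order below must
      // match the write order in writeToParcel exactly.
      private Plan(Parcel in) {
          id = in.readString();
          name = in.readString();
      }

      public static final Parcelable.Creator<Plan> CREATOR =
              new Parcelable.Creator<Plan>() {
          @Override
          public Plan createFromParcel(Parcel in) {
              return new Plan(in); // actually restore the fields
          }

          @Override
          public Plan[] newArray(int size) {
              return new Plan[size];
          }
      };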

    Read the article

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduced significant new functionality with its new Customer Analytics module. The Customer Analytics module enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained from mining sales transaction history, and it is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability. Highlighted End User Documentation Updates With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: The User Guide has been updated with an overview of Customer Analytics. It also contains a list of metrics associated with Customer Analytics. The Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs. The program reference list now also details the module (Merchandising Analytics or Customer Analytics) to which each program applies. The Data Model was updated to include new information related to Customer Analytics, and a new section, Market Basket Analysis Module, was added to the document with its own entity relationship diagrams and data definitions. List of Documents The following documents are included in Oracle Retail Analytics 13.3: Oracle Retail Analytics Release Notes Oracle Retail Analytics Installation Guide Oracle Retail Analytics User Guide Oracle Retail Analytics Implementation Guide Oracle Retail Analytics Operations Guide Oracle Retail Analytics Data Model

    Read the article

  • How to implement restricted access to application features

    - by DroidUser
    I'm currently developing a web application that provides some 'service' to the user. The user will have to select a 'plan', according to which she/he will be allowed to perform application-specific actions, but only up to a limit defined by the plan. A plan will also limit access to certain features, which will not be available at all for some plans. As an example, say there are 3 plans and 2 actions throughout the application: users in plan-1 can perform action-1 3 times and can't perform action-2 at all; users in plan-2 can perform action-1 10 times and action-2 5 times; users in plan-3 can perform action-1 20 times and action-2 10 times. So I'm looking for the best way to get this done, and my main concerns, besides implementing it, are the following (in no particular order): maintainability/changeability--the number of plans and the type of features/actions will change in the final product; industry standard/best practice--for future readiness!!; efficiency--of course, I want fast code!! I have never done anything like this before, so I have no clue how to go about implementing these functionalities. Any tips/guides/patterns/resources/examples? I did read a little about ACL and RBAC; are they the patterns I need to follow? Really, any sort of feedback will help.
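
    One hedged sketch of the data-driven shape this usually takes (all names here are illustrative, not a standard API): keep the per-plan limits as data and check them at one service boundary, so adding a plan or an action is a data change rather than a code change.

      import java.util.HashMap;
      import java.util.Map;

      // Illustrative only: in a real app the limits would live in a table
      // (plan_id, action, max_count) and usage counters in another, not in memory.
      class PlanLimits {
          private final Map<String, Integer> maxPerAction = new HashMap<>();

          void setLimit(String action, int max) { maxPerAction.put(action, max); }

          boolean allows(String action, int usedSoFar) {
              Integer limit = maxPerAction.get(action);
              return limit != null && usedSoFar < limit; // absent action = forbidden
          }
      }

      class QuotaGate {
          boolean tryPerform(PlanLimits plan, String action, int usedSoFar) {
              if (!plan.allows(action, usedSoFar)) {
                  return false; // surface this to the UI as an upgrade prompt
              }
              // ... perform the action and persist the incremented counter ...
              return true;
          }
      }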

    Read the article

  • Has there ever been a great print version of Why’s Poignant Guide to Ruby?

    - by Paul D. Waite
    Although it’s probably meant to be experienced on the web, I’d love to read a great print version of Why’s Poignant Guide to Ruby. It’s liberally licensed, so I can run off a copy for myself. But I think a work like that deserves more. Full colour illustrations. Main text on the left-hand page, sidebars on the right. (Stick in a few cartoons, or a slice of onion, when there aren’t any sidebars.) A built-in MP3 player and headphones for the soundtrack, but made completely out of telephone wires. Whilst that might be wishing for too much, have you (ever) seen any decent print versions of the book available?

    Read the article

  • Guide for the http://www.ch-werner.de/javasqlite Java wrapper library for SQLite?

    - by Tom Nielsen
    I'm working on a large computer-science school project using Java and SQLite. After finding out that the zentus.org wrapper errors out on databases with ON DELETE and ON UPDATE clauses set, I have changed to the other wrapper, found at http://www.ch-werner.de/javasqlite. However, I find the documentation somewhat lacking when trying to get an overview of how it works and how to use it; the function descriptions are very short, and you have to scan through every function and guess how it works and what it does. I wasn't able to find any guides on Google on how to use it. My question: does anyone know of a link to a guide or tutorial for the ch-werner.de/javasqlite wrapper, or else can you give me a basic code example, or a quick overview of querying the database and the most-used functions and how to use them?
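
    In lieu of a tutorial link, a minimal sketch of how I recall the wrapper's SQLite.Database/SQLite.Callback API fitting together; treat every method name here as something to verify against the javadoc bundled with your javasqlite version:

      import SQLite.Callback;
      import SQLite.Database;

      public class JavasqliteDemo {
          public static void main(String[] args) throws SQLite.Exception {
              Database db = new Database();
              db.open("test.db", 0666);              // mode bits, as I recall the API

              // exec() drives the callback once per result row.
              db.exec("SELECT id, name FROM plan", new Callback() {
                  public void columns(String[] cols) { /* column names arrive here */ }
                  public void types(String[] types)  { /* declared types, may be null */ }
                  public boolean newrow(String[] row) {
                      System.out.println(row[0] + " | " + row[1]);
                      return false;                  // false = continue with next row
                  }
              });

              db.close();
          }
      }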

    Read the article

  • What should the layers in a dotnet application be? Please guide me

    - by haansi
    Hi, I am using a layered architecture in dotnet (mostly I work on web projects). I am confused about which layers I should use. My rough idea is the following layers: user interface; custom types (custom entities); business logic layer; data access layer. My goal is to ensure quality of work and maximum re-usability of code. Someone suggested adding a common-types layer to it. Please guide me: what should the layers be, and what belongs in each layer? Thanks for your precious time and advice. haansi
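
    For illustration only, a tiny C# sketch of how those layers commonly depend on each other (interfaces and names invented for the example): the UI talks to the business layer, which depends on an abstraction of data access, while the custom entities are shared by all of them.

      using System;

      // Custom types / entities layer
      public class Customer
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      // Data access layer contract (the implementation lives in the DAL)
      public interface ICustomerRepository
      {
          Customer GetById(int id);
      }

      // Business logic layer: validation and rules, no UI or SQL in here
      public class CustomerService
      {
          private readonly ICustomerRepository repository;

          public CustomerService(ICustomerRepository repository)
          {
              this.repository = repository;
          }

          public Customer LoadCustomer(int id)
          {
              if (id <= 0) throw new ArgumentOutOfRangeException("id");
              return repository.GetById(id);
          }
      }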

    Read the article

  • Guide on how to kick-start a template-based Flash website?

    - by rizxta
    I'm very new to Flash and AS3 design patterns, but I can read and write AS3 reasonably well and have created small widgets with it. I've developed several websites using PHP and also Python. Now, for an educational CD-ROM project I'm working on, I've basically designed all the templates (a home page, and a generic page with navigation and a sidebar--kind of like a WordPress blog). I have all the data for the CD-ROM in Word files, which I intend to move into XML files. My question is: what is the best way to start a project like this? Can anyone point me to a template or something that can be used to kickstart it--kind of like a WordPress (without the admin)? Or am I going about this all wrong? Can someone please help?
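
    A hedged AS3 sketch of the usual kickstart for this setup (the file name and XML shape are assumptions): load one XML file, then hand each node to whichever template renders it.

      package {
          import flash.display.Sprite;
          import flash.events.Event;
          import flash.net.URLLoader;
          import flash.net.URLRequest;

          public class ContentLoader extends Sprite {
              public function ContentLoader() {
                  var loader:URLLoader = new URLLoader();
                  loader.addEventListener(Event.COMPLETE, onLoaded);
                  loader.load(new URLRequest("content.xml")); // assumed file name
              }

              private function onLoaded(e:Event):void {
                  var data:XML = new XML(URLLoader(e.target).data);
                  // E4X: iterate the <page> nodes and route each to a template.
                  for each (var page:XML in data.page) {
                      trace(page.@title); // replace with template instantiation
                  }
              }
          }
      }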

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving them to different processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1; on the Optimizer blog; at OpenWorld in San Francisco, September 19 - 23. Happy joining, and hope to see you all at ODTUG and OOW...
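
    As a hedged illustration of the prerequisite (not from the original post): a full partition-wise join needs both tables partitioned the same way on the join key.

      -- Sketch only: equi-partition both tables on the join key so a parallel
      -- hash join can run partition-to-partition with a single PX server set.
      CREATE TABLE sales (
          cust_id NUMBER,
          amount  NUMBER
      )
      PARTITION BY HASH (cust_id) PARTITIONS 16;

      CREATE TABLE customers (
          cust_id NUMBER,
          name    VARCHAR2(100)
      )
      PARTITION BY HASH (cust_id) PARTITIONS 16;

      SELECT /*+ PARALLEL(4) */ c.name, SUM(s.amount)
      FROM   sales s
      JOIN   customers c ON s.cust_id = c.cust_id
      GROUP  BY c.name;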

    Read the article

  • JDK 8u20 Documentation Updates

    - by joni g.
    JDK 8u20 has been released and is available from the Java Downloads page. See the JDK 8u20 Update Release Notes for details. Highlights for this release: The Medium security level has been removed. Now only High and Very High levels are available. Applets that do not conform with the latest security practices can still be authorized to run by adding the sites that host them to the Exception Site List. See Security for more information. The javafxpackager tool has been renamed to javapackager, and supports both Java and JavaFX applications. The -B option has been added to the javapackager deploy command to enable arguments to be passed to the bundlers that are used to create self-contained applications. See javapackager for Windows or Linux and OS X for information. The <fx:bundleArgument> helper parameter argument has been added to enable arguments to be passed to the bundlers when using ant tasks. See JavaFX Ant Task Reference for more information. A new attribute is available for JAR file manifests. The Entry-Point attribute is used to identify the classes that are allowed to be used as entry points to your application. See Entry-Point Attribute for more information. A new Microsoft Windows Installer (MSI) Enterprise JRE Installer, which enables users to install the JRE across the enterprise, is available for Java SE Advanced or Java SE Suite licensees. See Downloading the Installer in JRE Installation For Microsoft Windows for more information. The following new configuration parameters are added to the installation process to support commercial features, for use by Java SE Advanced or Java SE Suite licensees only: USAGETRACKERCFG= DEPLOYMENT_RULE_SET= See Installing With a Configuration File for more information about these and other installer parameters. Documentation highlights: New Troubleshooting Guide combines and replaces the Desktop Technologies Troubleshooting Guide and the HotSpot Virtual Machine Troubleshooting Guide to provide a single location for diagnosing and solving problems that might occur with Java Client applications. New Deployment Guide combines and replaces the JavaFX Deployment Guide and the Java Rich Internet Applications Guide to provide a single location for information about the Java packaging tools, creating self-contained applications, and deploying Java and JavaFX applications. New Garbage Collection Tuning Guide describes the garbage collectors included with the Java HotSpot VM and helps you choose which one to use. The Java Tutorials have a new look.
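
    A hedged sketch of the renamed tool with the new -B bundler arguments (argument names recalled from the 8u20-era documentation, and the application class is invented for the example; check javapackager -help before relying on any of this):

      # Sketch only: package app.jar as a self-contained application and pass
      # two arguments through to the bundlers via the new -B option.
      javapackager -deploy -native \
          -srcdir dist -srcfiles app.jar \
          -appclass com.example.Main -name MyApp \
          -outdir out -outfile MyApp \
          -BappVersion=1.0 -BjvmOptions=-Xmx512m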

    Read the article

  • Is it possible to guide installation of new programs using %ProgramFiles%? [closed]

    - by ??????? ???????????
    The purpose of this is to have the default "Program Files" (32- and 64-bit) folders located under an arbitrary path, possibly on a drive separate from where Windows lives. Initially I thought this might be done using a system environment variable through the dialog located under Control Panel - System - Advanced - Environment Variables. These variables turned out to be set in the registry under the key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion. However, one particular entry is confusing: the ProgramFilesPath entry seems to point at an environment variable that is not defined under the same registry key. I could assume there is no difference between ProgramFilesDir and ProgramFilesPath and that one of them exists for backwards compatibility, but having some legitimate resource from Microsoft to look at would be better than guessing. After receiving some worrying feedback about having both 32- and 64-bit applications in the same folder, I have decided not to ask about the feasibility of this, to avoid discussion. The real question is whether the desired effect can be attained by "cutting into" the Windows setup process and modifying those registry entries as early as possible. These settings should be system-wide, not only for software installed by a particular user. If this is indeed something that can be done, I wonder if there are any subtle pitfalls. Programs that expect libraries and other resources to be in default locations can probably be dealt with using the same technique Windows employs to re-map the "Documents and Settings" folders and the like (i.e. breaking legacy applications is not a real concern).
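
    Before changing anything, the entries in question can at least be inspected from a command prompt (standard reg.exe usage; this only reads the values discussed above):

      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir
      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)"
      reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesPath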

    Read the article

  • What's a good plugin or guide I can use to create JavaScript widgets from a Rails app?

    - by nicosuria
    I've been looking around for a while now, and the best guide I've seen so far is Dr Nic's DIY widgets how-to (here). I haven't been able to make something like this work. Assuming this is my widget code: <script src="http://mysite/nomnoms.js"> </script> And my nomnoms controller looks like this (assume that the list partial exists and simply lists a link to the show page of each nomnom in the @nomnoms variable): class NomnomsController < ApplicationController def index @nomnoms = Nomnom.find(:all) @content = render_to_string(:partial => 'list') end end And in the index.js view of my nomnoms controller I have: page << "document.write('<div>'" page << "document.write('#{@content.to_json}')" page << "</div>" The above setup doesn't render anything :(. But when I change the second line of index.js to page << "document.write('nomnoms should be here')", the widget renders the text. Any help, or even a pointer in the right direction, would be greatly appreciated. Thanks in advance.
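
    An editorial note, hedged: the posted RJS is unbalanced. The first document.write never closes its parenthesis, and the final </div> is emitted as raw JavaScript instead of being written to the page. The same approach with the three lines balanced would be:

      # index.js.rjs -- sketch only; assumes @content is set as in the controller above.
      page << "document.write('<div>')"
      page << "document.write(#{@content.to_json})"  # to_json already yields a quoted JS string
      page << "document.write('</div>')"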

    Read the article

  • A basic issue in implementing validations through properties? Please guide me.

    - by haansi
    Hello, thanks for your attention and time. I want to implement validations in the setter of a property. Here is the issue where your expert help is needed: I have an idea of how I will validate before setting the value, but I don't know what to do if the passed value is not correct. Just not setting it is not an acceptable solution, as I want to return an appropriate message to the user (in a label on the web form). My example code is: private int id; public int Id { get { return id; } set { bool result = IsNumber(value); if (result==false) { // What to do if the passed data is not valid? How do I give the user an appropriate message about what is wrong? } id = value; } } One thought was simply to return, but a setter cannot return a value or a message. Throwing an error doesn't look good to me either, as generally we avoid throwing custom errors. Please guide and help me. Thanks in anticipation. haansi
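
    A hedged sketch of the two conventional options (control names invented for the example): either throw from the setter and let the page catch the exception, or validate before assigning and report failures through the page.

      using System;

      public class Customer
      {
          private int id;

          public int Id
          {
              get { return id; }
              set
              {
                  // Option 1: guard in the setter; callers catch and display the message.
                  if (value < 0)
                      throw new ArgumentOutOfRangeException("value", "Id must be non-negative.");
                  id = value;
              }
          }
      }

      // Option 2 (in the page's code-behind): validate before assigning, so the
      // setter never sees bad data and the label gets a friendly message.
      //
      // int parsed;
      // if (!int.TryParse(txtId.Text, out parsed))
      //     lblError.Text = "Id must be a number.";
      // else
      //     customer.Id = parsed;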

    Read the article

  • Cardinality Estimation Bug with Lookups in SQL Server 2008 onward

    - by Paul White
    Cost-based optimization stands or falls on the quality of cardinality estimates (expected row counts). If the optimizer has incorrect information to start with, it is quite unlikely to produce good quality execution plans except by chance. There are many ways we can provide good starting information to the optimizer, and even more ways for cardinality estimation to go wrong. Good database people know this, and work hard to write optimizer-friendly queries with a schema and metadata (e.g. statistics) that reduce the chances of poor cardinality estimation producing a sub-optimal plan. Today, I am going to look at a case where poor cardinality estimation is Microsoft's fault, and not yours.

    SQL Server 2005

    SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231'; The query plan on SQL Server 2005 is as follows (if you are using a more recent version of AdventureWorks, you will need to change the year on the date range from 2003 to 2007): There is an Index Seek on ProductID = 1, followed by a Key Lookup to find the Transaction Date for each row, and finally a Filter to restrict the results to only those rows where Transaction Date falls in the range specified. The cardinality estimate of 45 rows at the Index Seek is exactly correct. The table is not very large, and there are up-to-date statistics associated with the index, so this is as expected. The estimate for the Key Lookup is also exactly right. Each lookup into the Clustered Index to find the Transaction Date is guaranteed to return exactly one row. The plan shows that the Key Lookup is expected to be executed 45 times. The estimate for the Inner Join output is also correct – 45 rows from the seek joining to one row each time gives 45 rows as output. The Filter estimate is also very good: the optimizer estimates 16.9951 rows will match the specified range of transaction dates. Eleven rows are produced by this query, but that small difference is quite normal and certainly nothing to worry about here. All good so far.

    SQL Server 2008 onward

    The same query executed against an identical copy of AdventureWorks on SQL Server 2008 produces a different execution plan: The optimizer has pushed the Filter conditions seen in the 2005 plan down to the Key Lookup. This is a good optimization – it makes sense to filter rows out as early as possible. Unfortunately, it has made a bit of a mess of the cardinality estimates. The post-Filter estimate of 16.9951 rows seen in the 2005 plan has moved with the predicate on Transaction Date. Instead of estimating one row, the plan now suggests that 16.9951 rows will be produced by each clustered index lookup – clearly not right! This misinformation also confuses SQL Sentry Plan Explorer: Plan Explorer shows 765 rows expected from the Key Lookup (it multiplies a rounded estimate of 17 rows by 45 expected executions to give 765 rows total).
    Workarounds

    One workaround is to provide a covering non-clustered index (avoiding the lookup avoids the problem, of course): CREATE INDEX nc1 ON Production.TransactionHistory (ProductID) INCLUDE (TransactionDate); With the Transaction Date filter applied as a residual predicate in the same operator as the seek, the estimate is again as expected. We could also force the use of the ultimate covering index (the clustered one): SELECT th.ProductID, th.TransactionID, th.TransactionDate FROM Production.TransactionHistory AS th WITH (INDEX(1)) WHERE th.ProductID = 1 AND th.TransactionDate BETWEEN '20030901' AND '20031231';

    Summary

    Providing a covering non-clustered index for all possible queries is not always practical, and scanning the clustered index will rarely be optimal. Nevertheless, these are the best workarounds we have today. In the meantime, watch out for poor cardinality estimates when a predicate is applied as part of a lookup. The worst thing is that the estimate after the lookup join in the 2008+ plans is wrong. It's not hopelessly wrong in this particular case (45 versus 16.9951 is not the end of the world), but it easily can be much worse, and there's not much you can do about it. Any decisions made by the optimizer after such a lookup could be based on very wrong information – which can only be bad news. If you think this situation should be improved, please vote for this Connect item. © 2012 Paul White – All Rights Reserved twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clearly not as accurate as running a profiler trace, as you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following: SELECT TOP 30 a.Id FROM Posts a JOIN Posts q ON q.Id = a.ParentId JOIN PostTags pt ON q.Id = pt.PostId WHERE a.PostTypeId = 2 AND a.DeletionDate IS NULL AND a.CommunityOwnedDate IS NULL AND a.CreationDate > @date AND LEN(a.Body) > 300 AND pt.Tag = @tag AND a.Score > 0 ORDER BY a.Score DESC The problem is that the ideal plan really depends on the date selected (screenshot of the ideal plan): However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal; executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen; however, no execution counts or stats are tracked in sys.dm_exec_query_stats. Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)? Is this behavior by design? Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats? Note: the framework will always execute parameterized queries using sp_executesql.

    Read the article

  • How to get Augmented Reality: A Practical Guide examples working?

    - by Glen
    I recently bought the book: Augmented Reality: A Practical Guide (http://pragprog.com/titles/cfar/augmented-reality). It has example code that it says runs on Windows, MacOS and Linux. But I can't get the binaries to run. Has anyone got this book and got the binaries to run on ubuntu? I also can't figure out how to compile the examples in Ubuntu. How would I do this? Here is what it says to do: Compiling for Linux Refreshingly, there are no changes required to get the programs in this chapter to compile for Linux, but as with Windows, you’ll first have to find your GL and GLUT files. This may mean you’ll have to download the correct version of GLUT for your machine. You need to link in the GL, GLU, and GLUT libraries and provide a path to the GLUT header file and the files it includes. See whether there is a glut.h file in the /usr/include/GL directory; otherwise, look elsewhere for it—you could use the command find / -name "glut.h" to search your entire machine, or you could use the locate command (locate glut.h). You may need to customize the paths, but here is an example of the compile command: gcc -o opengl_template opengl_template.cpp -I /usr/include/GL -I /usr/include -lGL -lGLU -lglut gcc is a C/C++ compiler that should be present on your Linux or Unix machine. The -I /usr/include/GL command-line argument tells gcc to look in /usr/include/GL for the include files. In this case, you’ll find glut.h and what it includes. When linking in libraries with gcc, you use the -lX switch—where X is the name of your library and there is a correspond- ing libX.a file somewhere in your path. For this example, you want to link in the library files libGL.a, libGLU.a, and libglut.a, so you will use the gcc arguments -lGL -lGLU -lglut. These three files are found in the default directory /usr/lib/, so you don’t need to specify their location as you did with glut.h. If you did need to specify the library path, you would add -L to the path. To run your compiled program, type ./opengl_template or, if the current directory is in your shell’s paths, just opengl_template. When working in Linux, it’s important to know that you may need to keep your texture files to a maximum of 256 by 256 pixels or find the settings in your system to raise this limit. Often an OpenGL program will work in Windows but produce a blank white texture in Linux until the texture size is reduced. The above instructions make no sense to me. Do I have to use gcc to compile or can I use eclipse? If I use either eclipse or gcc what do I need to do to compile and run the program?
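
    To answer the compile-and-run part concretely, a hedged pair of commands based on the book's own invocation (paths as quoted above; g++ sidesteps the C++ standard-library linking quirks of calling gcc directly):

      # Sketch only: compile the book's example and run it from the same directory.
      g++ -o opengl_template opengl_template.cpp \
          -I/usr/include/GL -lGL -lGLU -lglut
      ./opengl_template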

    Read the article

  • Can a SELECT in Oracle avoid using UNDO?

    - by Liu Maclean
    It is often said that Oracle never does a dirty read: a query always applies the before images kept in UNDO to build a consistent result, a point Tom Kyte has made repeatedly on AskTom. But Oracle is, in the end, just an RDBMS, and two hidden parameters can change that behavior: _offline_rollback_segments and _corrupted_rollback_segments. They exist for disaster recovery--forcing a database open after ORA-600 [4XXX] errors or undo corruption, working around consistent-read and delayed-block-cleanout problems, and dropping damaged rollback segments. Warning: never set either parameter on a production system except under Oracle Support's direction!!

    Undo segments named in either list are treated specially: they are not brought online at startup, their entries in UNDO$ keep their existing status, and active transactions found in them are not marked dead, so SMON does not recover them (see my earlier note on SMON and dead transactions).

    With _OFFLINE_ROLLBACK_SEGMENTS, the listed segments are kept offline when the database opens; a missing or corrupted segment is reported in the alert log or a trace file without stopping startup. When a block's ITL points into such a segment, the segment's transaction table is still read to decide the transaction's status, so uncommitted changes can still be rolled back for consistent reads, and DML that has to make these checks can burn noticeable CPU.

    _CORRUPTED_ROLLBACK_SEGMENTS is far more drastic: transactions in the listed segments are simply assumed to be committed, their undo is never applied, and the segments can even be dropped. This can silently produce logically corrupt data, and if bootstrap objects are involved the database may fail to open with ORA-00704: bootstrap process failure (see my earlier post on ORA-00600 [4000] / ORA-00704). Oracle Support has tools such as TXChecker to hunt for the resulting problem transactions.

    With the two parameters introduced, here is a demonstration of using _CORRUPTED_ROLLBACK_SEGMENTS to make a SELECT skip UNDO entirely. Event 10513 level 2 stops SMON from rolling back dead transactions, and _in_memory_undo is disabled:

      SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile;
      System altered.

      SQL> alter system set "_in_memory_undo"=false scope=spfile;
      System altered.

      SQL> startup force;
      ORACLE instance started.
      Total System Global Area 3140026368 bytes
      Fixed Size                  2232472 bytes
      Variable Size            1795166056 bytes
      Database Buffers         1325400064 bytes
      Redo Buffers               17227776 bytes
      Database mounted.
      Database opened.

    In session A, create a 501-row table (every row holds t1 = 1), gather stats, and query it:

      SQL> conn maclean/maclean
      Connected.

      SQL> create table maclean tablespace users as select 1 t1 from dual connect by level<=501;
      Table created.

      SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
      PL/SQL procedure successfully completed.

      SQL> set autotrace on;
      SQL> select sum(t1) from maclean;

         SUM(T1)
      ----------
             501

      Execution Plan
      ----------------------------------------------------------
      Plan hash value: 1679547536
      ------------------------------------------------------------------------------
      | Id | Operation          | Name    | Rows | Bytes | Cost (%CPU)| Time     |
      ------------------------------------------------------------------------------
      |  0 | SELECT STATEMENT   |         |    1 |     3 |     3   (0)| 00:00:01 |
      |  1 |  SORT AGGREGATE    |         |    1 |     3 |            |          |
      |  2 |   TABLE ACCESS FULL| MACLEAN |  501 |  1503 |     3   (0)| 00:00:01 |
      ------------------------------------------------------------------------------

      Statistics
      ----------------------------------------------------------
                1 recursive calls
                0 db block gets
                3 consistent gets
                0 physical reads
                0 redo size
              515 bytes sent via SQL*Net to client
              492 bytes received via SQL*Net from client
                2 SQL*Net roundtrips to/from client
                0 sorts (memory)
                0 sorts (disk)
                1 rows processed

    With no uncommitted changes anywhere, the query reads current blocks and needs only 3 consistent gets. Session A now updates every row to 0, checkpoints, and deliberately does not commit:

      SQL> update maclean set t1=0;
      501 rows updated.

      SQL> alter system checkpoint;
      System altered.

    A second session running the same query still sees the committed value, 501, but consistent gets jumps to 505, because every block must be rolled back through UNDO into a consistent-read copy (execution plan identical to the one above):

      SQL> conn maclean/maclean
      Connected.
      SQL> set autotrace on;
      SQL> select sum(t1) from maclean;

         SUM(T1)
      ----------
             501

      Statistics
      ----------------------------------------------------------
                0 recursive calls
                0 db block gets
              505 consistent gets
                0 physical reads
              108 redo size
              515 bytes sent via SQL*Net to client
              492 bytes received via SQL*Net from client
                2 SQL*Net roundtrips to/from client
                0 sorts (memory)
                0 sorts (disk)
                1 rows processed

    Next, kill session A's dedicated server process, leaving behind a dead transaction that event 10513 forbids SMON to recover:

      [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep
      oracle 5841 5839 0 09:17 ? 00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
      [oracle@vrh8 ~]$ kill -9 5841

      SQL> select ktuxeusn, to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta
           from x$ktuxe where ktuxecfl = 'DEAD';

        KTUXEUSN Time                   KTUXESIZ KTUXESTA
      ---------- -------------------- ---------- ----------------
               2 06-AUG-2012 09:20:45          7 ACTIVE

    One rollback segment now holds an ACTIVE dead transaction. A fresh session still gets the correct answer, 501, at the cost of 411 consistent gets--the dead transaction's undo is still being applied to build the consistent read (plan unchanged):

      SQL> conn maclean/maclean
      Connected.
      SQL> set autotrace on;
      SQL> select sum(t1) from maclean;

         SUM(T1)
      ----------
             501

      Statistics
      ----------------------------------------------------------
                0 recursive calls
                0 db block gets
              411 consistent gets
                0 physical reads
              108 redo size
              515 bytes sent via SQL*Net to client
              492 bytes received via SQL*Net from client
                2 SQL*Net roundtrips to/from client
                0 sorts (memory)
                0 sorts (disk)
                1 rows processed

    Now find the undo segment that holds the dead transaction and mark it as corrupted, so its undo will no longer be used:

      SQL> select segment_name from dba_rollback_segs where segment_id=2;

      SEGMENT_NAME
      ------------------------------
      _SYSSMU2$

      SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile;
      System altered.

      SQL> startup force;
      ORACLE instance started.
      Total System Global Area 3140026368 bytes
      Fixed Size                  2232472 bytes
      Variable Size            1795166056 bytes
      Database Buffers         1325400064 bytes
      Redo Buffers               17227776 bytes
      Database mounted.
      Database opened.

      SQL> conn maclean/maclean
      Connected.
      SQL> set autotrace on;
      SQL> select sum(t1) from maclean;

         SUM(T1)
      ----------
              94

      Statistics
      ----------------------------------------------------------
              228 recursive calls
                0 db block gets
               29 consistent gets
                5 physical reads
              116 redo size
              514 bytes sent via SQL*Net to client
              492 bytes received via SQL*Net from client
                2 SQL*Net roundtrips to/from client
                4 sorts (memory)
                0 sorts (disk)
                1 rows processed

      SQL> /

         SUM(T1)
      ----------
              94

      Statistics
      ----------------------------------------------------------
                0 recursive calls
                0 db block gets
                3 consistent gets
                0 physical reads
                0 redo size
              514 bytes sent via SQL*Net to client
              492 bytes received via SQL*Net from client
                2 SQL*Net roundtrips to/from client
                0 sorts (memory)
                0 sorts (disk)
                1 rows processed

    Consistent gets are back down to 3: blocks whose ITL points into a segment listed in _corrupted_rollback_segments are treated as already committed, so no undo is applied at all--and the answer, 94, is neither the committed 501 nor the uncommitted 0. The SELECT really did skip UNDO, at the price of read consistency: a dirty, internally inconsistent result. These parameters belong in disaster recovery, never in day-to-day use.

    Read the article
