Search Results

Search found 1714 results on 69 pages for 'optimizer hints'.

Page 45/69 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • "There was an internal API error." while running an app on any iPhone/iPod-touch device

    - by Martin Cowie
    I am in the process of submitting an iPhone app to the App Store. While making the final touches, I was compiling and running the app on my iPhone when I got the message "There was an internal API error." The console had this to say: 25/08/2010 10:10:54 Xcode[3556] Failed willExecute: Error Domain=com.apple.platform.iphoneos Code=0 UserInfo=0x2011adec0 "There was an internal API error." -- { NSLocalizedDescription = "There was an internal API error."; NSLocalizedFailureReason = ""; NSLocalizedRecoverySuggestion = ""; } The problem is specific to this project; other projects don't suffer from it. It also persists when the project is moved to another machine, or when another mobile device is swapped in. I would be most grateful for any hints or ideas on the subject.

    Read the article

  • Apache crashes after installing mysqli

    - by Marco P.
    System: Apache 2.2 running on Windows 2008 Server with PHP 5.2.17 VC6 Thread-Safe as a module and MySQL 5.5.17 - all working fine. After installing mysqli using the PHP package, Apache won't start anymore. There is no error message in the log. What I have tried: Make sure the Windows PATH points to libmysql.dll: yes, done. Make sure extension_dir points to the right directory: yes, other extensions load fine. Try without mysqli: yes, Apache starts fine then. Try without mysql: yes, does not help. Test mysql itself: restarts the server! Overwrite libmysql: yes, does not help. It seems to me that there is some general problem with MySQL, but the DB server seems to be running fine. I'm really out of ideas for things to try, so I'm desperate for any hints or tricks.

    Read the article

  • Common optimization rules

    - by mafutrct
    This is a dangerous question, so let me try to phrase it correctly. Premature optimization is the root of all evil, but if you know you need it, there is a basic set of rules that should be considered. This set is what I'm wondering about. For instance, imagine you have a list of a few thousand items. How do you look up an item with a specific, unique ID? Of course, you simply use a Dictionary to map the ID to the item. And if you know that there is a setting stored in a database that is required all the time, you simply cache it instead of issuing a database request a hundred times a second. I guess there are a few even more basic ideas. I am specifically not looking for "don't do it, for experts: don't do it yet" or "use a profiler" answers, but for really simple, general hints. If you feel this is an argumentative question, you probably misunderstood my intention.
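    For illustration, a minimal Python sketch of the two rules mentioned above (an ID-to-item map for constant-time lookup, and a cached setting instead of repeated database requests); the item layout and the load_setting_from_db call are hypothetical stand-ins, not part of the original question:

      from functools import lru_cache

      # Rule 1: map unique IDs to items once, then look them up in O(1)
      # instead of scanning the whole list on every lookup.
      items = [{"id": i, "name": f"item {i}"} for i in range(5000)]
      items_by_id = {item["id"]: item for item in items}
      print(items_by_id[4711]["name"])          # constant-time lookup

      # Rule 2: cache a setting that is needed all the time instead of
      # issuing a database request a hundred times a second.
      def load_setting_from_db(name):           # hypothetical expensive call
          print("DB round trip for", name)
          return "42"

      @lru_cache(maxsize=None)
      def get_setting(name):
          return load_setting_from_db(name)     # hits the DB only once per name

      print(get_setting("page_size"), get_setting("page_size"))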

    Read the article

  • Using a framework in a PreferencePane

    - by Jonathan
    Hi, I am currently trying to integrate a third-party framework (FeedbackReporter.framework) into my preference pane. Unfortunately, I keep getting the following error when trying to launch it: 16.05.10 23:13:30 System Preferences[32645] dlopen_preflight failed with dlopen_preflight(/Users/me/Library/PreferencePanes/myPane.prefPane/Contents/MacOS/myPane): Library not loaded: @executable_path/../Frameworks/FeedbackReporter.framework/Versions/A/FeedbackReporter Referenced from: /Users/me/Library/PreferencePanes/myPane.prefPane/Contents/MacOS/myPane Reason: image not found for /Users/me/Library/PreferencePanes/myPane.prefPane As far as I have read, this problem is probably caused by the fact that my prefPane is not an actual app but a plugin of "System Preferences.app", so @executable_path resolves to a path within that app's bundle instead of the bundle of my prefPane. But I haven't really picked up how to fix this. I guess it must be fairly easy, since using non-Apple frameworks in preference panes should be a common case. Thanks for your hints!

    Read the article

  • Shell Sort problem

    - by user191603
    Show the result of running Shell Sort on the input 9, 8, 7, 6, 5, 4, 3, 2, 1 using the increments {1, 3, 7}. I have done this part; the result is:
    9 8 7 6 5 4 3 2 1 (original)
    2 1 7 6 5 4 3 9 8 (7-sort)
    2 1 4 3 5 7 6 9 8 (3-sort)
    1 2 3 4 5 6 7 8 9 (1-sort)
    The question then asks me to determine the running time of Shell Sort using Shell's increments of N/2, N/4, ..., 1 for sorted input. I am not quite sure how to answer this second part, as I don't understand what the question is asking for. Would anyone give me some hints so I can finish it? Thank you for your help!
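    For reference, a minimal Python sketch of Shell Sort with an explicit gap sequence (largest gap first); with gaps 7, 3, 1 it reproduces exactly the passes listed above:

      def shell_sort(items, gaps):
          """Shell Sort with an explicit gap sequence, applied largest gap first."""
          data = list(items)
          for gap in gaps:
              # Gapped insertion sort: afterwards every slice data[i::gap] is sorted.
              for i in range(gap, len(data)):
                  value = data[i]
                  j = i
                  while j >= gap and data[j - gap] > value:
                      data[j] = data[j - gap]
                      j -= gap
                  data[j] = value
              print(f"after {gap}-sort:", data)
          return data

      shell_sort([9, 8, 7, 6, 5, 4, 3, 2, 1], gaps=[7, 3, 1])

    (For the second part, note what the inner while-loop does on input that is already sorted: it never shifts anything, so each gap pass only makes its gap-wise comparisons. That observation is what the running-time question is after.)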

    Read the article

  • Firefox Toolbar Open Window in new Tab

    - by Mark
    Hi all, I have a toolbarbutton <toolbarbutton context="TabMenue" id="esbTb_rss_reader" label="News" type="menu"> with a context menu which comes up when the button is right-clicked <menupopup id="TabMenue" > <menuitem label="New Tab" oncommand="esbTb_loadURLNewTab()"/> </menupopup> so this function should open the new window in a new tab function esbTb_loadURLNewTab() { window.open(ClickUrl,'name'); } I can't get the new window to show up in a new tab; it always opens a new Firefox window. I also tried, as described in this article, setting the browser.link.open_newwindow and browser.link.open_newwindow.restriction preferences, but that doesn't change anything. And I tried every target attribute I could think of. So I'm grateful for any hints or tips whatsoever; this is driving me crazy...

    Read the article

  • Calling R Script from within C-Code

    - by tiny81
    Hi, is there a way to call an R script from within C code? I did find the R API for C (chapter 6 of the 'Writing R Extensions' manual), but as far as I understood it, this "only" allows calling the C implementation of R. Of course I could call the R script via the shell, but that's no solution for me, since it doesn't allow proper passing of data (at least not unless I want to write the data into a CSV file or something like that). Is there an easy way of using the R-to-C parser beforehand? Any hints you can give me? Regards, Tiny

    Read the article

  • Application file (Real world example)

    - by aalhamad
    I am looking for guidelines on creating an application file. For example, I have an application that stores user input from its controls (Textbox, DataGrid, ListBox, etc.) into a file. I'm looking for a WPF/C# implementation. I would like the following behaviour: if the user edits any form (Textbox, etc.), an asterisk is displayed in the window title. When the window is closed and the asterisk is still there, a prompt "Would you like to save changes?" appears. If the changes are then saved, the asterisk disappears. What do real applications use to create their application file? (Note: I'm not looking for database saving or SQL.) I'm just looking for hints and guidelines. Thank you.
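    The asterisk behaviour described above is just a "dirty flag" on a document model; in WPF this is usually wired up through data binding, but the idea itself is framework-agnostic. A minimal Python sketch of the pattern (the Document class, the field names, and the choice of JSON as the "application file" format are all hypothetical, for illustration only):

      import json

      class Document:
          def __init__(self, path):
              self.path = path
              self.data = {}           # whatever the form fields map to
              self.is_dirty = False

          def set_field(self, name, value):
              self.data[name] = value
              self.is_dirty = True     # any edit marks the document dirty

          def window_title(self):
              return ("*" if self.is_dirty else "") + self.path

          def save(self):
              with open(self.path, "w") as f:
                  json.dump(self.data, f)   # the "application file", here plain JSON
              self.is_dirty = False

          def close(self, ask_to_save):
              # ask_to_save() would show the "Would you like to save changes?" prompt
              if self.is_dirty and ask_to_save():
                  self.save()

    In WPF the same shape typically appears as a view model implementing INotifyPropertyChanged, with the window title bound to the dirty flag and the save prompt handled in the window's Closing event.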

    Read the article

  • Java Int Array - Number being stored are different from ones specified

    - by Danadir
    Hey there, I have encountered the weirdest problem ever in Java. When I do this: int []s = new int [5]; s[0] = 026; s[1] = 0011; s[2] = 1001; s[3] = 0026; s[4] = 1101; the numbers stored in the array, as seen in debug mode, are different, i.e. the numbers stored are 22, 9, 1001, 22, 1101. Can you give me some hints as to what might be wrong with this? Thanks in advance.
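    For what it's worth, the values reported by the debugger are consistent with the literals that start with a zero being read in base 8 (a leading zero marks an octal literal in Java, as in C). A quick Python check of that interpretation, purely for illustration:

      # Literals with a leading zero, reinterpreted in base 8, give exactly the
      # "unexpected" values the debugger shows; the others stay decimal.
      for literal in ["026", "0011", "1001", "0026", "1101"]:
          base = 8 if literal.startswith("0") and len(literal) > 1 else 10
          print(literal, "->", int(literal, base))
      # 026 -> 22, 0011 -> 9, 1001 -> 1001, 0026 -> 22, 1101 -> 1101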

    Read the article

  • Implementation of scattering in this paper

    - by pelb
    Hi, for some time I have been trying to implement the scattering model described in this paper: Salama Paper. The BRDF part of the paper is pretty clear and straightforward; where I am stuck is the scattering part. When scattering only once, at the first hit point, I'm just not getting the same results as shown in the images in that very paper. Of course I'm scattering multiple rays inside the proposed scattering cone and averaging the results of the different rays. Does anyone have more or less concrete ideas on implementing this nice and beautifully illustrative rendering method? I am glad and thankful for any hints.

    Read the article

  • Search box inside a div. Django

    - by Juliette Dupuis
    In my Django app I display a list of elements (friends' names) using a loop: <div> {% for friend in group %} <p>{{ friend.name }}</p> {% endfor %} </div> I would like to create a search box on top of my list so the user can find only the friends they want. I would like the search box to filter as the user types, without needing a click to send the request (an example is the Airtime search box on top of the Facebook friends list). I have absolutely no idea how to do that, and I'm looking for hints or tips to get started. Thank you very much for your help.
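    One common shape for this is a small view that returns matching names as JSON, plus a few lines of JavaScript that call it on every keystroke in the search box. A minimal Django sketch, assuming a Friend model with a name field (the model, the view name, and the 20-result cap are illustrative assumptions, not part of the question):

      from django.http import JsonResponse

      from .models import Friend   # hypothetical model with a `name` field

      def friend_search(request):
          query = request.GET.get("q", "")
          names = (Friend.objects
                   .filter(name__icontains=query)       # case-insensitive substring match
                   .values_list("name", flat=True)[:20])
          return JsonResponse({"results": list(names)})

    On the template side, an <input> with a keyup handler (plain JavaScript or jQuery) can fetch this view with the current text and replace the contents of the <div> with the returned names, which gives the type-as-you-search behaviour without a submit button.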

    Read the article

  • [cli/linux] get plain text from raw emails (not attachment extraction)

    - by etuardu
    Hi, given a raw email as input (i.e. the text between "DATA" and "." sent by an SMTP client), I need to extract the mail content (which I know is always text only) as plain text. This means decoding the transfer encoding (if any: it could be base64 or quoted-printable), merging multiparts (if any), and stripping headers. I tried various tools that should do this: mewdecode, uudecode, uudeview... I only managed to get the last one to work, but it won't output anything if the mail is not MIME encoded, and it stores its output under an unpredictable (and unforceable) filename, so it's hard to use in a non-interactive shell script. Since this is a pretty common job (every mail client has to do it), it's weird that it's so complicated. Do you have some hints? (Actually, forcing uudeview to output to a certain file would be good enough.) Thank you!
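    As a sketch of how little code the job needs, here is a minimal Python script (standard library only) that does all three steps — strips headers, walks the multiparts, and undoes base64/quoted-printable — assuming the raw message arrives on stdin:

      #!/usr/bin/env python3
      import sys
      from email import message_from_string

      msg = message_from_string(sys.stdin.read())       # parses headers + MIME structure

      texts = []
      for part in msg.walk():                           # also works for non-multipart mails
          if part.get_content_type() == "text/plain":
              payload = part.get_payload(decode=True)   # undoes base64 / quoted-printable
              charset = part.get_content_charset() or "utf-8"
              texts.append(payload.decode(charset, errors="replace"))

      print("\n".join(texts))

    Saved as, say, mail2text.py (a hypothetical name), it fits a non-interactive pipeline such as `some-mail-source | ./mail2text.py > body.txt`, and unlike uudeview it writes to stdout rather than to a file of its own choosing.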

    Read the article

  • [VC++ 2010] Stack around the variable 'xyz' was corrupted.

    - by tirolerhut
    Hi, I'm trying to get a simple piece of code I found on a website to work in VC++ 2010 on Windows Vista 64-bit: #include "stdafx.h" #include <windows.h> int _tmain(int argc, _TCHAR* argv[]) { DWORD dResult; BOOL result; char oldWallPaper[MAX_PATH]; result = SystemParametersInfo(SPI_GETDESKWALLPAPER, sizeof(oldWallPaper)-1, oldWallPaper, 0); fprintf(stderr, "Current desktop background is %s\n", oldWallPaper); return 0; } It does compile, but when I run it, I always get this error: Run-Time Check Failure #2 - Stack around the variable 'oldWallPaper' was corrupted. I'm not sure what is going wrong, but I noticed that the value of oldWallPaper looks something like "C\0\:\0\0U\0s\0e\0r\0s[...]" -- I'm wondering where all the \0s come from. A friend of mine compiled it on Windows XP 32-bit (also VC++ 2010) and is able to run it without problems. Any clues/hints/opinions? Thanks
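    For illustration only (not a fix): the interleaved \0 bytes in the buffer are exactly what UTF-16, the encoding used by the wide-character Windows APIs, looks like when it is read back one byte at a time. A quick Python check of the byte pattern:

      # "C:\Users..." encoded as UTF-16 little-endian: every ASCII character is
      # followed by a zero byte, matching the C\0:\0... pattern seen in the debugger.
      print("C:\\Users".encode("utf-16-le"))
      # b'C\x00:\x00\\\x00U\x00s\x00e\x00r\x00s\x00'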

    Read the article

  • Select products with users

    - by Ploppe
    I have not worked with SQL for quite a long time, and I need some help with a basic query. I have the three following tables: users (id, name), products (id, name), owners (userid, productid, date). A product can be sold by user A to user B and then back to A. Now, I want the list of all products currently owned by each user, with the date of the transaction. Currently my query is the one below, but I'm stuck with old data (the first association of a product with a user, not the newest one): SELECT users.name, products.name, date FROM products JOIN owners ON products.id = owners.id JOIN users ON owners.id = user.id GROUP BY product.id Do you have some hints? Thanks

    Read the article

  • Get the folder where the last mailitem was moved in Outlook?

    - by user2892971
    I have a VBScript macro that I'm using in Outlook. It moves a mail item to some folder, say X. After I run the macro, when I try to manually move a mail item in Outlook with Ctrl+V, it defaults to folder X. I would like Ctrl+V to default to the folder it would have used before I ran the macro. Is there some way in VBScript to find out which folder the last mail item was moved to, and to restore that as the default folder after I run my script? Or is there a way to move a mail item in my script without the destination folder being remembered by Outlook's Ctrl+V after the script runs? Thanks for any hints.

    Read the article

  • NSMutableArray with NSDictionary to Labels without the surrounding brackets

    - by bigubosu
    I'm making a scoreboard for my game, and when I do an NSLog of the data it comes out as this: { name = TTY; score = "3.366347"; } So my question is: how do I remove these braces and also just grab the name (TTY) and the score (3.36, without the quotation marks) and place them in variables to put into labels? Currently I can place them in labels, but they have the lingering curly braces "{" and "}". Any hints would be helpful; I'm happy to search further, I just don't know the vocabulary to search for. Thanks.

    Read the article

  • Can we connect more than two devices with Bluetooth?

    - by menuka devinda
    Can we connect more than two devices with Bluetooth to play a game? I am totally new to Android. I was thinking of writing an application for a card game played by 4 players. I have heard that Bluetooth communicates only between 2 phones, and I want to dig into the background before I continue; that's why I'm asking. I appreciate your ideas about this, and any hints for developing this application are also appreciated. Thanks in advance.

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

    SELECT * FROM Person.BusinessEntity JOIN Person.BusinessEntityAddress ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
    go
    SELECT * FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar.

    Let's try a different tack. Looking at the execution plan, the “Estimated Subtree cost” of query 1 is 0.29449 and that of query 2 is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and those are the figures we are after. But what is the worrying word there? “Estimated”. So these are not “actual” execution costs, but what's the problem in comparing the estimated costs to derive a meaning of “most costly”? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. By modifying the second query to also show the total number of lines on each order

    SELECT *,COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID) FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy.

    Estimates can be out with actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

    Create Function dbo.udfSumSalesForCustomer(@CustomerId integer) returns money
    as
    begin
    Declare @Sum money
    Select @Sum= SUM(SalesOrderHeader.TotalDue) from Sales.SalesOrderHeader where CustomerID = @CustomerId
    return @Sum
    end

    If we have two statements, one that fires the UDF and another that doesn't:

    Select CustomerID from Sales.Customer order by CustomerID
    go
    Select CustomerID,dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID

    the cost relative to batch is a 50/50 split, but there has to be an actual cost to firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50.

    Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same ‘cost’ too:

    SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding) from Sales.SalesOrderdetail
    go
    SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding) from Sales.SalesOrderdetail

    By now it won't come as a great surprise that the Profiler trace reads a *tiny* bit differently.

    So, the moral of the story: percentage relative to batch can give a rough ‘finger in the air’ measurement, but don't rely on it as fact.

    Read the article

  • What makes them click?

    - by Piet
    The other day (well, actually some weeks ago while relaxing at the beach in Kos) I read ‘Neuro Web Design - What makes them click?’ by Susan Weinschenk (http://neurowebbook.com). The book is a fast and easy read (no unnecessary filler) and a good introduction to how your site's visitors can be steered in the direction you want them to go.

    The Obvious
    The book handles some of the better-known techniques, for example that ratings/testimonials from other people can help sell your product or service. Another well-known technique it talks about is inducing a sense of scarcity/urgency in the visitor. Only 2 seats left! Buy now and get 33% off! Just because these techniques are well known doesn't mean they stop working. Luckily two-thirds of the book handles less obvious techniques, otherwise it wouldn't be worth buying.

    The Not So Obvious
    A less known influencing technique is reciprocity. And then I'm not talking about swapping links with another website, but about the fact that someone is more likely to do something for you after you did something for them first. The book cites some studies (I always love the facts and figures) and gives some actual examples of how to implement this in your site's design, which is less obvious when you think about it. Want to know more? Buy the book!

    Other Interesting Sources
    For a more general introduction to the same principles, I'd suggest ‘Yes! 50 Secrets from the Science of Persuasion’. ‘Yes!…’ cites some of the same studies (it seems there's a rather limited pool of studies covering this subject), but of course doesn't show how to implement these techniques in your site's design. I read ‘Yes!…’ last year, which made ‘Neuro Web Design’ just a little bit less interesting. Always make sure you're able to measure your changes. If you haven't yet, check out the advanced segmentation in Google Analytics (don't be afraid because it says ‘beta’, it works just fine) and Google Website Optimizer.

    Worth Buying?
    Can I recommend it? Sure, why not. I think it can be useful for anyone who has ever had to think about the design or content of a site. You don't have to be a marketing guy to want a site you're involved with to be successful. The content/filler ratio is excellent too: you don't need to wade through dozens of pages to filter out the interesting bits (unlike ‘The Design of Sites’, which contains too much useless info and, because it's in dead-tree format, you can't google it). If you like it, you might also check out ‘Yes! 50 Secrets from the Science of Persuasion’. Tip for people living in Europe: check Amazon UK for your book-buying needs. Because of the low UK Pound exchange rate, it's usually considerably cheaper and faster to get a book delivered to your doorstep by Amazon UK compared to ordering it at the local book store or web shop.

    Read the article

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
    Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and often blogs here. I am a big fan of Glenn's Dynamic Management View (DMV) scripts. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV, which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved). Here is his excellent blog post on the subject of wait stats:

    Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts. Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea of what is causing the problem.

    I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well.

    Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important "missing" indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom.

    Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made a configuration or index change, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared). If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats and then wait a while to see what your new top wait stats are.

    At any rate, enjoy Pinal Dave's series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog). Read all the posts in the Wait Types and Queue series.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables versus local and global temporary tables in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale, strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more or less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or to the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. The connection will then hang around in the heap for what might be hours before being picked up by the garbage collector.

    Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive for the developer to use because they conform to scoping rules that are closer to those in procedural code. On the surface, then, an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables?

    This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on in the absence of statistics, especially when joining to tables with highly skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved.

    In the meantime, what is the right approach? Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly informed decisions, without requiring them to delve deep into the inner workings of SQL Server?

    Cheers, Tony.

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing this to exploit the IKM you must split the problem into the reusable parts – the select part goes in one interface (I named SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below… /*INSERT_SPECIAL interface */ insert  all when 1=1 And (INCOME_LEVEL > 250000) then into SCOTT.CUSTOMERS_NEW (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* INSERT_REGULAR interface */ when 1=1  then into SCOTT.CUSTOMERS_SPECIAL (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /*SELECT*PART interface */ select        CUSTOMERS.EMAIL EMAIL,     CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,     UPPER(CUSTOMERS.NAME) NAME,     CUSTOMERS.USER_MODIFIED USER_MODIFIED,     CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,     CUSTOMERS.BIRTH_DATE BIRTH_DATE,     CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,     CUSTOMERS.ID ID,     CUSTOMERS.USER_CREATED USER_CREATED,     CUSTOMERS.GENDER GENDER,     CUSTOMERS.DATE_CREATED DATE_CREATED,     CUSTOMERS.INCOME_LEVEL INCOME_LEVEL from    SCOTT.CUSTOMERS   CUSTOMERS where    (1=1) Firstly I create a SELECT_PART temporary interface for the query to be reused and in the IKM assignment I state that it is defining the query, it is not a target and it should not be executed. Then in my INSERT_SPECIAL interface loading a target with a filter, I set define query to false, then set true for the target table and execute to false. This interface uses the SELECT_PART query definition interface as a source. Finally in my final interface loading another target I set define query to false again, set target table to true and execute to true – this is the go run it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi statement request in 11g, here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/ Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them. It works in the same way as the ODI MTI IKM, multiple interfaces orchestrated in a package, each interface contributes some SQL, the last interface in the chain executes the multi statement.

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has a new cardinality estimation algorithm. The cardinality estimation logic is responsible for the quality of query plans and plays a major role in the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic incorporates various assumptions and algorithms for OLTP and warehousing workloads. "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." ~ Source: MSDN

    Let us see a quick example of how the new cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility mode set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility mode to 110 or any earlier version, your query will perform like it does on an older version of SQL Server. Now we will execute the following query in two different compatibility modes and see its performance. (Note that my SQL Server instance is version 2014.)

    USE AdventureWorks2014
    GO
    -- -------------------------------
    -- NEW Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
    GO
    EXEC [dbo].[uspGetManagerEmployees] 44
    GO
    -- -------------------------------
    -- Old Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
    GO
    EXEC [dbo].[uspGetManagerEmployees] 44
    GO

    Result of STATISTICS IO

    Compatibility level 120
    Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110
    Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from the Person table, whereas in the case of compatibility level 120 there are only 6 logical reads from the Person table. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • PASS Summit 2011 – Part IV

    - by Tara Kizer
    This is the final blog for my PASS Summit 2011 series. Well, okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore's and Boris Baryshnikov's (both from Microsoft) "Introducing the Microsoft SQL Server Code Named "Denali" Performance Dashboard Reports", Jeremiah Peschka's (blog|twitter) "Rewrite your T-SQL for Great Good!", and Kimberly Tripp's (blog|twitter) "Isolated Disasters in VLDBs".

    Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time. They pointed out the transient nature of the reports. You could be drilling into one to uncover a problem, but the session may have ended by the time you've drilled all of the way down. Also, the reports are for troubleshooting live problems and not historical ones. You can use Management Data Warehouse for historical troubleshooting. The reports provide similar benefits to the Activity Monitor, however Activity Monitor doesn't provide context-sensitive drill-through. One thing I learned in Keith's and Boris' session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server. When a page is read, SQL Server will read the entire extent. So for every page read, you get 7 more reads. If you need any of those 7 extra pages, well, they are already in cache.

    I had a lot of fun in Jeremiah's session about refactoring code, plus I learned a lot. His slides were visually presented in a fun way, which just made for a more upbeat presentation. Jeremiah says that before you start refactoring, you should look at your system. Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow. He talked about code standards. He suggested using common abbreviations for aliases instead of one-letter aliases. I'm a big offender of one-letter aliases, but he makes a good point. He said that join order does not matter to the optimizer, but it does matter to those who have to read your code. Now let's get into refactoring!
    - Eliminate useless things – useless/unneeded joins and columns. If you don't need it, get rid of it!
    - Instead of using DISTINCT/JOIN, replace with EXISTS
    - Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan and use indexes for each union query
    - Branching logic – instead of IF this, IF that, and on and on… use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application
    - Correlated subqueries – YUCK! Replace with a join
    - Eliminate repeated patterns

    Last, but certainly not least, was Kimberly's session. Kimberly is my favorite speaker. I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December. Did I mention she's my favorite speaker? Okay, enough of that. Kimberly's session was packed with demos. I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher, especially since I've got a VLDB with some growing pains. One key takeaway from her session is the idea of using a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary. In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete. Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table. This will simplify custom code that intelligently figures out if an online rebuild is possible.

    There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011. I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do. On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups". Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012. Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD. Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers". In Gail's session, she went over how to fix bad plans and bad query patterns. Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can't sniff them, so it'll optimize for the "average" value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – most common value you'll pass

    How to fix bad query patterns:
    - Don't use them – ha!
    - Catch-all queries: use dynamic SQL, or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures, or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing. For the average Joe, she wouldn't recommend these. Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material. Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session. I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn't quite grasp these concepts. I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up. I really enjoyed the Excel integration demo. It was very cool watching PowerShell build the spreadsheet in real time. I must look into this more! On a side note, I am jealous of Dan's hair. Fabulous hair!

    Brent's session showed us how to quickly gather information about a server that you will be taking over database administration duties for. He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz. I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked. We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out. He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change. I think I'll utilize his update code for some other challenges that we face at my work.

    Read the article
