Search Results

Search found 3474 results on 139 pages for 'prepared statements'.

Page 14/139 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Syntax for "RETURNING" clause in Mysql PDO

    - by dmontain
    I'm trying to add a record and, at the same time, return the id of the added record. I read it's possible to do it with a RETURNING clause. $stmt->prepare("INSERT INTO tablename (field1, field2) VALUES (:value1, :value2) RETURNING id"); but the insertion fails when I add RETURNING. There is an auto-incremented field called id in the table being added to. Can someone see anything wrong with my syntax? Or maybe PDO does not support RETURNING?
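
    RETURNING on INSERT is PostgreSQL/Oracle syntax; MySQL does not accept it, so the statement fails regardless of PDO. The usual route is to run the plain INSERT and then read the auto-increment key back (PDO::lastInsertId() in PHP). A minimal sketch of the same insert-then-read-the-key pattern in JDBC, assuming the tablename table and its AUTO_INCREMENT id from the question, with conn, value1 and value2 in scope:

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO tablename (field1, field2) VALUES (?, ?)",
                Statement.RETURN_GENERATED_KEYS);      // ask the driver to hand back generated keys
        ps.setString(1, value1);
        ps.setString(2, value2);
        ps.executeUpdate();
        ResultSet keys = ps.getGeneratedKeys();        // holds the new auto-increment id
        long newId = keys.next() ? keys.getLong(1) : -1;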

    Read the article

  • Rails: find_by_sql and PREPARE stmt FROM... I can't make it work

    - by Totty
    I have this query and I get an error: images = Image.find_by_sql('PREPARE stmt FROM \' SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = "profile" ORDER BY i.updated_at LIMIT ?, 6\ '; SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit;') Mysql::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit' at line 1: PREPARE stmt FROM ' SELECT * FROM images AS i WHERE i.on_id = 1 AND i.on_type = "profile" ORDER BY i.updated_at LIMIT ?, 6'; SET @lower_limit := ((5 DIV 6) * 6); EXECUTE stmt USING @lower_limit; EDIT: after finding the answer to http://stackoverflow.com/questions/2566620/mysql-i-need-to-get-the-offset-of-a-item-in-a-query, I need more help here to port it to Rails.

    Read the article

  • PreparedStatement and setTimestamp in oracle jdbc

    - by Roman
    Hi everyone, I am using PreparedStatement with a Timestamp in the where clause: PreparedStatement s=c.prepareStatement("select utctimestamp from t where utctimestamp>=? and utctimestamp<?"); s.setTimestamp(1, new Timestamp(1273017600000L)); //2010-05-05 00:00 GMT s.setTimestamp(2, new Timestamp(1273104000000L)); //2010-05-06 00:00 GMT The results I get differ when I have different time zones on the client computer. Is this a bug in Oracle JDBC, or correct behavior? The parameter is a Timestamp, and I expected that no time conversions would be done on the way. The database column type is DATE, but I also checked it with a TIMESTAMP column type, with the same results. Is there a way to achieve the correct result? I cannot change the default timezone in the whole application to UTC. Thanks for your help.
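
    The two-argument setTimestamp() lets the driver convert the value using the JVM's default time zone, which is why the client zone changes the result. JDBC's knob for this is the three-argument overload setTimestamp(int, Timestamp, Calendar); a minimal sketch that pins the conversion to UTC without changing the application-wide default zone (whether it fully covers the DATE-column case can still depend on the Oracle driver version):

        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));   // java.util.Calendar / TimeZone
        PreparedStatement s = c.prepareStatement(
                "select utctimestamp from t where utctimestamp >= ? and utctimestamp < ?");
        s.setTimestamp(1, new Timestamp(1273017600000L), utc);   // 2010-05-05 00:00 GMT
        s.setTimestamp(2, new Timestamp(1273104000000L), utc);   // 2010-05-06 00:00 GMT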

    Read the article

  • PreparedStatement.setString() method without quotes

    - by Slavko
    I'm trying to use a PreparedStatement with code similar to this: SELECT * FROM ? WHERE name = ? Obviously, what happens when I use setString() to set the table and name field is this: SELECT * FROM 'my_table' WHERE name = 'whatever' and the query doesn't work. Is there a way to set the String without quotes so the line looks like this: SELECT * FROM my_table WHERE name = 'whatever' or should I just give it up and use the regular Statement instead (the arguments come from another part of the system, neither of those is entered by a user)?
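
    Placeholders can only stand in for values, never for identifiers such as table or column names, so the driver will always quote whatever setString() supplies. The table name has to go into the SQL text itself; since it comes from another part of the system rather than from a user, validating it against a whitelist keeps that safe, while the name value stays a bound parameter. A sketch under that assumption (ALLOWED_TABLES is a hypothetical list of permitted tables, not an existing API):

        // assumes java.util.Arrays, HashSet and Set are imported
        private static final Set<String> ALLOWED_TABLES =
                new HashSet<String>(Arrays.asList("my_table", "other_table"));

        PreparedStatement forTable(Connection conn, String table, String name) throws SQLException {
            if (!ALLOWED_TABLES.contains(table)) {
                throw new IllegalArgumentException("unexpected table name: " + table);
            }
            // the identifier is concatenated after validation; the value stays a bound parameter
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM " + table + " WHERE name = ?");
            ps.setString(1, name);
            return ps;
        }

    Falling back to a plain Statement would not help, because the identifier problem is the same there; keeping the PreparedStatement for the name value is still worthwhile.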

    Read the article

  • Infinite sharing system (PHP/MySQLi)

    - by Toine Lille
    I'm working on a discount system for whichever customer shares a product and brings in new customers. Each unique visit = $0.05 off, each new customer = $0.50 off (it's a cheap product so yeah, no big numbers). When a new customer shares the site, the customer initially responsible for the new customer (if any) will get half of the new customer's discount as well. The initial customer would get a fourth for the next level and the new customer half of that, etc., creating a tree or pyramid that could be infinite:

        Initial customer ($1.35 discount: 2 new + 3 visits + half of 1 new + 2 visits)
            Visitor ($0)
            Visitor ($0)
            New customer ($0.60)
                Visitor ($0)
                Visitor ($0)
                Newer customer ($0)
            New customer ($0)
            Visitor ($0)

    The customers are saved along with their IP addresses (bin2hex(inet_pton)) in a database table (customers) with info like a unique id, e-mail address and the first date/time they purchased a product (= time of registration). The shares are saved in a separate table within the same database (sharing). Each unique IP address that visits the site creates a new row featuring the IP address (also saved as bin2hex(inet_pton)), the id of the customer who shared it and the date/time of the visit. Sharing goes via URL, featuring a GET element containing the customer's id. Visits and new customers overlap, as visits will always occur before the new customer does. That's fine. The date/times are used just to make it a little more secure (I also use the IP along with cookies to see if people cheat the system). If an IP is already in the sharing or customer tables, it does not count and will not create a new entry.

    Now the problem is, how to make the infinity happen and apply the different values to it? That's all I'd need to know. It needs to calculate the discount for each customer separately, but also allow for monitoring altogether (though that's just a matter of passing all IDs through it). I figured I'd start (after the database connection) with

        $stmt = $con->prepare('SELECT ip,datetime FROM sharing WHERE sender=?');
        $stmt->bind_param('i',$customerid);
        $stmt->execute();
        $stmt->store_result();
        $discount = $discount + ($stmt->num_rows * 0.05);
        $stmt->bind_result($ip,$timeofsharing);

    to translate all the visits to $0.05 of discount each. To check for the new customers that came from these visits, I wrote the following:

        while ($stmt->fetch()) {
            $stmt2 = $con->prepare("SELECT datetime FROM users WHERE ip=?");
            $stmt2->bind_param('s',$ip);
            $stmt2->execute();
            $stmt2->store_result();
            $stmt2->bind_result($timeofpurchase);

    Followed by a little more security comparing the datetimes:

        while ($stmt2->fetch()) {
            if (strtotime($timeofpurchase) < strtotime($timeofsharing)) {
                $discount = $discount + 0.50;
            }

    But this is just for the initial customer's direct results. If I wanted to check for the next level, I'd basically have to put the exact same check and loop inside itself, checking each new customer that the initial customer brought to the site, and then for the next level again to check all of the newer customers, etc., etc. What to do? / Where to go? / What would be the correct practice for this? Thanks!
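
    The nesting described at the end is what a recursive function handles: compute a customer's own visit/new-customer credit, then recurse into each customer they brought in with the multiplier halved. A rough sketch of that shape in Java (countVisits and findNewCustomers are hypothetical stand-ins for the two SELECTs against sharing and customers, and the halving follows one reading of the rules above, so treat the numbers as illustrative):

        double discountFor(int customerId) {
            return discountFor(customerId, 1.0);
        }

        double discountFor(int customerId, double factor) {
            double earned = 0.05 * countVisits(customerId);        // unique visits this customer generated
            double total = 0.0;
            for (int childId : findNewCustomers(customerId)) {     // visitors who became customers
                earned += 0.50;
                total += discountFor(childId, factor / 2.0);       // each level down counts for half
            }
            return factor * earned + total;
        }

    One query per customer is fine for small trees; for monitoring everyone at once it is usually cheaper to load the whole sharing/customers graph in one or two queries and walk it in memory.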

    Read the article

  • PreparedStatement throws MySQLSyntaxErrorException

    - by sqlBugs
    I have the following Java code: String query = "Select 1 from myTable where name = ? and age = ?"; PreparedStatement stmt = conn.prepareStatement(query); stmt.setString(1, name); stmt.setInt(2, age); ResultSet rs = stmt.executeQuery(); Whenever I run the above code, it throws an exception on line 2 (where the PreparedStatement is declared): Error 0221202: SQLException = com.mysql.jdbc.exceptions.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '? AND age = ?' at line 1 Does anyone have any suggestions about what I am doing wrong? Thanks!
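
    The message usually means the literal '?' characters reached MySQL as ordinary statement text instead of as bound placeholders. One way to confirm what the driver actually sends is Connector/J's profileSQL connection property, which logs each statement; a diagnostic sketch (the URL, database and credentials here are placeholders, not taken from the question):

        // profileSQL=true makes Connector/J log every statement it sends to the server
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb?profileSQL=true", "user", "pass");
        // ...then run the same prepareStatement / setString / setInt / executeQuery code as above

    If the log shows the query going out with the question marks still in it, something in between is executing the raw SQL string rather than the prepared statement.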

    Read the article

  • Dynamically Insert Variables into DB Table using PreparedStatement

    - by gran_profaci
    I was working with PreparedStatement today and noticed that it uses setString(), setTimestamp(), etc. to insert variables into the DB. I basically have 20 tables, each with at least 15 columns, and it would not be feasible for me to manually write down all the setters. Considering that I have an ArrayList "Vals" which contains all the variables to be inputted in String format (obtained by getString() using PreparedStatement itself), is there any way I can do an insert without expressly using the setters? That would save me a lot of time.
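
    One generic option is PreparedStatement.setObject(index, value) (or setString(), since everything here is already a String) driven by a loop, so only the SQL text needs to know the column list. A minimal sketch, assuming vals (the "Vals" list from the question) holds the values in the same order as the '?' placeholders in sql:

        PreparedStatement ps = conn.prepareStatement(sql);   // e.g. "INSERT INTO t (c1, c2, c3) VALUES (?, ?, ?)"
        for (int i = 0; i < vals.size(); i++) {
            ps.setObject(i + 1, vals.get(i));                // JDBC parameters are 1-based
        }
        ps.executeUpdate();

    Whether setObject() with String values converts cleanly into non-text columns depends on the driver, so setString() plus explicit casts in the SQL is the more conservative variant.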

    Read the article

  • INSERT SQL in Java

    - by Pierre
    Hello. I have a Java application and I want to use a SQL database. I have a class for my connection : public class SQLConnection{ private static String url = "jdbc:postgresql://localhost:5432/table"; private static String user = "postgres"; private static String passwd = "toto"; private static Connection connect; public static Connection getInstance(){ if(connect == null){ try { connect = DriverManager.getConnection(url, user, passwd); } catch (SQLException e) { JOptionPane.showMessageDialog(null, e.getMessage(), "Connection Error", JOptionPane.ERROR_MESSAGE); } } return connect; } } And now, in another class I succeeded in printing my values, but when I attempt to insert a value nothing happens ... Here's my code : try { Statement state = SQLConnection.getInstance().createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,ResultSet.CONCUR_READ_ONLY); Statement state2 = SQLConnection.getInstance().createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,ResultSet.CONCUR_UPDATABLE); state2.executeUpdate("INSERT INTO table(field1) VALUES (\"Value\")"); // Here's my problem ResultSet res = state.executeQuery("SELECT * FROM plateau");
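
    One likely culprit, offered as a guess from the snippet alone: in PostgreSQL (and standard SQL) double quotes delimit identifiers, so "Value" is read as a column name rather than a string literal, and table is also a reserved word. String literals take single quotes, or a PreparedStatement can handle the quoting entirely; a sketch using the plateau table from the SELECT (substitute the real table name):

        // literal quoted correctly with single quotes
        state2.executeUpdate("INSERT INTO plateau (field1) VALUES ('Value')");

        // or let a PreparedStatement handle quoting and escaping
        PreparedStatement ins = SQLConnection.getInstance()
                .prepareStatement("INSERT INTO plateau (field1) VALUES (?)");
        ins.setString(1, "Value");
        ins.executeUpdate();

    executeUpdate() returns the number of affected rows, which is a quick way to confirm whether the INSERT reached the database at all.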

    Read the article

  • Can I get the full query that a PreparedStatement is about to execute?

    - by ufk
    I'm working with PreparedStatement with a MySQL server. Example: String myQuery = "select id from user where name = ?"; PreparedStatement stmt = sqlConnection.prepareStatement(myQuery); stmt.setString(1, "test"); stmt.executeQuery(); ResultSet rs = stmt.getResultSet(); How can I receive the full SQL query that is about to be executed on the MySQL server? Thanks!
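
    JDBC itself does not expose the final interpolated SQL, but MySQL Connector/J's PreparedStatement implementation includes the statement with the bound values substituted in its toString() output, so a quick driver-specific (and not guaranteed) way to see it is:

        PreparedStatement stmt = sqlConnection.prepareStatement("select id from user where name = ?");
        stmt.setString(1, "test");
        System.out.println(stmt);   // Connector/J prints something like: "...: select id from user where name = 'test'"
        ResultSet rs = stmt.executeQuery();

    The driver-independent alternative is to enable the general query log on the MySQL server and read what actually arrives.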

    Read the article

  • What's wrong with my using pdo this way?

    - by user198729
    $sql = "SELECT * FROM table ORDER BY :sort :dir LIMIT :start, :results"; $stmt = $dbh->prepare($sql); $stmt->execute(array( 'sort' => $_GET['sort'], 'dir' => $_GET['dir'], 'start' => $_GET['start'], 'results' => $_GET['results'], ) ); I tried to use prepare to do the job,but $stmt->fetchAll(PDO::FETCH_ASSOC); returns nothing.

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh. The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). It creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right click on the TableAdapter object, the context menu has a “Configure” option that starts the “TableAdapter Configuration Wizard”. The first dialog of that wizard has an Advanced Options button, which leads to an option titled “Refresh the data table”. When used with SQL Server tables, that option causes a statement of the form “select field1, field2, …” to be added on to the end of the commands for the TableAdapter’s InsertCommand and UpdateCommand. Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options Dialog Box has a note stating, “Refreshing the data table is only supported on databases that support batching of SQL statements.” This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported. But I cannot find it. Any ideas?

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss the concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP
    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing
    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:

        - Don’t throttle down a DOP because other statements are running on the system
        - Stay within the physical limits of a system’s processing power

    Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully the practice – is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements
    To determine how many statements we will consider running in parallel, a single parameter should be looked at. That parameter is called PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here, but do realize that anything serial (e.g. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let’s assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain’t broken, don’t fix it) and the long running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on and the benefits of running these in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement
    Now that you know how to control how many of your statements are considered to run in parallel, let’s talk about the specific degree of any given statement that will be evaluated. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT.
This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let’s say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit to 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than DOP of 32. Concurrently Running a High DOP A basic assumption about running high DOP statements at high concurrency is that you at some point in time (and this is true on any parallel processing platform!) will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle’s case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in.  PARALLEL_SERVER_TARGET is set per instance (so needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP, which equates to 4* Default DOP (2 processes for a DOP, default value is 2 * Default DOP, hence a default of 4 * Default DOP). Let’s say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency the PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can control easily how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead and look at these parameters and set these based on your requirements.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
      The most common performance mistake SQL Server developers make: SQL Server estimates memory requirement for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different set of parameters values / predicates and hence different amount of memory can be estimated based on different set of parameter values / predicates. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article which covers Hash Match operations.   When the plan is cached by using stored procedure or other plan caching mechanisms like sp_executesql or prepared statement, SQL Server estimates memory requirement based on first set of execution parameters. Later when the same stored procedure is called with different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse, overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb resulting in poor performance.   This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a stored procedure does not change significantly based on predicates.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts   Enough theory, let’s see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.   Let’s create a stored procedure that sorts customers by name within certain date range.   --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1)       end go Let’s execute the stored procedure initially with 1 month date range.   set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete.     The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.       
The estimated number of rows, 43199.9 is similar to actual number of rows 43200 and hence the memory estimation should be ok.       There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete.      The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.      The estimated number of rows, 43199.9 is way different from the actual number of rows 259200 because the estimation is based on the first set of parameter value supplied to the stored procedure which is 1 month in our case. This underestimation will lead to sort spill over tempdb, resulting in poor performance.      There was Sort Warnings in SQL Profiler.    To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution, for additional information refer to the webcast: www.sqlworkshops.com/webcasts.     Let’s recompile the stored procedure and then let’s first execute the stored procedure with 6 month date range.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts for further details.   exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms.    The stored procedure was granted 26832 KB of memory.      The estimated number of rows, 259200 is similar to actual number of rows of 259200. Better performance of this stored procedure is due to better estimation of memory and avoiding sort spill over tempdb.      There was no Sort Warnings in SQL Profiler.       Now let’s execute the stored procedure with 1 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution.     This stored procedure was granted more memory (26832 KB) than necessary memory (6656 KB) based on 6 months of data estimation (259200 rows) instead of 1 month of data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is 6 months in this case. This overestimation did not affect performance, but it might affect performance of other concurrent queries requiring memory and hence overestimation is not recommended. This overestimation might affect performance Hash Match operations, refer to article Plan Caching and Query Memory Part II for further details.    Let’s recompile the stored procedure and then let’s first execute the stored procedure with 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms.      The stored procedure was granted 1024 KB based on 1440 rows being estimated.      There was no Sort Warnings in SQL Profiler.      Now let’s execute the stored procedure with 6 month date range. 
--Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go   The stored procedure took 955 ms to complete, way higher than 679 ms or 294ms we noticed before.      The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past this stored procedure with 6 month date range needed 26832 KB of memory to execute optimally without spill over tempdb. This is clear underestimation of memory and the reason for the very poor performance.      There was Sort Warnings in SQL Profiler. Unlike before this was a Multiple pass sort instead of Single pass sort. This occurs when granted memory is too low.      Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined date range.   Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, recompile)       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.      The stored procedure with 1 month date range has good estimation like before.      The stored procedure with 6 month date range also has good estimation and memory grant like before because the query was recompiled with current set of parameter values.      The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.     Let’s recreate the stored procedure with optimize for hint of 6 month date range.   --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin       declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime       select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c             where c.CreationDate between @CreationDateFrom and @CreationDateTo             order by c.CustomerName       option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))       end go Let’s execute the stored procedure initially with 1 month date range and then with 6 month date range.   --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48ms and 291 ms in line with previous optimal execution times.    The stored procedure with 1 month date range has overestimation of rows and memory. This is because we provided hint to optimize for 6 months of data.      The stored procedure with 6 month date range has good estimation and memory grant because we provided hint to optimize for 6 months of data.       
    Let’s execute the stored procedure with the 12 month date range using the currently cached plan for the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb, which could be very expensive compared to compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • How to test using conditional defines if the application is Firemonkey one?

    - by Gad D Lord
    I use DUnit. It has a VCL GUITestRunner and a console TextTestRunner. In a unit used by both Firemonkey and VCL Forms applications I would like to achieve the following: If Firemonkey app, if target is OS X, and executing on OS X - TextTestRunner If Firemonkey app, if target is 32-bit Windows, executing on Windows - AllocConsole + TextTestRunner If VCL app - GUITestRunner {$IFDEF MACOS} TextTestRunner.RunRegisteredTests; // Case 1 {$ELSE} {$IFDEF MSWINDOWS} AllocConsole; {$ENDIF} {$IFDEF FIREMONKEY_APP} // Case 2 <--------------- HERE TextTestRunner.RunRegisteredTests; {$ELSE} // Case 3 GUITestRunner.RunRegisteredTests; {$IFEND} {$ENDIF} Which is the best way to make Case 2 work?

    Read the article

  • Most readable way to write simple conditional check

    - by JRL
    What would be the most readable/best way to write a multiple conditional check such as shown below? Two possibilities that I could think of (this is Java but the language really doesn't matter here): Option 1: boolean c1 = passwordField.getPassword().length > 0; boolean c2 = !stationIDTextField.getText().trim().isEmpty(); boolean c3 = !userNameTextField.getText().trim().isEmpty(); if (c1 && c2 && c3) { okButton.setEnabled(true); } Option 2: if (passwordField.getPassword().length > 0 && !stationIDTextField.getText().trim().isEmpty() && !userNameTextField.getText().trim().isEmpty()) { okButton.setEnabled(true); } What I don't like about option 2 is that the line wraps and then indentation becomes a pain. What I don't like about option 1 is that it creates variables for nothing and requires looking at two places. So what do you think? Any other options?
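
    A third option often suggested (a sketch, offered as a matter of taste rather than a rule): move the checks into a method whose name says what they mean, which keeps the call site short and also drops the if entirely by passing the boolean straight to setEnabled():

        private boolean requiredFieldsFilled() {
            return passwordField.getPassword().length > 0
                    && !stationIDTextField.getText().trim().isEmpty()
                    && !userNameTextField.getText().trim().isEmpty();
        }

        // call site
        okButton.setEnabled(requiredFieldsFilled());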

    Read the article

  • How to understand "if ( obj.length === +obj.length )" Javascript condition statement?

    - by humanityANDpeace
    I have run across a condition statement which I have some difficulties to understand. It looks like (please note the +-sign on the right-hand-side) this: obj.length === +obj.length. Can this condition and its purpose/syntax be explained? Looking at the statement (without knowing it) provokes the impression that it is a dirty hack of some sort, but I am almost certain that underscore.js is rather a well designed library, so there must be a better explanation. Background I found this statement used in some functions of the underscore.js library (underscore.js annotated source). My guesswork is that this condition statement is somehow related to testing for a variable obj to be of Array type? (but I am totally unsure). I have tried to test this using this code. var myArray = [1,2,3]; testResult1 = myArray.length === +myArray.length; console.log( testResult1 ); //prints true var myObject = { foo : "somestring", bar : 123 }; testResult2 = myObject.length === +myObject.length; console.log( testResult2 ); //prints false

    Read the article

  • Return cell reference as result of if statement with vlookups.

    - by EMJ
    I have two sets of data in excel. One contains a set of data which represents the initial step of a process. The other set of data represents the additional steps which take place after the first step is completed. Each of the data records in the "additional step data" has an id in a column. I need to find the identifying codes of the "additional step data" which correspond with the initial step data records. The problem is that I have to match the data in 4 columns between the two data sets and return the id of the "additional step data". I started by doing a combination of an if and vlookup functions, but I got stuck when I tried to figure out how to get the if statement to reference the id of the matching "additional step data". Basically I am trying to avoid having to search by manually filtering between two sets of data and finding corresponding records. Does anyone have any idea about how to do this?

    Read the article

  • Rails - Create if record doesn't exist or else update... What's the best way to do this?

    - by ChrisWesAllen
    Hi, I have a create statement for some models but it's creating a record within a join table regardless of whether the record exists. Here is what my code looks like. @user = User.find(current_user) @event = Event.find(params[:id]) for interest in @event.interests @user.choices.create(:interest => interest, :score => 4) end The problem is it creates records no matter what. I would like it to create a record if it doesn't exist; if a record does exist I would just like it to take the attribute of the found record and add or subtract 1. So, I've been looking around and I see something called find_or_create_by. My question is what happens if it finds? Preferably if it finds, I would like to take the current :score attribute and +1. So is it possible to find or create by id? I'm not sure what attribute I would find by since the model I'm looking at is a join model which only has id foreign keys and the score attribute. I tried @user.choices.find_or_create_by_user(:user => @user.id, :interest => interest, :score => 4) but got "undefined method `find_by_user'"... Any ideas or help?

    Read the article

  • The Cash or Credit problem

    - by Josh K
    If you go to a store and ask "Cash or Credit?" they might simply say "Yes." This doesn't tell you anything as you posed an OR statement. if(cash || credit) With humans it's possible that they might respond "Both" to that question, or "Only {cash | credit}." Is there a way (or operator) to force a statement to return the TRUE portions of a statement? For example: boolean cash = true; boolean credit = true; boolean cheque = false; if(cash || credit || cheque ) { // In here you would have an array with cash and credit in it because both of those are true }
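
    There is no operator that yields the true operands; || and && only produce a single boolean. If the goal is to know which options were accepted, the usual route is to record them individually, for example (a minimal Java sketch along the lines of the snippet above):

        List<String> accepted = new ArrayList<String>();   // java.util.List / ArrayList
        if (cash)   accepted.add("cash");
        if (credit) accepted.add("credit");
        if (cheque) accepted.add("cheque");

        if (!accepted.isEmpty()) {
            // for the values above, accepted now holds ["cash", "credit"]
        }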

    Read the article

  • switch vs. if...else if...else

    - by John Hartsock
    Guys, I have a couple of questions: Is there a performance difference in JavaScript between a switch statement and an if...else if...else? If so, why? Is the behavior of switch and if...else if...else different across browsers? (Firefox, IE, Chrome, Opera, Safari) The reason for asking this question is that it seems I get better performance on a switch statement with approx 100 cases in Firefox, but in IE I get better performance with 100 if...else if...else.

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT STATEMENTS. But the INSERT STATEMENTS so generated exceed 2500 characters. I am obliged to execute them in SQL Plus, so I receive an error message. This is my Oracle table: CREATE TABLE SAMPLE_TABLE ( C01 VARCHAR2 (5 BYTE) NOT NULL, C02 NUMBER (10) NOT NULL, C03 NUMBER (5) NOT NULL, C04 NUMBER (5) NOT NULL, C05 VARCHAR2 (20 BYTE) NOT NULL, c06 VARCHAR2 (200 BYTE) NOT NULL, c07 VARCHAR2 (200 BYTE) NOT NULL, c08 NUMBER (5) NOT NULL, c09 NUMBER (10) NOT NULL, c10 VARCHAR2 (80 BYTE), c11 VARCHAR2 (200 BYTE), c12 VARCHAR2 (200 BYTE), c13 VARCHAR2 (4000 BYTE), c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL, c15 CHAR (1 BYTE), c16 CHAR (1 BYTE) ); ASSUMPTIONS: a) I am OBLIGED to export table data as INSERT STATEMENTS; I am allowed to use UPDATE statements, in order to avoid the SQL*Plus error "sp2-0027 input is too long(2499 characters)"; b) I am OBLIGED to use SQL*Plus to execute the script so generated. c) Please assume that every record can contain special characters: CHR(10), CHR(13), and so on; d) I CAN'T use SQL Loader; e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL Plus.

    Read the article
