Search Results

Search found 6988 results on 280 pages for 'if statement'.


  • MySQL: What is the error in this insert+select+where statement?

    - by acidzombie24
    What is wrong with this statement? It works with SQLite and MS SQL (the last time I tested it). I would like a select unless there is no match on name or email; in that case I would like to insert. I have it as one statement because a single statement is easy to keep concurrency-safe (and it's not that complex a statement). INSERT INTO `user_data` ( `last_login_ip`, `login_date`, ...) SELECT @0, @14 WHERE not exists (SELECT * FROM `user_data` WHERE `name` = @15 OR `unconfirmed_email` = @16 LIMIT 1); Exception: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE not exists (SELECT * FROM `user_data` WHERE `name` = 'thename' OR `unconfirmed' at line 31
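    A hedged sketch of the usual workaround, keeping the question's abbreviated column list and driver-style parameter markers: MySQL does not accept a bare SELECT ... WHERE without a FROM clause, so adding FROM DUAL is typically enough to clear this particular syntax error.

        -- Assumption: column list and @-style parameters are abbreviated exactly
        -- as in the question; only FROM DUAL is new.
        INSERT INTO `user_data` (`last_login_ip`, `login_date`, ...)
        SELECT @0, @14 FROM DUAL
        WHERE NOT EXISTS (
            SELECT 1 FROM `user_data`
            WHERE `name` = @15 OR `unconfirmed_email` = @16
            LIMIT 1);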

    Read the article

  • PDOStatement::bindParam() not setting AI value from MySQL insert?

    - by Alan
    I have a simple insert statement using the PHP PDO class. $statement = $db->prepare('INSERT INTO demographics (id,location_id,male,ethnicity_id,birthyear) VALUES (:id,:location_id,:male,:ethnicity_id,:birthyear)'); $statement->bindParam(':id',$demo->id,PDO::PARAM_INT,4); $statement->bindParam(':location_id', $demo->locationid,PDO::PARAM_INT); $statement->bindParam(':male',$demo->male,PDO::PARAM_BOOL); $statement->bindParam(':ethnicity_id',$demo->ethnicityid,PDO::PARAM_INT); $statement->bindParam(':birthyear',$demo->birthyear,PDO::PARAM_INT); $statement->execute(); print_r($demo); Even though the statement executes correctly (the row is correctly written), $demo->id is null. Any thoughts?
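    A hedged aside, assuming id is an AUTO_INCREMENT column: binding :id as an input parameter does not write the generated key back into $demo->id; MySQL exposes it through a separate call made on the same connection right after the INSERT (PDO wraps this as $db->lastInsertId()).

        -- Run immediately after the INSERT, on the same connection.
        SELECT LAST_INSERT_ID();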

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
    Possible Duplicate: Is it ever worthwhile using goto? In a recent article, Andrew Koenig writes: When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments. He then shows two program fragments, one without a goto and one with a goto: if (n < 0) n = 0; Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment: if (n >= 0) goto nonneg; n = 0; nonneg: ; In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program. I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: You cannot jump to a label that is defined in a different function. You cannot jump over a variable initialization. Now consider composing your code of tiny functions that adhere to the single responsibility principle: int clamp_to_zero(int n) { if (n >= 0) goto n_is_not_negative; n = 0; n_is_not_negative: return n; } The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee! Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the first one, we would at least have to insert a comment, or, better yet, an assertion: int clamp_to_zero(int n) { if (n < 0) n = 0; // n is not negative at this point assert(n >= 0); return n; } Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation. It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code. So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one. 1 Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true. 1: Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: A tour of the Task. The properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: Returning a single value from a SQL query with two input parameters. Returning a rowset from a SQL query. Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter. Passing in the SQL Statement from a variable. Passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisiting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab, which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article but to give you an initial heads up this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If, however, you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is. 
    Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure, the Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False). Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation A really interesting property: it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions as we mentioned earlier are a really powerful tool in SSIS and this graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility. FailPackageOnFailure If this task fails does the package? FailParentOnFailure If this task fails does the parent container? A task can be hosted inside another container, e.g. the For Each Loop Container, and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use? 
    The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables. TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps like setting up variables will be covered in the early examples but skipped and simply referred to in later ones. All these examples used the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query and EndDate and StartDate will be used as input parameters. As you can see all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We Add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see SSIS has preceded the variable with the word User. This is a default namespace for variables but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is Namespace. After checking this you will now see where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value then, if you remember from earlier, we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free. 
    In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task and what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row, single-value result. There are no input parameters required so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our Resultset. select p.ProductNumber, p.name, pc.Name as ProductCategoryName FROM Production.ProductCategory pc JOIN Production.ProductSubCategory psc ON pc.ProductCategoryID = psc.ProductCategoryID JOIN Production.Product p ON psc.ProductSubCategoryID = p.ProductSubCategoryID We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables. One for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variables we have in this package you can see we have only two. One for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable and this can be a named Result Name (the column name or alias returned by the query) and not 0. 
    The expected result into our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is. Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up we can now see in the connection manager tray our file connection manager sitting alongside the OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples depending on the options chosen, so we will not cover them again here. We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
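    For reference, a hedged sketch of the kind of statement the variable or file in the last two examples might contain; the column alias is an assumption, chosen so the Result Name mapping can refer to it instead of 0.

        -- Assumption: any single-value query works; the alias CountOfContacts is hypothetical.
        SELECT COUNT(*) AS CountOfContacts
        FROM Person.Contact;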

    Read the article

  • SQLite crash when no data present in table

    - by johnblack45
    Hey, I'm having a problem with my app that causes it to crash when no data is present in the table while using a table view. I have tested my code and it works fine as long as there is data present, but I need it to work when there is no data present. -(void)initialiseTableData { NSMutableArray *array = [[NSMutableArray alloc]init]; sqlite3 *db = [iCaddyAppDelegate getNewDBConnection]; sqlite3_stmt *statement; const char *sql = "select courseId, courseName, totalPar, totalyardage, holePars, holeYardages, holeStrokeIndexs from Course"; if(sqlite3_prepare_v2(db, sql, -1, &statement, NULL)!= SQLITE_OK) { NSAssert1(0,@"Error preparing statement",sqlite3_errmsg(db)); sqlite3_close(db); } else { while (sqlite3_step(statement) == SQLITE_ROW) { Course *temp = [[Course alloc]init]; temp.courseId = sqlite3_column_int(statement,0); temp.courseName = [NSString stringWithFormat:@"%s",(char*)sqlite3_column_text(statement,1)]; temp.totalPar = sqlite3_column_int(statement,2); temp.totalYardage = sqlite3_column_int(statement,3); NSString *tempHolePars = [NSString stringWithFormat:@"%s",(char*)sqlite3_column_text(statement,4)]; NSString *tempHoleYardages = [NSString stringWithFormat:@"%s",(char*)sqlite3_column_text(statement,5)]; NSString *tempHoleStrokeIndexes = [NSString stringWithFormat:@"%s",(char*)sqlite3_column_text(statement,6)]; NSArray *temp1 = [tempHolePars componentsSeparatedByString:@":"]; NSArray *temp2 = [tempHoleYardages componentsSeparatedByString:@":"]; NSArray *temp3 = [tempHoleStrokeIndexes componentsSeparatedByString:@":"]; for(int i = 0; i<=17; i++) { NSString *temp1String = [temp1 objectAtIndex:i]; [temp.holePars insertObject:temp1String atIndex:i]; NSString *temp2String = [temp2 objectAtIndex:i]; [temp.holeYardages insertObject:temp2String atIndex:i]; NSString *temp3String = [temp3 objectAtIndex:i]; [temp.holeStrokeIndexes insertObject:temp3String atIndex:i]; } [array addObject:temp]; } self.list = array; [self.table reloadData]; } }

    Read the article

  • How to avoid or minimise use of check/conditional statement in my scenario?

    - by Muneeb Nasir
    I have a scenario where I get a stream and I need to check for some value. If I get any new value, I have to store it in some data structure. It seems very easy: I can place a conditional statement (if-else) or use the contains method of a set/map to check whether the received value is new or not. But the problem is that the checking will affect my application's performance; in the stream I will receive hundreds of values per second, and if I start checking each and every value I receive, that will surely affect performance. Can anybody suggest a mechanism or algorithm to solve my issue, either by bypassing the checks or at least minimising them?

    Read the article

  • How do I access column data in a previous select statement from a sub-query? [closed]

    - by payling
    PROBLEM How do I access column data in a previous select statement from a sub-query? Below is a simple mock-up of what I'm attempting to do. Tables used: Quotes, Users QUOTES TABLE qid (quote id), owner_uid, creator_uid SQL SYNTAX: SELECT q.qid, q.owner_uid, q.creator_uid, owner.fname, owner.lname FROM quotes q, (SELECT u.fname, u.lname FROM users u WHERE u.uid = q.owner_uid) AS owner WHERE q.qid = '#' SUMMARY I want to be able to use the quote table's owner_uid and specify it for the owner table so I can return all the owner info for that particular quote. The problem is that q.owner_uid is not recognized in the owner sub-query. What am I doing wrong?
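    A hedged sketch of the usual rewrite, using only the table and column names given in the question: a derived table cannot see q.owner_uid, but an ordinary join to users can.

        -- Assumption: users has a uid column matching quotes.owner_uid, as implied above.
        SELECT q.qid, q.owner_uid, q.creator_uid, owner.fname, owner.lname
        FROM quotes q
        JOIN users owner ON owner.uid = q.owner_uid
        WHERE q.qid = '#';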

    Read the article

  • How to avoid or minimise use of check/conditional statement?

    - by Muneeb Nasir
    I have a scenario where I get a stream and I need to check for some value. If I get any new value, I have to store it in some data structure. Well, it seems very easy: I can place a conditional statement (if-else) or use the contains method of a set/map to check whether the received value is new or not. But the problem is that the checking will affect my application's performance; in the stream I'll receive hundreds of values per second, and if I start checking each and every value I receive, that will surely affect performance. Can anybody suggest a mechanism or algorithm that solves my issue, either by bypassing the checks or at least minimising them?

    Read the article

  • Is it better to use preprocessor directive or if(constant) statement?

    - by MByD
    Let's say we have a codebase that is used for many different customers, and we have some code in it that is relevant only for customers of type X. Is it better to use preprocessor directives to include this code only for customers of type X, or to use an if statement? To be more clear: // some code #if TYPE_X_CUSTOMER == 1 // do some things #endif // rest of the code or if(TYPE_X_CUSTOMER) { // do some things } The arguments I can think of are: Preprocessor directives result in a smaller code footprint and fewer branches (on non-optimizing compilers). If statements result in code that always compiles, e.g. if someone makes a mistake that breaks code that is irrelevant to the project he works on, the error will still appear and he will not corrupt the code base; otherwise he will not be aware of the corruption. I have always been told to prefer the usage of the processor over the usage of the preprocessor (if this is an argument at all...). What is preferable when talking about a code base for many different customers?

    Read the article

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question: DECLARE @column_list AS varchar(max) SELECT @column_list = COALESCE(@column_list, ',') + 'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) + ' - ' + Convert(varchar,Description) +'],' FROM OrderDetailDeliveryReview Inner Join InvMast on SKU2 = SKU and LocationTypeID=4 GROUP BY Sku2 , Description ORDER BY Sku2 Set @column_list = Left(@column_list,Len(@column_list)-1) Select @column_list ---------------------------------------- 1 row is returned: ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ... The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list =... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
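    A hedged illustration of the underlying behaviour (the table is chosen only for the demo): when a variable appears on the left of the assignment inside a SELECT, SQL Server evaluates the assignment once per qualifying row, so the variable accumulates text across the whole result set instead of the statement returning rows.

        -- A minimal sketch; sys.objects is used only as a convenient table.
        DECLARE @names varchar(max);
        SELECT @names = COALESCE(@names + ', ', '') + name
        FROM sys.objects;
        SELECT @names;   -- a single row holding the concatenated list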

    Read the article

  • How to understand "if ( obj.length === +obj.length )" Javascript condition statement?

    - by humanityANDpeace
    I have run across a condition statement which I have some difficulty understanding. It looks like this (please note the + sign on the right-hand side): obj.length === +obj.length. Can this condition and its purpose/syntax be explained? Looking at the statement (without knowing it) provokes the impression that it is a dirty hack of some sort, but I am almost certain that underscore.js is rather a well-designed library, so there must be a better explanation. Background I found this statement used in some functions of the underscore.js library (underscore.js annotated source). My guess is that this condition is somehow related to testing whether the variable obj is of Array type (but I am totally unsure). I have tried to test this using this code. var myArray = [1,2,3]; testResult1 = myArray.length === +myArray.length; console.log( testResult1 ); //prints true var myObject = { foo : "somestring", bar : 123 }; testResult2 = myObject.length === +myObject.length; console.log( testResult2 ); //prints false

    Read the article

  • BYOD is not a fashion statement; it’s an architectural shift - by Indus Khaitan

    - by Greg Jensen
    Ten years ago, if you asked a CIO, "How mobile is your enterprise?", the answer would be, "100%, we give a BlackBerry to all our employees." A few things have changed since then: 1. Smartphone form-factors have matured, especially after the launch of the iPhone. 2. Rapid growth of productivity applications and services that enable creation and consumption of digital content. 3. Pervasive mobile data connectivity. There are two threads emerging from the change. Users are rapidly mingling their personas as an individual and as an employee: one second posting a picture of a fancy dinner on Facebook, the next creating an expense report for the same meal on the mobile device. Irrespective of the dual persona, a user's personal and corporate lives intermingle freely on a single piece of hardware, and more often than not it's an employee's personal smartphone being used for everything. A BYOD program enables IT to "control" an employee-owned device while enabling productivity. More often than not the objectives of BYOD programs are financial; instead of the organization, an employee pays for the device. More than a fancy device, BYOD initiatives have become a sort of fashion statement: of corporate productivity, of letting employees be in charge, and a show of corporate empathy in not forcing an archaic form-factor on users in a world of new device launches every month. BYOD is no longer just a means of effectively moving expense dollars and support costs. It does not matter who owns the device; it has to be protected. BYOD brings an architectural shift. BYOD is an architecture which assumes that every device is vulnerable, not just what your employees have brought but also what organizations have purchased for their employees. It's an architecture which forces us to rethink how to provide productivity without compromising security. Why assume that every device is vulnerable? Mobile operating systems are rapidly evolving, with a leading upgrade announcement every other month; it is impossible for IT to catch up. More than that, users are savvier than before. While IT could install locks at the doors to prevent intruders, doing so may degrade productivity, which incentivizes users to bypass restrictions. A rapidly evolving mobile ecosystem has moving parts which are vulnerable. Hence, creating a mobile security platform which uses the fundamental blocks of BYOD architecture, such as identity defragmentation, IT control and data isolation, ensures that the sprawl of corporate data is contained. In the next post, we'll dig deeper into the BYOD architecture.

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE clause are placed at the beginning of the SELECT statement? Example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the list of columns to only the needed ones in a SELECT statement improves performance as opposed to using SELECT *, because the resulting column list is smaller.

    Read the article

  • Why does the return statement not print anything to the console?

    - by dyoverdx
    I Googled it and didn't hit anything useful, so I decided to ask here. I can't use System.out.println for the project that I am working on, so I used the return statement. Everything compiles just fine, but my return statement doesn't print anything to the console; the program just terminates. All I have in the code is an if-else statement that returns true or false. Why don't I see anything on the console? I am using Eclipse Juno's console, by the way.

    Read the article

  • Batch processing JDBC

    - by Wai Hein
    I am practicing JDBC batch processing and I am getting errors: error 1: "Unsupported feature"; error 2: "Execute cannot be empty or null". The property file includes: itemsdao.updateBookName = Update Books set bookname = ? where books.id = ? itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ? I know I could execute both DML statements in one update, but I am practicing batch processing in JDBC. Below is my method: public void update(Item item) { String query = null; try { connection = DbConnector.getConnection(); property = SqlPropertiesLoader.getProperties("dml.properties"); connection.setAutoCommit(false); if ( property == null ) { Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right"); return; } query = property.getProperty("itemsdao.updateBookName"); statement = connection.prepareStatement(query); statement.setString(1, item.getBookName()); statement.setInt(2, item.getId()); statement.addBatch(query); query = property.getProperty("itemsdao.updateAuthorName"); statement = connection.prepareStatement(query); statement.setString(1, item.getAuthorName()); statement.setInt(2, item.getId()); statement.addBatch(query); statement.executeBatch(); connection.commit(); } catch (ClassNotFoundException e) { Logging.log.error("Connection class does not exist", e); } catch (SQLException e) { Logging.log.error("Violating PK constraint", e); } finally { // DbUtil is a helper class that closes the resources DbUtil.close(connection); DbUtil.closePreparedStatement(statement); } }

    Read the article

  • Selectively parsing log files using Java

    - by GPX
    I have to parse a big bunch of log files, which are in the following format: SOME SQL STATEMENT/QUERY DB20000I The SQL command completed successfully. SOME OTHER SQL STATEMENT/QUERY DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. EDIT 1: The first 3 lines (including a blank line) indicate an SQL statement that executed successfully, while the next three show the statement and the exception it caused. darioo's reply below, suggesting the use of grep instead of Java, works beautifully for a single-line SQL statement. EDIT 2: However, the SQL statement/query might not necessarily be a single line. Sometimes it is a big CREATE PROCEDURE...END PROCEDURE block. Can this problem be overcome using only Unix commands too? Now I need to parse through the entire log file, pick all occurrences of the (SQL statement + error) pair and write them to a separate file. Please show me how to do this!

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates memory requirements for queries at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations. When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse; overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spill over tempdb, resulting in poor performance. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for the Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that, with Hash Match operations, overestimation of memory can actually lead to poor performance. To read additional articles I wrote click here. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts Enough theory, let's see an example where we initially sort 1 month of data and then use the stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range. --Example provided by www.sqlworkshops.com create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1) end go Let's execute the stored procedure initially with a 1 month date range. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be ok. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 1 month in our case. This underestimation will lead to the sort spilling over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler. To monitor the amount of data written and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts. Let's recompile the stored procedure and then first execute the stored procedure with a 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is the same as the actual number of rows, 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding the sort spilling over tempdb. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with a 1 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-31' go The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than necessary (6656 KB), based on the 6 month data estimation (259200 rows) instead of the 1 month data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details. Let's recompile the stored procedure and then first execute the stored procedure with a 2 day date range. exec sp_recompile CustomersByCreationDate go --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-02' go The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler. Now let's execute the stored procedure with a 6 month date range. 
    --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed earlier that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without spilling over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low. Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. The other possibility is to use a recompile hint or an optimize for hint to allocate memory for a predefined date range. Let's recreate the stored procedure with a recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, recompile) end go Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. Let's recreate the stored procedure with an optimize for hint for the 6 month date range. --Example provided by www.sqlworkshops.com drop proc CustomersByCreationDate go create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as begin declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c where c.CreationDate between @CreationDateFrom and @CreationDateTo order by c.CustomerName option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30')) end go Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-01-30' exec CustomersByCreationDate '2001-01-01', '2001-06-30' go The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data. 
    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range. --Example provided by www.sqlworkshops.com exec CustomersByCreationDate '2001-01-01', '2001-12-31' go The stored procedure took 1138 ms to complete. 2592000 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint. This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for the Hash Match operation. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that, with Hash Match operations, overestimation of memory can actually lead to poor performance. Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using a recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling the sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use an optimize for hint, but if one sorts more data than hinted, this will still lead to a spill. On the other side, there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. 
This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Using the using statement with WinForms... Good Practice?

    - by Nate Heinrich
    I understand the concept and reasons behind using the using statement, and I use it with things like file resources and remote connections. I was wondering if it is good practice to use the using statement with WinForms forms and dialogs? using (MyDialog dlg = new MyDialog()) { if (dlg.ShowDialog() == EDialogResult.OK) { // Do Something } } Thanks!

    Read the article

  • Does the order of arguments in a conditional statement affect execution time in PHP?

    - by dd0x
    Hi, as far as I know, writing a conditional statement in C such as the following: if ( some_function() == 100 && my_var == 5 ) { //do something } is slower to execute than if ( my_var == 5 && some_function() == 100 ) { //do something } because it's faster to evaluate my_var == 5 than to run all of the code in the function (because if my_var != 5, then the rest of the if statement is not even evaluated)... so I am wondering if the same is true for conditional statements in PHP?

    Read the article
