Search Results

Search found 7596 results on 304 pages for 'prepared statement'.

Page 61/304 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Make these 2 SQL statements into one with the same functionality

    - by Phil
    I have SQL statement one:

        select linkscat.id, linkscat.category
        from linkscat, contentlinks, links
        where contentlinks.linksid = links.id
        and contentlinks.contentid = @contentid
        and links.linkscatid = linkscat.id
        order by linkscat.category

    and SQL statement two, which takes a parameter called @linkscat (the 'id' from the statement above):

        select * from links where linkscatid = @linkscat

    I'm running into all kinds of trouble trying to use many SqlDataReaders, nested repeaters, etc., but it would be great if all the work could be done in the one statement. Is this possible, and if so, please can you help by posting the final statement? Thanks greatly, any help much appreciated!
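    One possible way to fold the two into a single statement (a sketch based only on the tables and columns named above; it returns every link in each category attached to @contentid, which is what running statement two once per row of statement one would produce):

        select linkscat.id, linkscat.category, links.*
        from linkscat
        join links on links.linkscatid = linkscat.id
        where linkscat.id in (
            select l.linkscatid
            from contentlinks
            join links l on contentlinks.linksid = l.id
            where contentlinks.contentid = @contentid
        )
        order by linkscat.category

    Bound to a single repeater (or grouped client-side on linkscat.id), this avoids the nested SqlDataReaders entirely.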

  • What's the most efficient way to repeatedly remove leading text using Vim?

    - by John Topley
    What's the most efficient way to remove the text "2010-04-07 14:25:50,773 DEBUG This is a debug log statement - " from a log file like the extract below using Vim?

        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 9,8
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 1,11
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 5,2
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 8,4

    This is what the result should look like:

        9,8
        1,11
        5,2
        8,4

    Note that on this occasion I'm using gVim on Windows, so please don't suggest any UNIX programs which may be better suited to the task - I have to do it using Vim.
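    A minimal sketch of one approach, assuming every line ends with the same "- " delimiter before the part to keep (the greedy .* matches up to the last "- " on each line):

        :%s/^.*- //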

  • SQL Server 2008 database timeout after delete

    - by stephenbayer
    I'm running the following statement. It works locally with SQL Server 2008; however, there is SQL Server 2008 Express on the development server, and after the SQL statement runs, I am unable to run SELECT statements against the table in which I deleted the record. Both databases were created with the same table creation scripts.

        DELETE FROM [dbo].[tblMiddayMover] WITH (ROWLOCK) WHERE [idMiddayMover] = @IdMiddayMover

    For what reasons would this statement ever cause the database to hang? After executing that statement, the following SELECT statement causes an error:

        SELECT * FROM [dbo].[tblMiddayMover] WHERE [fldActive] = 1

    I get the following error: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. I can run SELECT statements on any other table with no issues.
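    A hedged diagnostic sketch (an assumption, not confirmed by the post): this pattern usually means the DELETE ran inside a transaction that was never committed, so its exclusive locks are still blocking readers. The DMV below, available on SQL Server 2005 and later including Express, shows who is holding locks in the current database:

        SELECT request_session_id, resource_type, request_mode, request_status
        FROM sys.dm_tran_locks
        WHERE resource_database_id = DB_ID()

    If a stray session shows up holding X locks on tblMiddayMover, DBCC OPENTRAN can help identify the open transaction behind it.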

  • Error Handling Examples (C#)

    “The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs.” (OWASP, 2011)

    No Error Handling
    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is not handled within the application, then the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

    Figure 1: No Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            dt.ReadXml(BuildRequest(searchTerm, resultCount));
            return dt;
        }

    Generic Error Handling
    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th, 2010, Louis Lazaris clearly described a try-catch statement by defining both the try and catch aspects of the statement. “The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed.” (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects all execution to the catch section. The catch section receives an object containing information about the error that occurred, so that the system can gracefully handle the error properly. When errors occur they are commonly logged in some form. This form could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes applications can recover, while other errors force an application to close.

    Figure 2: Generic Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (Exception ex)
            {
                // Handle all Exceptions
            }
            return dt;
        }

    Error Specific Error Handling
    Like generic error handling, error specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error that was generated by the SOAP web service. Now, suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them if any other type of error occurred, such as a network timeout. This would be very tedious to accomplish using the generic error handling methodology, and it is the use case for the error specific methodology: a try-catch statement can catch various types of errors depending on the type of error that occurred.

    In Figure 3, the code attempts to handle TimeoutException differently compared to how it potentially handles all other errors. This allows for specific error handling for each type of known error, while still allowing for error handling of any unknown error that may occur during the execution of the try-catch statement.

    Figure 3: Error Specific Error Handling

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (TimeoutException ex)
            {
                // Handle TimeoutException only
            }
            catch (Exception)
            {
                // Handle all other Exceptions
            }
            return dt;
        }
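    As a hedged illustration of the logging idea mentioned above (the Logger class is hypothetical, not part of the original article):

        public DataSet Search(string searchTerm, int resultCount)
        {
            DataSet dt = new DataSet();
            try
            {
                dt.ReadXml(BuildRequest(searchTerm, resultCount));
            }
            catch (Exception ex)
            {
                Logger.Write(ex); // hypothetical logging helper: email, database, file, etc.
                throw;            // rethrow so callers still observe the failure
            }
            return dt;
        }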

  • How can I encode four unsigned bytes (0-255) to a float and back again using HLSL?

    - by Statement
    Hello! I am facing a task where one of my HLSL shaders requires multiple texture lookups per pixel. My 2D textures are fixed to 256*256, so two bytes should be sufficient to address any given texel given this constraint. My idea then is to put two xy-coordinates in each float, giving me eight xy-coordinates in pixel space when packed in a Vector4-format image. These eight coordinates are then used to sample another texture(s). The reason for doing this is to save graphics memory and to attempt to optimize processing time, since I then don't require as many texture lookups. By the way: does anyone know if encoding/decoding 16 bytes from/to 4 floats using 1 sampling is slower than 4 samplings with unencoded data?
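    A minimal sketch of the pack/unpack arithmetic, assuming each coordinate is an integer in 0-255 (two coordinates per float stay well inside a float's 24-bit mantissa, so the round trip is exact; the helper names are made up):

        // HLSL - hypothetical helpers, not from the original post
        float pack2(float x, float y)   // x, y integers in [0, 255]
        {
            return x * 256.0 + y;       // result in [0, 65535], exactly representable
        }

        float2 unpack2(float v)
        {
            float x = floor(v / 256.0);
            return float2(x, v - x * 256.0);
        }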

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only:

    Addition (c = a + b)
    Subtraction (c = a - b)
    Division (c = a / b)
    Multiplication (c = a * b)
    Modulus (c = a % b)
    Minimum (c = min(a, b))
    Maximum (c = max(a, b))
    Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.)
    Jumps (goto, for, etc.)

    The bitwise operations I want to be able to support are:

    Or (c = a | b)
    And (c = a & b)
    Xor (c = a ^ b)
    Left Shift (c = a << b)
    Right Shift (c = a >> b) (All integers are signed so this is a problem)
    Signed Shift (c = a >>> b)
    One's Complement (a = ~b) (Already found a solution, see below)

    Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump.

    Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code:

        // Bitwise one's complement
        b = ~a;
        // Arithmetic one's complement
        b = -1 - a;

    I also remember the old shift hack when dividing with a power of two, so the bitwise shift can be expressed as:

        // Bitwise left shift
        b = a << 4;
        // Arithmetic left shift
        b = a * 16; // 2^4 = 16

        // Signed right shift
        b = a >>> 4;
        // Arithmetic right shift
        b = a / 16;

    For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture would have supplied bit operations. I would also like to know if there is a fast/easy way of computing the power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications:

        b = 1;
        switch (a)
        {
            case 15: b = b * 2;
            case 14: b = b * 2;
            // ... exploiting fallthrough (instruction memory is magnitudes larger)
            case 2: b = b * 2;
            case 1: b = b * 2;
        }

    Or a Set & Jump approach:

        switch (a)
        {
            case 15: b = 32768; break;
            case 14: b = 16384; break;
            // ... exploiting the fact that a jump is faster than one additional mul
            // at the cost of doubling the instruction memory footprint.
            case 2: b = 4; break;
            case 1: b = 2; break;
        }
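    A hedged sketch (plain C, not the target architecture's dialect) of AND built from the allowed operations only. It assumes non-negative inputs for clarity; negative values would need extra care because C's / and % truncate toward zero rather than flooring:

        /* AND of two non-negative 16-bit values using +, *, /, %, == only */
        int and16(int a, int b)
        {
            int result = 0;
            int bit = 1;
            for (int i = 0; i < 15; i++)      /* 15 value bits of a non-negative int16 */
            {
                if ((a % 2) * (b % 2) == 1)   /* both low bits set */
                    result += bit;
                a /= 2;                       /* right shift by one for non-negative a */
                b /= 2;
                bit *= 2;
            }
            return result;
        }

    Given AND, the identity a + b = (a | b) + (a & b) yields the other two: a | b = a + b - and16(a, b), and a ^ b = a + b - 2 * and16(a, b) (under the same non-negative assumption).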

  • How can I make nested string splits?

    - by Statement
    I have what seemed at first to be a trivial problem but turned out to be something I can't figure out how to easily solve. I need to be able to store lists of items in a string. Those items, in turn, can be a list, or some other value that may contain my separator character. I have two different methods that unpack the two different cases, but I realized I need to encode the contained values to protect any separator characters used with string.Split. To illustrate the problem:

        string[] nested = { "mary;john;carl", "dog;cat;fish", "plainValue" };
        string list = string.Join(";", nested);
        string[] unnested = list.Split(';'); // EEK! returns 7 items, expected 3!

    This would produce the list "mary;john;carl;dog;cat;fish;plainValue", a value I can't split to get the three original nested strings from. Indeed, instead of the three original strings, I'd get 7 strings on split, so this approach doesn't work at all. What I want is to allow the values in my string to be encoded so I can unpack/split the contents just the way they were before I packed/joined them. I assume I might need to move away from string.Split and string.Join, and that is perfectly fine. I might just have overlooked some useful class or method. How can I allow any string values to be packed/unpacked into lists? I prefer neat, simple solutions over bulky ones if possible. For the curious mind: I am making extensions for PlayerPrefs in Unity3D, and I can only work with ints, floats and strings. Thus I chose strings to be my data carrier. This is why I am making this nested list of strings.
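    One hedged approach (not from the post): percent-encode each item before joining, so the separator can never occur inside an item, then decode after splitting:

        using System;
        using System.Linq;

        class PackDemo
        {
            static void Main()
            {
                string[] nested = { "mary;john;carl", "dog;cat;fish", "plainValue" };

                // Escape each item; a ';' inside a value becomes "%3B".
                string packed = string.Join(";", nested.Select(Uri.EscapeDataString).ToArray());

                // The split is now unambiguous; decode each piece to recover the originals.
                string[] unpacked = packed.Split(';')
                                          .Select(Uri.UnescapeDataString)
                                          .ToArray();

                Console.WriteLine(unpacked.Length); // 3
            }
        }

    The same round trip nests arbitrarily deep, since an already-encoded item is just another string.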

  • How can I write fast colored output to Console?

    - by Statement
    Hello world! I want to learn if there is another (faster) way to output text to the console application window using C# .NET than with the simple Write, BackgroundColor and ForegroundColor methods and properties. I learned that each cell has a background color and a foreground color, and I would like to cache/buffer/write faster than using the mentioned methods. Maybe there is some help using the Out buffer, but I don't know how to encode the colors into the stream, if that is where the color data resides. This is for a retro-style text-based game I am wanting to implement, where I make use of the standard colors and ASCII characters for laying out the game. Please help :)
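    A small hedged sketch of one common mitigation (a sketch, not a full renderer): color changes are what make console output slow, so batching runs of same-colored characters into a single Write call usually helps a lot:

        using System;
        using System.Text;

        class FrameDemo
        {
            static void Main()
            {
                var row = new StringBuilder();
                for (int i = 0; i < 70; i++) row.Append('#');   // one run of identical color

                Console.ForegroundColor = ConsoleColor.Green;
                Console.SetCursorPosition(0, 0);
                Console.Write(row.ToString());   // one call instead of 70
                Console.ResetColor();
            }
        }

    For true cell-by-cell buffers, the Win32 WriteConsoleOutput API (via P/Invoke) writes an entire character-and-attribute grid in one call, at the cost of platform-specific code.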

  • Oracle XMLDB's XMLCAST and XMLQUERY incompatible with iBatis?

    - by tthong
    I've been trying to select a list of values from XMLs stored in an XMLType column, but I keep getting the errors listed at the tail end of this post. The select id is getXMLFragment, and the relevant subset of the sqlmap.xml is as follows:

        <select id="getXMLFragment" resultClass="list">
            SELECT XMLCAST(XMLQUERY('$CUSTOMER/CUSTOMER/DETAILS/CUST_NAME/text()'
                PASSING CUSTOMER AS "CUSTOMER" RETURNING CONTENT) AS VARCHAR2(20)) AS customers
            FROM SHOP.CLIENT_INFO
        </select>

    (CUSTOMER is an XMLType column in CLIENT_INFO.) I call the statement using:

        List<String> custNames = (List<String>) sqlMap.queryForList("getXMLFragment");

    I am using ibatis-2.3.4.726.jar. Is it because iBatis does not recognise XMLDB queries and hence tokenizes the string wrongly? On a side note, I have implemented XMLTypeCallback.java to handle XMLType insertions successfully, and I think it will work should I wish to retrieve the entire XML. However, in this case, I need to extract only individual values due to requirements. A workaround would be greatly appreciated. Thanks in advance. The exceptions generated are listed below:

        --- The error occurred in sqlMap.xml.
        --- The error occurred while preparing the mapped statement for execution.
        --- Check the getXMLFragment.
        --- Check the SQL statement.
        --- Cause: java.util.NoSuchElementException
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:204)
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForList(MappedStatement.java:139)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:567)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:541)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:122)
            at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryForList(SqlMapClientImpl.java:98)
            at Main.main(Main.java:60)
        Caused by: java.util.NoSuchElementException
            at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
            at com.ibatis.sqlmap.engine.mapping.sql.simple.SimpleDynamicSql.processDynamicElements(SimpleDynamicSql.java:90)
            at com.ibatis.sqlmap.engine.mapping.sql.simple.SimpleDynamicSql.getSql(SimpleDynamicSql.java:45)
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:184)
            ... 7 more
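    One hedged observation (an inference from the trace, not confirmed by the post): SimpleDynamicSql.processDynamicElements tokenizes the SQL on '$', which iBATIS uses for $property$ substitution, so the lone '$' in the XQuery expression is likely what trips the StringTokenizer. If your iBATIS version supports doubling the delimiter as an escape, something like the following may avoid the parse; if not, moving the XMLQUERY into a view or stored procedure keeps the '$' out of iBATIS's reach:

        <select id="getXMLFragment" resultClass="list">
            SELECT XMLCAST(XMLQUERY('$$CUSTOMER/CUSTOMER/DETAILS/CUST_NAME/text()'
                PASSING CUSTOMER AS "CUSTOMER" RETURNING CONTENT) AS VARCHAR2(20)) AS customers
            FROM SHOP.CLIENT_INFO
        </select>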

  • Using Oracle hint "FIRST_ROWS" to improve Oracle database performance

    - by bobetko
    I have a statement that runs on an Oracle database server. The statement has about 5 joins, and there is nothing unusual there. It looks pretty much like below:

        SELECT field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
        AND table2.id = table3.id
        AND ...
        table5.userid = 1

    The problem (and what is interesting) is that the statement for userid = 1 takes 1 second to return 590 records, while the statement for userid = 2 takes around 30 seconds to return 70 records. I don't understand why the difference is so big. It seems that a different execution plan is chosen for the statement with userid = 1 than for the statement with userid = 2. After I implemented the Oracle hint FIRST_ROWS, performance became significantly better. Both statements (for both ids 1 and 2) return in under 1 second.

        SELECT /*+ FIRST_ROWS */ field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
        AND table2.id = table3.id
        AND ...
        table5.userid = 1

    Questions:
    1) What are possible reasons for bad performance when userid = 2 (when the hint is not used)?
    2) Why would the execution plan be different for one statement vs. the other (when the hint is not used)?
    3) Is there anything that I should be careful about when deciding to add this hint to my queries?
    Thanks
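    A hedged way to investigate question 2 is to capture the plan Oracle picks for each literal (standard Oracle tooling; the table names are the placeholders from the question):

        EXPLAIN PLAN FOR
        SELECT field1 FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id AND table5.userid = 2;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Differences between the two plans (e.g., a full scan for one literal and an index access for the other) usually trace back to column statistics or histograms on table5.userid, which is also why FIRST_ROWS, by biasing the optimizer toward index access, can mask the difference.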

  • Boost Spirit and Lex parser problem

    - by bpw1621
    I've been struggling to try and (incrementally) modify example code from the documentation, but even with very little changed I am not getting the behavior I expect. Specifically, the "if" statement fails when (my intent is that) it should be passing (there was an "else", but that part of the parser was removed during debugging). The assignment statement works fine. I had a "while" statement as well, which had the same problem as the "if" statement, so I am sure that if I can get help figuring out why one is not working, it should be easy to get the other going. It must be kind of subtle, because this is almost verbatim what is in one of the examples.

        #include <iostream>
        #include <fstream>
        #include <iomanip>   // for std::setfill / std::setw (missing in the original)
        #include <string>

        #define BOOST_SPIRIT_DEBUG

        #include <boost/config/warning_disable.hpp>
        #include <boost/spirit/include/qi.hpp>
        #include <boost/spirit/include/lex_lexertl.hpp>
        #include <boost/spirit/include/phoenix_operator.hpp>
        #include <boost/spirit/include/phoenix_statement.hpp>
        #include <boost/spirit/include/phoenix_container.hpp>

        namespace qi = boost::spirit::qi;
        namespace lex = boost::spirit::lex;

        inline std::string read_from_file( const char* infile )
        {
            std::ifstream instream( infile );
            if( !instream.is_open() ) {
                std::cerr << "Could not open file: \"" << infile << "\"" << std::endl;
                exit( -1 );
            }
            instream.unsetf( std::ios::skipws );
            return std::string( std::istreambuf_iterator< char >( instream.rdbuf() ),
                                std::istreambuf_iterator< char >() );
        }

        template< typename Lexer >
        struct LangLexer : lex::lexer< Lexer >
        {
            LangLexer()
            {
                identifier = "[a-zA-Z][a-zA-Z0-9_]*";
                number = "[-+]?(\\d*\\.)?\\d+([eE][-+]?\\d+)?";
                if_ = "if";
                else_ = "else";

                this->self = lex::token_def<>( '(' ) | ')' | '{' | '}' | '=' | ';';
                this->self += identifier | number | if_ | else_;
                this->self( "WS" ) = lex::token_def<>( "[ \\t\\n]+" );
            }

            lex::token_def<> if_, else_;
            lex::token_def< std::string > identifier;
            lex::token_def< double > number;
        };

        template< typename Iterator, typename Lexer >
        struct LangGrammar : qi::grammar< Iterator, qi::in_state_skipper< Lexer > >
        {
            template< typename TokenDef >
            LangGrammar( const TokenDef& tok ) : LangGrammar::base_type( program )
            {
                using boost::phoenix::val;
                using boost::phoenix::ref;
                using boost::phoenix::size;

                program = +block;
                block = '{' >> *statement >> '}';
                statement = assignment | if_stmt;
                assignment = ( tok.identifier >> '=' >> expression >> ';' );
                if_stmt = ( tok.if_ >> '(' >> expression >> ')' >> block );
                expression = ( tok.identifier[ qi::_val = qi::_1 ] | tok.number[ qi::_val = qi::_1 ] );

                BOOST_SPIRIT_DEBUG_NODE( program );
                BOOST_SPIRIT_DEBUG_NODE( block );
                BOOST_SPIRIT_DEBUG_NODE( statement );
                BOOST_SPIRIT_DEBUG_NODE( assignment );
                BOOST_SPIRIT_DEBUG_NODE( if_stmt );
                BOOST_SPIRIT_DEBUG_NODE( expression );
            }

            qi::rule< Iterator, qi::in_state_skipper< Lexer > > program, block, statement;
            qi::rule< Iterator, qi::in_state_skipper< Lexer > > assignment, if_stmt;

            typedef boost::variant< double, std::string > expression_type;
            qi::rule< Iterator, expression_type(), qi::in_state_skipper< Lexer > > expression;
        };

        int main( int argc, char** argv )
        {
            typedef std::string::iterator base_iterator_type;
            typedef lex::lexertl::token< base_iterator_type,
                        boost::mpl::vector< double, std::string > > token_type;
            typedef lex::lexertl::lexer< token_type > lexer_type;
            typedef LangLexer< lexer_type > LangLexer;
            typedef LangLexer::iterator_type iterator_type;
            typedef LangGrammar< iterator_type, LangLexer::lexer_def > LangGrammar;

            LangLexer lexer;
            LangGrammar grammar( lexer );

            std::string str( read_from_file( 1 == argc ? "boostLexTest.dat" : argv[1] ) );
            base_iterator_type strBegin = str.begin();
            iterator_type tokenItor = lexer.begin( strBegin, str.end() );
            iterator_type tokenItorEnd = lexer.end();

            std::cout << std::setfill( '*' ) << std::setw(20) << '*' << std::endl
                      << str << std::endl
                      << std::setfill( '*' ) << std::setw(20) << '*' << std::endl;

            bool result = qi::phrase_parse( tokenItor, tokenItorEnd, grammar,
                                            qi::in_state( "WS" )[ lexer.self ] );
            if( result ) {
                std::cout << "Parsing successful" << std::endl;
            }
            else {
                std::cout << "Parsing error" << std::endl;
            }
            return 0;
        }

    Here is the output of running this (the file read into the string is dumped out first in main):

        ********************
        { a = 5; if( a ){ b = 2; } }
        ********************
        <program>
        <try>{</try>
        <block>
        <try>{</try>
        <statement>
        <try></try>
        <assignment>
        <try></try>
        <expression>
        <try></try>
        <success>;</success>
        <attributes>(5)</attributes>
        </expression>
        <success></success>
        <attributes>()</attributes>
        </assignment>
        <success></success>
        <attributes>()</attributes>
        </statement>
        <statement>
        <try></try>
        <assignment>
        <try></try>
        <fail/>
        </assignment>
        <if_stmt>
        <try> if(</try>
        <fail/>
        </if_stmt>
        <fail/>
        </statement>
        <fail/>
        </block>
        <fail/>
        </program>
        Parsing error
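    One hedged observation (an assumption about the cause, not confirmed by the post): lexertl matches token definitions in the order they are added, and identifier is added before if_, so the input "if" tokenizes as an identifier and tok.if_ never matches. Reordering the additions so the keywords come first would be the first thing to try:

        this->self += if_ | else_ | identifier | number;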

  • Post method in Servlet is not being called again after being executed once

    - by SaurabhCsIITKgp
    I am implementing a database-based web application using servlets. When I input a parameter using a form in the JSP page, it redirects to a servlet which subsequently adds the value to the database (the servlet creates a new table if the table doesn't already exist). The creation of the table and the addition of the value work fine if the table doesn't already exist. Once it is created, however, and the parameter is inputted again in the form, the submit button no longer redirects to the servlet, nor is the value added to the database. Kindly advise me as to where I am going wrong. Following are the snippets of my code. From the JSP page (/showmanager is the url-pattern of the servlet):

        <form action="showmanager" method="post">
            <h3>Enter name of the show: </h3>
            <input type="text" name="showname" value="">
            <input type="hidden" name="task" value="addshow" />
            <input type="button" value="Add Show">
        </form>

    From the servlet (POST method):

        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            if (request.getParameter("task").equals("addshow")) {
                this.addShow(request.getParameter("showname"));
                response.sendRedirect("showmanager.jsp");
            }
        }

    Method to add in database:

        protected boolean addShow(String showname) {
            try {
                statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                if (statement.executeUpdate() > 0) {
                    return true;
                }
            } catch (Exception e) {
                try {
                    statement = con.prepareStatement("create table showdb10 (id int NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, date varchar(20), time varchar(20), b_total int, o_total int, b_avbl int, o_avbl int, b_price double(10,2), o_price double(10,2), seat_no varchar(20), transaction_id varchar(255), total_sales double(10,2), paymnt_artists double(10,2), paymnt_othr double(10,2), flag varchar(20), PRIMARY KEY(id))");
                    statement.executeUpdate();
                    statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                    if (statement.executeUpdate() > 0) {
                        return true;
                    }
                } catch (Exception e2) {}
            }
            return false;
        }
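    Two hedged observations, neither confirmed by the post: first, <input type="button"> does not submit a form, so a submit control is what should post to the servlet. Second, since the insert concatenates user input into the SQL, this is exactly where a prepared statement's parameter binding helps (and prevents SQL injection):

        <input type="submit" value="Add Show">

        PreparedStatement ps = con.prepareStatement("INSERT INTO showdb10(name) VALUES (?)");
        ps.setString(1, showname);   // bound parameter: quotes in showname can no longer break the SQL
        boolean added = ps.executeUpdate() > 0;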

  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that: queries a local database to retrieve an amount; executes a curl statement to update an external database with the above amount + x; queries the local database again to insert a new row reflecting that the curl statement has been executed. One of the problems I am having is that the curl statement takes 2-4 seconds to execute, so if I have two different users from the same company running the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database. This is because the curl statement has not yet returned for the first user, so the second user is working off incorrect figures. I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a value in the database that indicates that the curl statement is being executed, and preventing any other curl statements being run until it has completed. Once the first curl statement has executed, the database flag is updated and the next one can run. If this field is 'locked', then I could loop through the code and sleep for 5 seconds, and then check again whether the flag has been reset. If it is still locked after 3 loops, then reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing. Are there any other (more elegant) ways of approaching this?
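    One hedged alternative to a hand-rolled database flag (assuming MySQL and a PDO-style $db handle; the lock name is arbitrary): GET_LOCK gives you a named mutex that is released automatically if the holding connection dies, which avoids the stale-flag problem the 3-loop reset works around:

        <?php
        // acquire a named lock, waiting up to 10 seconds
        $got = $db->query("SELECT GET_LOCK('company_42_curl', 10)")->fetchColumn();
        if ($got == 1) {
            // read the amount, run the curl update, insert the audit row
            $db->query("SELECT RELEASE_LOCK('company_42_curl')");
        } else {
            // another run is still in progress; report or retry
        }

    flock() on a lock file is a similar option if all users always hit the same web server.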

  • How to connect SQLite with Java?

    - by Rajapandian
    Hi, I am using one simple piece of code to access an SQLite database from a Java application. My code is:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ConnectSQLite {
            public static void main(String[] args) {
                Connection connection = null;
                ResultSet resultSet = null;
                Statement statement = null;
                try {
                    Class.forName("org.sqlite.JDBC");
                    connection = DriverManager.getConnection("jdbc:sqlite:D:\\testdb.db");
                    statement = connection.createStatement();
                    resultSet = statement.executeQuery("SELECT EMPNAME FROM EMPLOYEEDETAILS");
                    while (resultSet.next()) {
                        System.out.println("EMPLOYEE NAME:" + resultSet.getString("EMPNAME"));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try {
                        resultSet.close();
                        statement.close();
                        connection.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }

    But this code gives an exception:

        java.lang.ClassNotFoundException: org.sqlite.JDBC

    Does anybody know how to solve it? Please help me.
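    A hedged note: ClassNotFoundException for org.sqlite.JDBC means the SQLite JDBC driver jar is not on the classpath; the code itself is fine. With the commonly used sqlite-jdbc driver (the jar name below is a placeholder for whatever version you download):

        javac ConnectSQLite.java
        java -cp .;sqlite-jdbc-<version>.jar ConnectSQLite

    (Use ':' instead of ';' as the classpath separator on Linux/macOS.)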

  • Memory Leaks in pickerView using SQLite

    - by Danamo
    I'm adding a self.notes array to a pickerView. This is how I'm setting the array:

        NSMutableArray *notesArray = [[NSMutableArray alloc] init];
        [notesArray addObject:@"-"];
        [notesArray addObjectsFromArray:[dbManager getTableValues:@"Notes"]];
        self.notes = notesArray;
        [notesArray release];

    The info for the pickerView is taken from the database in this method:

        -(NSMutableArray *)getTableValues:(NSString *)table {
            NSMutableArray *valuesArray = [[NSMutableArray alloc] init];
            if (sqlite3_open([self.databasePath UTF8String], &database) != SQLITE_OK) {
                sqlite3_close(database);
                NSAssert(0, @"Failed to open database");
            } else {
                NSString *query = [[NSString alloc] initWithFormat:@"SELECT value FROM %@", table];
                sqlite3_stmt *statement;
                if (sqlite3_prepare_v2(database, [query UTF8String], -1, &statement, nil) == SQLITE_OK) {
                    while (sqlite3_step(statement) == SQLITE_ROW) {
                        NSString *value = [NSString stringWithUTF8String:(char *)sqlite3_column_text(statement, 0)];
                        [valuesArray addObject:value];
                        [value release];
                    }
                    sqlite3_reset(statement);
                }
                [query release];
                sqlite3_finalize(statement);
                sqlite3_close(database);
            }
            return valuesArray;
        }

    But I keep getting memory leaks in Instruments for these lines:

        NSMutableArray *valuesArray = [[NSMutableArray alloc] init];

    and

        [valuesArray addObject:value];

    What am I doing wrong here? Thanks for your help!
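    A hedged reading of those two leaks under manual reference counting (an inference, not confirmed by the post): the method returns an array it owns (alloc/init) that the caller never releases, and separately, [value release] over-releases an autoreleased string. The usual fixes are:

        // stringWithUTF8String: returns an autoreleased object - do not release it
        NSString *value = [NSString stringWithUTF8String:(char *)sqlite3_column_text(statement, 0)];
        [valuesArray addObject:value];

        // balance the alloc/init before handing the array to the caller
        return [valuesArray autorelease];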

  • Difference between EJB Persist & Merge operations

    - by shantala.sankeshwar
    This article gives the difference between the EJB persist and merge operations, with scenarios.

    Use Case Description
    Users working on EJB persist and merge operations often have this question in mind: "When merge can create a new entity as well as modify an existing entity, why do we have 2 separate operations - persist and merge?" The reason is very simple. If we use the merge operation to create a new entity and the entity already exists, it does not throw any exception, but persist throws an exception if the entity already exists. Merge should be used to modify an existing entity. The SQL statement that gets executed on a persist operation is an INSERT statement. But in the case of merge, first a SELECT statement gets executed and then an UPDATE SQL statement gets executed.

    Scenario 1: Persist operation to create a new Emp record
    Let us suppose that we have a Java EE web application created with entities from the Emp table and have created a session bean with a data control. Drop the Emp object (expand SessionEJBLocal->Constructors under Data Controls) as an ADF parameter form in the jspx page. Drop persistEmp(Emp) as an ADF command button and provide #{bindings.EmpIterator.currentRow.dataProvider} as the value for the emp parameter. Then run this page, provide values for Emp, and click on the 'persistEmp' button. A new Emp record gets created. So when we execute the persist operation, only an INSERT SQL statement gets executed:

        INSERT INTO EMP (EMPNO, COMM, HIREDATE, ENAME, JOB, DEPTNO, SAL, MGR) VALUES (?, ?, ?, ?, ?, ?, ?, ?)
            bind => [2, null, null, e2, null, 10, null, null]

    Scenario 2: Merge operation to modify an existing Emp record
    Let us suppose that we have a Java EE web application created with entities from the Emp table and have created a session bean with a data control. Drop the empFindAll() object as an ADF form on the jspx page. Drop the mergeEmp(Emp) operation as a command button and provide #{bindings.EmpIterator.currentRow.dataProvider} as the value for the emp parameter. Then run this page, modify values for the Emp record, and click on the 'mergeEmp' button. The respective Emp record gets modified. So when we execute the merge operation, SELECT and UPDATE SQL statements get executed:

        SELECT EMPNO, COMM, HIREDATE, ENAME, JOB, DEPTNO, SAL, MGR FROM EMP WHERE (EMPNO = ?) bind => [7566]
        UPDATE EMP SET ENAME = ? WHERE (EMPNO = ?) bind => [KINGS, 7839]
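    A minimal JPA sketch of the same contrast (entity constructor and field values assumed, not from the article; em is an injected EntityManager):

        Emp e = new Emp(2, "e2", 10);
        em.persist(e);                        // issues INSERT only; fails if the entity already exists

        Emp detached = new Emp(7839, "KINGS", 10);
        Emp managed = em.merge(detached);     // issues SELECT, then UPDATE; returns the managed copy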

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA, I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED identity is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
        [ID] [int] IDENTITY(11,1) NOT NULL,
        [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    When the seed value is 11, the next value which is inserted has the identity column value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of DELETE statement

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause, it will delete all the rows. However, when a new record is inserted, the identity value is increased from 11 to 12. It does not reset but keeps on increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of TRUNCATE statement

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed, it will remove all the rows. However, when a new record is inserted, the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of RESEED statement

    If you notice, I am using a reseed value of 1. The original seed value when I created the table was 11. However, I am reseeding it with value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more value and check, it will generate a new value of 2. The logic for this new value is reseed value + interval value - in this case it will be 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean-up act.

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: If I reseed the value with some random number followed by the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then follow up with TRUNCATE TABLE, what will the seed value be?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start looking at the results in aggregate. The first thing I want to look at is logical reads. This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads. I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all the details like this: this single statement consumed 61.4% of the total logical reads on the system while we were tracing it. I also wanted to see the distribution of reads across statements. On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow or the whole system is slow. If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them. You can make that individual piece faster but you may not affect the whole system. When you're trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate. Fixing those will make them faster and will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data:

    The highest ranked query averages just under 50% of the total reads on the system.
    The top 3 ranked queries average 73% of the total reads on the system.
    The top 10 ranked queries average 91% of the total reads on the system.

    Remember these are averages across all the traces that have been uploaded. And I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank, you'd see that the highest ranked queries also have the highest execution counts. The graph would look very similar to the one above but flatter. These queries don't look that bad individually but run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.

  • Tricky situation with EF and Include while projecting (Select / SelectMany)

    - by Vincent Grondin
    Originally posted on: http://geekswithblogs.net/vincentgrondin/archive/2014/06/07/tricky-situation-with-ef-and-include-while-projecting-select.aspx

    Hello, the other day I stumbled on a problem I had a while back with EF and Include method calls and decided this was it: I was going to blog about it! This sort of pins it inside my head and maybe helps others too. So I was using DbContext and wanted to query a DbSet, include some of its associations and, in the end, do a projection to get a list of Ids. At first it seems easy: query your DbSet, call Include right afterward, then code the rest of your statement with the appropriate where clause, and then do the projection. Well, it wasn't that easy, as my query required that I code my where clause on some entities a few degrees further down the association chain, and most of these links were "Many". I had to do my projection right away with the SelectMany method. So I did my stuff and tested the query... no associations were loaded. My Include statement was simply ignored! Then I remembered this behavior and how to get it to work: you need to move the Include AFTER your first projection (Select or SelectMany). So my sequence became: query the DbSet, do the projection with SelectMany, include the associations, code the where clause and do the final projection... but it wouldn't compile. It kept saying that it could not find an "Include" method on an IQueryable, which is perfectly true! I knew this should work, so I went to the definition of the DbSet and saw it inherited DbQuery, and sure enough the Include method was there. So I had to cast my statement, from the start until the end of the first projection, to a DbQuery, then do the Includes and then the rest of my query. Bottom line is, whenever your Include statement seems to be ignored, maybe you will need to move it further down in your query and cast your statement to whatever class gives you access to Include. Happy coding all!
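    A hedged sketch of the shape described above (entity and property names are made up; note that in EF 5/6 the System.Data.Entity namespace also offers an Include extension method directly on IQueryable<T>, which avoids the DbQuery cast):

        using System.Data.Entity;   // Include() extension for IQueryable<T>

        var ids = context.Customers
            .SelectMany(c => c.Orders)      // projection first, as described
            .Include(o => o.Lines)          // include applied after the projection
            .Where(o => o.Lines.Any(l => l.Quantity > 0))
            .Select(o => o.Id)
            .ToList();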

  • SQLite file locking and DropBox

    - by Alex Jenter
    I'm developing an app in Visual C++ that uses an SQLite3 DB for storing data. Usually it sits in the tray most of the time. I would also like to enable putting my app in a DropBox folder to share it across several PCs. It worked really well up until DropBox recently updated itself. Now it says that it "can't sync the file in use". The SQLite file is open in my app, but the lock is shared. There are some prepared statements, but all are reset immediately after using step. Is there any way to enable synchronizing of an open SQLite database file? Thanks! Here is the simple wrapper that I use just for testing (no error handling), in case this helps:

        class Statement
        {
        private:
            Statement(sqlite3* db, const std::wstring& sql) : db(db)
            {
                sqlite3_prepare16_v2(db, sql.c_str(), sql.length() * sizeof(wchar_t), &stmt, NULL);
            }

        public:
            ~Statement() { sqlite3_finalize(stmt); }

        public:
            void reset() { sqlite3_reset(stmt); }
            int step() { return sqlite3_step(stmt); }
            int getInt(int i) const { return sqlite3_column_int(stmt, i); }
            tstring getText(int i) const
            {
                const wchar_t* v = (const wchar_t*)sqlite3_column_text16(stmt, i);
                int sz = sqlite3_column_bytes16(stmt, i) / sizeof(wchar_t);
                return std::wstring(v, v + sz);
            }

        private:
            friend class Database;
            sqlite3* db;
            sqlite3_stmt* stmt;
        };

        class Database
        {
        public:
            Database(const std::wstring& filename = L"") : db(NULL)
            {
                sqlite3_open16(filename.c_str(), &db);
            }
            ~Database() { sqlite3_close(db); }

            void exec(const std::wstring& sql)
            {
                auto_ptr<Statement> st(prepare(sql));
                st->step();
            }

            auto_ptr<Statement> prepare(const tstring& sql) const
            {
                return auto_ptr<Statement>(new Statement(db, sql));
            }

        private:
            sqlite3* db;
        };
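    One hedged workaround (an assumption, not from the post): since the wrapper closes the database in its destructor, holding a Database instance only for the duration of each burst of work releases the file lock in between, giving DropBox a window to sync:

        {
            Database db(L"data.db");
            db.exec(L"INSERT INTO log(msg) VALUES('tick')");
        }   // ~Database closes the file here; DropBox can sync until the next burst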

  • Bulk inserts into SQLite db on the iPhone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictionary with arbitrarily long HTML strings, and by god, it's slow. On the iPhone, the run loop blocks for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the SQLite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that, if fixed, would drastically reduce the time it takes to complete the whole operation?

        NSString* statement;
        statement = @"BEGIN EXCLUSIVE TRANSACTION";
        sqlite3_stmt *beginStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }
        if (sqlite3_step(beginStatement) != SQLITE_DONE) {
            sqlite3_finalize(beginStatement);
            printf("db error: %s\n", sqlite3_errmsg(database));
            return;
        }

        NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970];

        statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)";
        sqlite3_stmt *compiledStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) {
            for (int i = 0; i < [items count]; i++) {
                NSMutableDictionary* item = [items objectAtIndex:i];
                NSString* tag = [item objectForKey:@"id"];
                NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash];
                NSInteger timestamp = [[item objectForKey:@"updated"] intValue];
                NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item];

                sqlite3_bind_int(compiledStatement, 1, hash);
                sqlite3_bind_text(compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT);
                sqlite3_bind_int(compiledStatement, 4, timestamp);
                sqlite3_bind_blob(compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT);

                while (YES) {
                    NSInteger result = sqlite3_step(compiledStatement);
                    if (result == SQLITE_DONE) {
                        break;
                    }
                    else if (result != SQLITE_BUSY) {
                        printf("db error: %s\n", sqlite3_errmsg(database));
                        break;
                    }
                }
                sqlite3_reset(compiledStatement);
            }
        }

        timestampB = [[NSDate date] timeIntervalSince1970] - timestampB;
        NSLog(@"Insert Time Taken: %f", timestampB);

        // COMMIT
        statement = @"COMMIT TRANSACTION";
        sqlite3_stmt *commitStatement;
        if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }
        if (sqlite3_step(commitStatement) != SQLITE_DONE) {
            printf("db error: %s\n", sqlite3_errmsg(database));
        }

        sqlite3_finalize(beginStatement);
        sqlite3_finalize(compiledStatement);
        sqlite3_finalize(commitStatement);
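    A hedged note (the selector below is a placeholder for a wrapper around the code above): since the work is already inside one transaction with one prepared statement, the remaining stall is likely NSKeyedArchiver plus disk I/O, and the classic fix for run-loop blocking is to run the whole batch on a background thread:

        [self performSelectorInBackground:@selector(insertItems:) withObject:items];

    SQLite's caution is mainly against sharing a single connection across threads at the same time, not against using the library from a second thread, so giving the background batch its own connection sidesteps the documented concern.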

  • How to create a shared lock blocking an intent exclusive lock

    - by FremenFreedom
    As I understand it, a SELECT statement will place a shared lock on the rows that it will return. While that SELECT is running, if an UPDATE statement comes along and needs to grab an intent exclusive lock, then that UPDATE statement will need to wait until the SELECT statement releases its shared locks. I am trying to test this SELECT shared-lock behavior by doing a BEGIN TRAN, then running a SELECT, not COMMITting, and then running an UPDATE in another session on the exact same row. The UPDATE worked fine - no lock, no wait. So this must not be a valid way to simulate a shared lock blocking an intent exclusive lock? Can you give me a scenario where I can create a lock with a SELECT that would force an UPDATE to wait? I'm working with SQL Server 2000 and 2005 across a linked server: the table is on the 2005 instance, the SELECT is happening on 2000, and the UPDATE is executed from 2005. All in SSMS 2005.
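    A hedged explanation plus repro sketch: under the default READ COMMITTED isolation level, shared locks are released as soon as each row is read, not held until COMMIT, which is why the open transaction doesn't block the UPDATE. Holding the shared locks for the duration of the transaction (HOLDLOCK, or REPEATABLE READ and above) produces the blocking you're looking for (table name hypothetical):

        BEGIN TRAN;
        SELECT * FROM dbo.SomeTable WITH (HOLDLOCK) WHERE id = 1;
        -- leave the transaction open; an UPDATE of that row from another
        -- session now waits until this transaction commits or rolls back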

  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states that "Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE." Does this statement mean that even if we have the ROW format for binary logs, all DDL will be logged in the binary log as statement-based events? How does this affect replication? Kindly help me to understand this.
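    A hedged way to see this for yourself (standard MySQL statements; the log file name will differ on your server): with binlog_format=ROW, run a CREATE TABLE and an INSERT, then inspect the events. The DDL shows up as a Query (statement) event while the INSERT shows up as row events:

        SHOW BINARY LOGS;
        SHOW BINLOG EVENTS IN 'mysql-bin.000001';

    Replication is unaffected in practice: replicas simply execute the DDL as SQL, since there is no row image for a schema change to ship.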

  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally, these 16-17 jobs should fire at the same time; however, the first statement of each job's execute method, which simply logs the time of execution, is being called very late. E.g., let us assume we have 1000 jobs scheduled per minute from 05:00 to 05:04. So, ideally, the job which is scheduled at 05:03:50 should have logged the first statement of the execute method at 05:03:50; however, it is doing so at about 05:06:38. I have tracked down the time taken by the scheduled job, which comes to around 15-20 milliseconds. This scheduled job is fast enough because we just send a message on an ActiveMQ queue. We have specified the number of Quartz threads to be 100 and have even tried increasing it to 200 and more, but with no gain. One more thing we noticed is that logs from the scheduler come sequentially after the first 1 minute, i.e.:

        [Quartz_Worker_28] <Some log statement>
        [Quartz_Worker_29] <Some log statement>
        [Quartz_Worker_30] <Some log statement>

    So it suggests that after some time Quartz is running threads almost sequentially. Maybe this is happening due to the time taken in notifying the job completion to the persistence store (which is a separate Postgres database in this case) and/or context switching. What can be the reason behind this strange behavior?

    EDIT: More detailed log

        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000

    I am doubting this section of the above log:

        scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000

    because this job was scheduled for 10:04:53, but it fired at 10:08:33 and still Quartz didn't consider it a misfire. Shouldn't it be a misfire?
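    A hedged configuration sketch (standard quartz.properties keys; the values are illustrative, and batch acquisition requires Quartz 2.x): with a JDBC job store, each firing costs several database round trips, and by default triggers are acquired one at a time, which can serialize a high-rate scheduler exactly as described:

        org.quartz.threadPool.threadCount = 100
        # let one acquisition grab many triggers instead of one
        org.quartz.scheduler.batchTriggerAcquisitionMaxCount = 20
        # how late a trigger may fire before it counts as a misfire (ms)
        org.quartz.jobStore.misfireThreshold = 60000

    The misfireThreshold default is 60 seconds, under which a firing several minutes late should indeed have been treated as a misfire, so that setting is worth checking in your configuration.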
