Search Results

Search found 31499 results on 1260 pages for 'database theory'.

  • "No operations allowed after statement closed" issue

    - by Washu
    I have the following methods in my singleton to manage the JDBC connection:

        public void openDB() throws ClassNotFoundException, IllegalAccessException, InstantiationException, SQLException {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://localhost/mbpe_peru"; // mydb
            conn = DriverManager.getConnection(url, "root", "admin");
            st = conn.createStatement();
        }

        public void sendQuery(String query) throws SQLException {
            st.executeUpdate(query);
        }

        public void closeDB() throws SQLException {
            st.close();
            conn.close();
        }

    And I'm having a problem in a method where I have to call this twice:

        private void jButton1ActionPerformed(ActionEvent evt) {
            Main.getInstance().openDB();
            Main.getInstance().sendQuery("call insertEntry('"+EntryID()+"','"+SupplierID()+"');");
            Main.getInstance().closeDB();

            Main.getInstance().openDB();
            for (int i = 0; i < dataBox.length; i++) {
                Main.getInstance().sendQuery("call insertCount('"+EntryID()+"','"+SupplierID()+"','"+BoxID()+"');");
                Main.getInstance().closeDB();
            }
        }

    I have already tried to keep the connection open, send the two queries and close it afterwards, and it didn't work. The only way it worked was to not use the methods, declare the connection commands inline, and use different variables for the connection and the statement. I thought that if I closed the Connection and the Statement I could reuse the variables, since they are set up again in the method, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?

  • Fluent nHibernate - How to map a non-key column on a junction table?

    - by The Matt
    Taking an example that is provided on the Fluent nHibernate website, I need to extend it slightly: I need to add a 'Quantity' column to the StoreProduct table. How would I map this using nHibernate? An example mapping is provided for the given scenario above, but I'm not sure how I would get the Quantity column to map to a property on the Product class:

        public class StoreMap : ClassMap<Store>
        {
            public StoreMap()
            {
                Id(x => x.Id);
                Map(x => x.Name);
                HasMany(x => x.Employee)
                    .Inverse()
                    .Cascade.All();
                HasManyToMany(x => x.Products)
                    .Cascade.All()
                    .Table("StoreProduct");
            }
        }

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID  bigint
        Tag       nvarchar(50)
        Weight    float
        Type      tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the objectid values ordered by the sum of the weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow: it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?
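
    One common way to speed up this kind of filtered aggregate is a covering index that leads with the filter columns and carries the aggregated columns, so the whole query can be answered from the index; a hedged sketch (the index name and INCLUDE list are illustrative, not from the original post):

        -- Filter columns first, aggregated columns carried along
        CREATE NONCLUSTERED INDEX IX_tags_type_tag
            ON tags (type, tag)
            INCLUDE (objectid, weight);

        -- The original query can then be served from the index alone
        SELECT objectid,
               ROW_NUMBER() OVER (ORDER BY SUM(weight) DESC) AS rowid
        FROM tags
        WHERE tag IN ('big', 'large') AND type = 0
        GROUP BY objectid;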

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code, I am using Guava's Multimap (com.google.common.collect.Multimap), created like this:

        Multimap<Integer, Integer> index = HashMultimap.create();

    Here, the Multimap key is some portion of a URL and the value is another portion of the URL (converted into an integer). Now, I assign my JVM 2560 MB (2.5 GB) of heap space (using Xmx and Xms). However, it can only store about 9 or 10 million such (key, value) pairs of integers. But theoretically (according to the memory occupied by an int) it should store more. Can anybody help me: 1) Why is this happening, i.e. why is the Multimap taking so much space? I checked my code without inserting pairs into the Multimap, and it takes only about half a MB. 2) Is there any other way, or a home-baked solution, to solve this memory issue? Thanks in advance; any idea is perfectly OK for me.

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but I have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so updates with

        begin;
        update ...
        update table set left_value = 4, right_value = 5 where id = 12340;
        update ...
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand rows (with inserts), it finishes in seconds. I am currently testing the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would also take an open-source alternative to SQLite that is portable.)
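
    One frequent cause of slow single-row UPDATEs in SQLite is that the column in the WHERE clause is neither the INTEGER PRIMARY KEY nor indexed, so every UPDATE scans the whole table; a minimal sketch, assuming the table is named tree and id is such a column (names are illustrative, not from the original post):

        -- One-time index so each UPDATE becomes a lookup instead of a scan
        CREATE INDEX IF NOT EXISTS idx_tree_id ON tree(id);

        -- Batch the updates inside a single transaction as before
        BEGIN;
        UPDATE tree SET left_value = 4, right_value = 5 WHERE id = 12340;
        -- ... remaining updates for this batch ...
        COMMIT;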

  • Can't create a MySQL query that generates 4 rows for each row in the table it references.

    - by UkraineTrain
    I need to create a MySQL query that generates 4 rows for each row in the table it references. I need some of the information in those rows to repeat and some to be different. In the table each row stands for one day. I need to break the day up into 6-hour increments, hence the four rows for each entry. I need one column which for each day will have the values '12AM', '6AM', '12PM', and '6PM', and another column with the corresponding numeric values calculated for those entries. Thanks a lot in advance; I will really appreciate any help on this.
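
    One common approach is to cross join the day table against a fixed four-row derived table of time slots, which repeats each day's columns once per slot; a hedged sketch with illustrative table, column, and calculation names (none of them are from the original post):

        SELECT d.*,
               s.slot_label,
               d.day_value * s.factor AS slot_value   -- stand-in for the real per-slot calculation
        FROM days AS d
        CROSS JOIN (
            SELECT '12AM' AS slot_label, 0.25 AS factor
            UNION ALL SELECT '6AM',  0.25
            UNION ALL SELECT '12PM', 0.25
            UNION ALL SELECT '6PM',  0.25
        ) AS s
        ORDER BY d.id, FIELD(s.slot_label, '12AM', '6AM', '12PM', '6PM');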

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have < Previous .. Next > links to walk through it. Are there select statements that can make this simple? I have tried using the ROWID in conjunction with LIMIT, which seems OK as long as I am not ordering the data:

        -- This seems to work if not ordering.
        SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
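
    LIMIT with OFFSET handles this directly in SQLite, and unlike the ROWID range it stays correct when the result is ordered; a minimal sketch for 10k-row pages (the Name column is illustrative):

        SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 0;      -- page 1
        SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 10000;  -- page 2

    For very deep pages, keyset paging (WHERE Name > :last_seen ORDER BY Name LIMIT 10000) avoids the cost of skipping over the offset rows.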

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually, I need to retrieve data from a table in some range; for example, a separate page for each set of search results. In MySQL I would use the LIMIT keyword, but I don't know the DB2 equivalent. Right now I use this query to retrieve a range of data:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER (ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So, how should I query for the highest performance?
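
    The usual DB2 paging pattern is a single ROW_NUMBER() wrapper with a BETWEEN filter, so only one sort is specified and only the requested window is returned; a minimal sketch for rows 11-20 of the same search:

        SELECT DATA_KEY_VALUE, SHOW_PRIORITY
        FROM (
            SELECT DATA_KEY_VALUE,
                   SHOW_PRIORITY,
                   ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20
        ORDER BY RN;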

  • Synchronizing MySQL databases

    - by Wasim
    Hi all, I have a maintenance problem keeping my MySQL databases synchronized. These are the databases I have:

    My development DB: here I make my current development changes.
    Staging DB: I need to apply all the changes I made in development to it before use; currently I maintain migration scripts for structure and data.
    Production DB: the production environment; it has to receive exactly the same changes as staging.

    My problem is synchronizing the structure, and some of the data. This is really hard to maintain. Are there any techniques or tools for doing this with MySQL? What is replication, is it a good fit for my situation, and how would I use it? Thanks in advance.
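
    Replication makes one server continuously replay another server's binary log, which can keep production following staging automatically (it mirrors everything, so it suits identical copies rather than selective structure syncs). A hedged sketch of the classic setup, run on the replica after loading an initial dump of the source; the host, credentials, and log coordinates are placeholders:

        CHANGE MASTER TO
            MASTER_HOST = 'source-host',
            MASTER_USER = 'repl_user',
            MASTER_PASSWORD = '***',
            MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS on the source
            MASTER_LOG_POS  = 4;
        START SLAVE;
        SHOW SLAVE STATUS;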

  • Make db connection persistent throughout Zend Framework

    - by kamikaze_pilot
    I'm using Zend Framework. Currently, every time I need to use the db I go ahead and connect to it:

        function connect() {
            $connParams = array("host"     => $host,
                                "port"     => $port,
                                "username" => $username,
                                "password" => $password,
                                "dbname"   => $dbname);
            $db = new Zend_Db_Adapter_Pdo_Mysql($connParams);
            return $db;
        }

    So I just call the connect() function every time I need to use the db. My question is: suppose I want to reuse $db everywhere in my site, connect only once at the very start of the page load, and then close the connection right before the page is sent to the user. What would be the best practice to accomplish this? Which file in Zend should I store $db in, what method should I use to store it (a global variable?), and in which file should I close the connection?

  • MySQL - "FLUSH TABLES WITH READ LOCK" started automatically

    - by ming yeow
    I would like to understand how this happened. I was running a query that would take a long time but should not lock up any table. However, my databases were practically down; it seems they were being locked up by "FLUSH TABLES WITH READ LOCK":

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
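
    FLUSH TABLES WITH READ LOCK is most often issued by a backup job (for example mysqldump with --lock-all-tables or --master-data, or a snapshot script) rather than typed by a person. While it is running, the client that issued it can be identified from the process list; this is a generic check, not something taken from the original post:

        -- Lists every connected client with its user, host, and current statement
        SHOW FULL PROCESSLIST;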

  • How to transform a local image into a web-accessible image

    - by hguser
    Hi: Generally, if we want to display an image in a web page, we give the URI of the image resource, like http://host:port/image/xxx.jpg. Now, there are some images in my file system, and I save their absolute paths in the db, like this:

        id   name   address   image
        1    xxxx   xxxx      C:/images/xxx.jpg

    Now when the entity is retrieved, its image should be displayed in the page. How do I make this work? What I thought of is copying the image under the web server directory, then building its URL so the page can render it. But I wonder if this is a good idea. Is there any other way?

  • Hosting an Access DB

    - by Mitciv
    Hey, so I'm inexperienced in hosting DBs and I've always had the luxury of someone else setting up the db. I was going to help a friend get a webpage set up; I've got experience with ASP.NET MVC so I'm going with that. They want to set up a search page that queries a db and displays the results. My question is about getting the DB set up and hosted. They currently just have the Access DB on a local computer, and there is basically only one table that would need to be queried for the search. What is the best approach to making this table/db accessible? They would like to keep the main copy of the db on the local machine, so copying the entire db over to the hosted site would be time consuming; could just the one table that is needed be copied to the host? Or should I try to convince them to make changes on the hosted db and just make copies of that for their local machines? Any suggestions are welcome; again, I'm a total noob when it comes to hosting databases. Thanks.

  • DB Design - Linking to a parent without circular reference issues

    - by zSysop
    Hi all, I'm having trouble coming up with a solution for the following issue. Let's say I have a db that looks something like the following:

        Issue Table:            Id | Details | CreateDate | ClosedDate
        Issue Notes Table:      Id | ObjectId | Notes | NoteDate
        Issue Assignment Table: Id | ObjectId | AssignedToId | AssignedDate

    I'd like to allow the linking of an issue to another issue. I thought about adding a column to the Issue table called ParentIssueId, which would let me link issues, but I foresee circular references occurring within the Issue table if I go through with this implementation. Is there a better way to go about doing this, and if so, how? Thanks.
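
    A nullable self-referencing foreign key is the usual way to model this; the constraint itself only rules out dangling links, so applications typically also walk the ParentIssueId chain (or forbid linking to an issue's own ancestors) before saving, to avoid longer cycles. A minimal sketch with illustrative constraint names, assuming a SQL dialect that enforces CHECK constraints:

        ALTER TABLE Issue ADD ParentIssueId INT NULL;

        ALTER TABLE Issue
            ADD CONSTRAINT FK_Issue_Parent
            FOREIGN KEY (ParentIssueId) REFERENCES Issue (Id);

        -- Guards only against the simplest cycle: an issue linked to itself
        ALTER TABLE Issue
            ADD CONSTRAINT CK_Issue_NotOwnParent CHECK (ParentIssueId <> Id);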

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (runs over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins but I can't seem to get it to work. Any help would be greatly appreciated. The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).

        CREATE TABLE `prices` (
            `prices_id` int(11) NOT NULL auto_increment,
            `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
            `prices_watches_ID` char(10) NOT NULL,
            `prices_date` datetime NOT NULL,
            `prices_am` varchar(10) default NULL,
            `prices_new` varchar(10) default NULL,
            `prices_used` varchar(10) default NULL,
            PRIMARY KEY (`prices_id`),
            KEY `prices_am` (`prices_am`),
            KEY `prices_locale` (`prices_locale`),
            KEY `prices_watches_ID` (`prices_watches_ID`),
            KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764;

        CREATE TABLE `watches_US` (
            `watches_ID` char(10) NOT NULL,
            `watches_date_added` datetime NOT NULL,
            `watches_last_update` datetime default NULL,
            `watches_title` varchar(255) default NULL,
            `watches_small_image_height` int(11) default NULL,
            `watches_small_image_width` int(11) default NULL,
            `watches_description` text,
            PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
            ( SELECT prices_am FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1 ) as new_price,
            ( SELECT prices_date FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
            ( SELECT prices_am FROM prices
              WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US' )
                AND ( prices_date >= DATE_SUB(new_price_date, INTERVAL 30 HOUR) )
              ORDER BY prices_date ASC LIMIT 1 ) as old_price,
            ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
            ( SELECT (new_price - old_price) ) as absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' )
          AND ( prices_am IS NOT NULL )
          AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL )
          AND ( old_price != 0 )
          AND ( old_price != '' )
          AND ( absolute_change < 0 )
          AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
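
    One common join-based rewrite is to compute each watch's newest price date once in a derived table and join back to prices for the price itself, instead of re-running a correlated subquery for every output column; a hedged, simplified sketch of that shape (it fetches only the new price — the 30-hour-old price can be joined in the same groupwise way — and omits the change filters):

        SELECT w.watches_ID,
               w.watches_title,
               pn.prices_am   AS new_price,
               pn.prices_date AS new_price_date
        FROM watches_US AS w
        JOIN (   -- newest price date per watch, computed once
            SELECT prices_watches_ID, MAX(prices_date) AS max_date
            FROM prices
            WHERE prices_locale = 'US'
            GROUP BY prices_watches_ID
        ) AS latest
            ON latest.prices_watches_ID = w.watches_ID
        JOIN prices AS pn
            ON  pn.prices_watches_ID = latest.prices_watches_ID
            AND pn.prices_locale = 'US'
            AND pn.prices_date   = latest.max_date;

    With both prices joined in this way, percent_change and absolute_change become plain expressions in the select list rather than extra subqueries.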

  • SQL Server: how to set job priority

    - by Buzz
    Is there any way to set one job's priority higher than another's? In my case there are two jobs working on the same set of tables: JOB-A runs every 12 hours and JOB-B runs every 10 minutes. I think that at times when they run simultaneously, JOB-B ends up in a deadlock and fails. I googled the topic and found that the SQL governor can be helpful in such cases. Does anyone know how to resolve this?
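
    SQL Server Agent has no per-job priority setting, but the session a job step runs in can volunteer to lose deadlocks, so the long-running job is chosen as the victim instead of the frequent one; a hedged sketch to place at the top of JOB-A's T-SQL step (JOB-B would then also want retry logic for error 1205):

        -- Make this session the preferred deadlock victim
        SET DEADLOCK_PRIORITY LOW;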

  • Question about the BENCHMARK function in MySQL (incredible results)

    - by xRobot
    I have 2 tables: author, with 3 million rows, and book, with 20 thousand rows. So I have benchmarked this query with a join:

        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name FROM `book` , `author` WHERE book.id = author.book_id ')

    And this is the result: Query took 0.7438 sec. ONLY 0.7438 seconds for 100 million executions of a query with a join? Am I making a mistake, or is this the right result?
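
    BENCHMARK() only evaluates a scalar expression; a quoted query is just a string constant, so the inner SELECT is never executed and the 0.7438 seconds measures evaluating a literal 100 million times. A short illustration of the difference:

        -- The string literal is evaluated as a constant; the join never runs
        SELECT BENCHMARK(100000000, 'SELECT book.title, author.name FROM book, author WHERE book.id = author.book_id');

        -- BENCHMARK is meant for scalar expressions, e.g.
        SELECT BENCHMARK(1000000, MD5('some text'));

        -- To time the real join, run it directly and look at the reported query time
        SELECT book.title, author.name
        FROM book JOIN author ON book.id = author.book_id;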

  • How to undo SQL changes using installer

    - by Sunil Agarwal
    I have an installer that installs procedures, scripts, views, etc. into SQL Server 2005/2008. Now I want to add a condition to the installer: if there is any error while installing, I want to undo all the changes made in SQL Server. I tried storing the procedures, views, etc. that the installer changes and reverting them back if I get an error, but I am not able to make it work the way I want. Can someone who has done the same thing guide me? To be specific, I am using the WiX installer. Also, if someone has tried SMO, that would be of great help.

  • Alter multiple tables' columns length

    - by gdoron
    So, we just found out that 254 tables in our Oracle DBMS have a column named "Foo" with the wrong length: Number(10) instead of Number(3). That Foo column is part of the PK of those tables, and other tables have foreign keys pointing to it. What I did for one table was: backed up the table with a temp table, disabled the foreign keys to the table, disabled the PK containing the Foo column, nulled the Foo column for all the rows, and then restored all of the above. But now we have found out it's not just a couple of tables but 254. Is there an easier way than this to alter the columns' length? P.S. I have DBA permissions.
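
    Because the column name is the same everywhere, the data dictionary can generate the 254 ALTER statements instead of writing them by hand; note that Oracle only lets you decrease a column's precision while the column is empty, so the null-out step is still required. A hedged sketch, assuming the column is literally named FOO and the tables are in your own schema:

        -- Spool and run the generated statements after disabling constraints and nulling FOO
        SELECT 'ALTER TABLE ' || table_name || ' MODIFY (foo NUMBER(3));' AS ddl
        FROM user_tab_columns
        WHERE column_name = 'FOO'
          AND data_precision = 10;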

  • Any tools or techniques for validating constraints programmatically between databases?

    - by Brandon
    If you had two databases with two tables between them that would normally implement a one-to-one (or many-to-many) constraint, but cannot because they are in separate databases, how would you validate this relationship in an application or a test? Is there a simple way to do this? For example, a tool or technique that, given a constraint type, tables and fields, performs the validation. I imagine this isn't the first time this has come up, so I'm hoping people can share their solutions. Thanks.
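
    When both databases are reachable from one SQL Server instance (or through a linked server), a set-difference query is a simple programmatic check for a foreign-key-style relationship that cannot be declared; a hedged sketch with illustrative database, table, and column names:

        -- Child rows whose key has no matching parent in the other database: should return nothing
        SELECT c.parent_id
        FROM DatabaseB.dbo.child_table AS c
        LEFT JOIN DatabaseA.dbo.parent_table AS p
            ON p.id = c.parent_id
        WHERE p.id IS NULL;

    The same query run on a schedule, or inside an integration test, gives the validation a declarative constraint would normally provide.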
