Search Results

Search found 31356 results on 1255 pages for 'database backups'.

Page 324/1255

  • Will MySQL caching cause performance problems?

    - by Camran
    I am about to upload my website onto a VPS. It is a classifieds website, where all data is stored in MySQL and Solr. I wonder whether using MySQL's query cache will slow the server down. That is, if somebody runs a search for the first time and MySQL has to cache that query, will the caching make the server slower than if it cached nothing at all? I know things will improve in terms of performance once a query is cached, but I would like to know whether I should use the cache at all. What do you think? Thanks
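
    For reference, the relevant server settings can be inspected and sized from any client; a minimal sketch (MySQL 5.x, where the query cache still exists; the size value is illustrative):

        -- check whether the query cache is enabled and how large it is
        SHOW VARIABLES LIKE 'query_cache%';

        -- watch how often it actually helps (hits vs. inserts)
        SHOW STATUS LIKE 'Qcache%';

        -- enable it with a modest size; storing a result is cheap, but
        -- invalidation on heavily written tables can cost more than it saves
        SET GLOBAL query_cache_type = 1;
        SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- 64 MB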

  • Using jQuery to store a basic text string in a MySQL database?

    - by Kenny Bones
    Could someone point me in the right direction here? Basically, I've got this jQuery code snippet:

        $('.bggallery_images').click(function () {
            var newBG = "url('" + $(this).attr('src');
            var fullpath = $(this).attr('src');
            var filename = fullpath.replace('img/Bakgrunner/', '');
            $('#wrapper').css('background-image', newBG);
            // Save to SQL
            $.ajax({
                url: "save_to_db.php", // the URL of your function that handles saving to the db
                data: filename,
                dataType: 'Text',
                type: 'POST', // could also use GET if you prefer
                success: function (data) {
                    // just for testing purposes
                    alert('Background changed to: ' + data);
                }
            });
        });

    This is run when I click a certain button, so it's actually within a click handler. If I understand this correctly, the snippet takes the source of the image I just clicked and strips it so I end up with only the filename. If I do an alert(filename), I get the filename only, so this part is working. It then makes an Ajax call to a PHP file called "save_to_db.php" and sends data: filename. This is correct, right? Then it runs a callback which does an alert + data. Does this seem correct so far? My PHP file looks like this:

        <?php
        require("dbconnect2.php");
        $uploadstring = $_POST['filename'];
        $sessionid = $_SESSION['id'];
        echo ($sessionid);
        mysql_query("UPDATE brukere SET brukerBakgrunn = '$uploadstring' WHERE brukerID=" . $_SESSION['id']);
        mysql_close();
        ?>

    When I click the image, the jQuery snippet fires and I get the results of this PHP file as output in the alert box. I think the variables are somehow empty. Notice the echo($sessionid); line, which I added just to test what the session ID is: it returns nothing. What could be the issue here? Edit: I just tried to echo out the $uploadstring variable as well and it also returns nothing. It's like the jQuery snippet doesn't even pass the variable on to the PHP file.
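
    A likely explanation for the empty values (a sketch of the usual fixes, not a confirmed diagnosis): data: filename sends the raw string as the POST body, so PHP has no 'filename' key to read, and $_SESSION is only populated after session_start() has been called.

        // jQuery side: send a keyed value so PHP sees $_POST['filename']
        $.ajax({
            url: "save_to_db.php",
            data: { filename: filename },
            type: 'POST',
            success: function (data) { alert('Background changed to: ' + data); }
        });

        <?php
        // PHP side: sessions must be started before $_SESSION is available
        session_start();
        require("dbconnect2.php");
        $uploadstring = $_POST['filename'];
        ?>

    Interpolating $uploadstring straight into the query also leaves the script open to SQL injection; with the old mysql_* API, mysql_real_escape_string() is the minimum safeguard.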

  • How to query a range of data in DB2 with the highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range; for example, a separate page for each search result. In MySQL I would use the LIMIT keyword, but I don't know the DB2 equivalent. Right now I use this query to retrieve a range of data:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER (ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So, how should I query for the highest performance?
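
    The usual DB2 pagination pattern filters on ROW_NUMBER() in the outer query, which lets the optimizer stop once the requested window is filled; a sketch (here: the second page of 10 rows):

        SELECT RN, DATA_KEY_VALUE, SHOW_PRIORITY
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20   -- rows 11-20, i.e. page 2
        ORDER BY RN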

  • MySQL "NULL" questions

    - by Camran
    I have a table with several columns. Sometimes some of these column fields may be empty (i.e. I won't use them in some cases). My questions: Would it be smart to set them to NULL in phpMyAdmin? What does the "NULL" property actually do? Would I gain anything at all by setting them to NULL? Is it possible to use a NULL field the same way even though it is set to NULL?
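
    For context, NULL is a distinct "no value" marker rather than an empty string, and it changes how comparisons behave; a small sketch:

        CREATE TABLE example (
            id    INT PRIMARY KEY,
            notes VARCHAR(255) NULL DEFAULT NULL  -- column may hold "no value"
        );

        -- NULL never matches =; it needs IS NULL / IS NOT NULL
        SELECT * FROM example WHERE notes IS NULL;
        SELECT * FROM example WHERE notes = NULL;  -- always returns no rows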

  • What's the best way to test a SQL Server connection programmatically?

    - by backslash17
    I need to develop a single routine that will be fired every 5 minutes to check that a list of SQL Servers (10 to 12) are up and running. I could run a simple query against each of the servers, but that means I would have to create a table, view, or stored procedure on every server; even if I used an existing SP, I would need a registered user on each server too. The servers are not in the same physical location, so meeting those requirements would be a complex task. Is there a way to simply "ping" a SQL Server from C#? Thanks in advance!
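
    One low-footprint approach (a sketch, assuming Windows authentication reaches those servers; swap in User ID/Password keys otherwise): opening a connection with a short timeout exercises the full network and login path without touching any user objects.

        using System.Data.SqlClient;

        static bool IsServerUp(string server)
        {
            // Connect Timeout keeps a dead host from blocking for 15+ seconds
            var connStr = "Server=" + server + ";Connect Timeout=5;Integrated Security=true;";
            try
            {
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();   // succeeds only if the instance accepts logins
                    return true;
                }
            }
            catch (SqlException)
            {
                return false;
            }
        }

    If a login-only check feels too weak, a SqlCommand running SELECT 1 still needs no tables, views, or stored procedures on the target.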

  • MySQL "FLUSH TABLES WITH READ LOCK" started automatically

    - by ming yeow
    I would like to understand how this happened. I was running a query that would take a long time but should not lock up any table. However, my databases were practically down: it seems they were locked up by "FLUSH TABLES WITH READ LOCK".

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
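
    A common culprit (an educated guess, not confirmed by the question): backup tooling. mysqldump with --lock-all-tables or --master-data, and most snapshot-based backup scripts, issue exactly this statement to get a consistent view. While it is happening, the session holding the lock can be found from any client; a sketch:

        -- every connected session, with the statement it is running
        SHOW FULL PROCESSLIST;

        -- or filter directly (MySQL 5.1+)
        SELECT id, user, host, time, info
        FROM information_schema.PROCESSLIST
        WHERE info LIKE 'FLUSH TABLES%';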

  • How to handle expired items?

    - by Mark
    My site allows users to post things on the site with an expiry date. Once an item has expired, it is no longer displayed in the listings. Posts can also be closed, canceled, or completed. I think it would be nicest to be able to check for one attribute or status ("is active") rather than having to check for [is not expired, is not completed, is not closed, is not canceled]. Handling the rest of those is easy because I can just have one "status" field which is essentially an enum, but AFAIK it's impossible to set the status to "expired" at the exact moment the expiry time passes. How do people typically handle this?
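
    The usual trick is to treat expiry as a derived condition instead of a stored one, so nothing has to flip the row at the exact moment; a sketch (table and column names hypothetical):

        -- "active" is computable at query time: status is clean AND not yet expired
        SELECT *
        FROM posts
        WHERE status = 'active'
          AND expires_at > NOW();

        -- a view keeps callers down to one "attribute" check
        CREATE VIEW active_posts AS
            SELECT * FROM posts
            WHERE status = 'active' AND expires_at > NOW();

    A periodic job (cron, or MySQL's event scheduler) can still flip long-expired rows to an "expired" status for reporting, but the listings never depend on it running on time.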

  • Fastest way to store/retrieve a dictionary - SQL, text file...?

    - by AP257
    Hi all, this is a really, really dumb question, so I apologise, but I'd be grateful for some advice. I've got a text file of words and word frequencies. It's very large; theoretically we're talking millions of rows. I just want to retrieve values from the file, and do it as quickly and efficiently as possible (for a web app, in Django). My question is: what is the best way to store and retrieve the values? Should I import them into SQL? Or keep the file and use grep? Or put them into a JSON dictionary? Or some other way? Sorry for the dumb question; I would be very grateful for advice!
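
    For point lookups by word, an indexed table is hard to beat: grep costs a full file scan per query, while a primary-key lookup is effectively constant. A sketch of the minimal schema (names hypothetical):

        CREATE TABLE word_freq (
            word VARCHAR(100) PRIMARY KEY,  -- the index makes the lookup fast
            freq INT NOT NULL
        );

        -- one indexed probe instead of a scan of millions of lines
        SELECT freq FROM word_freq WHERE word = 'example';

    In Django this maps to a two-field model and a single .get(word=...) call.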

  • Creating a db-driven primary navigation in Django?

    - by Fedor
    I find it's pretty common for people to hardcode the navigation into their templates, but I'm dealing with a pretty dynamic news site which might be better off if the primary nav were db-driven. So I was thinking of having a Navigation model where each row would be a link:

        link_id    INT primary key
        link_name  varchar(255)
        url        varchar(255)
        order      INT
        active     boolean

    If anyone has done something similar in the past, would you say this sort of schema is good enough? I also wanted there to be an optional dropdown in the admin near the url field so that a user could choose a Category model's slug, since category links would be common, but I'm not quite sure how that would be possible.
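
    A sketch of that schema as a Django model (assuming an existing Category model; a nullable ForeignKey is what gives the admin its optional dropdown):

        from django.db import models

        class NavigationLink(models.Model):
            name = models.CharField(max_length=255)
            url = models.CharField(max_length=255, blank=True)
            # optional: pick a Category instead of typing a URL by hand
            category = models.ForeignKey('Category', null=True, blank=True)
            order = models.IntegerField(default=0)
            active = models.BooleanField(default=True)

            class Meta:
                ordering = ['order']

    A template tag or context processor can then pull NavigationLink.objects.filter(active=True) into every page, and a save() override (or property) can fall back to the category's slug when url is blank.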

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update all of the data types (NUMBER/TEXT) for each data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when generating the Physical Data Model. E.g., change the automatic conversion of Text to CLOB into Text to NVARCHAR(20). Thanks!

  • Why does this simple MySQL procedure take way too long to complete?

    - by Howard Guo
    This is a very simple MySQL stored procedure. The "commission" cursor covers only 3000 records, but the procedure call takes more than 30 seconds to run. Why is that?

        DELIMITER //
        DROP PROCEDURE IF EXISTS apply_credit//
        CREATE PROCEDURE apply_credit()
        BEGIN
            DECLARE done tinyint DEFAULT 0;
            DECLARE _pk_id INT;
            DECLARE _eid, _source VARCHAR(255);
            DECLARE _lh_revenue, _acc_revenue, _project_carrier_expense,
                    _carrier_lh, _carrier_acc, _gross_margin, _fsc_revenue,
                    _revenue, _load_count DECIMAL;
            DECLARE commission CURSOR FOR
                SELECT pk_id, eid, source, lh_revenue, acc_revenue,
                       project_carrier_expense, carrier_lh, carrier_acc,
                       gross_margin, fsc_revenue, revenue, load_count
                FROM ct_sales_commission;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            DELETE FROM debug;
            OPEN commission;
            REPEAT
                FETCH commission INTO _pk_id, _eid, _source, _lh_revenue,
                    _acc_revenue, _project_carrier_expense, _carrier_lh,
                    _carrier_acc, _gross_margin, _fsc_revenue, _revenue,
                    _load_count;
                INSERT INTO debug VALUES(concat('row ', _pk_id));
            UNTIL done = 1 END REPEAT;
            CLOSE commission;
        END//
        DELIMITER ;

        CALL apply_credit();
        SELECT * FROM debug;
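
    One likely cause (a guess consistent with the symptoms): with autocommit on, every INSERT inside the loop is its own transaction, so 3000 fetch-and-insert round trips pay for 3000 commits. The loop also does nothing a single set-based statement cannot; a sketch of the equivalent, assuming debug has a single text column as the loop implies:

        -- same effect as the cursor loop, in one statement and one commit
        DELETE FROM debug;
        INSERT INTO debug
        SELECT CONCAT('row ', pk_id)
        FROM ct_sales_commission;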

  • How do indices cope with MVCC?

    - by geeko
    Greetings Overflowers, to my understanding (and I hope I'm wrong) changes to indices cannot be MVCCed. I'm also wondering whether this is true of big records, since copies can be costly. Since records are usually accessed via indices, how can MVCC be effective? Do indices, for example, keep track of the different versions of MVCCed records? Any recent good reading on this subject? Really appreciated! Regards

  • "No operations allowed after statement closed" issue

    - by Washu
    I have the following methods in my singleton to handle the JDBC connection:

        public void openDB() throws ClassNotFoundException, IllegalAccessException,
                InstantiationException, SQLException {
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url = "jdbc:mysql://localhost/mbpe_peru"; // my db
            conn = DriverManager.getConnection(url, "root", "admin");
            st = conn.createStatement();
        }

        public void sendQuery(String query) throws SQLException {
            st.executeUpdate(query);
        }

        public void closeDB() throws SQLException {
            st.close();
            conn.close();
        }

    And I'm having a problem in a method where I have to call this twice:

        private void jButton1ActionPerformed(ActionEvent evt) {
            Main.getInstance().openDB();
            Main.getInstance().sendQuery("call insertEntry('" + EntryID() + "','" + SupplierID() + "');");
            Main.getInstance().closeDB();
            Main.getInstance().openDB();
            for (int i = 0; i < dataBox.length; i++) {
                Main.getInstance().sendQuery("call insertCount('" + EntryID() + "','" + SupplierID() + "','" + BoxID() + "');");
                Main.getInstance().closeDB();
            }
        }

    I have already tried keeping the connection open and sending the two queries before closing, and it didn't work. The only way it worked was to not use the methods, and instead declare the connection commands inline with different variables for the connection and the statement. I thought that if I closed the Connection and the Statement I could use the variables once again, since it's all inside methods, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?
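
    For what it's worth, the pasted loop closes the connection on its first pass, so the second sendQuery hits a closed Statement; and even outside the loop, a singleton whose st field has been closed but is reused will throw exactly this error. A sketch of a sendQuery that never reuses a stale Statement (assuming conn is open and java.sql.Statement is imported; Java 7 try-with-resources):

        public void sendQuery(String query) throws SQLException {
            // a fresh Statement per call, closed automatically afterwards
            try (Statement st = conn.createStatement()) {
                st.executeUpdate(query);
            }
        }

    With that change, openDB() only needs to establish conn, and closeDB() moves to after the loop.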

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code I am using Guava's Multimap (com.google.common.collect.Multimap), created like this:

        Multimap<Integer, Integer> index = HashMultimap.create();

    Here the Multimap key is some portion of a URL and the value is another portion of the URL (converted into an integer). Now I assign my JVM 2560 MB (2.5 GB) of heap space (using -Xmx and -Xms), yet it can only store about 9 million such (key, value) pairs of integers (approximately 10 million). Theoretically (going by the memory an int occupies) it should store more. Can anybody help me: 1) Why is this happening, i.e. why does the Multimap take so much space? I checked my code without inserting pairs into the Multimap and it takes only 1/2 MB of space. 2) Is there any other way, or home-baked solution, to solve this space issue? More clearly, how do I solve this memory issue? Thanks in advance; any idea is perfectly OK for me.
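
    A rough back-of-the-envelope (estimates, not measurements; 64-bit JVM): HashMultimap boxes every int and keeps each key's values in a per-key HashSet, so one (key, value) pair costs far more than 8 bytes:

        boxed Integer value                 ~16 bytes
        entry node in the per-key hash set  ~36-48 bytes
        share of hash-table arrays and key  ~30-50 bytes
        -------------------------------------------------
        roughly 100-200+ bytes per pair
        => 9-10 million pairs ~ 1-2 GB, matching the observed limit

    A primitive int-to-int structure (a primitive-collections library, or two parallel int[] arrays sorted by key) gets back toward 8 bytes per pair.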

  • I have data about deadlocks, but I can't understand why they occur

    - by Alex
    I am receiving a lot of deadlocks in my big web application. In http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server I wanted to re-run deadlocked transactions, but I was told it is much better to get rid of the deadlocks than to try to catch them. So I spent the whole day with SQL Profiler, setting up the tracing etc., and this is what I got. There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users
        SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query in ALL pages:

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    That's where I get the User from cookies. Very often a deadlock occurs and this second LINQ-to-SQL query is chosen as the victim, so it's not run, and users of my site see an error screen. I've read a lot about deadlocks, and I don't understand why this is causing one. Obviously both of these queries run very often, at least once a second, maybe more (300-400 users online), so they can easily run at the same time. But why does that cause a deadlock? Please help. Thank you
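
    One pattern that fits these symptoms (a hypothesis, since the deadlock graph isn't shown): if Users has no index on (Name, Password), the login SELECT scans the whole table acquiring shared locks while the UPDATE holds an exclusive lock on a Users row, so each session ends up holding something the other wants. Covering indexes usually remove the scan; a sketch (index names hypothetical, SQL Server syntax):

        -- lets the login lookup seek instead of scanning Users
        CREATE INDEX IX_Users_Name_Password ON Users (Name, Password);

        -- lets the UPDATE's subquery find AuthorID without touching extra rows
        CREATE INDEX IX_Articles_ArticleID ON Articles (ArticleID) INCLUDE (AuthorID);

    If ArticleID is already the primary key of Articles, only the first index matters.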

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data that I need, but unfortunately it is painfully slow (it runs over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc).

        CREATE TABLE `prices` (
            `prices_id` int(11) NOT NULL auto_increment,
            `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
            `prices_watches_ID` char(10) NOT NULL,
            `prices_date` datetime NOT NULL,
            `prices_am` varchar(10) default NULL,
            `prices_new` varchar(10) default NULL,
            `prices_used` varchar(10) default NULL,
            PRIMARY KEY (`prices_id`),
            KEY `prices_am` (`prices_am`),
            KEY `prices_locale` (`prices_locale`),
            KEY `prices_watches_ID` (`prices_watches_ID`),
            KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764;

        CREATE TABLE `watches_US` (
            `watches_ID` char(10) NOT NULL,
            `watches_date_added` datetime NOT NULL,
            `watches_last_update` datetime default NULL,
            `watches_title` varchar(255) default NULL,
            `watches_small_image_height` int(11) default NULL,
            `watches_small_image_width` int(11) default NULL,
            `watches_description` text,
            PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT watches_US.*, prices.*, watches_US.watches_ID as current_ID,
            ( SELECT prices_am FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1 ) as new_price,
            ( SELECT prices_date FROM prices
              WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
              ORDER BY prices_date DESC LIMIT 1 ) as new_price_date,
            ( SELECT prices_am FROM prices
              WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US')
                AND ( prices_date >= DATE_SUB(new_price_date, INTERVAL 30 HOUR) )
              ORDER BY prices_date ASC LIMIT 1 ) as old_price,
            ( SELECT ROUND(((new_price - old_price)/old_price)*100,2) ) as percent_change,
            ( SELECT (new_price - old_price) ) as absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' )
          AND ( prices_am IS NOT NULL )
          AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL )
           AND ( old_price != 0 )
           AND ( old_price != '' )
           AND ( absolute_change < 0 )
           AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
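
    The standard rewrite for "newest row per item" is a groupwise-max derived table joined back to prices, which runs once instead of once per output row; a sketch of the skeleton (the old-price side follows the same pattern with MIN over the 30-hour window):

        SELECT w.*, new_p.prices_am AS new_price, new_p.prices_date AS new_price_date
        FROM watches_US w
        JOIN ( -- one row per watch: the date of its newest US price
               SELECT prices_watches_ID, MAX(prices_date) AS max_date
               FROM prices
               WHERE prices_locale = 'US'
               GROUP BY prices_watches_ID ) latest
            ON latest.prices_watches_ID = w.watches_ID
        JOIN prices new_p
            ON new_p.prices_watches_ID = latest.prices_watches_ID
           AND new_p.prices_date = latest.max_date
           AND new_p.prices_locale = 'US'
        WHERE new_p.prices_am IS NOT NULL
          AND new_p.prices_am != '';

    A composite index on (prices_watches_ID, prices_locale, prices_date) would serve both the GROUP BY and the join back; the single-column indexes the table has now cannot.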

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger-than-normal dataset (2m+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like

        Dataset.all.each do |data|
            ...
        end

    take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies for handling this situation? I know SQL would probably speed up the process, but I'm looking to use the Rails environment, as I can do many more complicated things to the data than I can with just SQL statements.
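
    The usual fix, for what it's worth: Dataset.all instantiates all 2m+ ActiveRecord objects up front, while find_each (Rails 2.3+) pages through the table in fixed-size batches, keeping memory use flat; a sketch:

        # loads 1000 records at a time instead of 2m+ at once
        Dataset.find_each(:batch_size => 1000) do |data|
          # ... the same per-record work ...
        end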

  • Synchronizing MySQL databases

    - by Wasim
    Hi all, I have a maintenance problem synchronizing my MySQL databases. These are the databases I have:

        My development DB: here I make my current development changes.
        Staging DB: all the changes I made in development must be applied to it before use; currently I maintain migration scripts for structure and data.
        Production DB: a production environment. It has to receive exactly the same changes as staging.

    My problem is syncing the structure, and some of the data. This is really very hard to maintain. Are there any techniques or tools to do this with MySQL? What is replication, is it good for my situation, and how would I use it? Thanks in advance.
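
    For one-way structure-and-data pushes, mysqldump is the simplest tool; replication, by contrast, is MySQL's built-in mechanism for continuously streaming changes from a master server to one or more slaves, which fits keeping a live copy in sync but not a selective "promote these changes" workflow. A dump-and-load sketch (database names hypothetical):

        # dump schema + data, including stored routines and triggers
        mysqldump --routines --triggers staging_db > staging.sql

        # load it into production (replaces matching tables)
        mysql production_db < staging.sql

        # structure only, if the data must not be touched
        mysqldump --no-data staging_db > schema.sql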

  • Get information from various sources

    - by Francesc
    Hi. I'm developing an app that has to get some information from various sources (APIs and RSS) and display it to the user in near real-time. What's the best way to do it:

        1. Have a cron job update all accounts every 12h; when a user requests one, update that account, save it to the DB, and show it to the user?
        2. Have a cron job update all accounts every 6h; when a user requests one, update the account and show it to the user without saving it to the DB?

    Which is best? Which is faster? And which is the most scalable?
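
    Both options boil down to a cache-aside policy with different freshness budgets; a small sketch of the read path in Python (all helper names are hypothetical):

        MAX_AGE_SECONDS = 6 * 3600

        def get_account(account_id):
            cached = load_from_db(account_id)          # hypothetical DB helper
            if cached is not None and cached.age() < MAX_AGE_SECONDS:
                return cached                          # fresh enough: no remote call
            data = fetch_from_sources(account_id)      # hypothetical API/RSS fetch
            save_to_db(account_id, data)               # persist so later readers skip the fetch
            return data

    Persisting the refreshed copy (option 1's behaviour) is what scales: under option 2, every subsequent reader re-pays the remote fetch.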
