Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • courier-imap w mysql : share a folder between 2 virtual users on debian

    - by Michael
    Hi, I have a working courier-imap server on my Debian Etch private server; users are virtual, and authentication goes through MySQL. It has been working well for years. I would like to share an IMAP folder between 2 users. I thought I would just have to do something like this: cd path/to/mailusers/dir ln -s path/to/user1/maildir/.folder_to_be_synched path/to/user2/maildir/ After I entered the command, I found that user2 saw a new folder in its IMAP client, but the folder appeared empty. It is not a permission problem because all the virtual users have the same permissions on the file system. Any idea what I could do? Thanks

    Read the article

  • Determine server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given: max. load will be roughly 2400 (40*60) HTTP requests (mixed GET / POST) per hour; 15% of these calls are JSON calls (iOS); the avg request will make between 5-10 database calls; 500-800 SQL INSERTs per day; webpages are fairly simple (no images, just text); an avg webpage is 15 requests (css/js/etc) and the total size is 35-45 KB. More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance (small/medium) and utilization (light/medium/heavy)? Thanks!
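
    As a rough back-of-the-envelope check, the peak figures in the question translate to well under one HTTP request per second and only a handful of database queries per second. A minimal sketch of that arithmetic, using only the numbers stated above (not a sizing recommendation):

        <?php
        // Figures taken from the question: 40 users * 60 requests/hour,
        // 5-10 database calls per request.
        $httpPerHour   = 40 * 60;              // ~2400 HTTP requests per hour
        $httpPerSecond = $httpPerHour / 3600;  // ~0.67 requests per second
        $dbLow  = $httpPerSecond * 5;          // ~3.3 queries per second
        $dbHigh = $httpPerSecond * 10;         // ~6.7 queries per second

        printf("HTTP req/s: %.2f\n", $httpPerSecond);
        printf("DB queries/s: %.2f - %.2f\n", $dbLow, $dbHigh);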

    Read the article

  • snort-mysql not starting on Ubuntu server

    - by Rsaesha
    I am following this tutorial: https://help.ubuntu.com/community/SnortIDS I've set up the database, everything has installed correctly, and I've configured the snort.conf file so it outputs to a database (with creds all filled out ok). When I run /etc/init.d/snort start, it fails but does not produce any error message other than [fail]. The last few lines of /var/log/syslog are: snort[5687]: database: must enter database name in configuration file#012 snort[5687]: FATAL ERROR: My output database line in the snort.conf file is: output database: log, mysql, user=snort password=... dbname=snort host=localhost I have tried it with the commas separating everything, putting quotes around stuff, etc. The password is only made up of letters (after I thought maybe a number was throwing it off).

    Read the article

  • What's the best way to deal with backups for my PHP/MySQL application

    - by spirytus
    I'm creating a PHP application for my client and am now thinking about what would be the best way to do backups, automatically if possible. I don't have much experience in this area and in case something goes wrong, or if I need to migrate, I would like to have a fast way of getting it all back online. I understand "something goes wrong" is a very wide term, but let's say that someone hacks my site and wipes out the database and all the files. My app is written in PHP/MySQL and I have access to cPanel (hosted with justhost.com if that makes any difference :). I used Joomla and it has JoomlaPack, which does a complete backup almost automatically, and in case the site fails it's easy to revert, or migrate if necessary. Is there anything like that for my configuration that would make reverting/migration easy?
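
    In the absence of a packaged tool, a common approach on shared cPanel hosts is a small script run from cron that dumps the database with mysqldump and archives the files. A minimal sketch; the paths, credentials and the availability of mysqldump/tar on the host are assumptions, not facts from the question:

        <?php
        // Hypothetical credentials and paths -- replace with your own.
        $dbUser   = 'dbuser';
        $dbPass   = 'dbpass';
        $dbName   = 'mydb';
        $siteRoot = '/home/account/public_html';
        $target   = '/home/account/backups/backup-' . date('Y-m-d');

        // Dump the database to a .sql file.
        exec(sprintf('mysqldump -u %s -p%s %s > %s',
            escapeshellarg($dbUser), escapeshellarg($dbPass),
            escapeshellarg($dbName), escapeshellarg($target . '.sql')));

        // Archive the site files alongside it.
        exec(sprintf('tar -czf %s %s',
            escapeshellarg($target . '-files.tar.gz'),
            escapeshellarg($siteRoot)));

        // Schedule this from cron and copy the output somewhere off-server.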

    Read the article

  • FTP Server with MySQL access, and POST notification

    - by TIW
    I'm looking for an FTP server solution, which we can host either internally on a dedicated server or on Rackspace Cloud/AWS, that provides an HTTP POST notification when a file is uploaded and allows user accounts to be created either through an API or a MySQL database. There are several offerings that provide email notification - but has anyone come across anything that matches the above requirements? BrickFTP, being an IaaS system, is an option, but we would prefer something hosted in house. I don't believe the standard FTP servers provided with Apache can do the above ... can they?

    Read the article

  • MySQL showing some high spikes

    - by user111196
    We have one MySQL server and the access to it has become slow all of a sudden. I read somewhere that it may be due to the size of /var - is that it? I am not too sure; any idea how to check the root cause of it? The CPU is at nearly 150%. Any indication would help. I have tried this so far. du -sh * 4.0K account 67M cache 4.0K cvs 16K db 8.0K empty 4.0K games 4.0K gdm 148G lib 4.0K local 16K lock 624M log 0 mail 4.0K nis 4.0K opt 4.0K preserve 400K run 298M spool 4.0K tmp 359M www 12K yp

    Read the article

  • I am trying to link my PHP form to MySQL but am having difficulties

    - by user1912599
    I am not sure what I am doing wrong as far as my php goes but I can't get my form to link with my sql. Here are the codes for my form and php code for my link to sql <?php echo displayform(); function displayForm() { $r = ''; //build it $r .='<form action="database.php" method="post">'; //table $r .=displayNiceFormBegin(); $r .=displayRow('FirstName:', '<input type="text" name="fname" id="fname"/>'); $r .=displayRow('LastName:', '<input type="text" name="lname" id="lname"/>'); $r .=displayRow('Address:', '<input type="text" name="address" id ="address"/>'); $r .=displayRow('Phone:', '<input type="text" name="phone" id ="phone"/>'); $r .=displayRow('Deparment:', '<input type="text" name="department"id="department"/>'); $r .=displayRow('', '<input type="submit" value="Submit Registration" />'); $r .=displayNiceFormEnd(); $r .='</form>'; return $r; } function displayRow($left, $right) { $r .= ''; //build it $r .='<tr>'; $r .= '<td>' . $left . '</td>'; $r .= '<td>' . $right . '</td>'; $r .='</tr>'; return $r; } function displayNiceFormBegin(){ $r .=''; //build it $r .= '<table style="background-color: beige; border: 1px dashed #999"><tr><td>'; $r .='<table style="margin:10px">'; return $r; } function displayNiceFormENd() { $r .=''; //build it $r .='</table>'; $r .='</td></tr><table>'; return $r; } ?> <?php $host="localhost"; // Host name $username="695788_ogems"; // Mysql username $password="opd69715"; // Mysql password $db_name="ottawaglandorfems_zzl_ogems"; // Database name $tbl_name=".*"; // Table name // Connect to server and select database. mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); // Get values from form $fname=$_POST['fname']; $lname=$_POST['lname']; $address=$_POST['address']; $phone=$_POST['phone']; $department=$_POST['deparment']; // Insert data into mysql $sql="INSERT INTO $tbl_name(FirstName,LastName,Address,Phone,Department)VALUES('$fname', '$lname', '$address','$phone','$deparment')"; $result=mysql_query($sql); // if successfully insert data into database, displays message "Successful". if($result){ echo "Successful"; echo "<BR>"; echo "<a href='ottawa-glandorfems.org/form3.php'>Back to main page</a>"; } else { echo "ERROR"; } ?> <?php // close connection mysql_close(); ?> I keep getting an error. Thank you!!!!
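
    For reference, a minimal prepared-statement version of the same kind of insert, written with PDO rather than the old mysql_* functions. The table name, column names and credentials here are placeholders, not the poster's actual setup:

        <?php
        // Placeholder DSN and credentials -- substitute your own.
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // Hypothetical table name; the column list mirrors the form fields.
        $sql = 'INSERT INTO employees (FirstName, LastName, Address, Phone, Department)
                VALUES (:fname, :lname, :address, :phone, :department)';
        $stmt = $pdo->prepare($sql);
        $stmt->execute(array(
            ':fname'      => $_POST['fname'],
            ':lname'      => $_POST['lname'],
            ':address'    => $_POST['address'],
            ':phone'      => $_POST['phone'],
            ':department' => $_POST['department'], // key must match the form field name
        ));
        echo 'Successful';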

    Read the article

  • Help me alter this query to get the desired results - New*

    - by sandeepan
    Please dump these data first CREATE TABLE IF NOT EXISTS `all_tag_relations` ( `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT, `id_tag` int(10) unsigned NOT NULL DEFAULT '0', `id_tutor` int(10) DEFAULT NULL, `id_wc` int(10) unsigned DEFAULT NULL, PRIMARY KEY (`id_tag_rel`), KEY `All_Tag_Relations_FKIndex1` (`id_tag`), KEY `id_wc` (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=19 ; INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 6, 2, NULL), (4, 7, 2, NULL), (8, 3, 1, 1), (9, 4, 1, 1), (10, 5, 2, 2), (11, 4, 2, 2), (15, 8, 1, 3), (16, 9, 1, 3), (17, 10, 1, 4), (18, 4, 1, 4), (19, 1, 2, 5), (20, 4, 2, 5); CREATE TABLE IF NOT EXISTS `tags` ( `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT, `tag` varchar(255) DEFAULT NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`), FULLTEXT KEY `tag_5` (`tag`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=11 ; INSERT INTO `tags` (`id_tag`, `tag`) VALUES (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'), (5, 'new'), (6, 'Bob'), (7, 'Cratchit'), (8, 'more'), (9, 'fresh'), (10, 'second'); CREATE TABLE IF NOT EXISTS `webclasses` ( `id_wc` int(10) NOT NULL AUTO_INCREMENT, `id_author` int(10) NOT NULL, `name` varchar(50) DEFAULT NULL, PRIMARY KEY (`id_wc`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ; INSERT INTO `webclasses` (`id_wc`, `id_author`, `name`) VALUES (1, 1, 'first class'), (2, 2, 'new class'), (3, 1, 'more fresh'), (4, 1, 'second class'), (5, 2, 'sandeepan class'); About the system - The system consists of tutors and classes. - The data in the table All_Tag_Relations stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes. The current data dump corresponds to tutor "Sandeepan Nath" who has created classes named "first class", "more fresh", "second class" and tutor "Bob Cratchit" who has created classes "new class" and "Sandeepan class". I am trying for a search query performs AND logic on the search keywords and returns wvery such class for which the search terms are present in the class name or its tutor name To make it easy, following is the list of search terms and desired results:- Search term result classes (check the id_wc in the results) first class 1 Sandeepan Nath class 1 Sandeepan Nath 1,3 Bob Cratchit 2 Sandeepan Nath bob none Sandeepan Class 1,4,5 I have so far reached upto this query -- Two keywords search SET @tag1 = 4, @tag2 = 1; -- Setting some user variables to see where the ids go. SELECT wc.id_wc, sum( DISTINCT ( wtagrels.id_tag = @tag1 ) ) AS key_1_class_matches, sum( DISTINCT ( wtagrels.id_tag = @tag2 ) ) AS key_2_class_matches, sum( DISTINCT ( ttagrels.id_tag = @tag1 ) ) AS key_1_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = @tag2 ) ) AS key_2_tutor_matches, sum( DISTINCT ( ttagrels.id_tag = wtagrels.id_tag ) ) AS key_class_tutor_matches FROM WebClasses as wc join all_tag_relations AS wtagrels on wc.id_wc = wtagrels.id_wc join all_tag_relations as ttagrels on (wc.id_author = ttagrels.id_tutor) WHERE ( wtagrels.id_tag = @tag1 OR wtagrels.id_tag = @tag2 OR ttagrels.id_tag = @tag1 OR ttagrels.id_tag = @tag2 ) GROUP BY wtagrels.id_wc LIMIT 0 , 20 For search with 1 or 3 terms, remove/add the variable part in this query. 
Tabulating my observation of the values of key_1_class_matches, key_2_class_matches,key_1_tutor_matches (say, class keys),key_2_tutor_matches for various cases (say, tutor keys). Search term expected result Observation first class 1 for class 1, all class keys+all tutor keys =1 Sandeepan Nath class 1 for class 1, one class key+ all tutor keys = 1 Sandeepan Nath 1,3 both tutor keys =1 for these classes Bob Cratchit 2 both tutor keys = 1 Sandeepan Nath bob none no complete tutor matches for any class I found a pattern that, for any case, the class(es) which should appear in the result have the highest number of matches (all class keys and tutor keys). E.g. searching "first class", only for class =1, total of key matches = 4(1+1+1+1) searching "Sandeepan Nath", for classes 1, 3,4(all classes by Sandeepan Nath) have all the tutor keys matching. But no pattern in the search for "Sandeepan Class" - classes 1,4,5 should match. Now, how do I put a condition into the query, based on that pattern so that only those classes are returned. Do I need to use full text search here because it gives a scoring/rank value indicating the strength of the match? Any sample query would help. Please note - I have already found solution for showing classes when any/all of the search terms match with the class name. http://stackoverflow.com/questions/3030022/mysql-help-me-alter-this-search-query-to-get-desired-results But if all the search terms are in tutor name, it does not work. So, I am modifying the query and experimenting.

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction Hi, I am going to ask a question which seems utopic for me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not. The idea Suppose I have a database structure, in MySql. I want to create some solution to allow anyone (no matter who, no matter where) to have a synchronized copy (updated clone) of this database (with its content) Well, and it is not going to be just one synchronized copy, it could (and should) be a multiple replication (supposing the basic, this means, for example, ten copies all over the world) And, the most important thing: It must be secure. By secure I mean only real-accepted transactions will be synchronized with all the others (no matter how many) database copies/clones. Note: Since it would be quite difficult to make the synchronization in real-time, I will design everything to make this feature dispensable. So it is not required. My auto-suggestion This is how I am thinking to manage it: Time identifiers and Updates checking: Every action (insert, update, delete...) will be stored as the action instruction itself, associated to the time identifier. [I think better than a DATETIME field, it'll be an INT one, with the number of miliseconds passed from 1st january 2013 on, for example]. So each copy is going to ask to the "neighbour copy" for new actions done since last update, and execute them after checking they are allowed. Problem 1: the "neighbour copy" could be outdated too. Solution 1: do not ask just one neighbour, create a random list with some of the copies/clones and ask them for news (I could avoid the list and ask ALL the clones for updates, but this will be inefficient if clones number ascends too much). Problem 2: Real-time global synchronization is not active. What if... Someone at CLONE_ENTERPRISING inserts a row into TABLE. ... this row goes to every clone ... Someone at CLONE_FIXEMALL deletes this row. ... and at the same time, somewhere in an outdated clone ... Someone at CLONE_DROPOUT edits this row (now inexistent at the other clones) Solution 2: easy stuff, force a GLOBAL synchronization before doing any new "depending-on-third-data action" (edit, for example). This global synch. will be unnecessary when making an INSERT, for instance. Note: Well, someone could have some fun, and make the same insert in two clones... since they're not getting updated in real-time, this row will exist twice. But, it's the same as when we have one single database, in some needed cases we check if there is an existing same-row before doing the final action. Not a problem. Problem 3: It is possible to edit the code and do not filter actions, so someone could spread instructions to delete everything, or just make some trolling activity. This is not a problem, since good clones will always be somewhere. Those who got bad won't interest anymore. I really appreciate if you read. I know this is not the perfect solution, it has possibly hundred of holes, but it is my basic start. I will now appreciate anything you can teach me now. Thanks a lot. PS.: It could be that all this I am trying already exists and has its own name. Sorry for asking then (I'd anyway thank this name, if it exists)
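
    A minimal sketch of the pull-based update check described above: an action log keyed by a millisecond timestamp, with each clone asking a neighbour for anything newer than it already has. The table and column names are hypothetical and the "checking they are allowed" step is left as a stub:

        <?php
        // $local and $neighbour are open PDO connections to two clones.
        function pullUpdates(PDO $local, PDO $neighbour)
        {
            // Highest timestamp we already have.
            $since = (int) $local->query('SELECT COALESCE(MAX(ts), 0) FROM action_log')
                                 ->fetchColumn();

            // Ask the neighbour for anything newer.
            $stmt = $neighbour->prepare(
                'SELECT ts, action_sql FROM action_log WHERE ts > ? ORDER BY ts');
            $stmt->execute(array($since));

            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                // "Execute them after checking they are allowed" -- the
                // validation/filtering of incoming actions would go here.
                $local->exec($row['action_sql']);
                $local->prepare('INSERT INTO action_log (ts, action_sql) VALUES (?, ?)')
                      ->execute(array($row['ts'], $row['action_sql']));
            }
        }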

    Read the article

  • Value gets changed upon committing | CallableStatements

    - by Triztian
    Hello, I having a weird problem with a DAO class and a StoredProcedure, what is happening is that I use a CallableStatement object which takes 15 IN parameters, the value of the field id_color is retrieved correctly from the HTML forms it even is set up how it should in the CallableStatement setter methods, but the moment it is sent to the database the id_color is overwriten by the value 3 here's the "context": I have the following class DAO.CoverDAO which handles the CRUD operations of this table CREATE TABLE `cover_details` ( `refno` int(10) unsigned NOT NULL AUTO_INCREMENT, `shape` tinyint(3) unsigned NOT NULL , `id_color` tinyint(3) unsigned NOT NULL ', `reversefold` bit(1) NOT NULL DEFAULT b'0' , `x` decimal(6,3) unsigned NOT NULL , `y` decimal(6,3) unsigned NOT NULL DEFAULT '0.000', `typecut` varchar(10) NOT NULL, `cornershape` varchar(20) NOT NULL, `z` decimal(6,3) unsigned DEFAULT '0.000' , `othercornerradius` decimal(6,3) unsigned DEFAULT '0.000'', `skirt` decimal(5,3) unsigned NOT NULL DEFAULT '7.000', `foamTaper` varchar(3) NOT NULL, `foamDensity` decimal(2,1) unsigned NOT NULL , `straplocation` char(1) NOT NULL ', `straplength` decimal(6,3) unsigned NOT NULL, `strapinset` decimal(6,3) unsigned NOT NULL, `spayear` varchar(20) DEFAULT 'Not Specified', `spamake` varchar(20) DEFAULT 'Not Specified', `spabrand` varchar(20) DEFAULT 'Not Specified', PRIMARY KEY (`refno`) ) ENGINE=MyISAM AUTO_INCREMENT=143 DEFAULT CHARSET=latin1 $$ The the way covers are being inserted is by a stored procedure, which is the following: CREATE DEFINER=`root`@`%` PROCEDURE `putCover`( IN shape TINYINT, IN color TINYINT(3), IN reverse_fold BIT, IN x DECIMAL(6,3), IN y DECIMAL(6,3), IN type_cut VARCHAR(10), IN corner_shape VARCHAR(10), IN cutsize DECIMAL(6,3), IN corner_radius DECIMAL(6,3), IN skirt DECIMAL(5,3), IN foam_taper VARCHAR(7), IN foam_density DECIMAL(2,1), IN strap_location CHAR(1), IN strap_length DECIMAL(6,3), IN strap_inset DECIMAL(6,3) ) BEGIN INSERT INTO `dbre`.`cover_details` (`dbre`.`cover_details`.`shape`, `dbre`.`cover_details`.`id_color`, `dbre`.`cover_details`.`reversefold`, `dbre`.`cover_details`.`x`, `dbre`.`cover_details`.`y`, `dbre`.`cover_details`.`typecut`, `dbre`.`cover_details`.`cornershape`, `dbre`.`cover_details`.`z`, `dbre`.`cover_details`.`othercornerradius`, `dbre`.`cover_details`.`skirt`, `dbre`.`cover_details`.`foamTaper`, `dbre`.`cover_details`.`foamDensity`, `dbre`.`cover_details`.`strapLocation`, `dbre`.`cover_details`.`strapInset`, `dbre`.`cover_details`.`strapLength` ) VALUES (shape,color,reverse_fold, x,y,type_cut,corner_shape, cutsize,corner_radius,skirt,foam_taper,foam_density, strap_location,strap_inset,strap_length); END As you can see basically it just fills each field, now, the CoverDAO.create(CoverDTO cover) method which creates the cover is like so: public void create(CoverDTO cover) throws DAOException { Connection link = null; CallableStatement query = null; try { link = MySQL.getConnection(); link.setAutoCommit(false); query = link.prepareCall("{CALL putCover(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}"); query.setLong(1,cover.getShape()); query.setInt(2,cover.getColor()); query.setBoolean(3, cover.getReverseFold()); query.setBigDecimal(4,cover.getX()); query.setBigDecimal(5,cover.getY()); query.setString(6,cover.getTypeCut()); query.setString(7,cover.getCornerShape()); query.setBigDecimal(8, cover.getZ()); query.setBigDecimal(9, cover.getCornerRadius()); query.setBigDecimal(10, cover.getSkirt()); query.setString(11, cover.getFoamTaper()); 
query.setBigDecimal(12, cover.getFoamDensity()); query.setString(13, cover.getStrapLocation()); query.setBigDecimal(14, cover.getStrapLength()); query.setBigDecimal(15, cover.getStrapInset()); query.executeUpdate(); link.commit(); } catch (SQLException e) { throw new DAOException(e); } finally { close(link, query); } } The CoverDTO is made of accessesor methods, the MySQL object basically returns the connection from a pool. Here is the pset Query with dummy but appropriate data: putCover(1,10,0,80.000,80.000,'F','Cut',0.000,0,15.000,'4x2',1.5,'A',10.000,5.000) As you can see everything is fine just when I write to the DB instead of 10 in the second parameter a 3 is written. I have done the following: Traced the id_color value to the create method, still got replaced by a 3 Hardcoded the value in the DAO create method, still got replaced by a 3 Called the procedure from the MySQL Workbench, it worked fined so I assume something is happening in the create method, any help is really appreciated.

    Read the article

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many questions myself, but some remain. Here is what I want to do. First, at the beginning, the application will have only one server (short term) with DNS, PHP, MySQL, data, and memcache. Second, I will then split them in two: DNS, MySQL, memcache / data, PHP. Third, here is the problem - I don't know exactly how to do it from here to keep the application running well. I could do: Front: load balancer, memcache, DNS; Web 1: PHP, data; Web 2: PHP, data; MySQL. That would be the scheme, with all PHP sessions kept in the DB. BUT, how do I sync the data? Do I run rsync to keep them up to date? Do I put them on a separate (network) disk to be sure? But in that case, what do I do about user uploads? And if the website gets more successful and we have to move to bigger structures, wouldn't it create some latency on updates? Or would it be a good idea to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework. I use Linux as the webserver (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.
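
    Since the plan keeps PHP sessions in the database, here is a minimal sketch of a PDO-backed session handler. It assumes PHP 5.4+ (for SessionHandlerInterface) and a hypothetical sessions table - both assumptions, not details from the question:

        <?php
        // Assumes: sessions(id VARCHAR PRIMARY KEY, data TEXT, updated_at DATETIME)
        class PdoSessionHandler implements SessionHandlerInterface
        {
            private $pdo;

            public function __construct(PDO $pdo) { $this->pdo = $pdo; }

            public function open($savePath, $sessionName) { return true; }
            public function close() { return true; }

            public function read($id)
            {
                $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
                $stmt->execute(array($id));
                $data = $stmt->fetchColumn();
                return $data === false ? '' : $data;
            }

            public function write($id, $data)
            {
                // REPLACE keeps the example short; one row per session id.
                $stmt = $this->pdo->prepare(
                    'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())');
                return $stmt->execute(array($id, $data));
            }

            public function destroy($id)
            {
                return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')
                                 ->execute(array($id));
            }

            public function gc($maxlifetime)
            {
                return $this->pdo->prepare('DELETE FROM sessions WHERE updated_at < ?')
                                 ->execute(array(date('Y-m-d H:i:s', time() - $maxlifetime)));
            }
        }

        // Usage, once $pdo is connected:
        // session_set_save_handler(new PdoSessionHandler($pdo), true);
        // session_start();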

    Read the article

  • Having a problem inserting foreign key data into a table using a PHP form

    - by Gideon
    I am newbee with PHP and MySQL and need help... I have two tables (Staff and Position) in the database. The Staff table (StaffId, Name, PositionID (fk)). The Position table is populated with different positions (Manager, Supervisor, and so on). The two tables are linked with a PositionID foreign key in the Staff table. I have a staff registration form with textfields asking for the relevant attributes and a dynamically populated drop down list to choose the position. I need to insert the user's entry into the staff table along with the selected position. However, when inserting the data, I get the following error (Cannot add or update a child row: a foreign key constraint fails). How do I insert the position selected by the user into the staff table? Here is some of my code... ... echo "<tr>"; echo "<td>"; echo "*Position:"; echo "</td>"; echo "<td>"; //dynamically populate the staff position drop down list from the position table $position="SELECT PositionId, PositionName FROM Position ORDER BY PositionId"; $exeposition = mysql_query ($position) or die (mysql_error()); echo "<select name=position value=''>Select Position</option>"; while($positionarray=mysql_fetch_array($exeposition)) { echo "<option value=$positionarray[PositionId]>$positionarray[PositionName]</option>"; } echo "</select>"; echo "</td>"; echo "</tr>" //the form is processed with the code below $FirstName = $_POST['firstname']; $LastName = $_POST['lastname']; $Address = $_POST['address']; $City = $_POST['city']; $PostCode = $_POST['postcode']; $Country = $_POST['country']; $Email = $_POST['email']; $Password = $_POST['password']; $ConfirmPass = $_POST['confirmpass']; $Mobile = $_POST['mobile']; $NI = $_POST['nationalinsurance']; $PositionId = $_POST[$positionarray['PositionId']]; //format the dob for the database $dobpart = explode("/", $_POST['dob']); $formatdob = $dobpart[2]."-".$dobpart[1]."-".$dobpart[0]; $DOB = date("Y-m-d", strtotime($formatdob)); $newReg = "INSERT INTO Staff (FirstName, LastName, Address, City, PostCode, Country, Email, Password, Mobile, DOB, NI, PositionId) VALUES ('".$FirstName."', '".$LastName."', '".$Address."', '".$City."', '".$PostCode."', '".$Country."', '".$Email."', '".$Password."', ".$Mobile.", '".$DOB."', '".$NI."', '".$PostionId."')"; Your time and help is surely appreciated.

    Read the article

  • Magento - Data is not inserted into database, but the id is autoincremented

    - by Joseph
    I am working on a new payment module for Magento and have come across an issue that I cannot explain. The following code that runs after the credit card is verified: $table_prefix = Mage::getConfig()->getTablePrefix(); $tableName = $table_prefix.'authorizecim_magento_id_link'; $resource = Mage::getSingleton('core/resource'); $writeconnection = $resource->getConnection('core_write'); $acPI = $this->_an_customerProfileId; $acAI = $this->_an_customerAddressId; $acPPI = $this->_an_customerPaymentProfileId; $sql = "insert into {$tableName} values ('', '$customerId', '$acPI', '$acPI', '3')"; $writeconnection->query($sql); $sql = "insert into {$tableName} (magCID, anCID, anOID, anObjectType) values ('$customerId', '$acPI', '$acAI', '2')"; $writeconnection->query($sql); $sql = "insert into {$tableName} (magCID, anCID, anOID, anObjectType) values ('$customerId', '$acPI', '$acPPI', '1')"; $writeconnection->query($sql); I have verified using Firebug and FirePHP that the SQL queries are syntactically correct and no errors are returned. The odd thing here is that I have checked the database, and the autoincrement value is incremented on every run of the code. However, no rows are inserted in the database. I have verified this by adding a die(); statement directly after the first write. Any ideas why this would be occuring? The relative portion of the config.xml is this: <config> <global> <models> <authorizecim> <class>CPAP_AuthorizeCim_Model</class> </authorizecim> <authorizecim_mysql4> <class>CPAP_AuthorizeCim_Model_Mysql4</class> <entities> <anlink> <table>authorizecim_magento_id_link</table> </anlink> </entities> <entities> <antypes> <table>authorizecim_magento_types</table> </antypes> </entities> </authorizecim_mysql4> </models> <resources> <authorizecim_setup> <setup> <module>CPAP_AuthorizeCim</module> <class>CPAP_AuthorizeCim_Model_Resource_Mysql4_Setup</class> </setup> <connection> <use>core_setup</use> </connection> </authorizecim_setup> <authorizecim_write> <connection> <use>core_write</use> </connection> </authorizecim_write> <authorizecim_read> <connection> <use>core_read</use> </connection> </authorizecim_read> </resources> </global> </config>

    Read the article

  • java.lang.ClassNotFoundException: com.mysql.jdbc.Driver when running a jar file

    - by user1024858
    Hi all, Connection conn = null; String url = "jdbc:mysql://localhost:3306/"; String dbName = "test"; String driver = "com.mysql.jdbc.Driver"; String userName = "root"; String password = "admin"; try { Class.forName(driver).newInstance(); conn = DriverManager.getConnection(url+dbName,userName,password); System.out.println("Connected to the database"); conn.close(); System.out.println("Disconnected from database"); } catch (Exception e) { e.printStackTrace();} I run in eclipse it's ok, but i built to jar file and run on command line java -jar Test.jar i get this error: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver at java.net.URLClassLoader$1.run(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) Please help me how to fix it. Thanks!!!

    Read the article

  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario: User X logs in to the application from location lc1: call it Ulc1 User X (has been hacked, or some friend of his knows his login credential, or he just logs in from a different browser on his machine,etc.. u got the point) logs in at the same time from location lc2: call it Ulc2 I am using a main servlet which : - gets a connection from database pooling - sets autocommit to false - executes a command that goes through app layers: if all successful, set autocommit to true in a "finally" statement, and closes connection. Else if an exception happens, rollback(). In my database (mysql/innoDb) i have a "history" table, with row columns: id(primary key) |username | date | topic | locked The column "locked" has by default value "false" and it serves as a flag that marks if a specific row is locked or not. Each row is specific to a user (as u can see from the username column) So back to the scenario: --Ulc1 sends the command to update his history from the db for date "D" and topic "T". --Ulc2 sends the same command to update history from the db for the same date "D" and same topic "T" at the exact same time. I want to implement an mysql/innoDB locking system that will enable whichever thread arriving to do the following check: Is column "locked" for this row true or not? if true, return a message to the user that " he is already updating the same data from another location" if not true (ie not locked) : flag it as locked and update then reset locked to false once finished. Which of these two mysql locking techniques, will actually allow the 2nd arriving thread from reading the "updated" value of the locked column to decide wt action to take?Should i use "FOR UPDATE" or "LOCK IN SHARE MODE"? This scenario explains what i want to accomplish: - Ulc1 thread arrives first: column "locked" is false, set it to true and continue updating process - Ulc2 thread arrives while Ulc1's transaction is still in process, and even though the row is locked through innoDb functionalities, it doesn't have to wait but in fact reads the "new" value of column locked which is "true", and so doesn't in fact have to wait till Ulc1 transaction commits to read the value of the "locked" column(anyway by that time the value of this column will already have been reset to false). I am not very experienced with the 2 types of locking mechanisms, what i understand so far is that LOCK IN SHARE MODE allow other transaction to read the locked row while FOR UPDATE doesn't even allow reading. But does this read gets on the updated value? or the 2nd arriving thread has to wait the first thread to commit to then read the value? Any recommendations about which locking mechanism to use for this scenario is appreciated. Also if there's a better way to "check" if the row has been locked (other than using a true/false column flag) please let me know about it. 
thank you SOLUTION (Jdbc pseudocode example based on @Darhazer's answer) Table : [ id(primary key) |username | date | topic | locked ] connection.setautocommit(false); //transaction-1 PreparedStatement ps1 = "Select locked from tableName for update where id="key" and locked=false); ps1.executeQuery(); //transaction 2 PreparedStatement ps2 = "Update tableName set locked=true where id="key"; ps2.executeUpdate(); connection.setautocommit(true);// here we allow other transactions threads to see the new value connection.setautocommit(false); //transaction 3 PreparedStatement ps3 = "Update tableName set aField="Sthg" where id="key" And date="D" and topic="T"; ps3.executeUpdate(); // reset locked to false PreparedStatement ps4 = "Update tableName set locked=false where id="key"; ps4.executeUpdate(); //commit connection.setautocommit(true);
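
    For comparison, a PHP/PDO sketch of the same locked-flag flow using SELECT ... FOR UPDATE (table and column names follow the question; $pdo and $id are assumed to already exist). With FOR UPDATE, a second transaction running the same SELECT simply blocks until the first one commits, and then reads the committed value of locked:

        <?php
        // Transaction 1: take the row lock and read the flag.
        $pdo->beginTransaction();
        $stmt = $pdo->prepare('SELECT locked FROM history WHERE id = ? FOR UPDATE');
        $stmt->execute(array($id));

        if ($stmt->fetchColumn()) {
            $pdo->rollBack();
            die('You are already updating the same data from another location.');
        }

        // Mark the row as locked and commit so other sessions can see the flag.
        $pdo->prepare('UPDATE history SET locked = 1 WHERE id = ?')->execute(array($id));
        $pdo->commit();

        // ... perform the actual update work here (in its own transaction) ...

        // Finally, clear the flag again.
        $pdo->prepare('UPDATE history SET locked = 0 WHERE id = ?')->execute(array($id));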

    Read the article

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following package MySchema::Result::ClusterIP; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster_ip'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->belongs_to( 'cluster' => 'MySchema::Result::Cluster', { 'foreign.config_key' => 'self.config_key', 'foreign.id' => 'self.cluster_id' } ); As well as package MySchema::Result::Cluster; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP', { 'foreign.config_key' => 'self.config_key', 'foreign.cluster_id' => 'self.id' }); There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error: DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed: Can't create table 'test.cluster_ip' (errno: 150) [ for Statement "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB"] at test_deploy.pl line 18 (running "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENC ES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`conf ig_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB") at test_deploy.pl line 18 From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular, the REFERENCE to (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page. Here's my question. Am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?

    Read the article

  • PasswordField char[] to String in Java when connecting to MySQL?

    - by user1819551
    This is a jFrame to connect to the database and this is in the button connect. My issue is in the passwordField NetBeans make me do a char[], but my .getConnection not let me insert the char[] ERROR: "no suitable method found for getConnection(String,String,char[])". So I will change to String right? So when I change and run the jFrame said access denied. when I start doing the System.out.println(l) " Give me the right answer" Like this: "Alex". But when I do the System.out.println(password) "Give me the Array spaces and not the value" Like this: jdbc:mysql://localhost/home inventory root [C@5be5ab68 <--- Array space . What I doing wrong? try { Class.forName("org.gjt.mm.mysql.Driver"); //Load the driver String host = "jdbc:mysql://"+tServerHost.getText()+"/"+tSchema.getText(); String uName = tUsername.getText(); char[] l = pPassword.getPassword(); System.out.println(l); String password= l.toString(); System.out.println(host+uName+l); con = DriverManager.getConnection(host, uName, password); System.out.println(host+uName+password); } catch (SQLException | ClassNotFoundException ex) { JOptionPane.showMessageDialog(this, ex.getMessage()); } }

    Read the article

  • mysql2 Ruby gem will not install on OS X 10.6

    - by Kish
    I know this has been asked several times, but I searched and I tried many different things and nothing worked. ERROR: Error installing mysql2: ERROR: Failed to build gem native extension. /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/bin/ruby extconf.rb checking for rb_thread_blocking_region()... yes checking for mysql.h... yes checking for errmsg.h... yes checking for mysqld_error.h... yes creating Makefile make gcc -I. -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/x86_64-darwin10.6.0 -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/backward -I/Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1 -I. -DHAVE_RB_THREAD_BLOCKING_REGION -DHAVE_MYSQL_H -DHAVE_ERRMSG_H -DHAVE_MYSQLD_ERROR_H -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -I/usr/local/mysql/include -Os -g -fno-common -fno-strict-aliasing -arch i386 -fno-common -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wshorten-64-to-32 -Wno-long-long -fno-common -pipe -Wall -funroll-loops -o client.o -c client.c In file included from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby.h:32, from ./mysql2_ext.h:4, from client.c:1: /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:108: error: size of array ‘ruby_check_sizeof_long’ is negative /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:112: error: size of array ‘ruby_check_sizeof_voidp’ is negative In file included from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/intern.h:29, from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/ruby.h:1327, from /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby.h:32, from ./mysql2_ext.h:4, from client.c:1: /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/include/ruby-1.9.1/ruby/st.h:69: error: size of array ‘st_check_for_sizeof_st_index_t’ is negative make: *** [client.o] Error 1 Gem files will remain installed in /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/mysql2-0.2.6 for inspection. Results logged to /Users/kishinmanglani/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/mysql2-0.2.6/ext/mysql2/gem_make.out

    Read the article

  • Call to undefined function query() - using PDO in PHP

    - by webworm
    I have written a script in PHP that reads data from a MySQL database. I am new to PHP and I have been using the PDO library. I have been developing on a Windows machine using WAMP Server 2 and all has been working well. However, when I uploaded my script to the Linux server where it will be used, I get the following error when I run it. Fatal error: Call to undefined function query() This is the line where the error is occurring ... foreach($dbconn->query($sql) as $row) The variable $dbconn is first defined in my dblogin.php include file, which I am listing below. <?php // Login info for the database $db_hostname = 'localhost'; $db_database = 'MY_DATABASE_NAME'; $db_username = 'MY_DATABASE_USER'; $db_password = 'MY_DATABASE_PASSWORD'; $dsn = 'mysql:host=' . $db_hostname . ';dbname=' . $db_database . ';'; try { $dbconn = new PDO($dsn, $db_username, $db_password); $dbconn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { echo 'Error connecting to database: ' . $e->getMessage(); } ?> Inside the function where the error occurs I have the database connection declared as a global, as such ... global $dbconn; I am a bit confused as to what is happening, since it all worked well on my development machine. I was wondering if the PDO library was installed, but from what I understand it is installed by default as part of PHP 5. The script is running on an Ubuntu (5.0.51a-3ubuntu5.4) machine and the PHP version is 5.2.4. Thank you for any suggestions. I am really lost on this.
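
    The question itself raises the possibility that PDO or its MySQL driver is missing on the production box; a quick sketch to check that on the Linux server:

        <?php
        // Confirm that PDO and the MySQL driver are compiled into this PHP build.
        if (!class_exists('PDO')) {
            die('PDO is not available in this PHP build (' . PHP_VERSION . ')');
        }
        var_dump(PDO::getAvailableDrivers());    // should include "mysql"
        var_dump(extension_loaded('pdo_mysql')); // should be bool(true)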

    Read the article

  • PHP's PDO Prepare Method Fails While in a Loop

    - by Andrew G. Johnson
    Given the following code: // Connect to MySQL up here $example_query = $database->prepare('SELECT * FROM table2'); if ($example_query === false) die('prepare failed'); $query = $database->prepare('SELECT * FROM table1'); $query->execute(); while ($results = $query->fetch()) { $example_query = $database->prepare('SELECT * FROM table2'); if ($example_query === false) die('prepare failed'); //it will die here } I obviously attempt to prepare statements to SELECT everything from table2 twice. The second one (the one in the WHILE loop) always fails and causes an error. This is only on my production server; developing locally I have no issues, so it must be some kind of setting somewhere. My immediate thought is that MySQL has some kind of max_connections setting that is set to 1, and a connection is kept open until the WHILE loop is completed, so when I try to prepare a new query it says "nope too many connected already" and shits out. Any ideas? EDIT: Yes, I know there's no need to do it twice; in my actual code it only gets prepared in the WHILE loop, but like I said that fails on my production server, so after some testing I discovered that a simple SELECT * query fails in the WHILE loop but not out of it. The example code I gave is obviously very watered down just to illustrate the issue.
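
    One frequent cause of prepare() failing only inside a fetch loop is an unbuffered result set still pending on the connection, rather than max_connections. A sketch of two common work-arounds, assuming $database is the PDO handle from the question ($dsn, $user and $pass are placeholders):

        <?php
        // 1) Drain the outer result set before issuing further statements.
        $query = $database->prepare('SELECT * FROM table1');
        $query->execute();
        $rows = $query->fetchAll();          // nothing left unbuffered on the connection
        foreach ($rows as $results) {
            $example_query = $database->prepare('SELECT * FROM table2');
            $example_query->execute();
            $example_query->closeCursor();   // free this result set before the next prepare
        }

        // 2) Or enable buffered queries for the mysql driver at connect time.
        $database = new PDO($dsn, $user, $pass, array(
            PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
        ));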

    Read the article

  • PHP File upload fails with blank screen when size exceeds 742 KB.

    - by user284839
    Hi friends, this is one of the most bizarre errors I've come across. I've written a little file upload web app for my friend and it works fine for any file less than or equal to 742 KB in size. Needless to say, I arrived at this precise number based on relentless testing. The weird part is that if the file size is just a few KB more, for example 743 or 750, I get an error saying "MySQL has gone away". But if it is 1 MB or more, then I just get a blank screen. And it happens less than 2 seconds after I hit the upload button, so it doesn't look like a time-out to me. I checked the php.ini file for post size and upload size; they are all set to 5 MB or more, and the timeout is set to 60 seconds. The uploaded file sits in the MySQL database in a field of datatype mediumblob. I tried changing that to longblob, but that didn't help either. Any help? Thanks for reading, Girish
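
    A cut-off just under 1 MB combined with "MySQL server has gone away" is commonly a symptom of the server's max_allowed_packet setting (which defaulted to 1 MB in older MySQL versions) rather than of PHP's upload limits - an assumption worth checking first. A quick sketch, assuming $pdo is an open PDO connection:

        <?php
        // Show the current packet limit the server will accept per query.
        $row = $pdo->query("SHOW VARIABLES LIKE 'max_allowed_packet'")
                   ->fetch(PDO::FETCH_ASSOC);
        echo $row['Variable_name'] . ' = ' . $row['Value'] . " bytes\n";

        // Raising it needs the right privileges; a permanent change belongs in my.cnf:
        // SET GLOBAL max_allowed_packet = 16777216;  -- 16 MB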

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
    I'm getting the following error when trying to connect to a remote mysql server. Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've installed the odbc 5.1 driver, and can connect to the database using the Data Sources (ODBC) tool in Control Panel. However when I try and run my C# scrip to connect, I get the above error. I've read its something to do with trust levels or something? I don't quite understand what people were talking about though. I went to C:... Framework/v2.0.50727/CONFIG and added to the medium and high trust.config files, but that didn't help.. Can someone help me out here please? My connection string is MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" + "SERVER=" + m_strHost + ";" + "PORT=3306;" + "DATABASE=" + m_strDatabase + ";" + "UID=" + m_strUserName + ";" + "PWD=" + m_strPassword + ";" + "OPTION=3;";

    Read the article

  • Does REPLACE INTO have a WHERE clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS; we are downloading property offers and there were some performance issues. The old architecture looked like this: A property is updated. If the number of affected rows is not 1, then the update is not considered successful; otherwise the update query solves our problem. If the update was not successful, and the number of affected rows is more than 1, we have duplicates and we delete all of them. After deleting duplicates if needed, if the update was still not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use replace into instead of deleting properties, and all deprecated properties should be considered as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find anywhere an example of replace into with a where clause (I'd like to replace a DEAD property with the new property instead of deleting the old property and inserting a new one, to assure optimization). My query looked like this: replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know if I'm wrong and there is a where clause for replace into. I've found an alternative solution, which is even better than replace into (using simply an update query) because deletes happen behind the curtains if I use replace into, but I would like to know if I'm wrong when I say that replace into doesn't have a where clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html Thank you for your answers in advance, Lajos Árpád
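
    REPLACE INTO indeed has no WHERE clause: the row it replaces is chosen purely by the PRIMARY KEY or UNIQUE values in the new row, and it works by deleting the old row and inserting a new one. A commonly used alternative that updates in place is INSERT ... ON DUPLICATE KEY UPDATE; a sketch with hypothetical columns, via PDO ($pdo, $idValue and $price are assumed to exist):

        <?php
        // ID is assumed to be the primary key; price/status are illustrative columns.
        $sql = "INSERT INTO property (ID, price, status)
                VALUES (:id, :price, 'ALIVE')
                ON DUPLICATE KEY UPDATE price = VALUES(price), status = 'ALIVE'";
        $stmt = $pdo->prepare($sql);
        $stmt->execute(array(':id' => $idValue, ':price' => $price));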

    Read the article

  • Call Multiple Stored Procedures with the Zend Framework

    - by Brian Fisher
    I'm using Zend Framework 1.7.2, MySQL and the MySQLi PDO adapter. I would like to call multiple stored procedures during a given action. I've found that on Windows there is a problem calling multiple stored procedures. If you try it you get the following error message: SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. I found that to work around this issue I could just close the connection to the database after each call to a stored procedure: if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') { //If on windows close the connection $db->closeConnection(); } This has worked well for me, however, now I want to call multiple stored procedures wrapped in a transaction. Of course, closing the connection isn't an option in this situation, since it causes a rollback of the open transaction. Any ideas, how to fix this problem and/or work around the issue. More info about the work around Bug report about the problem
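
    A work-around that avoids closing the connection (and therefore keeps the transaction open) is to drain and close each statement's result set before the next CALL: MySQL stored procedures return an extra status result set that has to be consumed. A sketch, assuming $pdo is the underlying PDO object obtained from the Zend_Db adapter and that the procedure names and $someParam are hypothetical:

        <?php
        $pdo->beginTransaction();
        try {
            foreach (array('CALL first_proc(?)', 'CALL second_proc(?)') as $call) {
                $stmt = $pdo->prepare($call);
                $stmt->execute(array($someParam));           // $someParam: your real arguments
                $stmt->fetchAll();                           // rows returned by the procedure
                while ($stmt->nextRowset()) { /* skip extra result sets */ }
                $stmt->closeCursor();                        // connection is free for the next CALL
            }
            $pdo->commit();
        } catch (Exception $e) {
            $pdo->rollBack();
            throw $e;
        }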

    Read the article
