Search Results

Search found 14956 results on 599 pages for 'mysql dba'.


  • problem parsing JSON Strings

    - by blacktooth
    var records = JSON.parse(JsonString);
    for (var x = 0; x < records.result.length; x++) {
        var record = records.result[x];
        ht_text += "<b><p>" + (x + 1) + " "
            + record.EMPID + " "
            + record.LOCNAME + " "
            + record.DEPTNAME + " "
            + record.CUSTNAME
            + "<br/><br/><div class='slide'>"
            + record.REPORT
            + "</div></p></b><br/>";
    }

    The above code works fine when JsonString contains an array of entities, but it fails for a single entity: result is not identified as an array. What's wrong with it? http://pastebin.com/hgyWw5hd


  • Form is trying to save the login value of the submit button to my DB.

    - by Sergio Tapia
    Here's my Zend code:

    <?php
    require_once ('Zend\Form.php');

    class Sergio_Form_registrationform extends Zend_Form
    {
        public function init()
        {
            /*********************USERNAME**********************/
            $username = new Zend_Form_Element_Text('username');
            $alnumValidator = new Zend_Validate_Alnum();
            $username->setRequired(true)
                     ->setLabel('Username:')
                     ->addFilter('StringToLower')
                     ->addValidator('alnum')
                     ->addValidator('regex', false, array('/^[a-z]+/'))
                     ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($username);

            /*********************EMAIL**********************/
            $email = new Zend_Form_Element_Text('email');
            $alnumValidator = new Zend_Validate_Alnum();
            $email->setRequired(true)
                  ->setLabel('EMail:')
                  ->addFilter('StringToLower')
                  ->addValidator('alnum')
                  ->addValidator('regex', false, array('/^[a-z]+/'))
                  ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($email);

            /*********************PASSWORD**********************/
            $password = new Zend_Form_Element_Password('password');
            $alnumValidator = new Zend_Validate_Alnum();
            $password->setRequired(true)
                     ->setLabel('Password:')
                     ->addFilter('StringToLower')
                     ->addValidator('alnum')
                     ->addValidator('regex', false, array('/^[a-z]+/'))
                     ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($password);

            /*********************NAME**********************/
            $name = new Zend_Form_Element_Text('name');
            $alnumValidator = new Zend_Validate_Alnum();
            $name->setRequired(true)
                 ->setLabel('Name:')
                 ->addFilter('StringToLower')
                 ->addValidator('alnum')
                 ->addValidator('regex', false, array('/^[a-z]+/'))
                 ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($name);

            /*********************LASTNAME**********************/
            $lastname = new Zend_Form_Element_Text('lastname');
            $alnumValidator = new Zend_Validate_Alnum();
            $lastname->setRequired(true)
                     ->setLabel('Last Name:')
                     ->addFilter('StringToLower')
                     ->addValidator('alnum')
                     ->addValidator('regex', false, array('/^[a-z]+/'))
                     ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($lastname);

            /*********************DATEOFBIRTH**********************/
            $dateofbirth = new Zend_Form_Element_Text('dateofbirth');
            $alnumValidator = new Zend_Validate_Alnum();
            $dateofbirth->setRequired(true)
                        ->setLabel('Date of Birth:')
                        ->addFilter('StringToLower')
                        ->addValidator('alnum')
                        ->addValidator('regex', false, array('/^[a-z]+/'))
                        ->addValidator('stringLength', false, array(6, 20));
            $this->addElement($dateofbirth);

            /*********************AVATAR**********************/
            $avatar = new Zend_Form_Element_File('avatar');
            $alnumValidator = new Zend_Validate_Alnum();
            $avatar->setRequired(true)
                   ->setLabel('Please select a display picture:');
            $this->addElement($avatar);

            /*********************SUBMIT**********************/
            $this->addElement('submit', 'login', array('label' => 'Login'));
        }
    }
    ?>

    Here's the code I use to save the values:

    public function saveforminformationAction()
    {
        $form = new Sergio_Form_registrationform();
        $request = $this->getRequest();
        //if($request->isPost() && $form->isValid($_POST)){
            $data = $form->getValues();
            $db = $this->_getParam('db');
            $db->insert('user', $data);
        //}
    }

    When trying to save the values, I receive a ghastly error: Column 'login' not found.
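
    A likely cause: Zend_Form::getValues() returns a value for every element on the form, including the submit button named 'login' (and the 'avatar' file element, if there is no matching column), and Zend_Db's insert() then tries to map each array key to a column of the user table. A minimal sketch of one workaround, assuming the rest of the action stays as posted, is to drop the non-column keys before inserting:

    public function saveforminformationAction()
    {
        $form = new Sergio_Form_registrationform();
        $request = $this->getRequest();

        if ($request->isPost() && $form->isValid($_POST)) {
            $data = $form->getValues();

            // The submit element is part of the form, so its value comes back
            // from getValues() too; remove anything that is not a real column.
            unset($data['login']);

            $db = $this->_getParam('db');
            $db->insert('user', $data);
        }
    }

    Whitelisting the expected keys (username, email, password, and so on) before the insert is a slightly more defensive variant of the same idea.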


  • SQL Query Returning Duplicate Results

    - by Jesse Bunch
    Hi, I've been working on this query for a while now and I thought I had it where I wanted it, but apparently not. There are two records in the database (orders). The query should return two different rows, but instead it returns two rows that have exactly the same values. I think it may be something to do with the GROUP BY or the derived tables I'm using, but my eyes are tired and I'm not seeing the problem. Can any of you help? Thanks in advance.

    SELECT
        orders.billerID, orders.invoiceDate, orders.txnID, orders.bName,
        orders.bStreet1, orders.bStreet2, orders.bCity, orders.bState,
        orders.bZip, orders.bCountry, orders.sName, orders.sStreet1,
        orders.sStreet2, orders.sCity, orders.sState, orders.sZip,
        orders.sCountry, orders.paymentType, orders.invoiceNotes, orders.pFee,
        orders.shipping, orders.tax, orders.reasonCode, orders.txnType,
        orders.customerID,
        customers.firstName AS firstName,
        customers.lastName AS lastName,
        customers.businessName AS businessName,
        orderStatus.statusName AS orderStatus,
        IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax AS orderTotal,
        IFNULL(orderItems.itemTotal, 0.00) + orders.shipping + orders.tax - IFNULL(payments.totalPayments, 0.00) AS orderBalance
    FROM orders
    LEFT JOIN customers ON orders.customerID = customers.id
    LEFT JOIN orderStatus ON orders.orderStatus = orderStatus.id
    LEFT JOIN (
        SELECT orderItems.orderID,
               SUM(orderItems.itemPrice * orderItems.itemQuantity) AS itemTotal
        FROM orderItems
        GROUP BY orderItems.orderID
    ) orderItems ON orderItems.orderID = orders.id
    LEFT JOIN (
        SELECT payments.orderID,
               SUM(payments.amount) AS totalPayments
        FROM payments
        GROUP BY payments.orderID
    ) payments ON payments.orderID = orders.id


  • Error querying database.

    - by user296516
    Hi, once I have successfully connected to the database, I have a line that must insert values:

    mysqli_query($edb, "INSERT INTO elvis_table (name,email) VALUES ('$name','$email')" ) or die('Error querying database.');

    It works fine on my computer (XAMPP), but once I upload it to a server it starts giving an error. Yes, I have a database with the corresponding table and fields on the server; it connects to the database fine, but it gives an error on this line... Thanks!
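
    The generic die() message hides the actual failure, so the first step is usually to print the server-side error; on a shared host it is often a missing table, a different database name, or a privilege difference compared with the local XAMPP setup. A small sketch, assuming the connection handle is still $edb:

    $sql = "INSERT INTO elvis_table (name, email) VALUES ('$name', '$email')";

    if (!mysqli_query($edb, $sql)) {
        // mysqli_error() returns the message from the last call on this connection,
        // e.g. "Table 'somedb.elvis_table' doesn't exist" or an access-denied error.
        die('Error querying database: ' . mysqli_error($edb));
    }

    Escaping $name and $email with mysqli_real_escape_string() before building the string is also worth doing; un-escaped quotes in the input would make the INSERT fail only for some submissions, which can look like a server-specific problem.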


  • Selecting a whole database over an individual table to output to file

    - by Daniel Wrigley
    :::::::: EDIT ::::::::

    New code for people to have a look at. One question I have with this is: where do I set where the *.gz file is saved?

    $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
    $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";
    system($command);

    Also, why the hell can you not reply to your own post with an answer? :(

    :::::::: EDIT ::::::::

    OK, I'm having trouble finding out how to select a full database for backup as an *.sql file rather than only an individual table. On the localhost I have several databases, with one named "foo", and it is that which I want to back up, not any of the individual tables inside the database "foo".

    The code to connect to the database:

    //Database Information
    $dbhost = "localhost";
    $dbname = "foo";
    $dbuser = "bar";
    $dbpass = "rulz";

    //Connect to database
    mysql_connect ($dbhost, $dbuser, $dbpass) or die("Could not connect: ".mysql_error());
    mysql_select_db($dbname) or die(mysql_error());

    The code to back up the database:

    // Grab the time to know when this post was submitted
    $time = date('Y-m-d-H-i-s');

    $tableName = 'foo';
    $backupFile = '/sql/backup/'. $time .'.sql';
    $query = "SELECT * INTO OUTFILE '". $backupFile ."' FROM ". $tableName ."";
    $result = mysql_query($query) or die("Database query died: " . mysql_error());

    My brain is hurting near the end of the day, so no doubt I've missed something very obvious. Thanks in advance to anyone helping me out.
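
    On the mysqldump approach: with a relative filename the dump simply lands in the script's current working directory, so giving $backupFile an absolute path is how you control where it is saved. Also note that "-p $dbpass" (with a space) makes mysqldump prompt for a password and treat $dbpass as a database name; the password has to be attached to -p. A hedged sketch, reusing the same $db* variables, with a hypothetical target directory and shell-escaped arguments:

    // Write the dump to an explicit directory instead of the current working dir.
    $backupFile = '/var/backups/' . $dbname . date('Y-m-d-H-i-s') . '.gz'; // hypothetical path

    $command = sprintf(
        'mysqldump --opt -h %s -u %s -p%s %s | gzip > %s',
        escapeshellarg($dbhost),
        escapeshellarg($dbuser),
        escapeshellarg($dbpass),   // note: no space after -p
        escapeshellarg($dbname),
        escapeshellarg($backupFile)
    );
    system($command);

    SELECT ... INTO OUTFILE, by contrast, exports one table (or result set) at a time and writes the file on the database server with the MySQL server's permissions, which is why mysqldump is the more natural tool for a whole-database backup.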


  • In Django, how to create tables from an SQL file when syncdb is run

    - by Sidney
    Hi, how do I make syncdb execute SQL queries (for table creation) defined by me, rather than generating the tables automatically? I'm looking for this because some particular models in my app represent SQL table views of a legacy-database table. So, I've created their SQL views in my Django DB like this:

    CREATE VIEW legacy_series AS SELECT * FROM legacy.series;

    I have a reverse-engineered model that represents the above view/legacy table. But whenever I run syncdb, I have to create all the views first by running SQL scripts, otherwise syncdb simply creates tables for them (if a view is not found). How do I make syncdb run the above-mentioned SQL?


  • How to setup testing LAMP environment to work with outsourcing companies?

    - by Kelvin
    Hello guys, I need to set up a testing LAMP environment in my office to work with outsourcing companies. This is what I think should be done on my side:

    - Set up a testing web server with the same configuration as on production
    - Set up a testing SQL server with "fake data"?
    - Outsourcers should have access only to some part of the original code
    - Outsourcers should use CVS to update their code
    - Once testing is finished, someone releases the update
    - ............

    How would you separate the original code and database from the testing environment, but keep it as close as possible to production? What is the general practice for setting up a testing environment, and how do other companies deal with outsourcers? I will appreciate any thoughts and ideas from your personal experience. Maybe someone can suggest an article on this topic. Thank you a lot!


  • Hibernate constraint ConstraintViolationException. Is there an easy way to ignore duplicate entries?

    - by vincent
    Basically I've got the schema below and I'm inserting records if they don't exist. However, when it comes to inserting a duplicate it throws an error, as I would expect. My question is whether there is an easy way to make Hibernate just ignore inserts that would in effect insert duplicates?

    CREATE TABLE IF NOT EXISTS `method` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT,
      `name` varchar(10) DEFAULT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `name` (`name`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

    SEVERE: Duplicate entry 'GET' for key 'name'
    Exception in thread "pool-11-thread-4" org.hibernate.exception.ConstraintViolationException: could not insert:
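
    For background, at the MySQL level the usual ways to make a duplicate insert a no-op are INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE. Whether either helps here depends on being able to route the insert through such a statement (for example via a native query), so treat this as the SQL-side picture rather than a Hibernate-specific fix:

    -- Silently skips the row when the UNIQUE KEY on `name` would be violated.
    INSERT IGNORE INTO `method` (`name`) VALUES ('GET');

    -- Alternative: turn the duplicate into a harmless update of the same value.
    INSERT INTO `method` (`name`) VALUES ('GET')
    ON DUPLICATE KEY UPDATE `name` = VALUES(`name`);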


  • Syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in PHP

    - by pmms
    mysql_connect("localhost", "root", "");
    mysql_select_db("hitnrunf_db");

    $result = mysql_query("select * from jos_users INTO OUTFILE 'users.csv' FIELDS ESCAPED BY '""' TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' ");

    header("Content-type: text/plain");
    header("Content-Disposition: attachment; filename=your_desired_name.xls");
    header("Content-Transfer-Encoding: binary");
    header("Pragma: no-cache");
    header("Expires: 0");

    print "$header\n$data";

    In the above code we are getting the following error for the query string (the string inside mysql_query):

    Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in C:\wamp\www\samples\mysql_excel\exel_outfile.php on line 8

    It seems the '\n' character in the query string is not being recognised as part of the string, and that is why we are getting the error above.
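
    The parse error is most likely caused by the raw double quotes inside the double-quoted PHP string: PHP ends the string at ESCAPED BY ', and everything after that becomes stray tokens, which is exactly what T_CONSTANT_ENCAPSED_STRING points at. A minimal sketch of the same export with the inner quotes escaped, and with the FIELDS clauses in the TERMINATED BY / ENCLOSED BY order MySQL expects:

    // \" keeps the double quote inside the PHP string; MySQL receives ENCLOSED BY '"'.
    $sql = "SELECT * FROM jos_users
            INTO OUTFILE 'users.csv'
            FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
            LINES TERMINATED BY '\n'";

    $result = mysql_query($sql) or die(mysql_error());

    Note that INTO OUTFILE writes the file on the MySQL server host, so for sending a download straight to the browser the usual pattern is to fetch the rows and echo them out instead.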


  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method.

    | id  | parentId | childId | hops |
    | 270 |        6 |       6 |    0 |
    | 271 |        7 |       7 |    0 |
    | 272 |        8 |       8 |    0 |
    | 273 |        9 |       9 |    0 |
    | 276 |       10 |      10 |    0 |
    | 281 |        9 |      10 |    1 |
    | 282 |        7 |       9 |    1 |
    | 283 |        7 |      10 |    2 |
    | 285 |        7 |       8 |    1 |
    | 286 |        6 |       7 |    1 |
    | 287 |        6 |       9 |    2 |
    | 288 |        6 |      10 |    3 |
    | 289 |        6 |       8 |    2 |
    | 293 |        6 |       9 |    1 |
    | 294 |        6 |      10 |    2 |

    I am trying to create a simple tree of this using PHP, but there does not seem to be enough data to build it. For example, when I look purely at parentId = 6:

    - Part 6
      - Part 7
        - ?
        - ?
      - Part 9
        - ?
        - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell that it should be:

    - Part 6
      - Part 7
        - Part 9
          - Part 10
      - Part 9
        - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The method I used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items:

    Part 6 = Bicycle
    Part 7 = Gears
    Part 8 = Chain
    Part 9 = Bolt
    Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents, and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views for which I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
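
    For what it's worth, one way to pull just the direct edges of the sub-graph under a given root (here part 6), from which the nested view above can be assembled in application code, might be a query like the following. It assumes the mapping table is named closure and relies on the hops = 0 self-rows so that the root's own direct children are included:

    -- All direct (one-hop) parent/child links between parts that sit under part 6.
    -- The inner query returns 6 and every descendant of 6 (self-rows have hops = 0),
    -- so the outer query keeps exactly the edges needed to draw the tree/DAG,
    -- repeating a part under each of its parents.
    SELECT c.parentId, c.childId
    FROM closure AS c
    WHERE c.hops = 1
      AND c.parentId IN (SELECT childId FROM closure WHERE parentId = 6);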


  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

    link_list = select(
        columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
        whereclause=and_(
            not_(link_table.c.id.in_(
                select(
                    columns=[request_table.c.recipient],
                    whereclause=request_table.c.donator==donator.id
                ).as_scalar()
            )),
            link_table.c.id!=donator.id,
        ),
        limit=20,
    ).execute().fetchall()

    and tried to merge those two selects into one query:

    link_list = select(
        columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
        whereclause=and_(
            link_table.c.active==True,
            link_table.c.id!=donator.id,
            request_table.c.donator==donator.id,
            link_table.c.id!=request_table.c.recipient,
        ),
        limit=20,
        order_by=[link_table.c.rating.desc()]
    ).execute().fetchall()

    The database schema looks like:

    link_table = Table('links', metadata,
        Column('id', Integer, primary_key=True, autoincrement=True),
        Column('url', Unicode(250), index=True, unique=True),
        Column('registration_date', DateTime),
        Column('donations_in', Integer),
        Column('active', Boolean),
    )

    request_table = Table('requests', metadata,
        Column('id', Integer, primary_key=True, autoincrement=True),
        Column('recipient', Integer, ForeignKey('links.id')),
        Column('donator', Integer, ForeignKey('links.id')),
        Column('date', DateTime),
    )

    There are several links (donators) in request_table pointing to one link in the link_table. I want to get links from link_table which have not yet been "requested". But this does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!


  • PHP pagination class

    - by Lizard
    I am looking for a PHP pagination class. I have used a rather simple one in the past, but it is no longer supported. I was wondering if anyone had any recommendations? It seems pointless to build my own when there are probably so many good ones out there. Thanks in advance!


  • Query Only Specified Number Of Items From Parent/Child Categories

    - by RogeR
    I'm having trouble figuring out how to query every item in a certain category and only list the newest 10 items by date. Here is my table layout:

    download_categories
        category_id (int) primary key
        title (var_char)
        parent_id (int)

    downloads
        id (int) primary key
        title (var_char)
        category_id (int)
        date (date)

    I need to query every file in a main category that, let's say, has 100 items and 5 child categories, and only spit out the last 10 added. I have functions right now that just add up all the files so I can get a count by category, but I can't seem to modify the code to only display a certain number of items based on the date.
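
    A minimal sketch of one way to do this for a single level of nesting, assuming a hypothetical main category id of 5 and the schema above: collect the main category plus its direct children in a subquery, then sort the matching downloads by date and cut the list to ten.

    SELECT d.id, d.title, d.date
    FROM downloads AS d
    WHERE d.category_id IN (
        SELECT category_id
        FROM download_categories
        WHERE category_id = 5      -- the main category itself (hypothetical id)
           OR parent_id = 5        -- its direct child categories
    )
    ORDER BY d.date DESC
    LIMIT 10;

    With only a parent_id column, grandchild categories would need either another level of parent_id checks or a recursive walk over the categories in PHP before building the IN list.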


  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
    $query = $this->db->query("
        SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit,
               user.user_id, username, password, email, balance,
               user.date_added, activation_code, activated
        FROM user
        LEFT JOIN (
            SELECT user_id, SUM(amount_won) AS profit, COUNT(tip_id) AS numberofbets
            FROM tip
            GROUP BY user_id
        ) AS t1 ON user.user_id = t1.user_id
        LEFT JOIN (
            SELECT user_id, SUM(amount_won) AS seven_profit
            FROM tip
            WHERE date_settled > '$seven_daystime'
            GROUP BY user_id
        ) AS t2 ON user.user_id = t2.user_id
        LEFT JOIN (
            SELECT user_id, SUM(amount_won) AS 28profit
            FROM tip
            WHERE date_settled > '$twoeight_daystime'
            GROUP BY user_id
        ) AS t3 ON user.user_id = t3.user_id
        WHERE activated = 1
        GROUP BY user.user_id
        ORDER BY user.date_added DESC");

    return $query->result_array();

    The query works fine when I run it in phpMyAdmin and returns complete results (shown in the attached image). However, when I print the array in CodeIgniter, it has no value for one field, seven_profit, even though the value is there when the SQL query is run in phpMyAdmin; the discrepancy is just in this one field, from the SQL to the PHP array. I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. Any ideas? I changed the field name so it no longer starts with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or hearing from people who have come across something similar, would be great. Thanks, Sam


  • Storing unique values into an array and comparing against a loop - PHP

    - by Aphex22
    I'm writing a PHP report which is designed to be exported purely as a CSV file, using commma delimiters. There are three columns relating to product_id, these three columns are as follows: SKU Parent / Child Parent SKU 12345 parent 12345 12345_1 child 12345 12345_2 child 12345 12345_3 child 12345 12345_4 child 12345 18099 parent 18099 18099_1 child 18099 Here's a link to the full CSV file: http://i.imgur.com/XELufRd.png At the moment the code looks like this: $sql = "select * from product WHERE on_amazon = 'on' AND active = 'on'"; $result = mysql_query($sql) or die ( mysql_error() );?> <? // set headers echo " Type, SKU, Parent / Child, Parent SKU, Product name, Manufacturer name, Gender, Product_description, Product price, Discount price, Quantity, Category, Photo 1, Photo 2, Photo 3, Photo 4, Photo 5, Photo 6, Photo 7, Photo 8, Color id, Color name, Size name <br> "; // load all stock while ($line = mysql_fetch_assoc($result) ) { ?> <?php // Loop through each possible size variation to see whether any of the quantity column has stock > 0 $con_size = array (35,355,36,37,375,38,385,39,395,40,405,41,415,42,425,43,435,44,445,45,455,46,465,47,475,48,485); $arrlength=count($con_size); for($x=0;$x<$arrlength;$x++) { // check if size is available if($line['quantity_c_size_'.$con_size[$x].'_chain'] > 0 ) { ?> <? echo 'Shoes'; ?>, <?=$line['product_id']?>, , , <?=$line['title']?>, <? $brand = $line['jys_brand']; echo ucfirst($brand); ?>, <? $gender = $line['category']; if ($gender == 'Mens') { echo 'H'; } else{ echo 'F'; } ?>, <?=preg_replace('/[^\da-z]/i', ' ', $line['amazon_desc']) ?>, <?=$line['price']?>, <?=$line['price']?>, <?=$line['quantity_c_size_'.$con_size[$x].'_chain']?>, <? $category = $line['style1']; switch ($category) { case "ankle-boots": echo "10013"; break; case "knee-high-boots": echo "10011"; break; case "high-heel-boots": echo "10033"; break; case "low-heel-boots": echo "10014"; break; case "wedge-boots": echo "10014"; break; case "western-boots": echo "10032"; break; case "flat-shoes": echo "10034"; break; case "high-heel-shoes": echo "10039"; break; case "low-heel-shoes": echo "10039"; break; case "wedge-shoes": echo "10035"; break; case "ballerina-shoes": echo "10008"; break; case "boat-shoes": echo "10018"; break; case "loafer-shoes": echo "10037"; break; case "work-shoes": echo "10039"; break; case "flat-sandals": echo "10041"; break; case "low-heel-sandals": echo "10042"; break; case "high-heel-sandals": echo "10042"; break; case "wedge-sandals": echo "10042"; break; case "mule-sandals": echo "10038"; break; case "mary-jane-shoes": echo "10039"; break; case "sports-shoes": echo "10026"; break; case "court-shoes": echo "10035"; break; case "peep-toe-shoes": echo "10035"; break; case "flat-boots": echo "10609"; break; case "mid-calf-boots": echo "10014"; break; case "trainer-shoes": echo "10009"; break; case "wellington-boots": echo "10012"; break; case "lace-up-boots": echo "10609"; break; case "chelsea-and-jodphur-boots": echo "10609"; break; case "desert-and-chukka-boots": echo "10032"; break; case "lace-up-shoes": echo "10034"; break; case "slip-on-shoes": echo "10043"; break; case "gibson-and-derby-shoes": echo "10039"; break; case "oxford-shoes": echo "10039"; break; case "brogue-shoes": echo "10039"; break; case "winter-boots": echo "10021"; break; case "slipper-shoes": echo "10016"; break; case "mid-heel-shoes": echo "10039"; break; case "sandals-and-beach-shoes": echo "10044"; break; case "mid-heel-sandals": echo "10042"; break; case "mid-heel-boots": echo 
"10014"; break; default: echo ""; } ?>, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_1.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_2.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_3.jpg, http://www.getashoe.co.uk/full/<?=$line['product_id']?>_4.jpg, , , , , <? $colour = preg_replace('/[^\da-z]/i', ' ', $line['colour']); if( preg_match( '/white.*/i', $colour)) { echo '1'; } elseif( preg_match( '/yellow.*/i', $colour)) { echo '4'; } elseif( preg_match( '/orange.*/i', $colour)) { echo '7'; } elseif( preg_match( '/red.*/i', $colour)) { echo '8'; } elseif( preg_match( '/pink.*/i', $colour)) { echo '13'; } elseif( preg_match( '/purple.*/i', $colour)) { echo '15'; } elseif( preg_match( '/blue.*/i', $colour)) { echo '19'; } elseif( preg_match( '/green.*/i', $colour)) { echo '25'; } elseif( preg_match( '/brown.*/i', $colour)) { echo '28'; } elseif( preg_match( '/grey.*/i', $colour)) { echo '35'; } elseif( preg_match( '/black.*/i', $colour)) { echo '38'; } elseif( preg_match( '/gold.*/i', $colour)) { echo '41'; } elseif( preg_match( '/silver.*/i', $colour)) { echo '46'; } elseif( preg_match( '/multi.*/i', $colour)) { echo '594'; } elseif( preg_match( '/beige.*/i', $colour)) { echo '6887'; } elseif( preg_match( '/nude.*/i', $colour)) { echo '6887'; } else { echo '534'; } ?>, <?=$line['colour']?>, <?=$con_size[$x]?> <br> <? // finish checking if size is available } } ?> So at the moment this is simply echoing out the product_ID into the SKU column. The code would need to enter the product_id into an array and check whether it is unique. If the product_id is unique to the array, then the product_id is echoed out unaltered, and parent is echoed out to the 'Parent/Child' column and then the product_id is repeated to the 'Parent SKU' column. However, if the array is checked and the product_id already exists in the array, then the product_id is echoed out to the 'SKU' column with a suffix i.e. _1. Then child is echoed to the 'Parent / Child' column and the original parent product_id echoed to the 'Parent SKU' column. HOWEVER - the same SKU cannot be repeated with the same suffix i.e. 12345_1, 12345_1 - so presumably there would be to be another array for the suffixed SKUs to be checked against. If anybody could help, it would be great. Thanks --- UPDATE ANSWER --- I managed to solved this myself and thought I would share my solution for future reference. /* * Array to collect product_ids and check whether unique. * If unique product_id becomes parent SKU * If not product_id becomes child of previous parent and suffixed with _1, _2 etc... */ if (!in_array($line['product_id'], $SKU)) { $SKU[] = $line['product_id']; $parent = $line['product_id']; $a = 0; ?> <? echo 'Shoes'; ?>, <? echo $parent; ?>, <? echo "Parent"; ?>, <? echo $parent; ?>, <? } else { $child = $line['product_id'] . "_" . $a; ?> <? echo 'Shoes'; ?>, <? echo $child; ?>, <? echo "Child"; ?>, <? echo $child; <? // increment suffix value for child SKU $a++; ?>


  • how to create a dynamic sql statement w/ python and mysqldb

    - by Elias Bachaalany
    I have the following code:

    def sql_exec(self, sql_stmt, args = tuple()):
        """
        Executes an SQL statement and returns a cursor.
        An SQL exception might be raised on error.

        @return: SQL cursor object
        """
        cursor = self.conn.cursor()
        if self.__debug_sql:
            try:
                print "sql_exec: %s" % (sql_stmt % args)
            except:
                print "sql_exec: %s" % sql_stmt
        cursor.execute(sql_stmt, args)
        return cursor

    def test(self, limit = 0):
        result = sql_exec("""
            SELECT *
            FROM table
            """ + ("LIMIT %s" if limit else ""), (limit, ))
        while True:
            row = result.fetchone()
            if not row:
                break
            print row
        result.close()

    How can I nicely write test() so it works with or without 'limit', without having to write two queries?


  • Dynamically removing records when certain columns = 0; data cleansing

    - by cdburgess
    I have a simple card table:

    CREATE TABLE `users_individual_cards` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `user_id` char(36) NOT NULL,
      `individual_card_id` int(11) NOT NULL,
      `own` int(10) unsigned NOT NULL,
      `want` int(10) unsigned NOT NULL,
      `trade` int(10) unsigned NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `user_id` (`user_id`,`individual_card_id`),
      KEY `user_id_2` (`user_id`),
      KEY `individual_card_id` (`individual_card_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

    I have AJAX to add and remove the records based on OWN, WANT, and TRADE. However, if the user removes all of their OWN, WANT, and TRADE cards, the columns go to zero but the record is left in the database. I would prefer to have the record removed. Is checking after each "update" to see if all the columns = 0 the only way to do this? Or can I set up a conditional trigger with something like:

    // pseudo SQL
    AFTER UPDATE
    IF (OWN = 0, WANT = 0, TRADE = 0) DELETE

    What is the best way to do this? Can you help with the syntax?
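
    One wrinkle worth noting: a MySQL trigger is not allowed to modify the table that fired it, so an AFTER UPDATE trigger on users_individual_cards cannot delete the row it was fired for. A minimal sketch of the follow-up-query approach instead, run right after the AJAX update (the placeholder values are assumptions):

    -- Remove the row once nothing is owned, wanted, or offered for trade.
    DELETE FROM users_individual_cards
    WHERE user_id = 'some-user-uuid'           -- hypothetical values from the AJAX call
      AND individual_card_id = 123
      AND own = 0 AND want = 0 AND trade = 0;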


  • (My)SQL performance: updating one field vs many unnecessary fields

    - by changokun
    I'm processing a form that has a lot of fields for a user who is editing an existing record. The user may have changed only one field, and I would typically do an update query that sets the values of all the fields, even though most of them don't change. I could do some sort of tracking to see which fields have actually changed, and only update the few that did. Is there a performance difference between updating all fields in a record vs. only the one that changed? Are there other reasons to go with either method? The shotgun method is pretty easy...
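
    For reference, a rough sketch of the tracking approach in PHP, assuming $current holds the row as it exists in the database and $submitted holds the posted values, both keyed by column name, with the column names coming from a trusted whitelist rather than straight from user input (the table name and key below are hypothetical):

    // Build a SET clause containing only the columns whose values actually changed.
    $changes = array();
    foreach ($submitted as $column => $value) {
        if (!array_key_exists($column, $current) || $current[$column] !== $value) {
            $changes[] = sprintf("`%s` = '%s'", $column, mysql_real_escape_string($value));
        }
    }

    if (!empty($changes)) {
        $sql = "UPDATE records SET " . implode(', ', $changes)
             . " WHERE id = " . (int) $recordId;
        mysql_query($sql) or die(mysql_error());
    }
    // If nothing changed, no UPDATE is sent at all, which is itself a small win.

    As for the raw performance question: MySQL already notices when a column is set to the value it currently has and skips writing it, so for plain columns the difference is usually small; the tracking approach mainly pays off by skipping the statement and its round trip entirely when nothing changed.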


  • Is there a way to optimize this update query?

    - by SchlaWiener
    I have a master table called "parent" and a related table called "childs". Now I run a query against the master table to update some values with the sums from the child table, like this:

    UPDATE master m
    SET quantity1 = (SELECT SUM(quantity1) FROM childs c WHERE c.master_id = m.id),
        quantity2 = (SELECT SUM(quantity2) FROM childs c WHERE c.master_id = m.id),
        count     = (SELECT COUNT(*)       FROM childs c WHERE c.master_id = m.id)
    WHERE master_id = 666;

    This works as expected, but it is not good style because I basically run multiple SELECT queries on the same result. Is there a way to optimize that? (Making a query first and storing the values is not an option.) I tried this:

    UPDATE master m
    SET (quantity1, quantity2, count) = (
        SELECT SUM(quantity1), SUM(quantity2), COUNT(*)
        FROM childs c
        WHERE c.master_id = m.id
    )
    WHERE master_id = 666;

    but that doesn't work.
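
    MySQL does not support the row-value SET syntax in the second attempt, but a multi-table UPDATE joined to a derived table can compute all three aggregates in a single pass over childs. A sketch using the column names from the question (and assuming m.id is the key the childs rows point at, as in the subqueries above):

    UPDATE master AS m
    JOIN (
        SELECT master_id,
               SUM(quantity1) AS sum_q1,
               SUM(quantity2) AS sum_q2,
               COUNT(*)       AS cnt
        FROM childs
        GROUP BY master_id
    ) AS totals ON totals.master_id = m.id
    SET m.quantity1 = totals.sum_q1,
        m.quantity2 = totals.sum_q2,
        m.`count`   = totals.cnt
    WHERE m.id = 666;

    With the filter only in the outer WHERE clause the derived table is still built for all children, so if that matters, an extra WHERE master_id = 666 inside the subquery keeps the aggregate work limited to the one parent being refreshed.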


  • Cloud Database Service Latency/Performance

    - by Gcoop
    Hi all, I am running a heavy-traffic site and our server is beginning to reach its limits; at the moment the entire LAMP stack is on one box (not ideal). I would like to move the database onto its own box or onto a cloud service, but in my previous experience, moving the database off the same box as the webserver increases the latency of reads quite dramatically and slows down the site. Is using a cloud service for this going to overcome that problem? As far as I can tell it is essentially the same situation as moving it onto a separate box in my control. In which case, why is there so much popularity around cloud-based database services at the moment? Are cloud-based database services so quick that the latency of reads is so low that it is almost like having the database on the same box in the same datacentre?

