Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.


  • how to store data with many categories and many properties efficiently?

    - by Mickey Shine
    We have a large amount of data in many categories, each with many properties, e.g. category 1: Book, with properties BookID, BookName, BookType, BookAuthor, BookPrice; category 2: Fruit, with properties FruitID, FruitName, FruitShape, FruitColor, FruitPrice. We have many categories like book and fruit. Obviously we could create one table per category (in MySQL, say), but that means a lot of tables, and we would have to write many "adapters" to unify how we manipulate the data. The difficulties are: 1) Every category has different properties, which results in a different data structure. 2) The properties of every category may have to be changed at any time. 3) The data is hard to manipulate if each category has its own table (too many tables). How do you store this kind of data?
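
    A common alternative to one-table-per-category is an entity-attribute-value (EAV) layout: one table for the items themselves and one table of name/value rows for their properties. A minimal sketch, assuming MySQL and the MySQLdb driver used elsewhere on this page; the table and column names are hypothetical:

```python
import MySQLdb

# Hypothetical EAV schema: one row per item, plus one row per (item, property) pair.
DDL = [
    """CREATE TABLE IF NOT EXISTS entity (
           id INT AUTO_INCREMENT PRIMARY KEY,
           category VARCHAR(50) NOT NULL,
           name VARCHAR(255) NOT NULL
       )""",
    """CREATE TABLE IF NOT EXISTS entity_attribute (
           entity_id INT NOT NULL,
           attr_name VARCHAR(50) NOT NULL,
           attr_value VARCHAR(255),
           PRIMARY KEY (entity_id, attr_name)
       )""",
]

conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")
cur = conn.cursor()
for stmt in DDL:
    cur.execute(stmt)

# Storing a book: the shared columns go into `entity`, and every category-specific
# property becomes a name/value row in `entity_attribute`.
cur.execute("INSERT INTO entity (category, name) VALUES (%s, %s)", ("Book", "Dune"))
book_id = cur.lastrowid
for attr, value in [("BookType", "Novel"), ("BookAuthor", "Frank Herbert"), ("BookPrice", "9.99")]:
    cur.execute(
        "INSERT INTO entity_attribute (entity_id, attr_name, attr_value) VALUES (%s, %s, %s)",
        (book_id, attr, value),
    )
conn.commit()
```

    The trade-off is that typed queries (e.g. numeric range filters on a price) become harder, so EAV is usually reserved for the genuinely variable properties while common fields stay as real columns.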

    Read the article

  • filter queryset based on list, including None

    - by jujule
    Hi all, I don't know if it's a Django bug or a feature, but I get strange ORM behaviour with MySQL. class Status(models.Model): name = models.CharField(max_length=50) class Article(models.Model): status = models.ForeignKey(Status, blank=True, null=True) filters = Q(status__in=[0, 1, 2]) | Q(status=None) items = Article.objects.filter(filters) This returns Article items, but some have a status other than the requested [0, 1, 2, None]. Looking at the SQL query: SELECT [..] FROM `app_article` LEFT OUTER JOIN `app_status` ON (`app_article`.`status_id` = `app_status`.`id`) WHERE (`app_article`.`status_id` IN (1, 2) OR `app_status`.`id` IS NULL) ORDER BY [...] The OR app_status.id IS NULL part seems to be the cause; if I change it to OR app_article.status_id IS NULL, it works correctly. How do I deal with this? Thanks.
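
    One workaround that is often suggested for this is to filter on the foreign-key column itself (status_id) rather than on the related model, so the generated NULL test lands on app_article.status_id instead of the joined app_status.id. A sketch, assuming the models shown above:

```python
from django.db.models import Q

# Filter on the raw FK column (status_id) so the generated SQL tests
# app_article.status_id IS NULL rather than app_status.id IS NULL,
# avoiding the LEFT OUTER JOIN comparison that lets extra rows through.
filters = Q(status_id__in=[0, 1, 2]) | Q(status_id__isnull=True)
items = Article.objects.filter(filters)  # Article is the model from the question
```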

    Read the article

  • laravel multiple where clauses within a loop

    - by user1424508
    Pretty much, I want the query to select all users that are 25 years old AND are either 150-170cm OR 190-200cm tall. I have the query written down below. The problem is that it keeps returning 25-year-olds OR people who are 190-200cm, instead of 25-year-olds that are 150-170cm OR 25-year-olds that are 190-200cm tall. How can I fix this? Thanks. $heightarray = array(array(150,170), array(190,200)); $user->where('age', 25); for($i=0; $i<count($heightarray); $i++){ if($i==0){ $user->whereBetween('height', $heightarray[$i]); }else{ $user->orWhereBetween('height', $heightarray[$i]); } } $user->get(); Edit: I tried advanced wheres (http://laravel.com/docs/queries#advanced-wheres) and it doesn't work for me, as I cannot pass the $heightarray parameter into the closure. From the Laravel documentation: DB::table('users') ->where('name', '=', 'John') ->orWhere(function($query) { $query->where('votes', '>', 100) ->where('title', '<>', 'Admin'); }) ->get();

    Read the article

  • How do I split the output from mysqldump into smaller files?

    - by lindelof
    I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB. Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server. Or is there another solution I have missed? Edit: The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of SQL correspond roughly to 40MB.
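
    If split(1) is awkward to use, the same line-based chunking is easy to do in a few lines of Python. This assumes the dump was made with --extended-insert=FALSE so each INSERT sits on its own line; the chunk size and file names below are arbitrary:

```python
# Split a mysqldump output file into pieces of at most CHUNK_LINES lines each.
# Pick a chunk size large enough that cutting a multi-line CREATE TABLE in half
# is unlikely, or inspect the boundaries afterwards.
CHUNK_LINES = 50000

with open("dump.sql") as src:
    part, lines = 0, []
    for line in src:
        lines.append(line)
        if len(lines) >= CHUNK_LINES:
            with open("dump.part%03d.sql" % part, "w") as out:
                out.writelines(lines)
            part, lines = part + 1, []
    if lines:
        with open("dump.part%03d.sql" % part, "w") as out:
            out.writelines(lines)
```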

    Read the article

  • Look up values in a BDB for several files in parallel

    - by biznez
    What is the most efficient way to look up values in a BDB for several files in parallel? If I had a Perl script which did this for one file at a time, would forking/running the process in background with the ampersand in Linux work? How might Hadoop be used to solve this problem? Would threading be another solution?
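
    If the lookups are independent per file, a worker pool is one straightforward way to run them in parallel without hand-managing background jobs. A sketch in Python; lookup_in_bdb is a placeholder for whatever BDB access layer you use (e.g. the bsddb/bsddb3 bindings):

```python
from multiprocessing import Pool

def lookup_in_bdb(path):
    # Placeholder: open the BDB file at `path` and perform the lookups you need.
    # Each worker process opens its own handle, so there is no shared state to lock.
    results = {}
    return path, results

if __name__ == "__main__":
    files = ["a.bdb", "b.bdb", "c.bdb"]
    pool = Pool(processes=4)
    for path, results in pool.map(lookup_in_bdb, files):
        print(path, len(results))
    pool.close()
    pool.join()
```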

    Read the article

  • Ruby on Rails: Best way to save search queries in a database

    - by Adam Templeton
    For a RoR app I'm helping develop, I need to save all search queries in a database so I can analyze them later. My plan right now is to create a Result model and table, and just save each search query's text in that table, along with the user's ID, the time, etc. However, the app has about 15,000 users, so I'm afraid the single-table approach won't be very efficient when it comes time to parse that data. (The database is set up with MySQL, if that factors in at all.) Am I just being paranoid? Is there a Ruby gem that handles this sort of thing, or a better approach I could take? Any input would be appreciated.

    Read the article

  • Check if a row exists; if so, increment by 1; if not, insert

    - by Scarface
    Hey guys, quick question. I currently have an insert statement $query = "INSERT INTO new_mail VALUES ('$to1', '0')"; where the fields are username and message_number. Currently, to check if the entry exists, I do a select query and then check the number of rows with mysql_num_rows (PHP). If rows == 1, I take the current message_number, set it to $row['message_number'] + 1, and then update that entry with another query. Is there an easier way to do all this in MySQL with just one query (check if it exists; if not, insert; if so, increase message_number by 1)?
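
    MySQL can do the whole check-insert-or-increment in one statement with INSERT ... ON DUPLICATE KEY UPDATE, as long as username has a UNIQUE or PRIMARY KEY index. A sketch, shown through Python's MySQLdb because the interesting part is the SQL itself; the same statement works from PHP:

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")
cur = conn.cursor()

# Requires a UNIQUE or PRIMARY KEY index on username.
# Missing row: inserted as (username, 0). Existing row: message_number goes up by 1.
cur.execute(
    """INSERT INTO new_mail (username, message_number)
       VALUES (%s, 0)
       ON DUPLICATE KEY UPDATE message_number = message_number + 1""",
    ("some_username",),  # the value that was in $to1
)
conn.commit()
```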

    Read the article

  • Need the mysqldump export (without triggers) as a string rather than creating a file

    - by OM The Eternity
    I have code like this: $db_name = "db"; $output_file = "/somewhere"; $new_db_name = 'newdb'; $cmd = 'mysqldump --skip-triggers %s > %s 2>&1'; $cmd = sprintf($cmd, escapeshellarg($db_name), escapeshellcmd($output_file)); exec($cmd, $output, $ret); if ($ret != 0) { //log error message in $output } Then to import: $cmd = 'mysql --database=%s < %s 2>&1'; $cmd = sprintf($cmd, escapeshellarg($new_db_name), escapeshellcmd($output_file)); exec($cmd, $output, $ret); //etc. unlink($output_file); Now, what should I do to get the export as a query string, rather than creating a file every time?

    Read the article

  • Embedded analog of CouchDB, same as sqlite for SQL Server

    - by Mike Chaliy
    I like the idea of document-oriented databases like CouchDB. I am looking for a simple analog. My requirements are just: persistent storage for schema-less data; some simple in-proc querying; transactions and versioning are good to have; a Ruby API; map/reduce is also good to have; it should work on shared hosting. What I do not need is a REST/HTTP interface (I will use it in-proc). Also, I do not need all the scalability stuff.

    Read the article

  • How to interact with the MySQL prompt this way in PHP?

    - by user198729
        C:\>mysql -uroot -padmin
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 6
        Server version: 5.0.37-community-nt-log MySQL Community Edition (GPL)
        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
        mysql> show databases
            -> ;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | attendance         |
        | fusionchartsdb     |
        | mysql              |
        | sugarcrm           |
        | test               |
        +--------------------+
        6 rows in set (0.09 sec)
        mysql> use mysql;
        Database changed
        mysql>

    So I want to start a process with mysql -uroot -padmin, and in that process I want to run these statements: show databases; use mysql; insert xxx (..) values (..). How do I do that in PHP?

    Read the article

  • Writing a simple incrementer counter in rails

    - by Trip
    For every Card, I would like to attach a special number that increments by one. I assume I can do this all in the controller: def create @card = Card.new(params[:card]) @card.SpecNum = @card.SpecNum + 1 ... end Or I could be completely off base, and maybe the better bet is to add an auto-increment column in MySQL. The problem is that the number has to start at a specific value, 1020. Any ideas?

    Read the article

  • How to connect these together?

    - by Biertago
    I've got a MySQL database created by phpMyAdmin and I want to use it in my Qt project. I tried it in Visual Studio 2010 with a Qt add-on, but it didn't work either. In Qt Creator, I add QT += sql in the .pro file and include #include <QSqlDatabase> in the main file, but there's a driver error. I don't even know where to start, and every Google result shows something different. I tried to look for a guide, but there is nothing that covers everything.

    Read the article

  • How to store coordinates in a database

    - by Tim
    Hello all! I have a Flex GUI where I have to place square elements. The positions of these elements need to be stored in a database, so I can create two integer fields, x and y, in the db table. I also need an angle, because the user can rotate these elements, so I can make that an int as well (an int is okay; I do not need a double value there). As the ORM, I use Hibernate. But the question is whether creating three integer fields is the best way to handle this. Perhaps someone can tell me if this will be okay or if there are better ways? Thanks a lot in advance & best regards.

    Read the article

  • Updating counters through Hibernate

    - by at
    This is an extremely common situation, so I'm expecting a good solution. Basically we need to update counters in our tables. As an example, a web page visit: Web_Page -------- Id Url Visit_Count So in Hibernate, we might have this code: webPage.setVisitCount(webPage.getVisitCount()+1); The problem is that this read-modify-write is not atomic: concurrent requests can read the same count and overwrite each other, so a highly trafficked web page will have inaccurate counts. The way I'm used to doing this type of thing is simply to call: update Web_Page set Visit_Count=Visit_Count+1 where Id=12345; I guess my question is, how do I do that in Hibernate? And secondly, how can I do an update like this one in Hibernate, which is a bit more complex? update Web_Page wp set wp.Visit_Count=(select stats.Visits from Statistics stats where stats.Web_Page_Id=wp.Id) + 1 where Id=12345;

    Read the article

  • AppEngine: Can I write a Dynamic property (db.Expando) with a name chosen at runtime?

    - by MarcoB
    If I have an entity derived from db.Expando, I can write a Dynamic property just by assigning a value to a new property, e.g. "y" in this example: class MyEntity(db.Expando): x = db.IntegerProperty() my_entity = MyEntity(x=1) my_entity.y = 2 But suppose I have the name of the dynamic property in a variable... how can I (1) read and write to it, and (2) check if the Dynamic property exists on the entity's instance? e.g. class MyEntity(db.Expando): x = db.IntegerProperty() my_entity = MyEntity(x=1) # choose a var name: var_name = "z" # assign a value to the Dynamic variable whose name is in var_name: my_entity.property_by_name[var_name] = 2 # also, check if such a property exists if my_entity.property_exists(var_name): # read the value of the Dynamic property whose name is in var_name print my_entity.property_by_name[var_name] Thanks...
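
    Because Expando properties end up as ordinary Python attributes, the built-ins getattr, setattr and hasattr cover reading, writing and existence checks for a name held in a variable. A sketch, assuming the MyEntity instance from the question:

```python
var_name = "z"

# Write a dynamic property whose name is only known at runtime.
setattr(my_entity, var_name, 2)

# Check whether this instance has the dynamic property, then read it.
if hasattr(my_entity, var_name):
    print(getattr(my_entity, var_name))

# db.Expando also lists its dynamic property names explicitly.
print(my_entity.dynamic_properties())
```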

    Read the article

  • Simple PHP query question: LIKE

    - by pg
    When I replace $ordering = "apples, bananas, cranberries, grapes"; with $ordering = "apples, bananas, grapes"; I no longer want cranberries to be returned by my query, which I've written like this: $query = "SELECT * FROM dbname WHERE FruitName LIKE '$ordering'"; Of course this doesn't work, because I'm using LIKE wrong. I've read through various manuals that describe how to use LIKE, and it doesn't quite make sense to me. If I change the end of the query to LIKE 'apples', that works for limiting it to just apples. Do I have to explode the ordering on ", ", or is there a way to do this in the query?
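
    For matching against a list of exact names, IN fits better than LIKE; you do explode the string once, but only to build placeholders rather than several queries. A sketch with Python's MySQLdb, since the interesting part is the WHERE ... IN clause (the table name here is hypothetical); the same clause works from PHP:

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")
cur = conn.cursor()

ordering = "apples, bananas, grapes"
fruits = [f.strip() for f in ordering.split(",")]

# One %s placeholder per fruit: WHERE FruitName IN (%s, %s, %s)
placeholders = ", ".join(["%s"] * len(fruits))
cur.execute(
    "SELECT * FROM fruit_table WHERE FruitName IN (%s)" % placeholders,
    fruits,
)
rows = cur.fetchall()
```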

    Read the article

  • limiting the rate of emails using python

    - by Ali
    I have a Python script which reads email addresses from a database for a particular date (today, for example) and sends out an email message to them one by one. It reads the data from MySQL using the MySQLdb module, stores all the results, and sends out emails using: rows = cursor.fetchall() # all email addresses that are supposed to go out on today's date for row in rows: # send email However, my hosting service only lets me send out 500 emails per hour. How can I make my script send only 500 emails in an hour, then check the database for whether more emails are left for today, and send those in the next hour? The script is activated using a cron job.
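
    One simple approach is to send in batches of at most 500 and sleep until the next hour before continuing (another is to have each hourly cron run fetch only 500 rows and stop). A sketch of the batching version, reusing the cursor from the existing script; send_email stands in for whatever sending code is already there:

```python
import time

BATCH_SIZE = 500  # hosting limit: 500 mails per hour

rows = cursor.fetchall()  # all addresses due today, as in the existing script
for start in range(0, len(rows), BATCH_SIZE):
    for row in rows[start:start + BATCH_SIZE]:
        send_email(row)  # placeholder for the existing sending code
    if start + BATCH_SIZE < len(rows):
        time.sleep(3600)  # wait an hour before the next batch of 500
```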

    Read the article

  • WordPress SQL_CALC fix causes PHP error

    - by ok1ha
    I'm looking for some follow-up on an older topic for WordPress, where SQL_CALC_FOUND_ROWS was found to slow things down when you have a large DB in WordPress. I have been using the code at the bottom of this post to get around it, but it generates an error in my error log. How can I prevent this error? PHP Warning: Division by zero in /var/www/vhosts/domain.com/httpdocs/wp-content/themes/greatTheme/functions.php on line 19 The original thread: http://wordpress.org/support/topic/slow-queries-sql_calc_found_rows-bringing-down-site?replies=25 The code in my functions.php: add_filter('pre_get_posts', 'optimized_get_posts', 100); function optimized_get_posts() { global $wp_query, $wpdb; $wp_query->query_vars['no_found_rows'] = 1; $wp_query->found_posts = $wpdb->get_var( "SELECT COUNT(*) FROM wp_posts WHERE 1=1 AND wp_posts.post_type = 'post' AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')" ); $wp_query->found_posts = apply_filters_ref_array( 'found_posts', array( $wp_query->found_posts, &$wp_query ) ); $wp_query->max_num_pages = ceil($wp_query->found_posts / $wp_query->query_vars['posts_per_page']); return $wp_query; }

    Read the article

  • delete all records except the ids I have in a Python list

    - by jay_t
    Hi all, I want to delete all records in a MySQL db except for the record ids I have in a list. The length of that list can vary and could easily contain 2000+ ids. Currently I convert my list to a string so it fits into something like this: cursor.execute("""delete from table where id not in (%s)""",(list)) which doesn't feel right, and I have no idea how long the list is allowed to be. What's the most efficient way of doing this from Python? Altering the table structure with an extra field to mark/unmark records for deletion would be great, but it is not an option. Having a dedicated table storing the ids would indeed be helpful, as this could then be done with a single SQL query... but I would really like to avoid these options if possible. Thanks,
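
    The usual pattern is to build one placeholder per id and let the driver do the quoting; MySQL copes fine with IN lists of a few thousand values, though the full statement must stay under max_allowed_packet. A sketch, with a hypothetical table name and the cursor/connection from the existing script:

```python
ids_to_keep = [3, 7, 42]  # the ids that must survive

if ids_to_keep:
    # One %s per id: WHERE id NOT IN (%s, %s, %s)
    placeholders = ", ".join(["%s"] * len(ids_to_keep))
    cursor.execute(
        "DELETE FROM my_table WHERE id NOT IN (%s)" % placeholders,
        ids_to_keep,
    )
else:
    cursor.execute("DELETE FROM my_table")  # nothing to keep
connection.commit()
```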

    Read the article

  • How to get only one record for each duplicate group_id in Oracle?

    - by Psychocryo
    Suppose I have this table:

        group_id | image | image_id
        ---------------------------
        23         blob    1
        23         blob    2
        23         blob    3
        21         blob    4
        21         blob    5
        25         blob    6
        25         blob    7

    How do I get results with only one row per group_id? In this case there may be multiple images for one group_id, and I just want one result for each group_id. I tried DISTINCT, but then I can only get the group_id. MAX on the image column would not work either.
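
    A window function is a common way to keep exactly one (arbitrary) row per group_id in Oracle: number the rows within each group and keep row 1. A sketch with a hypothetical table name, shown as a SQL string run through whatever DB-API cursor you already use:

```python
# Number the rows inside each group_id and keep only row 1 of every group.
SQL = """
SELECT group_id, image, image_id
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY image_id) AS rn
    FROM my_table t
)
WHERE rn = 1
"""

cursor.execute(SQL)  # any DB-API cursor connected to the Oracle schema
rows = cursor.fetchall()
```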

    Read the article

  • How do I delete records in a table while keeping certain data?

    - by mathew
    My site has lots of incoming searches which are stored in a database to show recent queries on my website. Because of the high volume of search queries, the database keeps growing. What I want is to keep only the most recent queries in the database, say 10 records. That keeps the database small and the queries fast. I am able to store incoming queries in the database, but I don't know how to restrict the table's size or delete the excess/old data. Any help? I am using PHP and MySQL.
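
    In MySQL this can be done with one DELETE by selecting the ids to keep in a derived table (MySQL will not let the target table appear directly in the NOT IN subquery, hence the extra nesting). A sketch, assuming a hypothetical searches table with an auto-increment id and a Python DB-API cursor; the statement itself works the same from PHP:

```python
# Keep the 10 newest rows (highest ids) and delete everything else.
# The derived table is needed because MySQL refuses to SELECT from
# the same table that the DELETE is modifying.
cursor.execute(
    """DELETE FROM searches
       WHERE id NOT IN (
           SELECT id FROM (
               SELECT id FROM searches ORDER BY id DESC LIMIT 10
           ) AS newest
       )"""
)
connection.commit()
```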

    Read the article

  • Output custom messages based on row value

    - by decimal
    I've made a simple guestbook MySQL/PHP page. An entry is displayed if the approve column has a value of 1. For the administrator, I want to display either "message approved" or "not approved". Here's my code: while ($row = mysql_fetch_array($r)) { print "<p>Guest:" .$row['name']. "</p> <p>Date:" .$row['date']. "</p> <p>Comment:". $row['comment']. "</p>"; if ($row['approve'] = '1') { print '<p>YES, the message has been approved</p>'; } else { print '<p>NO, it hasn\'t been approved</p>'; } } Whatever value the if statement compares approve against, every row is reported as having that value.

    Read the article

  • Getting all parent rows in one SQL query

    - by Alex
    I have a simple MySQL table that contains a list of categories, where the level is determined by parent_id:

        id    name      parent_id
        -------------------------
        1     Home      0
        2     About     1
        3     Contact   1
        4     Legal     2
        5     Privacy   4
        6     Products  1
        7     Support   1

    I'm attempting to make a breadcrumb trail. So I have the id of the child, and I want to get all of its parents (iterating up the chain until we reach 0, "Home"). There could be any number of child rows going to an unlimited depth. Currently I am using one SQL call per parent, which is messy. Is there a way in SQL to do this all in one query?
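
    On MySQL 8.0+ a recursive CTE can walk the whole chain of parents in one query; older MySQL versions need either the per-level queries you are doing now or a different tree encoding (path column, nested sets). A sketch, with a hypothetical categories table matching the rows above:

```python
# MySQL 8.0+: walk up the tree from the starting category in a single query.
SQL = """
WITH RECURSIVE crumbs AS (
    SELECT id, name, parent_id
    FROM categories
    WHERE id = %s
    UNION ALL
    SELECT c.id, c.name, c.parent_id
    FROM categories c
    JOIN crumbs ON c.id = crumbs.parent_id
)
SELECT id, name FROM crumbs
"""

child_id = 5  # e.g. Privacy
cursor.execute(SQL, (child_id,))
trail = list(reversed(cursor.fetchall()))  # root first, child last
```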

    Read the article

  • Empty files generated from running `mysqldump` using PHP

    - by alex
    I keep getting empty files generated from running $command = 'mysqldump --opt -h localhost -u username -p \'password\' dbname > \'backup 2009-04-15 09-57-13.sql\''; command($command); Does anyone know what might be causing this? My password has strange characters in it, but it works fine for connecting to the db. I've run exec($command, $return) and dumped the $return array, and it is finding the command. I've also run it with just mysqldump > file.sql, and that file contains: Usage: mysqldump [OPTIONS] database [tables] OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...] OR mysqldump [OPTIONS] --all-databases [OPTIONS] For more options, use mysqldump --help So it would seem that the command itself is being found.

    Read the article
