Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.


  • mySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision, and I'm not sure how to proceed. Here's a very oversimplified version of what I'm building: people can rate photos 1-5, and I need to display the average vote on each picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, and so on. The normalization freak in me initially designed the table structure like this:

        Table pictures: id* | picture | userID
        Table ratings:  id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and am done with it. To find the average rating of a picture, I'd run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics too. I can easily find who rated a certain picture a 3, and what not. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version seems more efficient, e.g.:

        Table pictures: id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction on what to do? My initial research says to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
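    One common middle ground (a sketch only, built on the two-table schema above; the summary columns and trigger name are illustrative, not from the question) is to keep the normalized ratings table as the source of truth and cache a running aggregate on pictures, so reads stay a single-row lookup while every individual vote is preserved:

        ALTER TABLE pictures
            ADD COLUMN vote_count INT NOT NULL DEFAULT 0,
            ADD COLUMN vote_sum   INT NOT NULL DEFAULT 0;

        -- Keep the cache current as votes arrive:
        CREATE TRIGGER ratings_after_insert AFTER INSERT ON ratings
        FOR EACH ROW
            UPDATE pictures
            SET vote_count = vote_count + 1,
                vote_sum   = vote_sum + NEW.rating
            WHERE id = NEW.pictureID;

        -- Displaying the average no longer touches the ratings table:
        SELECT vote_sum / vote_count AS avg_rating FROM pictures WHERE id = 5;

    The per-vote statistics still come out of ratings, and the cached columns can always be rebuilt from it if they ever drift.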

    Read the article

  • What should a Django user know when moving from MySQL to PostgreSQL?

    - by tmitchell
    Most of my experience with Django thus far has been with MySQL and MySQLdb. For a new app I'm writing, I'm dipping my toe in the PostgreSQL water, now that I have seen the light. While writing a data import script, I stumbled upon an issue with the default autocommit behavior. I would guess there are other "gotchas" that might crop up. What else should I be on the lookout for?
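    Two gotchas that tend to bite early, sketched below (the app_item table is hypothetical): PostgreSQL rejects queries that MySQL's looser defaults accept, and its pattern matching is case-sensitive by default:

        -- MySQL's defaults let non-aggregated columns ride along with GROUP BY;
        -- PostgreSQL raises an error on this query:
        SELECT category, name FROM app_item GROUP BY category;

        -- LIKE is case-insensitive under MySQL's default collations but
        -- case-sensitive in PostgreSQL; ILIKE restores case-insensitive matching:
        SELECT * FROM app_item WHERE name ILIKE 'foo%';

    The autocommit issue mentioned above is the other big one: with PostgreSQL drivers such as psycopg2, nothing persists until the connection commits, whereas MySQL with MyISAM tables effectively commits every statement immediately.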

    Read the article

  • PHP next MySQL row - how to move pointer until function checks true?

    - by ropbhardgood
    I have a PHP script that takes a value from a row in my MySQL database and runs it through a function. If the function determines the value is true, the script returns one value; if it's false, the script needs to go to the next value in the database and check that one, until eventually one returns true. I think I need to use mysql_fetch_assoc, but I'm not really sure how to use it... I wish I could post my code to be more specific, but it's a lot of code and most of it has no bearing on this issue...
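    For the loop itself, mysql_fetch_assoc advances the result pointer on every call, so a while loop over it visits each row in turn until the function says true. If the test can be expressed in SQL at all, though, it is usually cheaper to push it into the query and let the database hand back only the first row that passes, along these lines (a sketch; the table, column, and condition are all hypothetical):

        SELECT id, reading
        FROM readings
        WHERE reading > 10   -- whatever condition the PHP function checks
        ORDER BY id
        LIMIT 1;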

    Read the article

  • Using the erlang mysql module, how is a database connection closed?

    - by Dan
    In using the erlang mysql module, the exposed external functions are:

        %% External exports
        -export([start_link/5, start_link/6, start_link/7, start_link/8,
                 start/5, start/6, start/7, start/8,
                 connect/7, connect/8, connect/9,
                 fetch/1, fetch/2, fetch/3,
                 prepare/2, execute/1, execute/2, execute/3, execute/4,
                 unprepare/1, get_prepared/1, get_prepared/2,
                 transaction/2, transaction/3,
                 get_result_field_info/1, get_result_rows/1,
                 get_result_affected_rows/1, get_result_reason/1,
                 encode/1, encode/2, asciz_binary/2
                ]).

    From this list, it is not apparent how to close a connection. How is a connection closed?

    Read the article

  • Select box is not working properly after including google custom search box in web page

    - by Vinay
    I have a select box and a Google custom search box on a page. When I choose an option from the select box, navigate away from the page, and then come back to the same page, the option is no longer selected (this violates the select box's default behavior). The code is below:

        <script src="https://www.google.com/jsapi" type="text/javascript"></script>
        <script type="text/javascript">
        google.load('search', '1', {language : 'en'});
        google.setOnLoadCallback(function() {
          var customSearchControl = new google.search.CustomSearchControl('004920913350056953771:kpkclvhujzk');
          customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
          var options = new google.search.DrawOptions();
          options.setAutoComplete(true);
          options.enableSearchboxOnly("<?=$homeurl?>my_results.php", "query");
          customSearchControl.draw('cse-search-form', options);
        }, true);
        </script>

        <select multiple="yes">
          <option>1</option>
          <option>2</option>
        </select>

    If I remove the custom search script, the selected option is retained even after navigating away from the page (the default behavior). It works fine in Chrome and IE, but not in Firefox. Is there any solution so that the select box keeps working even with the search box present? The order on the page must stay the same: 1) search box, 2) select box.

    Read the article

  • MySQLdb not INSERTING, _mysql does fine.

    - by Mad_Casual
    Okay, I log onto the MySQL command-line client as root. I then run a Python app that uses the MySQLdb module, also as root. When I check the results using Python (IDLE), everything looks fine. When I use the MySQL command-line client, no INSERT has occurred. If I change things around to _mysql instead of MySQLdb, everything works fine. I'd appreciate any clarification(s).

    "Works" until IDLE/the virtual machine is reset:

        import MySQLdb
        db = MySQLdb.connect(user='root', passwd='*******', db='test')
        cursor = db.cursor()
        cursor.execute("""INSERT INTO test VALUES ('somevalue');""")

    Works:

        import _mysql
        db = _mysql.connect(user='root', passwd='*******', db='test')
        db.query("INSERT INTO test VALUES ('somevalue');")

    System info: Intel x86, WinXP, Python 2.5, MySQL 5.1.41, MySQL-Python 1.2.2
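    This is the classic symptom of MySQLdb's transaction handling: MySQLdb follows the Python DB-API, which starts connections with autocommit off, so the INSERT sits in an uncommitted transaction and is discarded when the connection goes away. _mysql is a thin wrapper that leaves the server's default autocommit=1 in place, which is why the second snippet sticks. Assuming the test table uses a transactional engine such as InnoDB, calling db.commit() after the execute (or turning autocommit on for the connection) should make the MySQLdb version behave like the _mysql one.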

    Read the article

  • What method should be used for searching this mysql dataset?

    - by GeoffreyF67
    I've got a MySQL dataset that contains 86 million rows. I need relatively fast searches through this data, which is all strings, and I also need to do partial matches. Now, if I have 'foobar' and search for '%oob%', I know it'll be really slow: it has to look at every row to see if there is a match. What methods can be used to speed up queries like this? G-Man
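    A leading wildcard like '%oob%' defeats ordinary B-tree indexes, so any real speedup means a different kind of index. If matching on word prefixes is enough, a MyISAM FULLTEXT index handles it natively (a sketch, assuming a hypothetical documents table):

        CREATE FULLTEXT INDEX ft_body ON documents (body);

        -- Matches rows containing words that begin with 'oob':
        SELECT id FROM documents
        WHERE MATCH(body) AGAINST ('oob*' IN BOOLEAN MODE);

    True infix matching ('oob' anywhere inside a word) is beyond FULLTEXT; at this scale that usually means maintaining an n-gram index alongside the table or handing the search to an external engine such as Sphinx or Lucene.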

    Read the article

  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS. The firewall being used is UFW. I have set up Varnish + LEMP, along with other things, including an Openswan IPsec VPN from our office to the VPS data center. A second, in-house Ubuntu box is to act as MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave; they ping, etc. I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command:

        iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT

    (I deliberately restricted the accepted input to that subnet.) With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place. Now, I admit to being inexperienced with UFW. I prepared rules like:

        ufw allow proto tcp from 10.1.2.0/24 port mysql

    and 2-3 variations involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf is currently bound to 0.0.0.0), and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that isn't "dump UFW"? Thanks in advance.
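    A likely reason those variants fail: in UFW syntax, a port attached to the from clause matches the source port, so the rule above only admits packets that originate from port 3306 on the slave. The destination port needs its own to clause. The usual equivalent of the iptables rule (untested on this particular setup) would be:

        ufw allow from 10.1.2.0/24 to any port 3306 proto tcp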

    Read the article

  • How to Load Data from Text File in MySQL?

    - by Taz
    I have this file available here. Fields are separated by spaces and each record starts on a new line. I am using this command to load the data:

        mysql> LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\images_en.nt\\Sample4.txt'
            -> INTO TABLE NER.Images
            -> FIELDS TERMINATED BY ' '
            -> LINES TERMINATED BY '\n';

    Only one row is loaded, and if I execute the command again, no row is loaded. Where is the problem? The data can be reformatted if required.
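    Given the Windows path, line endings are the first thing to check. If the whole file parses as a single line under LINES TERMINATED BY '\n' (for example, because the lines actually end in '\r' alone), exactly one giant row gets inserted, and rerunning the load then skips it as a duplicate (LOAD DATA LOCAL handles duplicate keys as IGNORE), which matches the symptom. A sketch of the adjusted statement, assuming Windows-style '\r\n' endings:

        LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\images_en.nt\\Sample4.txt'
        INTO TABLE NER.Images
        FIELDS TERMINATED BY ' '
        LINES TERMINATED BY '\r\n';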

    Read the article

  • Data from a table in 1 DB needed for filter in different DB...

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), a WorkerName (varchar), and a ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it in the filter since, to my knowledge, views and cross-database queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
                             WHERE WorkerID = SUSER_SNAME())

    Filters can also be chained together; this next one sits below the first, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
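    For the maintenance side, one sketch (the DB2/DB3/DB4 database names are placeholders; the table and column names come from the question): keep tblWorkerOwnership authoritative in one database and mirror changes to the other copies with triggers. Inserts, for example:

        CREATE TRIGGER trg_WorkerOwnership_Insert
        ON [dbo].[tblWorkerOwnership]
        AFTER INSERT
        AS
        BEGIN
            INSERT INTO [DB2].[dbo].[tblWorkerOwnership] (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID FROM inserted;
            -- repeat for [DB3] and [DB4]; analogous AFTER UPDATE / AFTER DELETE
            -- triggers keep the mirrored copies in sync
        END

    The mirrored copies stay read-only outside the triggers, so there is still a single source of truth.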

    Read the article

  • Mysqld increases the load on the CPU and drops after flush-tables

    - by mirage
    Please advise on this issue. The normal CPU load is 20-30% (us + sy). After restoring the database files from the slave server (same version), a periodic problem began: mysql starts loading the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. But after mysqladmin flush-tables, things return to normal for a few hours. This is a dedicated Linux server running MySQL: 2 x E5506, 24 GB RAM, database size 50 GB.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [-] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [-] Data in MyISAM tables: 33G (Tables: 1474)
        [-] Data in InnoDB tables: 1G (Tables: 4)
        [-] Data in MEMORY tables: 120K (Tables: 3)
        [-] Reads / Writes: 91% / 9%
        [-] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    The load is 4000-5500 rps. The configuration:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How can I solve this problem?
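    Since flush-tables clears the problem temporarily, table-cache and temp-table pressure are worth ruling out before tuning blindly. A few standard diagnostics to capture while the CPU is pegged (a sketch; what counts as "too high" is a judgment call against your own baseline):

        SHOW FULL PROCESSLIST;                     -- what is running, and in which state queries pile up
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';    -- temp tables spilling to disk
        SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- rising fast suggests table_cache thrashing
        SHOW GLOBAL STATUS LIKE 'Key_%';           -- key cache pressure: 33G of MyISAM vs a 1.5G key_buffer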

    Read the article

  • SQL Server 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance; on the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain account for the database engine). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'; the send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the externalmailqueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the other functioning instance to the other node in the cluster. Any thoughts? Thx.
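    One thing worth checking specifically because this instance was upgraded: Database Mail depends on Service Broker being enabled in msdb, and upgraded databases sometimes end up with it off, which produces exactly this queued-but-unsent, eventually-failed pattern. A quick check, and the fix if it returns 0 (run in a maintenance window):

        SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb';

        ALTER DATABASE msdb SET ENABLE_BROKER;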

    Read the article
