Search Results

Search found 11052 results on 443 pages for 'linked tables'.


  • What is the best way to store categorical references in SQL tables?

    - by jlafay
    I want to store a wide array of categorical data in MySQL database tables. Say, for instance, that I want to store information on "widgets" and categorize their attributes in certain ways, e.g. by shape: the widgets could be classified as round, square, triangular, spherical, etc. Should these categories be stored in their own table so the application can best reference them? Another possibility, I imagine, would be to add a shape column to widgets holding a tiny int; the application could then search shapes by that value and map each int to a meaning with a corresponding enum type. Which would be best? Or is there another solution that I'm not thinking of yet?
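
    A lookup table keeps the category list in the database where every client can see it, while a tiny int plus an application-side enum splits that knowledge between schema and code. A minimal sketch of the lookup-table option (table and column names here are hypothetical):

        CREATE TABLE shape (
            shape_id TINYINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name     VARCHAR(30) NOT NULL UNIQUE   -- 'round', 'square', ...
        );

        CREATE TABLE widget (
            widget_id INT AUTO_INCREMENT PRIMARY KEY,
            shape_id  TINYINT UNSIGNED NOT NULL,
            FOREIGN KEY (shape_id) REFERENCES shape (shape_id)
        );

    The foreign key gives integrity checks and lets new categories be added with an INSERT rather than a code change; the int-plus-enum approach avoids a join but hides the category names from ad hoc queries.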


  • Can we use union of two sqlite databases with same tables for Core Data?

    - by Tofrizer
    Hi All, I have an iPhone Core Data app with a pre-populated sqlite "baseline" database. Can I add a second, smaller sqlite database with the same tables as my pre-populated "baseline" database but with additional / complementary data, such that Core Data will happily union the data from both databases and, ultimately, present it to me as if it were all a single data source? The idea I had is: 1) the "baseline" database never changes. 2) I can download the smaller "complementary" sqlite database for additional data as and when I need to (I'm assuming downloading an sqlite database is allowed; please comment if otherwise). 3) Core Data is then able to union the data from 1 and 2, and I can reference this unified data through my defined Core Data managed object model. Hope this makes sense. Thanks in advance.


  • Is a foreign key reference to two different primary keys from two different tables valid?

    - by arundex
    I have a foreign key that has to refer to the primary keys of two different tables.

        Table 1: animal
            animal_id (primary key)
        Table 2: bird
            bird_id (primary key)
        Table 3: pet_info
            pet_id, type ENUM ('bird', 'animal')
            foreign key (pet_id) references animal(animal_id), bird(bird_id)

    So I need to check pet_id against either the animal or the bird table, depending on the need. Is this valid? Or should I go for some restructuring? NOTE: I referred to this, but I'm not sure whether I have to change my existing design.
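
    A single foreign key column cannot reference two parent tables; a clause like references animal(animal_id), bird(bird_id) is not valid SQL. The usual restructuring is a supertype table whose key both subtype tables share. A hedged sketch reusing the names above (InnoDB assumed, since MyISAM does not enforce foreign keys):

        CREATE TABLE pet (
            pet_id INT AUTO_INCREMENT PRIMARY KEY,
            type   ENUM('bird','animal') NOT NULL
        );

        -- each subtype row reuses its supertype row's key
        CREATE TABLE animal (
            animal_id INT PRIMARY KEY,
            FOREIGN KEY (animal_id) REFERENCES pet(pet_id)
        );
        CREATE TABLE bird (
            bird_id INT PRIMARY KEY,
            FOREIGN KEY (bird_id) REFERENCES pet(pet_id)
        );

        -- pet_info now needs only one clean foreign key
        CREATE TABLE pet_info (
            pet_id INT NOT NULL,
            FOREIGN KEY (pet_id) REFERENCES pet(pet_id)
        );

    This way the database enforces the relationship for both kinds of pet, and the type column moves to the supertype where it describes exactly one row.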


  • On Google AppEngine what is the best way to merge two tables?

    - by gpjones
    If I have two tables, Company and Sales, and I want to display both sets of data in a single list, how would I do this on Google App Engine using GQL? The models are:

        class Company(db.Model):
            companyname = db.StringProperty()
            companyid = db.StringProperty()
            salesperson = db.StringProperty()

        class Sales(db.Model):
            companyid = db.StringProperty()
            weeklysales = db.StringProperty()
            monthlysales = db.StringProperty()

    The view is:

        def company(request):
            companys = db.GqlQuery("SELECT * FROM Company")
            sales = db.GqlQuery("SELECT * FROM Sales")
            template_values = {
                'companys': companys,
                'sales': sales
            }
            return respond(request, 'list', template_values)

    The list html includes:

        {% for company in companys %}
            {% for sale in sales %}
                {% ifequal company.companyid sale.companyid %}
                    {{ sale.weeklysales }} {{ sale.monthlysales }}
                {% endifequal %}
            {% endfor %}
            {{ company.companyname }} {{ company.companyid }} {{ company.salesperson }}
        {% endfor %}

    Any help would be greatly appreciated.


  • DB2: increased bufferpool size and compressed tables don't equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I've been searching the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool, setting both of them to 1024 MB, then the performance of my queries should increase because of fewer I/Os to disk. However, when I run my time analysis, the performance decreases. I added the new additions to my regular database with the indexes I've used all the time. Every time I search Google I come up with the same statement: an increased bufferpool size, several bufferpools, AND table compression SHOULD give better performance. I'm very puzzled by this totally unexpected result. Are there some tuning mechanisms I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika
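
    Two things worth checking: a new bufferpool does nothing unless tablespaces are actually assigned to it, and compression trades CPU for I/O, so stale statistics or a missing dictionary rebuild can easily turn that trade into a loss. A hedged sketch of the checks, with hypothetical object names (REORG and RUNSTATS are DB2 CLP commands, run from the command line):

        -- a bufferpool only helps if the tablespace holding the tables uses it
        CREATE BUFFERPOOL bp_big IMMEDIATE SIZE 262144 PAGESIZE 4096;  -- 262144 * 4 KB = 1 GB
        CREATE TABLESPACE ts_data PAGESIZE 4096
            MANAGED BY AUTOMATIC STORAGE BUFFERPOOL bp_big;

        -- after enabling compression, rebuild the dictionary and refresh statistics
        ALTER TABLE myschema.mytable COMPRESS YES;
        REORG TABLE myschema.mytable RESETDICTIONARY;
        RUNSTATS ON TABLE myschema.mytable WITH DISTRIBUTION AND INDEXES ALL;

    Without the REORG the existing rows are never compressed, and without fresh RUNSTATS the optimizer may still be planning against the old, uncompressed layout.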


  • Relation to multiple tables of different types for rating?

    - by Tronic
    I have a table structure like this: Products, Team, Images. I want to implement a rating/commenting feature where users can rate each entry in all of these tables. What's the best way to make a single rating table? E.g. a user votes on a product and a team entry, and it should be possible to get all of these entries from a single table. What kind of table structure is best for this purpose? I hope my question is clear enough. Thanks in advance!
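
    A common pattern is a polymorphic rating table that stores the target's type and id together, so one table covers all three entry kinds. A minimal sketch with assumed column names:

        -- one rating row per (user, item), where the item is identified by type + id
        CREATE TABLE rating (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            user_id   INT NOT NULL,
            item_type ENUM('product','team','image') NOT NULL,
            item_id   INT NOT NULL,
            rating    TINYINT NOT NULL,
            comment   TEXT,
            UNIQUE KEY one_vote (user_id, item_type, item_id)  -- one vote per user per item
        );

        -- all ratings one user has cast, regardless of item type:
        SELECT item_type, item_id, rating FROM rating WHERE user_id = 42;

    The trade-off is that the database cannot enforce a foreign key on item_id (it points at different tables depending on item_type), so the application has to keep the references valid.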


  • Select a database and, in that, select a table, using C# and a config file

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have many databases created using SQL Server Management Studio 2005, and each database has several tables. I have a button which, when clicked, should allow me to select a database among the several databases, and in that database select a single table. Then I need to display the contents of the selected table in a DataGridView. I came to know that this can be done using the web config. How can I achieve this? It goes like this: a) select a database, b) in that database select a table, c) display the contents in a DataGridView.
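
    Whatever the config file ends up holding (for a Windows Forms app that would normally be app.config rather than web.config), the database and table pickers can be populated from SQL Server's catalog views. A hedged sketch of the two queries behind steps (a) and (b):

        -- databases the user can pick from (database_id <= 4 are the system databases)
        SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name;

        -- after connecting to the chosen database, list its user tables:
        SELECT name FROM sys.tables ORDER BY name;

    Step (c) is then a plain SELECT * FROM the chosen table, bound to the DataGridView through a data adapter.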


  • Best way to move a bunch of SQL Server 2005 tables to another Server?

    - by Mikecancook
    I've been looking for a way to move a bunch of tables, more than 40, to another server with all the data in them. I've looked around for scripts that generate inserts, but so far I'd have to run them once for every table, copy all the scripts over, and then run them on the server. It seems like there should be a better way. --Update-- My strategy for doing this may have been for naught. The end script, using MS SQL Server Publishing Wizard and Red Gate's SQL Data Compare (excellent tool, btw), results in a file over 1 GB. This makes my system plead for mercy, and I'm not willing to risk crashing a client's server just by opening the file to run it. I may have to rethink this whole thing and break it down into individual per-table scripts. I'm not looking forward to that.
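
    If the two servers can see each other, a linked server avoids script files entirely: each table can be pulled across in one statement. A hedged sketch (server, database, and table names are placeholders; note that SELECT INTO copies data and column types but not indexes, constraints, or triggers):

        -- run on the destination server, once per table
        SELECT *
        INTO dbo.MyTable
        FROM [SourceServer].[SourceDb].dbo.MyTable;

    For 40+ tables, the list of statements can itself be generated from the source's sys.tables, and bcp in native mode is another script-free option that handles large volumes without producing a 1 GB script file.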


  • I have a common field in ten tables, with a different field name in each. How should I get that using MySQL?

    - by Fero
    Hi all, I have a common field in ten tables, with a different field name in each. Example:

        table1:
        t1_id    t1_location
        1        india
        2        china
        3        america

        table2:
        t2_id    t2_location
        4        london
        5        australia
        6        america

    Now my output should be:

        location
        india
        china
        america
        london
        australia

    How should I get that using a MySQL query? Thanks in advance.
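
    A UNION stacks the differently named columns into one, and plain UNION (without ALL) also removes duplicates, which is why 'america' appears only once in the output above. Extended to ten tables it is the same pattern, one SELECT per table:

        SELECT t1_location AS location FROM table1
        UNION
        SELECT t2_location FROM table2;
        -- ... UNION SELECT t3_location FROM table3; and so on for the remaining tables

    The alias on the first SELECT names the combined column; the later SELECTs only need to produce one compatible column each.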


  • SQL Server, fetching data from multiple joined tables. Why is it slow?

    - by user562192
    I have a performance problem when retrieving data from SQL Server. My query looks something like this:

        SELECT table_1.id, table_1.value,
               table_2.id, table_2.value,
               ...,
               table_20.id, table_20.value
        FROM table_1
        INNER JOIN table_2 ON table_1.id = table_2.table_1_id
        INNER JOIN table_3 ON table_2.id = table_3.table_2_id
        ...
        WHERE table_1.row_number BETWEEN 1 AND 20

    So I am fetching 20 results. This query takes about 5 seconds to execute. When I select only table_1.id, it returns results instantly. Because of that, I guess the problem is not in the JOINs; it is in retrieving data from multiple tables. Any suggestions on how I could speed up this query?
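
    Before blaming data retrieval itself, it is worth ruling out missing indexes on the join keys, since a 20-table join chain degrades badly when any link falls back to a scan. A hedged sketch for the first links in the chain (index names are made up; repeat for each join column that lacks an index):

        -- each child table's join column needs an index for the nested joins to stay cheap
        CREATE INDEX IX_table_2_table_1_id ON table_2 (table_1_id);
        CREATE INDEX IX_table_3_table_2_id ON table_3 (table_2_id);

    Comparing the execution plans of the fast single-column query and the slow full query will show exactly where the extra 5 seconds is spent.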


  • How to efficiently convert DataSet.Tables to List<DataTable>?

    - by Soenhay
    I see many posts about converting the table(s) in a DataSet to a list of DataRows or other row data, but I was unable to find anything about this question. This is what I came up with, using .NET 3.0:

        internal static List<DataTable> DataSetToList(DataSet ds)
        {
            // DataTableCollection only implements the non-generic IEnumerable,
            // so the tables are copied into the list one by one
            List<DataTable> result = new List<DataTable>();
            foreach (DataTable dtbl in ds.Tables)
            {
                result.Add(dtbl);
            }
            return result;
        }

    Is there a better way, excluding an extension method? Thanks


  • Use SQL to clone data in two tables that have a 1-1 relationship with each other

    - by AmoebaMan17
    Using MS SQL 2005:

        Table 1
        ID | T1Value | T2ID | GroupID
        ----------------------------------
        1  | a       | 10   | 1
        2  | b       | 11   | 1
        3  | c       | 12   | 1
        4  | a       | 22   | 2

        Table 2
        ID | T2Value
        ----------------
        10 | H
        11 | J
        12 | K
        22 | H

    I want to clone the data for GroupID == 1 into a new GroupID so that I end up with the following:

        Table 1
        ID | T1Value | T2ID | GroupID
        ----------------------------------
        1  | a       | 10   | 1
        2  | b       | 11   | 1
        3  | c       | 12   | 1
        4  | a       | 22   | 2
        5  | a       | 23   | 3
        6  | b       | 24   | 3
        7  | c       | 25   | 3

        Table 2
        ID | T2Value
        ----------------
        10 | H
        11 | J
        12 | K
        22 | H
        23 | H
        24 | J
        25 | K

    I've found some SQL clone patterns that let me clone data within the same table well, but as soon as I start to deal with cloning data in two tables at the same time and then linking up the new rows correctly, that's just not something I feel I have a good grasp of. I thought I could use some self-joins to deal with this, but I am worried about the cases where the non-key fields have the same data in multiple rows.
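
    One way that sidesteps the duplicate-value worry is to build the old-to-new ID mapping explicitly before touching either table, instead of matching on values afterwards. A hedged sketch for SQL Server 2005, assuming the tables are named Table1 and Table2, both ID columns are IDENTITY, and the new GroupID is 3:

        DECLARE @map TABLE (old_T2ID int, new_T2ID int);

        -- reserve new Table2 IDs by offsetting from the current maximum
        INSERT INTO @map (old_T2ID, new_T2ID)
        SELECT t1.T2ID,
               (SELECT MAX(ID) FROM Table2) + ROW_NUMBER() OVER (ORDER BY t1.ID)
        FROM Table1 t1
        WHERE t1.GroupID = 1;

        -- clone the Table2 rows under their pre-assigned IDs
        SET IDENTITY_INSERT Table2 ON;
        INSERT INTO Table2 (ID, T2Value)
        SELECT m.new_T2ID, t2.T2Value
        FROM @map m
        JOIN Table2 t2 ON t2.ID = m.old_T2ID;
        SET IDENTITY_INSERT Table2 OFF;

        -- clone the Table1 rows, rewiring T2ID through the mapping
        INSERT INTO Table1 (T1Value, T2ID, GroupID)
        SELECT t1.T1Value, m.new_T2ID, 3
        FROM Table1 t1
        JOIN @map m ON m.old_T2ID = t1.T2ID
        WHERE t1.GroupID = 1;

    Because the mapping is keyed on the old IDs rather than on T2Value, the duplicate 'H' values never cause a mis-link.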


  • MySQL query to generate random combinations from two tables

    - by Michael
    Alright, here is my issue: I have two tables, one named firstnames and the other named lastnames. What I am trying to do is generate 100 of the possible combinations of these names as test data. The firstnames table has 5,494 entries in a single column, and the lastnames table has 88,799 entries in a single column. The only query I have been able to come up with that produces results is:

        SELECT *
        FROM (SELECT * FROM firstnames ORDER BY RAND()) f
        LEFT JOIN (SELECT * FROM lastnames ORDER BY RAND()) l ON 1 = 1
        LIMIT 10;

    The problem with this code is that it selects one firstname and gives every lastname that could go with it. While this is plausible, I would have to set the limit to 500000000 in order to get all the possible combinations without having only 20 first names (and I'd rather not kill my server). However, I only need 100 randomly generated entries for test data, and I will not be able to get that with this code. Can anyone please give me any advice?
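
    Since ON 1=1 builds the full cross product, an alternative is to take 100 random rows from each table independently, number both sets, and join on the row number so each firstname pairs with exactly one lastname. A hedged sketch using user variables for the numbering (the usual technique before MySQL 8.0; the column names firstname and lastname are assumptions):

        SELECT f.firstname, l.lastname
        FROM (SELECT @fr := @fr + 1 AS rn, t.firstname
              FROM (SELECT firstname FROM firstnames ORDER BY RAND() LIMIT 100) t,
                   (SELECT @fr := 0) init_f) f
        JOIN (SELECT @lr := @lr + 1 AS rn, t.lastname
              FROM (SELECT lastname FROM lastnames ORDER BY RAND() LIMIT 100) t,
                   (SELECT @lr := 0) init_l) l
             ON f.rn = l.rn;

    ORDER BY RAND() still sorts each whole table once, which is tolerable at 5,494 and 88,799 rows but worth avoiding on much larger tables.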


  • How do I switch the table that is queried with LINQ to SQL?

    - by Ian Ringrose
    We have two tables with the same set of columns; depending on the "type" of object, the value is stored in one of the two tables. I wish to use common code to access both tables. If I were using "raw SQL" I could just use String.Format() to change the table name (likewise for updates etc.). The two separate tables are needed because the data access patterns are very different for the common queries on the two tables, and therefore different indexes are needed. "Views" and "instead of triggers" etc. to make the tables look like a single table are not liked here. A lot of our customers use a low-end version of SQL Server, so we cannot use partitioned tables.


  • How can I do a left outer join where both tables have a where clause?

    - by cdeszaq
    Here's the scenario: I have 2 tables:

        CREATE TABLE dbo.API_User (
            id int NOT NULL,
            name nvarchar(255) NOT NULL,
            authorization_key varchar(255) NOT NULL,
            is_active bit NOT NULL
        ) ON [PRIMARY]

        CREATE TABLE dbo.Single_Sign_On_User (
            id int NOT NULL IDENTITY (1, 1),
            API_User_id int NOT NULL,
            external_id varchar(255) NOT NULL,
            user_id int NULL
        ) ON [PRIMARY]

    What I am trying to return is the following: is_active for a given authorization_key, plus the Single_Sign_On_User.id that matches the external_id/API_User_id pair if it exists, or NULL if there is no such pair. When I try this query:

        SELECT Single_Sign_On_User.id, API_User.is_active
        FROM API_User
        LEFT OUTER JOIN Single_Sign_On_User
            ON Single_Sign_On_User.API_User_id = API_User.id
        WHERE Single_Sign_On_User.external_id = 'test_ext_id'
          AND API_User.authorization_key = 'test'

    where the "test" API_User record exists but the "test_ext_id" record does not, and with no other values in either table, I get no records returned. When I use:

        SELECT Single_Sign_On_User.id, API_User.is_active
        FROM API_User
        LEFT OUTER JOIN Single_Sign_On_User
            ON Single_Sign_On_User.API_User_id = API_User.id
        WHERE API_User.authorization_key = 'test'

    I get the results I expect (NULL, 1), but that query doesn't allow me to find the "test_ext_id" record if it exists; it would give me all records associated with the "test" API_User record. How can I get the results I am after?
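
    The WHERE clause runs after the outer join, so filtering on Single_Sign_On_User.external_id there turns the NULL-extended row back into nothing, which is why the first query returns no records. Moving that predicate into the ON clause filters the outer table before NULL-extension and gives both behaviors at once:

        SELECT Single_Sign_On_User.id, API_User.is_active
        FROM API_User
        LEFT OUTER JOIN Single_Sign_On_User
            ON Single_Sign_On_User.API_User_id = API_User.id
           AND Single_Sign_On_User.external_id = 'test_ext_id'  -- outer-table filter belongs here
        WHERE API_User.authorization_key = 'test';

    With no matching pair this returns (NULL, 1); when the pair exists, it returns that row's id instead.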


  • MySQL: How to separate a name field in one table into firstname / lastname in two separate tables?

    - by Eileen
    I have a Drupal database where the node table is full of profiles. The field node.title is "Firstname Lastname". I want to separate the names so that node.title = "Firstname" and, over in another table entirely, content_type_profile.field_lastname_value = "Lastname". The entries in the two tables can be joined on the field nid. I'd love to run a SQL command to do this, and I am fine with taking the naive approach that the first word is the first name and everything else in the field is the last name; it will mean a few manual corrections down the line, but that's much better than doing it all by hand in the first place. (I read this question, and surely the answer lies in there, but I am not that SQL-savvy and am not sure how to make it work for my database.) Thanks!
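
    Under that naive first-word rule, two UPDATEs will do it; the order matters, since the title must still be intact when the last name is copied out. A hedged sketch (back up both tables first; it assumes a content_type_profile row already exists for each nid, and rows without a space in the title are left alone):

        -- 1) copy everything after the first space into the profile table
        UPDATE content_type_profile p
        JOIN node n ON n.nid = p.nid
        SET p.field_lastname_value = SUBSTRING(n.title, LOCATE(' ', n.title) + 1)
        WHERE LOCATE(' ', n.title) > 0;

        -- 2) then truncate the title to its first word
        UPDATE node
        SET title = SUBSTRING_INDEX(title, ' ', 1)
        WHERE LOCATE(' ', title) > 0;

    SUBSTRING_INDEX(title, ' ', 1) returns everything before the first space, so "Mary Jane Smith" splits into "Mary" / "Jane Smith", which is exactly the naive behavior described.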


  • Entity diagram with tables that have foreign keys pointing to a non-PK column does not show the relation

    - by Jason Coyne
    I have two tables, parent and child. If I make a foreign key on child that points to the primary key of parent and then make an entity diagram, the relationship is shown correctly. If I make the foreign key point to a different column, the relationship is not shown. I have tried adding indexes to the column, but it has no effect. The database is SQLite, but I am not sure whether that matters, since it's all hidden behind ADO.NET. How do I get the relationship to show up correctly?
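
    One thing worth checking: a foreign key may only target a column that is guaranteed unique, so a plain index is not enough, and diagram tools typically only pick up the relationship when the referenced column carries a UNIQUE constraint. A hedged SQLite sketch with hypothetical column names:

        CREATE TABLE parent (
            id   INTEGER PRIMARY KEY,
            code TEXT NOT NULL UNIQUE   -- UNIQUE, not just indexed, makes this a legal FK target
        );

        CREATE TABLE child (
            id          INTEGER PRIMARY KEY,
            parent_code TEXT REFERENCES parent(code)
        );

    SQLite itself requires the parent key of a foreign key to be a PRIMARY KEY or UNIQUE column, so if the constraint is missing, the FK may be silently ignored and nothing appears on the diagram.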


  • How to select all the data from many tables?

    - by Syom
    How do I select all the data from many tables? I tried

        SELECT * FROM `table1`, `table2`

    but the result is incomprehensible to me: it returns only some rows from table1 and, three times over, all the data from table2. I've read one similar question here but don't understand the answer, so could you help me? Thanks in advance. Update: when I try

        (SELECT * FROM `videos`) UNION (SELECT * FROM `users`)

    it returns: #1222 - The used SELECT statements have a different number of columns
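
    The comma join is a cross join (every table1 row paired with every table2 row), which explains the repeated data, and a UNION requires each SELECT to produce the same number of columns, which explains the #1222 error. Listing matching column sets explicitly fixes it. A hedged sketch with assumed column names:

        -- pick the same number (and compatible types) of columns from each table
        SELECT id, title AS name FROM `videos`
        UNION ALL
        SELECT id, username      FROM `users`;

    UNION ALL keeps every row; plain UNION would additionally remove duplicates across the two tables.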



  • How can I query a table that got split into 2 smaller tables? Union? View?

    - by danfromisrael
    Hello friends, I have a very big table (nearly 2,000,000 records) that got split into 2 smaller tables: one table contains only records from the last week, and the other contains all the rest (which is a lot...). Now I have some stored procedures / functions that used to query the big table before it got split. I still need them to query the union of both tables; however, it seems that creating a view which uses a union between the two tables takes forever... This is my view:

        CREATE VIEW `united_tables_view` AS
        SELECT * FROM table1
        UNION
        SELECT * FROM table2;

    Then I'd like to switch every stored procedure from selecting from 'oldBigTable' to selecting from 'united_tables_view'. I've tried adding indexes to make it faster, but nothing helps... Any ideas? PS: the view and the union are my idea, but any other creative idea would be perfect! Bring it on! Thanks!
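
    Plain UNION is a likely culprit: it implicitly de-duplicates the combined result, which on ~2,000,000 rows means a large sort on every query. Since the split tables partition the data (each row lives in exactly one of them), UNION ALL returns the same rows without that work:

        CREATE VIEW `united_tables_view` AS
        SELECT * FROM table1
        UNION ALL   -- ALL skips the de-duplication pass that plain UNION performs
        SELECT * FROM table2;

    Note that MySQL cannot use the MERGE algorithm for a view containing a union; it materializes the view as a temporary table, so WHERE clauses are not pushed down into table1/table2. If performance still hurts, applying the filters inside each SELECT, or having the procedures query the two tables directly, avoids the materialization.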


  • What is the best practice for relational database tables in MySQL?

    - by George
    Hi, I know there is a lot of info on MySQL out there, but I was not really able to find an answer to this specific and actually simple question. Let's say I have two tables: USERS (with many fields, e.g. name, street, email, etc.) and GROUPS (also with many fields). The relation is (I guess?) n:m, that is, ONE user can be a member of MANY groups and one group can contain many users. What I did is create another table, named USERS_GROUPS_REL. This table has only two fields: us_id (unique key of table USERS) and gr_id (unique key of table GROUPS). In PHP I do a query with a join. Is this "best practice", or is there a better way? Thankful for any hint!
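
    A junction (link) table is indeed the standard way to model a many-to-many relation. Giving it a composite primary key and foreign keys makes duplicate memberships impossible and keeps the references valid; a sketch along the lines described, assuming id columns on the two main tables:

        CREATE TABLE users_groups_rel (
            us_id INT NOT NULL,
            gr_id INT NOT NULL,
            PRIMARY KEY (us_id, gr_id),                  -- a user can join each group only once
            FOREIGN KEY (us_id) REFERENCES `users`(id),
            FOREIGN KEY (gr_id) REFERENCES `groups`(id)
        );

        -- all groups for one user:
        SELECT g.*
        FROM `groups` g
        JOIN users_groups_rel r ON r.gr_id = g.id
        WHERE r.us_id = 42;

    The composite key doubles as an index for lookups by user; an extra index on (gr_id) helps the reverse lookup (all users in a group).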


  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

        "Snapshot Replication can also be useful in BI environments. If you don't need a near real-time copy of the database, you can choose this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that figures as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following: 1) configure snapshot replication for some Adventure Works tables, which was quite simple and straightforward, and 2) create an SSIS package that executes the snapshot replication on demand and waits for its completion, which is something that you can't do with out-of-the-box functionality.

    While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication

    1) Create a publication on the database you want to replicate: connect with SQL Server Management Studio, right-click Replication and choose New > Publication. The New Publication Wizard appears; click Next.
    2) Choose your "source" database and click Next.
    3) Choose Snapshot publication and click Next.
    4) Select the tables and other objects you want to publish: expand Tables and select the tables that are needed in your ETL process.
    5) In the next screen you can add filters on the selected tables, which can be very useful; think about selecting only the last x days of data, for example. It is possible to filter out rows and/or columns. In this example I did not apply any filters.
    6) Schedule the Snapshot Agent to run at a desired time. By doing this, a SQL Agent job is created, which we need to execute from an SSIS package later on.
    7) Next you need to set the security settings for the Snapshot Agent: click the Security Settings button. In this example I ran the agent under the SQL Server Agent service account. This is not recommended as a security best practice. Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!
    8) On the next screen, choose to create the publication at the end of the wizard, give the publication a name (SnapshotTest), and complete the wizard. The publication is created and the articles (tables in this case) are added.

    Now that the publication is created successfully, it's time to create a new subscription for it.

    1) Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions; the New Subscription Wizard appears.
    2) Select the publisher on which you just created your publication, and select the database and publication (SnapshotTest).
    3) You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions), it causes extra processing overhead; if you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
    4) Of course we need a database for the subscription, and fortunately the wizard can create it for you: choose New database, give the database the desired name, set the desired options and click OK. You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
    5) You now need to set the security settings for the Distribution Agent. Again, in this example I ran the agent under the SQL Server Agent service account; read the security best practices here. Click Next.
    6) Make sure you create a synchronization job schedule again; this job is also needed in the SSIS package later on. Initialize the subscription at first synchronization, select the first box to create the subscription when finishing this wizard, and complete the wizard by clicking Finish.

    The subscription is created. In SSMS you see that a new database, the subscriber, has been created. There are no tables or other objects in the database available yet, because the replication jobs have not run yet. Now expand SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication

    We now need an SSIS package that takes care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started, the package needs to wait until it is finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether it started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I've created does the following: it runs the CreateSnapshot job; it checks every 5 seconds, in a For loop, whether the job has completed; when the CreateSnapshot job is completed, it starts the DistributeSnapshot job; and again it waits until the snapshot is delivered before the package finishes successfully. Quite simple, and the package is ready to use as a standalone extract mechanism.
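
    The same start-and-wait pattern can also be expressed in plain T-SQL, which is roughly what the package's tasks do under the hood. A hedged sketch (sp_start_job and the msdb job tables are standard, but the exact completion check can be done in several ways):

        EXEC msdb.dbo.sp_start_job @job_name = 'CreateSnapshot';

        -- poll every 5 seconds until the job has no running session left
        WHILE EXISTS (
            SELECT 1
            FROM msdb.dbo.sysjobactivity ja
            JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
            WHERE j.name = 'CreateSnapshot'
              AND ja.start_execution_date IS NOT NULL
              AND ja.stop_execution_date IS NULL
        )
            WAITFOR DELAY '00:00:05';

        EXEC msdb.dbo.sp_start_job @job_name = 'DistributeSnapshot';
        -- ...then repeat the same wait loop for 'DistributeSnapshot'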
    After executing the package, the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion

    In this example I only replicated 5 tables, and I could have created an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the feature that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp utility (bulk copy), it's also quite fast, so the performance will be quite good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the extract SSIS package I've created, you can fit it perfectly into your usual SSIS ETL process.


  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a very small VPS. The VPS is also running NGINX/PHP-FPM and Magento, all with a limit of 250MB of RAM. This is the output of MySQLTuner:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 1M (Tables: 14)
        [--] Data in InnoDB tables: 29M (Tables: 301)
        [--] Data in MEMORY tables: 1M (Tables: 17)
        [!!] Total fragmented tables: 301

        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M)
        [--] Reads / Writes: 83% / 17%
        [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads)
        [!!] Maximum possible memory usage: 978.2M (404% of installed RAM)
        [OK] Slow queries: 0% (37/1M)
        [OK] Highest usage of available connections: 6% (6/100)
        [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K
        [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads)
        [OK] Query cache efficiency: 83.4% (1M cached / 1M selects)
        [!!] Query cache prunes per day: 48301
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts)
        [OK] Temporary tables created on disk: 13% (27K on disk / 203K total)
        [OK] Thread cache hit rate: 99% (6 created / 33K connections)
        [!!] Table cache hit rate: 0% (32 open / 51K opened)
        [OK] Open file limit used: 1% (20/1K)
        [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
        [!!] InnoDB data size / buffer pool: 29.2M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 64M)
            table_cache (> 32)
            innodb_buffer_pool_size (>= 29M)

    and this is the config:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        #
        # * Basic Settings
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user            = mysql
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address    = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer              = 32M
        max_allowed_packet      = 16M
        thread_stack            = 192K
        thread_cache_size       = 8
        sort_buffer_size        = 4M
        read_buffer_size        = 4M
        myisam_sort_buffer_size = 16M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 100
        table_cache             = 32
        tmp_table_size          = 128M
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        #query_cache_limit      = 1M
        query_cache_type        = 1
        query_cache_size        = 64M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file       = /var/log/mysql/mysql.log
        #general_log            = 1
        log_error               = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries       = /var/log/mysql/mysql-slow.log
        #long_query_time        = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id              = 1
        #log_bin                = /var/log/mysql/mysql-bin.log
        expire_logs_days        = 10
        max_binlog_size         = 100M
        #binlog_do_db           = include_database_name
        #binlog_ignore_db       = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet      = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer              = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    The site contains 1 WordPress site, so lots of MyISAM but mostly static content, as it's not changing all that often (a WordPress cache plugin deals with this), and the Magento site, which consists of a lot of InnoDB tables, some MyISAM and some MEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCache. I'd love a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner output without understanding what I'm changing. Thanks
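
    For a box this small, the tuner's headline number (978.2M possible vs 250MB installed) comes from per-thread buffers multiplied by max_connections, so shrinking both is usually the first move. A hedged sketch of runtime experiments to try before committing anything to my.cnf (values are starting points, not tested tunings):

        SET GLOBAL max_connections = 30;                 -- peak usage was only 6 of 100
        SET GLOBAL query_cache_size = 16 * 1024 * 1024;  -- 64M is oversized for 250MB of RAM
        SET GLOBAL table_open_cache = 256;               -- table_cache hit rate was 0% at 32
        -- innodb_buffer_pool_size (>= 29M, per the tuner) is not settable at runtime
        -- in 5.1: raise it in my.cnf and restart, and shrink key_buffer,
        -- sort_buffer_size and read_buffer_size to pay for it

    Once a combination keeps the "maximum possible memory usage" figure comfortably under the RAM limit, the same values can be written back into the [mysqld] section.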


  • Excel Data Organization: Array Formulas? Tables? Named Range?

    - by Joe Arasin
    I'm trying to make a huge Excel sheet reasonably maintainable, but it's huge in the "hundred-table-db" direction, rather than the "hundred-thousand-row-table" direction. I want to have a baseline data table that looks something like this:

        | Indicator           | Units      | 2010 | 2015 | 2020 | 2025 | Source     |
        | GDP                 | $Gazillion | 300  | 350  | 400  | 450  | BLS        |
        | Population          | Millions   | 350  | 400  | 450  | 500  | Census     |
        | PetMonkeyPopulation | Thousands  | 50   | 60   | 70   | 80   | SimiansRUs |

    And then be able to have another sheet that looks like:

        |                  | 2010 | 2015 | 2020 | 2025 |
        | MonkeysPerCapita | .1   | .2   | .3   | .4   |
        | MonkeysPerDollar | .01  | .01  | .01  | .01  |
        | GDPPerCapita     | 300  | 400  | 450  | 600  |

    Is there some standard way to make this kind of thing maintainable?

