Search Results

Search found 8603 results on 345 pages for 'altering tables'.


  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures, etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a virtual machine, everything is there. I've also had a co-worker connect to the server using the same login, and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day connecting to different servers and databases, and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters, etc. There are several databases on this server, and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see:

        SELECT * FROM INFORMATION_SCHEMA.TABLES
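
    One way to narrow this down is to compare what the engine thinks the login can see against what SSMS shows. A minimal diagnostic sketch (the object name is a placeholder):

        -- Run while connected as the problem login, in the affected database.
        -- Lists the permissions the current login actually has on one object:
        SELECT * FROM fn_my_permissions('dbo.SomeTable', 'OBJECT');

        -- Lists every explicit permission granted to database principals:
        SELECT pr.name, pe.permission_name, pe.state_desc
        FROM sys.database_principals pr
        JOIN sys.database_permissions pe
          ON pe.grantee_principal_id = pr.principal_id;

    If these return the expected rows while Object Explorer stays empty, the problem is on the SSMS side (for example a stale local cache or a filter) rather than in the database's security setup.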

  • Rails Fixtures Don't Seem to Support Transitive Associations

    - by Rick Wayne
    So I've got 3 models in my app, connected by HABTM relations:

        class Member < ActiveRecord::Base
          has_and_belongs_to_many :groups
        end

        class Group < ActiveRecord::Base
          has_and_belongs_to_many :software_instances
        end

        class SoftwareInstance < ActiveRecord::Base
        end

    And I have the appropriate join tables, and use foxy fixtures to load them:

        # members.yml
        rick:
          name: Rick
          groups: cals

        # groups.yml
        cals:
          name: CALS
          software_instances: delphi

        # software_instances.yml
        delphi:
          name: No, this is not a joke, someone in our group is still maintaining Delphi apps...

    And I do a rake db:fixtures:load and examine the tables. Yep, the join tables groups_members and groups_software_instances have appropriate IDs in them, big numbers generated by hashing the fixture names. Yay! And if I run script/console, I can look at rick's groups and see that cals is in there, or cals's software_instances and delphi shows up. (Likewise delphi knows that it's in the group cals.) Yay!

    But if I look at rick.groups[0].software_instances... nada. []. And, oddly enough, rick.groups[0].id is something like 30 or 10, not 398398439. I've tried swapping around where the association is defined in the fixtures, e.g.

        # members.yml
        rick:
          name: rick

        # groups.yml
        cals:
          name: CALS
          members: rick

    No error, but same result. (Later: Confirmed, same behavior on MySQL. In fact I get the same thing if I take the foxification out and use ERB to directly create the join tables, so:

        # license_groups_software_instances
        cals_delphi:
          group_id: <%= Fixtures.identify 'cals' %>
          software_instance_id: <%= Fixtures.identify 'delphi' %>

    ) Before I chuck it all, reject foxy fixtures and all who sail in her, and just hand-code IDs... anybody got a suggestion as to why this is happening? I'm currently working in Rails 2.3.2 with sqlite3; this occurs on both OS X and Ubuntu (and I think with MySQL too). TIA, my compadres.
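
    For the record, the hand-coded-ID fallback the poster mentions looks something like this (a minimal sketch; the ids and the join-fixture label are made up). With explicit ids on every fixture, all three layers of the HABTM chain agree on the keys:

        # members.yml
        rick:
          id: 1
          name: Rick

        # groups.yml
        cals:
          id: 1
          name: CALS

        # groups_members.yml
        rick_cals:
          member_id: 1
          group_id: 1

    Explicit ids switch foxy fixtures off for those files, which sidesteps whatever is mismatching the hashed keys, at the cost of maintaining the join rows by hand.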

  • OleDb database to DataSet and back in c#?

    - by Troy
    I'm writing a program that lets a user:

    - Connect to an (arbitrary) database
    - View all of the tables in that database in separate DataGridViews
    - Edit them in the program, generate random data, and see the results
    - Choose to commit those changes or revert

    So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one DataSet, let the user edit it, and then save the DataSet back to the database. The problem is that the only way I can find to load the database tables is this:

        set = new DataSet();
        DataTable schema = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new string[] { null, null, null, "TABLE" });
        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            DataTable dTable = new DataTable();
            new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
            dTable.TableName = tableName;
            set.Tables.Add(dTable);
        }

    while it seems like there should be a simpler way, given that DataSets appear to be designed for exactly this purpose. The real problem, though, is when I try saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that kind of negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a DataSet, or maybe give me a better idea of how to do what I'm trying to do. I could always parse the commands together myself, but that doesn't seem like the best solution.
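
    A sketch of the usual way around hand-writing the INSERT/UPDATE/DELETE commands: OleDbCommandBuilder can generate them from the adapter's SELECT, as long as each table has a primary key (variable names here match the snippet above, the rest are assumptions):

        // One adapter per table; the builder derives the change commands from the SELECT.
        var adapter = new OleDbDataAdapter("SELECT * FROM " + tableName, connection);
        var builder = new OleDbCommandBuilder(adapter);

        var table = new DataTable(tableName);
        adapter.Fill(table);
        set.Tables.Add(table);

        // ... after the user edits the grid ...
        adapter.Update(set, tableName);   // builder supplies INSERT/UPDATE/DELETE

    The catch is that you need to keep (or rebuild) one adapter per table for the save, and the builder fails on tables without a primary key.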

  • PostgreSQL: Auto-partition a table

    - by Adam Matan
    Hi, I have a huge database which holds pairs of numbers (A,B), each ranging from 0 to 10,000 and stored as floats, e.g. (1, 9984.4), (2143.44, 124.243), (0.55, 0), ... Since the PostgreSQL table which stores these pairs grew quite large, I have decided to partition it into inheriting sub-tables. I intend to create 100 such tables, each storing a range of 1000x1000. The problem is that these numbers tend to come in large chunks of nearby numbers, which means that in the future, some tables will be nearly empty and some will hold a very large portion of the database. Unfortunately, the distribution of future pairs is yet unknown. I am looking for a way to automatically repartition my table: if a certain subtable holds more than a specific number of pairs, it will be automatically partitioned into four sub-sub tables, and so on. My questions are:

    - Is recursive partitioning and inheritance possible in PostgreSQL 8.3? Will indexes and query plans understand it?
    - What's the best way to split a subtable once it has grown too large?

    I should point out that this isn't a live database, so a downtime of a few hours every week is totally acceptable. Thanks in advance, Adam
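
    For reference, one level of the inheritance scheme in 8.3 looks roughly like this (a sketch with assumed names; the parent table is called pairs with columns a and b):

        -- Child covering the square [0,1000) x [0,1000):
        CREATE TABLE pairs_0_0 (
            CHECK (a >= 0 AND a < 1000 AND b >= 0 AND b < 1000)
        ) INHERITS (pairs);
        CREATE INDEX pairs_0_0_idx ON pairs_0_0 (a, b);

        -- constraint_exclusion must be on for the planner to skip children:
        SET constraint_exclusion = on;

    Splitting an overgrown child is then a matter of creating four narrower children, INSERT ... SELECT-ing the rows over, and dropping the old child. Since the CHECK constraints are what the planner uses, nothing stops a child's children from inheriting recursively the same way; the weekly downtime window is where the data shuffle would run.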

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with has been something along the lines of a base set of entity tables in 3rd normal form; these will contain the facts that are common among ALL products. Then, I'd like to build an Entity-Attribute-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type. These views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are:

    - Performance: Entity-Attribute-Value schemas are flexible, but typically perform poorly. Should I be concerned?
    - More performance: Denormalizing using materialized views may have some risks; I'm not positive on this yet.
    - Complexity: While this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.

    For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?
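
    A bare-bones version of the EAV layer being described, as a sketch (all names assumed):

        CREATE TABLE product (            -- base entity: common facts only
            product_id   INT PRIMARY KEY,
            name         VARCHAR(200) NOT NULL,
            product_type VARCHAR(50)  NOT NULL
        );

        CREATE TABLE product_attribute (  -- one row per extended attribute
            product_id INT          NOT NULL REFERENCES product (product_id),
            attribute  VARCHAR(100) NOT NULL,
            value      VARCHAR(400) NOT NULL,
            PRIMARY KEY (product_id, attribute)
        );

    The per-type materialized view then pivots product_attribute back into columns, which is where both the flexibility and the performance worry live: the pivot costs a self-join per attribute unless the DBMS can maintain the view incrementally.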

  • choosing row (from group) with max value in a SQL Server Database

    - by sriehl
    I have a large database and am putting together a report of the data. I have aggregated and summed the data from many tables to get two tables that look like the following:

        Table 1:              Table 2:
        id | code | value     id | code | value
        13 | AA   | 0.5       13 | AC   | 2.0
        13 | AB   | 1.0       14 | AB   | 1.5
        14 | AA   | 2.0       13 | AA   | 0.5
        15 | AB   | 0.5       15 | AB   | 3.0
        15 | AD   | 1.5       15 | AA   | 1.0

    I need to get a list of ids, with the code (summed from both tables) that has the largest value:

        13 | AC
        14 | AA
        15 | AB

    There are 4-6 thousand records, and it is not possible to change the original tables. I'm not too worried about performance, as I only need to run it a few times a year. Edit: Let me see if I can explain a bit more clearly. Imagine the id is the customer id, the code is who they ordered from, and the value is how much they spent there. I need a list of all the customer ids and the store each customer spent the most money at (and if they spent the same at two different stores, put a value such as 'ZZ' in for the store name).
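
    On SQL Server 2005 or later, one way to express this is a UNION ALL of the two tables, a GROUP BY, and a ranking function. A sketch with assumed table names t1 and t2:

        WITH combined AS (
            SELECT id, code, SUM(value) AS total
            FROM (SELECT id, code, value FROM t1
                  UNION ALL
                  SELECT id, code, value FROM t2) AS u
            GROUP BY id, code
        ),
        ranked AS (
            SELECT id, code, total,
                   ROW_NUMBER() OVER (PARTITION BY id ORDER BY total DESC) AS rn,
                   COUNT(*)     OVER (PARTITION BY id, total) AS at_this_total
            FROM combined
        )
        SELECT id,
               CASE WHEN at_this_total > 1 THEN 'ZZ' ELSE code END AS code
        FROM ranked
        WHERE rn = 1;

    The COUNT(*) window handles the tie rule from the edit: if two stores share the top total, the row that wins the ROW_NUMBER coin flip is replaced by 'ZZ'.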

  • NoSuchMethodError in Java using XStream

    - by Brad Germain
    I'm trying to save a database into a file using XStream and then open it again later using XStream and deserialize it back into the objects it was in previously. The database consists of an ArrayList of tables, which consists of an ArrayList of a data class, where the data class contains an ArrayList of objects. I'm basically trying to create an SQL compiler. I'm currently getting a java.lang.NoSuchMethodError because of the last line in the load method. Here's what I have:

    Save method:

        public void save(Database DB) {
            File file = new File(DB.getName().toUpperCase() + ".xml");
            // Test sample
            DB.createTable("TBL1(character(a));");
            DB.tables.getTable("TBL1").rows.add(new DataList());
            DB.tables.getTable("TBL1").rows.getRow(0).add(10);
            XStream xstream = new XStream();
            // Database
            xstream.alias("Database", Database.class);
            // Tables
            xstream.alias("Table", Table.class);
            // Rows
            xstream.alias("Row", DataList.class);
            // Data
            // xstream.alias("Data", Object.class);
            // String xml = xstream.toXML(DB);
            Writer writer = null;
            try {
                writer = new FileWriter(file);
                writer.write(xstream.toXML(DB));
                writer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    Load method:

        public void Load(String dbName) {
            XStream xstream = new XStream();
            BufferedReader br;
            StringBuffer buff = null;
            try {
                br = new BufferedReader(new FileReader(dbName + ".xml"));
                buff = new StringBuffer();
                String line;
                while ((line = br.readLine()) != null) {
                    buff.append(line);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            database = (Database) xstream.fromXML(buff.toString());
        }
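
    Two things worth noting as a sketch, not a confirmed diagnosis: a NoSuchMethodError thrown inside fromXML usually means the XStream jar and its XML parser jar (e.g. xpp3) are mismatched versions on the classpath, and the loading XStream instance should register the same aliases as the saving one, or the element names won't map back to classes. A version of the load method with both points folded in:

        public void load(String dbName) {
            XStream xstream = new XStream();
            // Must mirror the aliases used in save(), or the "Database"/"Table"/"Row"
            // elements in the file have no class to map to.
            xstream.alias("Database", Database.class);
            xstream.alias("Table", Table.class);
            xstream.alias("Row", DataList.class);
            try {
                Reader reader = new FileReader(dbName + ".xml");
                database = (Database) xstream.fromXML(reader); // fromXML accepts a Reader directly
                reader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    Passing the Reader straight to fromXML also drops the manual line-by-line buffering.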

  • How can I stop an auto-generated Linq to SQL class from loading ALL data?

    - by Gary McGill
    DUPLICATE of http://stackoverflow.com/questions/2433422/how-can-i-stop-an-auto-generated-linq-to-sql-class-from-loading-all-data (post answers there!)

    I have an ASP.NET MVC project, much like the NerdDinner tutorial example. (I'm using MVC 2, but followed the NerdDinner tutorial in order to create it.) As per the instructions in part 3 of the tutorial, I've created a Linq-to-SQL model of my database by creating a "Linq to SQL Classes" (.dbml) surface and dropping my database tables onto it. The designer has automatically added relationships between the generated classes based on my database tables. Let's say that my classes are as per the NerdDinner example, so I have Dinner and RSVP tables, where each Dinner record is associated with many RSVP records; hence, in the generated classes, the Dinner object has an RSVPs property which is a list of RSVP objects. My problem is this: it appears (and I'd be gladly proved wrong on this) that as soon as I access a Dinner object, it's loading all of the corresponding RSVP objects, even if I don't use the RSVPs member.

    First question: is this really the default behavior for the generated classes? In my particular situation, the object graph contains many more tables (which have an order of magnitude more records), and so this is disastrous behaviour; I'd be loading tons of data when all I want to do is show the details of a single parent record.

    Second question: are there any properties exposed through the designer UI that would let me modify this behavior? (I can't find any.)

    Third question: I've seen a description of how to control the loading of related records in a DataContext by using a DataShape object associated with the DataContext. Is that what I'm meant to do, and if so, are there any tutorials like the NerdDinner one that would show not only how to do it, but also suggest a 'pattern' for normal use?
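
    For what it's worth, the class that shipped under the name the poster remembers as "DataShape" is DataLoadOptions in System.Data.Linq. A sketch of both directions of control (the Dinner/RSVP types are from NerdDinner; the context name is an assumption):

        var db = new NerdDinnerDataContext();

        // Relationships are lazy by default; to load RSVPs eagerly with each Dinner:
        var options = new DataLoadOptions();
        options.LoadWith<Dinner>(d => d.RSVPs);
        db.LoadOptions = options;

        // Or, to rule out any lazy loads sneaking in later (e.g. in a view):
        db.DeferredLoadingEnabled = false;

    If RSVPs really is being pulled without LoadWith, the usual suspects are serialization, a debugger watch window, or view code touching the property, since plain LINQ to SQL defers relationship loads until first access.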

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product info that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets utilized in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods, to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.
    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.
    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table; however, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.
    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:

    a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality with its indexed views that would help us? Is there now a way to do dirty reads with an indexed view?
    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!
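
    One angle not on the list above, offered as a sketch rather than a tested fix: row versioning can take readers out of the lock graph entirely, which often breaks reader/writer deadlock cycles without touching the view (MyDb is a placeholder name):

        -- Database-level switch (SQL Server 2005+); needs the database briefly quiesced:
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

        -- Or opt in per session with full snapshot isolation:
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        -- then, in the reading sessions:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    The trade-off is tempdb version-store overhead, which is worth measuring under the same load test the MVP set up.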

  • WPF Toolkit: Nullable object must have a value

    - by Via Lactea
    Hi all, I am trying to create some line charts from a DataSet and getting an error (WPF + WPF Toolkit + C#): "Nullable object must have a value". Here is the code that I use to add some data points to the chart:

        ObservableCollection<VelChartPoint> points = new ObservableCollection<VelChartPoint>();
        foreach (DataRow dr in dc.Tables[0].Rows)
        {
            points.Add(new VelChartPoint()
            {
                Label = dr[0].ToString(),
                Value = double.Parse(dr[1].ToString())
            });
        }

    Here is the VelChartPoint class:

        public class VelChartPoint : VelObject, INotifyPropertyChanged
        {
            public DateTime Date { get; set; }
            public string Label { get; set; }

            private double _Value;
            public double Value
            {
                get { return _Value; }
                set
                {
                    _Value = value;
                    var handler = PropertyChanged;
                    if (null != handler)
                    {
                        handler.Invoke(this, new PropertyChangedEventArgs("Value"));
                    }
                }
            }

            public string FieldName { get; set; }
            public event PropertyChangedEventHandler PropertyChanged;

            public VelChartPoint() { }
        }

    So the problem occurs in this part of the code:

        points.Add(new VelChartPoint
        {
            Name = dc.Tables[0].Rows[0][0].ToString(),
            Value = double.Parse(dc.Tables[0].Rows[0][1].ToString())
        });

    I've made some tests; here are some results I've found. This part of the code doesn't work for me:

        string[] labels = new string[] { "label1", "label2", "label3" };
        foreach (string label in labels)
        {
            points.Add(new VelChartPoint { Name = label, Value = 500.0 });
        }

    But this one works fine:

        points.Add(new VelChartPoint
        {
            Name = "LabelText",
            Value = double.Parse(dc.Tables[0].Rows[0][1].ToString())
        });

    Please help me to solve this error.
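
    As a hedged observation: in the WPF Toolkit charts this exception typically surfaces when a value the chart needs ends up null, and a DBNull or unparsable cell in dc.Tables[0] would do it. A defensive version of the first loop, as a sketch that simply skips bad rows:

        foreach (DataRow dr in dc.Tables[0].Rows)
        {
            double value;
            // Guard against DBNull cells and non-numeric strings before parsing.
            if (dr[1] != DBNull.Value && double.TryParse(dr[1].ToString(), out value))
            {
                points.Add(new VelChartPoint
                {
                    Label = dr[0].ToString(),
                    Value = value
                });
            }
        }

    If the independent axis is categorical, a null or duplicate label can trigger the same error, which would also fit the label1/label2/label3 test failing while a single point succeeds.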

  • SQL GUID Vs Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and it was a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row to quickly map the data in my head, because it's easy to store a number (2, 4, 45, etc.) as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related; sadly, there is no database diagram :( A lot of people say that GUIDs are better because you can define the unique identifier in your C# code (for example, using Guid.NewGuid()) without requiring SQL Server to do it; this also allows you to know provisionally what the ID will be... but I've seen that it is possible to still retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice by some professional, there must be some good reasons why it's implemented.
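
    One middle ground worth knowing about, sketched with placeholder names: SQL Server 2005+ offers NEWSEQUENTIALID(), which keeps the GUID's merge-friendly uniqueness but generates values in increasing order, avoiding the page-split cost that makes random GUIDs slow as clustered keys:

        CREATE TABLE Product (
            ProductID uniqueidentifier NOT NULL
                CONSTRAINT DF_Product_ID DEFAULT NEWSEQUENTIALID(),
            Name      nvarchar(100)    NOT NULL,
            CONSTRAINT PK_Product PRIMARY KEY CLUSTERED (ProductID)
        );

    The classic reasons GUID keys exist at all: client-side key generation (as the post notes), merging rows from multiple databases or replicas without collisions, and not leaking row counts through the key. The 30% figure is plausible mostly because a GUID is 16 bytes against an int's 4, which fattens every index that includes the key.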

  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database that has two tables related by PK/FK. Unfortunately, the tables have allowed for duplicate/redundant records, which has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as reference. There are two tables, a Student table and a TestScore table, where StudentID is the PK/FK. The Student table contains duplicate records for students John, Sally, Tommy, and Suzy. In other words, the Johns with StudentIDs 1 and 5 are the same person, the Sallys with 2 and 6 are the same person, and so on. The TestScore table relates test scores with a student. Ignoring how/why the Student table allowed duplicates, the goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentIDs that have been disabled with the corresponding enabled StudentID. So, all StudentIDs = 1 (John) will be updated to 5, all StudentIDs = 2 (Sally) will be updated to 6, and so on, leaving no reference to the disabled StudentIDs 1-4 in the resulting TestScore table. Can you think of a query (compatible with MS Access's JET engine) that can accomplish this goal? Or maybe you can offer some tips/perspectives that will point me in the right direction. Thanks.
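
    A sketch of one JET-compatible approach, assuming the duplicates can be matched up by name and that the higher StudentID is the enabled one (both assumptions about the example data, as is the StudentName column):

        UPDATE TestScore
        INNER JOIN (Student AS Old
                    INNER JOIN Student AS New
                    ON Old.StudentName = New.StudentName
                    AND New.StudentID > Old.StudentID)
        ON TestScore.StudentID = Old.StudentID
        SET TestScore.StudentID = New.StudentID;

    Access allows an UPDATE across a join like this only when JET considers the joined query updatable, which self-joins sometimes defeat. If that or the name matching fails, the safer route is a small hand-filled mapping table (OldID, NewID) joined to TestScore the same way.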

  • Avoiding Duplicate Data in DB (for use with Rails)

    - by ants
    I have five tables that I am trying to get to work nicely together but may need some help. I have three main tables: accounts, members, and roles, with two join tables, account_members and account_member_roles. The accounts and members tables are joined by the account_members (fk account_id and member_id) table. The other two tables are the problem (roles and account_member_roles). A member of an account can have more than one role, and I have the account_member_roles (fk account_member_id and role_id) table joining the account_members join table and the roles table. That seems logical, but can you have a relationship with a join table? What I'd like to be able to do, when creating an account for instance, is have @account.save include the roles and update the account_member_roles table neatly... but through the account_members join table. I've tried

        accepts_nested_attributes_for :members, :account_member_roles

    in account.rb, but I get

        ActiveRecord::HasManyThroughCantAssociateThroughHasManyReflection
        (Cannot modify association 'Account#account_member_roles' because the source
        reflection class 'AccountMemberRole' is associated to 'AccountMember' via :has_many.)

    upon trying to save a record. Any advice on how I should approach this? CIA -ants
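
    A sketch of the model layout that usually makes this work, nesting one level at a time instead of jumping across the join (assuming Rails 2.3-era syntax and the table names above):

        class Account < ActiveRecord::Base
          has_many :account_members
          has_many :members, :through => :account_members
          accepts_nested_attributes_for :account_members
        end

        class AccountMember < ActiveRecord::Base
          belongs_to :account
          belongs_to :member
          has_many :account_member_roles
          has_many :roles, :through => :account_member_roles
          accepts_nested_attributes_for :account_member_roles
        end

    The error in the post is Rails refusing to write through two has_many hops at once; routing the nested attributes Account -> AccountMember -> AccountMemberRole, with each hop owning the next, sidesteps it.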

  • Best practices for managing updating a database with a complex set of changes

    - by Sarge
    I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki, but is similar in concept, because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables, and the update needs to be automatically checked before affecting the public tables. I'm working on the design, and I'm wondering if there are any best practices that might help with some particular issues:

    - I want to provide undo capability.
    - I want to show the user the combined result of all their changes.
    - When the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else.

    My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready, they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving to a table. Are there any techniques I might have missed, or useful papers or patterns? Thanks in advance!
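
    For the third point, the standard trick is optimistic concurrency: remember a version for each public row when it is copied into the working area, and make the publish step a conditional update. A sketch in generic SQL (all names assumed):

        -- public_item carries a version that bumps on every change
        UPDATE public_item
        SET    value   = @new_value,
               version = version + 1
        WHERE  item_id = @item_id
          AND  version = @version_copied;   -- the version seen at copy time

    If the update touches zero rows, someone else changed the row since the user started, and the application can surface a conflict instead of silently overwriting. Rolling the whole publish into one transaction also gives a cheap undo for the final step; the Command-pattern log then only has to cover undoing already-published batches.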

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to Heroku's DB? Do I need to modify anything in the file database.yml to make it happen? I ran the command heroku db:push and I am getting the error:

        Sending data
        2 tables, 3 records
        !!! Caught Server Exception | ETA: --:--:--
        Taps Server Error: PGError ERROR: duplicate key value violates
        unique constraint "unique_schema_migrations"

    I have 2 tables: one I created for my app, and the other is schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I created, and it's showing 0. Any ideas what I might be missing or what I am doing wrong?

    EDIT: I figured out the above; Heroku's DB already has schema_migrations from the moment I ran migrate. New question: does anyone know how I can exclude data from a specific table from being pushed to the Heroku DB? The table to exclude in this case will be schema_migrations.

    Not-so-good solution: I googled around, and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. In this way, data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution, but it will do for the time being. A better solution would be to use an existing Rails command which can reset a specific table in a database. I don't think Rake can do that.
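
    Since the collision is just the pre-seeded rows on the Heroku side, one hedged workaround (untested here) is to clear the remote schema_migrations table right before pushing, from a remote Rails console:

        # heroku console   (then, at the remote console prompt:)
        ActiveRecord::Base.connection.execute("DELETE FROM schema_migrations")

    The next heroku db:push can then copy the local rows without violating the unique_schema_migrations constraint. This only makes sense because schema_migrations is pure bookkeeping that the push is about to overwrite with identical data.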

  • Database relationships using phpmyAdmin (composite keys)

    - by Cool Hand Luke UK
    Hi, I hope this question is not me being dense. I am using phpMyAdmin to create a database. I have the following four tables (don't worry about the fact that place and price are optional; they just are):

    - Person (mandatory)
    - Item (mandatory)
    - Place (optional)
    - Price (optional)

    Item is the main table. It will always have a person linked. I know you do joins in MySQL for the tables. If I want to link the tables together, I could use composite keys (using the ids from each table), but is this the most correct way to link the tables? It also means item will have 5 ids including its own. This all causes null values (apparently a big no-no, which I can understand), because if place and price are optional and not used on one entry in the items table, I will have a null value there. Please help! Thanks in advance. I hope this makes sense.
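
    For comparison, a sketch of the straightforward layout with nullable foreign keys (column names assumed):

        CREATE TABLE item (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            person_id INT NOT NULL,
            place_id  INT NULL,          -- NULL simply means "no place recorded"
            price_id  INT NULL,
            FOREIGN KEY (person_id) REFERENCES person (id),
            FOREIGN KEY (place_id)  REFERENCES place (id),
            FOREIGN KEY (price_id)  REFERENCES price (id)
        ) ENGINE=InnoDB;

    A nullable FK is a normal way to model an optional link, and it is not the same thing as a composite key (a composite key is several columns acting together as one key). If the nulls still feel wrong, the alternative is one small join table per optional relationship (item_place, item_price), which trades the nulls for extra joins.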

  • MS SQL : Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

        SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

        SET NOCOUNT ON
        DECLARE @table_count INT
        DECLARE @db_cursor VARCHAR(100)
        DECLARE database_cursor CURSOR FOR
            SELECT name FROM sys.databases WHERE name <> N'master'
        OPEN database_cursor
        FETCH NEXT FROM database_cursor INTO @db_cursor
        WHILE @@Fetch_status = 0
        BEGIN
            PRINT @db_cursor
            SET @table_count = 0

            DECLARE @table_cursor VARCHAR(100)
            DECLARE tbl_cursor CURSOR FOR
                SELECT table_name FROM information_schema.tables
                WHERE table_type = 'base table'
            OPEN tbl_cursor
            FETCH NEXT FROM tbl_cursor INTO @table_cursor
            WHILE @@Fetch_status = 0
            BEGIN
                DECLARE @table_cmd NVARCHAR(255)
                SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor
                               + N') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
                --PRINT @table_cmd --debug
                EXEC sp_executesql @table_cmd
                SET @table_count = @table_count + 1
                FETCH NEXT FROM tbl_cursor INTO @table_cursor
            END
            CLOSE tbl_cursor
            DEALLOCATE tbl_cursor

            PRINT @db_cursor + N' Total Tables : ' + CAST(@table_count AS varchar(2))
            PRINT N'' -- print another blank line
            SET @table_count = 0
            FETCH NEXT FROM database_cursor INTO @db_cursor
        END
        CLOSE database_cursor
        DEALLOCATE database_cursor
        SET NOCOUNT OFF
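
    The behavior follows from information_schema.tables always resolving in the database the batch is currently connected to; the inner SELECT never "enters" @db_cursor. A sketch of one fix, building the inner query dynamically with a database-qualified name and spooling it where a cursor can read it:

        -- Qualify information_schema with the database being inspected:
        DECLARE @sql NVARCHAR(400);
        SET @sql = N'SELECT table_name FROM ' + QUOTENAME(@db_cursor)
                 + N'.information_schema.tables WHERE table_type = ''BASE TABLE''';

        -- Feed the results into a temp table the tbl_cursor can then iterate:
        CREATE TABLE #tables (table_name sysname);
        INSERT INTO #tables (table_name) EXEC sp_executesql @sql;

    The same qualification is needed inside @table_cmd (and the object names there should also go through QUOTENAME), otherwise the emptiness check runs against the current database too.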

  • Convert HTML + CSS to PDF with PHP?

    - by cletus
    OK, I'm now banging my head against a brick wall with this one. I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS to style it, and renders fine as HTML. I'm now after a way of converting it to PDF. I have tried:

    - DOMPDF: it had huge problems with tables. I factored out my large nested tables and it helped (before that, it was just consuming up to 128M of memory and then dying; that's my limit on memory in php.ini), but it makes a complete mess of tables and doesn't seem to get images. The tables were just basic stuff with some border styles to add some lines at various points.
    - HTML2PDF and HTML2PS: I actually had better luck with these. They rendered some of the images (all the images are Google Chart URLs), and the table formatting was much better, but they seemed to have some complexity problem I haven't figured out yet and kept dying with unknown node_type() errors. Not sure where to go from here.
    - Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), so it's useless to me.

    I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job, but I need something that at a minimum runs on Linux and ideally runs on demand via PHP on the webserver. I really can't believe I'm this stuck. Am I missing something?
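
    One option in the same spirit as the Windows app, sketched here with made-up paths: wkhtmltopdf wraps a real browser engine (WebKit), so CSS support matches what a browser renders, and it runs on Linux when shelled out to from PHP:

        <?php
        // Hypothetical paths; the wkhtmltopdf binary must be installed on the server.
        $source = escapeshellarg('/var/www/report.html');
        $target = escapeshellarg('/tmp/report.pdf');
        shell_exec("wkhtmltopdf $source $target");
        ?>

    Since it is an external binary rather than a PHP library, it sidesteps the PHP memory limit that killed DOMPDF, at the cost of needing shell access on the host.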

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G memory and a RAID10 config. I have some experience with MySQL tuning, but that usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is adding an integer column to all tables and making it the primary key, instead of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?
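
    A sketch of the session-level switches commonly flipped for exactly this kind of bulk copy (table and column names are assumptions; worth rehearsing on a copy of the data first):

        -- In the session doing the copy:
        SET unique_checks = 0;        -- skip per-row unique verification
        SET foreign_key_checks = 0;   -- skip FK validation during the load
        SET sql_log_bin = 0;          -- only if replication allows skipping the binlog

        INSERT INTO orders_new (id, old_key, payload)
        SELECT NULL, old_key, payload FROM orders;   -- NULL lets AUTO_INCREMENT fill id

        SET unique_checks = 1;
        SET foreign_key_checks = 1;

    Beyond the session, the settings that matter most for a one-off like this are a large innodb_buffer_pool_size and a larger innodb_log_file_size (the latter needs a clean shutdown to change in 5.0), since the copy is essentially one long write burst through the log.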

  • question about MySQL database migration

    - by WilliamLou
    Hi there. I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc. Now, the only method I can think of is to use a PHP or Python script (the two scripting languages I know): connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Is there any tool, or a better method, than writing a script myself? Here, I don't need to worry about multithreaded writing problems, etc.; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
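
    The usual tool for this is mysqldump, since additive changes like a new defaulted column don't require touching the data at all. A sketch (host, user, and database names are placeholders):

        # 1. Copy the whole database across:
        mysqldump -u olduser -p olddb | mysql -h newhost -u newuser -p newdb

        # 2. Then apply the schema changes on the new server;
        #    old rows pick up the default automatically:
        mysql -h newhost -u newuser -p newdb \
              -e "ALTER TABLE A ADD COLUMN extra INT NOT NULL DEFAULT 0"

    Running the ALTERs after the import means no row-by-row handling in any script; MySQL backfills the default itself.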

  • De-normalization alternative to specific MYSQL problem?

    - by Booker
    I am facing quite a specific optimization problem. I currently have 4 normalized tables of data. Every second, possibly thousands of users will pull down up-to-date info from these tables using AJAX. The thing is that I can predict relatively easily which subset of data they need: the most recent 100 or so entries in those 4 normalized tables. I have been researching de-normalization, but feel that perhaps there is an easier solution. I was thinking that I could run one SQL query every second to condense the needed info, store it in a cached temp table, and then have all of the user queries just draw from this. This would allow the complex join of 4 tables to be run only once, and from there the users would just need to do a simple lookup from the cached table. I really don't know if this is feasible. Comments on this, or any other suggestions, would be much appreciated. Thanks!
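
    A sketch of the refresh cycle being described, in plain MySQL (all table and column names assumed). Rebuilding into a shadow table and swapping avoids readers catching the cache half-filled:

        -- Build the fresh snapshot off to the side:
        CREATE TABLE recent_cache_new LIKE recent_cache;
        INSERT INTO recent_cache_new
        SELECT o.id, o.created_at, c.name, p.title, s.status
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        JOIN products  p ON p.id = o.product_id
        JOIN shipments s ON s.order_id = o.id
        ORDER BY o.created_at DESC
        LIMIT 100;

        -- Atomic swap; readers only ever see a complete cache:
        RENAME TABLE recent_cache TO recent_cache_old,
                     recent_cache_new TO recent_cache;
        DROP TABLE recent_cache_old;

    RENAME TABLE is atomic across the pair, so the per-second job never leaves a window where the cache table is missing or partial.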

  • How can I get back my privilege to create a new database in MySQL?

    - by Steven
    I cannot use MySQL normally. MySQL is on my local computer (Windows XP). Currently I have added skip-grant-tables to my.ini so I can use MySQL at all, but I have no privilege to create a new database. My problem is tough; although I asked related questions on SO, no answer resolved it, and I almost gave up. So I'll lower my expectations. I am developing a website, so I need to create databases and tables and operate on those tables; you don't have to consider security. Is there a simple solution that can give me the privilege to create a new database, maybe by adding some command in my.ini or something? You won't need to completely resolve my problem. Maybe after the development, I will upload the database and tables to another server (the current database server is my personal computer), so I can uninstall and reinstall MySQL. The root of the problem is that I lack privileges.
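
    The classic recovery recipe for exactly this state, sketched for a MySQL 5.x-era server ('newpassword' is a placeholder): while skip-grant-tables is active, the GRANT machinery is off, but the grant tables themselves can still be edited directly:

        -- Connect with the mysql client while skip-grant-tables is active:
        UPDATE mysql.user
        SET    Password = PASSWORD('newpassword')
        WHERE  User = 'root' AND Host = 'localhost';
        FLUSH PRIVILEGES;

    After that, remove skip-grant-tables from my.ini, restart MySQL, and log in as root with the new password; root then has CREATE DATABASE and everything else back. (FLUSH PRIVILEGES reloads the grant tables, so the change takes effect without waiting for the restart.)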

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

  • How do I add two alternating UITableViews to an existing UINavigationController view?

    - by Ron Flax
    I'm new to iPhone development and I am writing an iPhone app that needs two different table views, which are selectable using a button bar or tab bar. These table views are both the same size, but only cover about two thirds of the screen from the bottom up. The top portion of the screen remains the same when either of these tables is displayed. I'd also like to animate (flip) these views when the user selects one or the other. The view that these two tables will be displayed on is the detail view of my app where the user has already selected an item from the primary screen's table. I'm using a UINavigationController to manage the primary and detail views and I have this working. I also have the first of these two detail tables working as part of my detail view, but I think it makes more sense to isolate the code for these two tables and not duplicate all of the code for the part of the detail view that doesn't change. I don't really care how these two table views are created (either in code or via IB). I've tried several things and I can't seem to figure it out. Any help or ideas (with sample code) would be greatly appreciated!
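
    For the flip itself, a sketch using the UIView animation API current at the time of the question (the view and property names are assumptions): keep a fixed container view for the bottom two-thirds of the screen and swap the two table views inside it:

        // Swap tableA out for tableB with a horizontal flip of the container.
        [UIView beginAnimations:@"flipTables" context:NULL];
        [UIView setAnimationDuration:0.5];
        [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight
                               forView:self.tableContainerView
                                 cache:YES];
        [self.tableA removeFromSuperview];
        [self.tableContainerView addSubview:self.tableB];
        [UIView commitAnimations];

    Each table can keep its own data-source object (or its own UITableViewController whose view is added as a subview), so the unchanged top third of the detail screen lives once in the parent controller and nothing is duplicated.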
