Search Results

Search found 27723 results on 1109 pages for 'sql puzzle'.


  • Loop Stored Procedure in VB.Net (win form)

    - by Mo
    Hello, I am trying to run a for loop for a backup system, and inside it I want to call a stored procedure on each iteration. Below is the code that does not work for me. Any ideas, please?

        Dim TotalTables As Integer
        Dim i As Integer
        TotalTables = 10
        For i = 1 To TotalTables
            objDL.BackupTables(220, i, 001) ' (This is a method from the DL and the 3 parameters are integers)
        Next

    I tried the SP directly and it works perfectly in SQL Server.
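
    If looping inside the database is an option, an alternative is a single wrapper procedure that calls the existing one once per table. A minimal T-SQL sketch, assuming a signature matching the call above (procedure and parameter names are hypothetical):

        CREATE PROCEDURE BackupAllTables
            @SiteId INT,        -- 220 in the example call
            @TableCount INT,    -- 10 in the example call
            @BatchId INT
        AS
        BEGIN
            DECLARE @i INT;
            SET @i = 1;
            WHILE @i <= @TableCount
            BEGIN
                EXEC BackupTables @SiteId, @i, @BatchId; -- the existing, working SP
                SET @i = @i + 1;
            END
        END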

    Read the article

  • Help! How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    OK, I have a table in my MSSQL database that stores comments. My desire is to be able to page through the records using [Back], [Next], page numbers, and [Last] buttons in my datalist. I figured the most efficient way was a stored procedure that only returns the rows within a particular range. Here is what I came up with:

        @PageIndex INT,
        @PageSize INT,
        @postid INT
        AS
        SET NOCOUNT ON
        BEGIN
            WITH tmp AS (
                SELECT comments.*,
                       ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
                FROM comments
                WHERE comments.postid = @postid)
            SELECT tmp.*
            FROM tmp
            WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize
        END
        RETURN

    Now everything works fine, and I have been able to implement the [Next] and [Back] buttons in my datalist pager. Next I need the total number of all comments (not just the current page) so that I can implement the page numbers and the [Last] button. In other words, I want to return the total number of rows matched by my first SELECT statement, i.e.

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row
            FROM comments
            WHERE comments.postid = @postid)
        SET @TotalRows = @@ROWCOUNT

    @@ROWCOUNT doesn't work here and raises an error, and I can't get COUNT(*) to work either. Is there another way to get the total number of rows, or is my approach doomed?
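
    One common fix (offered here as a suggestion, not part of the question) is a window aggregate inside the same CTE, so no second query is needed - COUNT(*) OVER () repeats the total match count on every returned row (SQL Server 2005+):

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*) OVER () AS TotalRows   -- total matching rows, same value on every row
            FROM comments
            WHERE comments.postid = @postid)
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;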

    Read the article

  • When calling CRUD check if "parent" exists with read or join?

    - by Trick
    None of my entities can be deleted - they are only deactivated, so they don't appear in any read methods (SELECT ... WHERE active=TRUE). Now I have some 1:M tables hanging off these entities on which all CRUD operations can be executed. Which is more efficient, or has better performance?

    My first solution: add a join to every CRUD operation:

        UPDATE ... JOIN entity e ... WHERE e.active=TRUE

    My second solution: before every CRUD operation, check whether the entity is active:

        if (getEntity(someId) != null) { // do some CRUD }

    where getEntity just runs SELECT * FROM entity WHERE id=? AND active=TRUE. Or is there any other solution or recommendation?
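
    For reference, the first approach spelled out in MySQL-style syntax might look like this sketch (table and column names are illustrative, not from the question):

        UPDATE child c
        JOIN entity e ON e.id = c.entity_id
        SET c.some_column = 'new value'
        WHERE c.id = 42
          AND e.active = TRUE;   -- silently becomes a no-op if the parent is inactive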

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM SAMPLE
        INNER JOIN TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables hold about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records, and the execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is the clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by an index), the first case becomes fast too. This points to disk I/O issues, but doesn't explain the change of plan. So, any ideas?
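
    One experiment worth trying (an assumption, not from the question): on SQL Server 2008 or later, a FORCESEEK hint pins the access path so the two cases can be compared directly. This sketch assumes SAMPLED_DATE lives on SAMPLE:

        SELECT *
        FROM SAMPLE s
        INNER JOIN TEST t ON s.SAMPLE_NUMBER = t.SAMPLE_NUMBER
        INNER JOIN RESULT r WITH (FORCESEEK)   -- overrides the optimizer's scan choice
                ON t.TEST_NUMBER = r.TEST_NUMBER
        WHERE s.SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00';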

    Read the article

  • Change column names of a cube action as they appear in Visual Studio

    - by hermann
    The title pretty much says it all. I have a cube with data in it and I have yet to find a way to change the column names: they appear in a very ugly form like [cubeName].[$dimension.columnName]. I have tried everything I know and everything I found on the web, but nothing seems to work. In most cases what I tried was to create an Action in the Actions tab and write some MDX in there - no results whatsoever, as if the action is never run. Does anyone know how to do this? I've spent about 3 days trying to figure it out. Thank you.

    Read the article

  • What are the rules governing how a bind variable can be used in Postgres and where is this defined?

    - by Craig Miles
    I can have a table and function defined as:

        CREATE TABLE mytable (mycol integer);
        INSERT INTO mytable VALUES (1);

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol = l_myvar;
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    In this case l_myvar acts as a bind variable for the value passed when I call:

        SELECT * FROM myfunction(1);

    and returns the row where mycol = 1. If I redefine the function as:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (l_myvar);
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    then SELECT * FROM myfunction(1); still returns the row where mycol = 1. However, if I now change the function to accept an integer array and try to use that array in the IN clause, I get an error:

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow FROM mytable WHERE mycol IN (array_to_string(l_myvar, ','));
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;

    Analysis reveals that although SELECT array_to_string(ARRAY[1, 2], ','); returns 1,2 as expected, SELECT * FROM myfunction(ARRAY[1, 2]); fails with "operator does not exist: integer = text" at the line WHERE mycol IN (array_to_string(l_myvar, ','));. If I execute SELECT * FROM mytable WHERE mycol IN (1,2); I get the expected result. Given that array_to_string(l_myvar, ',') evaluates to 1,2 as shown, why aren't these statements equivalent? From the error message it is something to do with datatypes, but doesn't the IN (variable) construct appear to behave differently from the = variable construct? What are the rules here? I know that I could build a statement to EXECUTE, treating everything as a string, to achieve what I want, so I am not looking for that as a solution. I do want to understand, though, what is going on in this example. Is there a modification to this approach that makes it work - specifically, passing in an array of values to build a dynamic IN clause without resorting to EXECUTE? Thanks in advance, Craig
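
    On the practical side, PostgreSQL has a direct operator for this case: = ANY(array) compares a value against the elements of an array, with no string conversion involved. A sketch of the function rewritten that way (same schema as above):

        CREATE OR REPLACE FUNCTION myfunction (l_myvar integer[]) RETURNS mytable AS $$
        DECLARE
            l_myrow mytable;
        BEGIN
            SELECT * INTO l_myrow
            FROM mytable
            WHERE mycol = ANY(l_myvar);  -- element-wise integer comparison, no casting
            RETURN l_myrow;
        END;
        $$ LANGUAGE plpgsql;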

    Read the article

  • introduce a join to this query, possible?

    - by Iain Urquhart
    I'm trying to introduce a join to this query:

        SELECT n.*,
               ROUND((n.rgt - n.lft - 1) / 2, 0) AS childs,
               COUNT(*) - 1 + (n.lft > 1) + 1 AS level,
               ((MIN(p.rgt) - n.rgt - (n.lft > 1)) / 2) > 0 AS lower,
               (n.lft - MAX(p.lft) > 1) AS upper
        FROM exp_node_tree_6 n, exp_node_tree_6 p, exp_node_tree_6
        WHERE n.lft BETWEEN p.lft AND p.rgt
          AND (p.node_id != n.node_id OR n.lft = 1)
        GROUP BY n.node_id
        ORDER BY n.lft

    by adding

        LEFT JOIN exp_channel_titles ON (n.entry_id = exp_channel_titles.entry_id)

    after the FROM clause. But when I introduce it, it fails with "Unknown column 'n.entry_id' in 'on clause'". Is it even possible to add a join to this query? Can anybody help? Thanks!
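
    The usual cause of this exact error (my inference, not stated in the question) is that MySQL 5.0+ gives explicit JOINs higher precedence than the comma operator, so the alias n from the comma list is not yet in scope when the LEFT JOIN's ON clause is parsed. Rewriting the comma joins as explicit joins puts n in scope; a sketch (the unaliased third copy of exp_node_tree_6 is omitted for brevity, and t.title is a hypothetical column):

        SELECT n.*, t.title,
               ROUND((n.rgt - n.lft - 1) / 2, 0) AS childs
        FROM exp_node_tree_6 AS n
        INNER JOIN exp_node_tree_6 AS p
                ON n.lft BETWEEN p.lft AND p.rgt
               AND (p.node_id != n.node_id OR n.lft = 1)
        LEFT JOIN exp_channel_titles AS t
                ON t.entry_id = n.entry_id   -- n is now visible in this ON clause
        GROUP BY n.node_id
        ORDER BY n.lft;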

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to mysql (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how? I know that via "explain $whatever" one can figure out what tables / columns exist (you have to guess names, though) and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a mysql expert and there are surely features of mysql that I'm not even aware of. In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against?

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might 'explain' evaluate somefunction(...)? And could somefunction(...) then be written such that it modifies data?
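
    One concrete way to test that last concern is a function with a visible side effect (everything below is hypothetical, and the observed behavior varies by MySQL version - which is exactly the question):

        -- In the mysql client, wrap the CREATE FUNCTION in DELIMITER // ... //;
        -- binary logging may also require log_bin_trust_function_creators=1.
        CREATE TABLE probe_log (hit TIMESTAMP);
        CREATE FUNCTION probe() RETURNS INT
        BEGIN
            INSERT INTO probe_log VALUES (NOW());   -- the visible side effect
            RETURN 0;
        END;

        EXPLAIN SELECT * FROM atable WHERE class = probe();
        SELECT COUNT(*) FROM probe_log;             -- nonzero means EXPLAIN evaluated it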

    Read the article

  • Why am I unable to create a trigger using my SqlCommand?

    - by acidzombie24
    The failing line is cmd.ExecuteNonQuery(), with cmd.CommandText set to:

        CREATE TRIGGER subscription_trig_0 ON subscription
        AFTER INSERT AS
        UPDATE user_data
        SET msg_count = msg_count + 1
        FROM user_data
        JOIN INSERTED ON user_data.id = INSERTED.recipient;

    The exception: Incorrect syntax near the keyword 'TRIGGER'. Then, using VS 2010 connected to the very same file (an mdf file), I run the query above and get a success message. WTF!
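
    A sanity check worth running through the same SqlCommand first (a guess, since the root cause isn't shown: ADO.NET may be attaching the mdf to a different engine or user instance than the one VS 2010 is querying):

        SELECT @@VERSION;               -- confirms which engine and instance answered
        SELECT name FROM sys.triggers;  -- lists the triggers visible on this connection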

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling, and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist, without using a pair of queries (i.e., one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
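
    With a unique constraint in place, MySQL has two standard single-query forms that avoid both the check-then-insert pair and the failed-insert error (table and column names here are hypothetical):

        -- Requires a UNIQUE index on the column(s) that define "already exists".
        INSERT IGNORE INTO records (email, name)
        VALUES ('a@example.com', 'Alice');            -- silently skipped if the key exists

        INSERT INTO records (email, name)
        VALUES ('a@example.com', 'Alice')
        ON DUPLICATE KEY UPDATE name = VALUES(name);  -- update instead of failing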

    Read the article

  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
    I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow. I'm wondering if there's any way to speed this up. First of all, here is my schema:

        CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

        CREATE TABLE contacts (
            id INTEGER PRIMARY KEY,
            person1_id INTEGER NOT NULL,
            person2_id INTEGER NOT NULL,
            duration REAL NOT NULL,    -- hours
            activity_id INTEGER NOT NULL
            -- FOREIGN_KEY(person1_id) REFERENCES people(id),
            -- FOREIGN_KEY(person2_id) REFERENCES people(id)
        );

        CREATE TABLE people (
            id INTEGER PRIMARY KEY,
            state_id INTEGER NOT NULL,
            county_id INTEGER NOT NULL,
            age INTEGER NOT NULL,
            gender TEXT NOT NULL,      -- M or F
            income INTEGER NOT NULL
            -- FOREIGN_KEY(state_id) REFERENCES states(id)
        );

        CREATE TABLE states (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            abbreviation TEXT NOT NULL
        );

        CREATE INDEX activities_name_index on activities(name);
        CREATE INDEX contacts_activity_id_index on contacts(activity_id);
        CREATE INDEX contacts_duration_index on contacts(duration);
        CREATE INDEX contacts_person1_id_index on contacts(person1_id);
        CREATE INDEX contacts_person2_id_index on contacts(person2_id);
        CREATE INDEX people_age_index on people(age);
        CREATE INDEX people_county_id_index on people(county_id);
        CREATE INDEX people_gender_index on people(gender);
        CREATE INDEX people_income_index on people(income);
        CREATE INDEX people_state_id_index on people(state_id);
        CREATE INDEX states_abbreviation_index on states(abbreviation);
        CREATE INDEX states_name_index on states(name);

    Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about. Here's an example of a query that, as expected, runs almost instantly:

        SELECT count(*) FROM people, states
        WHERE people.state_id=states.id and states.abbreviation='IA';

    Here's the troublesome query:

        SELECT * FROM contacts
        WHERE rowid IN (
            SELECT contacts.rowid FROM contacts, people, states
            WHERE contacts.person1_id=people.id
              AND people.state_id=states.id
              AND states.name='Kansas'
            INTERSECT
            SELECT contacts.rowid FROM contacts, people, states
            WHERE contacts.person2_id=people.id
              AND people.state_id=states.id
              AND states.name='Missouri');

    Now, what I think would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan, I see this:

        sqlite> EXPLAIN QUERY PLAN ...  -- (the troublesome query above)
        0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
        0|0|0|EXECUTE LIST SUBQUERY 1
        2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
        3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
        1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)

    In fact, if I show the query plan for the first query I posted, I get this:

        sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states
                WHERE people.state_id=states.id and states.abbreviation='IA';
        0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
        0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)

    Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time?
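
    One terminology note that may resolve part of the puzzle (general SQLite behavior, stated here as background): "COVERING INDEX" in this output does refer to the indices created above - states_name_index and people_state_id_index are the user-defined ones. The COVERING qualifier only means the subquery's columns were satisfied from the index alone, with no extra table lookup, which is faster than a plain index search, not slower. A minimal illustration on a hypothetical table:

        CREATE TABLE t (a INTEGER, b INTEGER);
        CREATE INDEX t_a ON t(a);
        EXPLAIN QUERY PLAN SELECT b FROM t WHERE a = 1;
        -- SEARCH TABLE t USING INDEX t_a (a=?)           : index seek plus table lookup
        EXPLAIN QUERY PLAN SELECT a FROM t WHERE a = 1;
        -- SEARCH TABLE t USING COVERING INDEX t_a (a=?)  : the index alone suffices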

    Read the article

  • Indexes and multi column primary keys

    - by David Jenings
    I went searching and didn't find the answer to this specific noob question; my apologies if I missed it. In a MySQL database I have a table with the following primary key:

        PRIMARY KEY id (invoice, item)

    In my application I will also frequently select on item by itself and, less frequently, on only invoice. I'm assuming I would benefit from indexes on these columns. MySQL does not complain when I define the following:

        INDEX (invoice),
        INDEX (item),
        PRIMARY KEY id (invoice, item)

    But I don't see any evidence (using DESCRIBE - the only way I know how to look) that separate indexes have been established for these two columns. So the question is: are the columns that make up a primary key automatically indexed individually? Also, is there a better way than DESCRIBE to explore the structure of my table?
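
    Two notes offered as background: a composite key on (invoice, item) already serves queries filtering on invoice alone (the leftmost prefix), so INDEX (item) is the one that earns its keep; and MySQL's SHOW INDEX is a more direct view than DESCRIBE:

        SHOW INDEX FROM mytable;        -- one row per column of every index,
                                        -- including each part of the composite PK
        SHOW CREATE TABLE mytable;      -- the full DDL, indexes included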

    Read the article

  • How to apply GROUP_CONCAT in mysql Query

    - by Query Master
    How do I apply GROUP_CONCAT in this query? If you have any idea or an alternate solution, please share it - help is definitely appreciated (see the query and required result below). Query:

        SELECT WEEK(cpd.added_date) AS week_no,
               COUNT(cpd.result) AS death_count
        FROM cron_players_data cpd
        WHERE cpd.player_id = 81 AND cpd.result = 2 AND cpd.status = 1
        GROUP BY WEEK(cpd.added_date);

    [screenshot of the query output omitted]

    Result required: 23,24,25 AS week_no and 2,3,1 AS death_count.
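
    A sketch of the aggregated form, based on my reading of the required result (wrapping the existing query as a derived table):

        SELECT GROUP_CONCAT(week_no ORDER BY week_no)     AS week_no,      -- '23,24,25'
               GROUP_CONCAT(death_count ORDER BY week_no) AS death_count   -- '2,3,1'
        FROM (
            SELECT WEEK(cpd.added_date) AS week_no,
                   COUNT(cpd.result)    AS death_count
            FROM cron_players_data cpd
            WHERE cpd.player_id = 81 AND cpd.result = 2 AND cpd.status = 1
            GROUP BY WEEK(cpd.added_date)
        ) AS weekly;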

    Read the article

  • How to retrieve identity column value after insert using LINQ

    - by Hobey
    Could any of you please show me how to complete the following task?

        // Prepare object to be saved.
        // Note that MasterTable has MasterTableId as a primary key, and it is an identity column.
        MasterTable masterTable = new MasterTable();
        masterTable.Column1 = "Column 1 Value";
        masterTable.Column2 = 111;

        // Instantiate the DataContext.
        DataContext myDataContext = new DataContext("<<ConnectionString>>");

        // Save the record.
        myDataContext.MasterTables.InsertOnSubmit(masterTable);
        myDataContext.SubmitChanges();

        // ?QUESTION?
        // Now I need to retrieve the value of MasterTableId for the record just inserted above.

    Kind regards
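
    For what it's worth, LINQ to SQL normally writes the generated identity back into the entity during SubmitChanges, so masterTable.MasterTableId should already hold the new key after the call above. Under the hood the generated batch resembles this sketch (illustrative, not the literal emitted SQL):

        INSERT INTO MasterTable (Column1, Column2) VALUES (@p0, @p1);
        SELECT CONVERT(INT, SCOPE_IDENTITY());  -- value read back into masterTable.MasterTableId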

    Read the article

  • DateTime in SqlServer VisualStudio C#

    - by menacheb
    Hi, I have a database in my project with a table named 'ProcessData' and columns named 'Start_At' (type: DateTime) and 'End_At' (type: DateTime). When I enter a new record into this table, the data lands in the format 'YYYY/MM/DD HH:mm', when I actually want it in the format 'YYYY/MM/DD HH:mm:ss' (the seconds don't appear). Does anyone know why, and what I should do to fix this? Here is the code I am using:

        con = new SqlConnection("....");
        String startAt = "20100413 11:05:28";
        String endAt = "20100414 11:05:28";
        ...
        con.Open(); // open the connection, in order to get access to the database
        SqlCommand command = new SqlCommand(
            "insert into ProcessData (Start_At, End_At) values('" + startAt + "','" + endAt + "')", con);
        command.ExecuteNonQuery(); // execute the 'insert' query
        con.Close();

    Many thanks
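
    A guess worth checking, since the table definition isn't shown: SQL Server's datetime type always stores seconds, while smalldatetime rounds to the minute - so if the seconds vanish, either the columns are smalldatetime or the client is merely formatting them away. A server-side check:

        EXEC sp_help 'ProcessData';   -- shows the actual column types
        SELECT CONVERT(VARCHAR(19), Start_At, 120) AS Start_At_Formatted  -- 'YYYY-MM-DD HH:mm:ss'
        FROM ProcessData;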

    Read the article

  • Parse filename, insert to SQL

    - by jakesankey
    Thanks to Code Poet, I am now working off of this code to parse all .txt files in a directory and store them in a database. I need a bit more help though... The file names are R303717COMP_148A2075_20100520.txt (the middle section is unique per file). I would like to add something to the code so that it can parse out the R303717COMP and put that in the left column of the database, such as (this is not the only R number we have):

        R303717COMP data data data
        R303717COMP data data data
        R303717COMP data data data
        etc

    Lastly, I would like it to store each full file name into another table that gets checked, so that no file is processed twice. Any help is appreciated.

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;

        namespace CSVImport
        {
            internal class Program
            {
                private static void Main(string[] args)
                {
                    using (SQLiteConnection con = new SQLiteConnection("data source=data.db3"))
                    {
                        // Create the Import table on first run.
                        if (!File.Exists("data.db3"))
                        {
                            con.Open();
                            using (SQLiteCommand cmd = con.CreateCommand())
                            {
                                cmd.CommandText = @"
                                    CREATE TABLE [Import] (
                                        [RowId] integer PRIMARY KEY AUTOINCREMENT NOT NULL,
                                        [FeatType] varchar,
                                        [FeatName] varchar,
                                        [Value] varchar,
                                        [Actual] decimal,
                                        [Nominal] decimal,
                                        [Dev] decimal,
                                        [TolMin] decimal,
                                        [TolPlus] decimal,
                                        [OutOfTol] decimal,
                                        [Comment] nvarchar);";
                                cmd.ExecuteNonQuery();
                            }
                            con.Close();
                        }
                        con.Open();
                        using (SQLiteCommand insertCommand = con.CreateCommand())
                        {
                            insertCommand.CommandText = @"
                                INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal,
                                                    Dev, TolMin, TolPlus, OutOfTol, Comment)
                                VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal,
                                        @Dev, @TolMin, @TolPlus, @OutOfTol, @Comment);";
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal));
                            insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String));
                            string[] files = Directory.GetFiles(Environment.CurrentDirectory, "TextFile*.*");
                            foreach (string file in files)
                            {
                                string[] lines = File.ReadAllLines(file);
                                bool parse = false;
                                foreach (string tmpLine in lines)
                                {
                                    string line = tmpLine.Trim();
                                    // Data rows start after the "Feat. Type," header line.
                                    if (!parse && line.StartsWith("Feat. Type,"))
                                    {
                                        parse = true;
                                        continue;
                                    }
                                    if (!parse || string.IsNullOrEmpty(line))
                                    {
                                        continue;
                                    }
                                    foreach (SQLiteParameter parameter in insertCommand.Parameters)
                                    {
                                        parameter.Value = null;
                                    }
                                    string[] values = line.Split(new[] { ',' });
                                    for (int i = 0; i < values.Length - 1; i++)
                                    {
                                        SQLiteParameter param = insertCommand.Parameters[i];
                                        if (param.DbType == DbType.Decimal)
                                        {
                                            decimal value;
                                            param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                        }
                                        else
                                        {
                                            param.Value = values[i];
                                        }
                                    }
                                    insertCommand.ExecuteNonQuery();
                                }
                            }
                        }
                        con.Close();
                    }
                }
            }
        }
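
    For the don't-process-twice request, a processed-files table plus an existence check could look like this SQLite sketch (names are hypothetical), alongside a column for the leading R-number parsed from the file name:

        ALTER TABLE Import ADD COLUMN RNumber TEXT;        -- leading token, e.g. 'R303717COMP'

        CREATE TABLE IF NOT EXISTS ImportedFiles (
            FileName TEXT PRIMARY KEY                      -- full file name, enforced unique
        );
        -- Before parsing a file:
        SELECT COUNT(*) FROM ImportedFiles WHERE FileName = @FileName;
        -- After a successful import:
        INSERT INTO ImportedFiles (FileName) VALUES (@FileName);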

    Read the article

  • Storing datetime in database?

    - by Curtis White
    I'm working on a blog and want to show my posts in the Eastern time zone. I figured that storing everything in UTC would be the proper way. This creates a few challenges, though: I have to convert all times from UTC to Eastern - not a biggie, but it adds a lot of code. And the big one is that I reference posts by a short date passed in a query string, à la Blogger. The problem is that there is no way to convert that short date to the proper UTC date, because I'm lacking the posted-time info. Hmm, is there any problem with just storing all dates in Eastern time? That would certainly make the rest of the application easier, but if I ever needed to change time zones, everything would be stored wrong.
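
    If the posts stay in UTC, the short-date lookup can also be solved in the query itself by converting at read time; a sketch assuming MySQL with its time zone tables loaded (the question doesn't name the database, and the names here are hypothetical):

        SELECT CONVERT_TZ(posted_utc, 'UTC', 'America/New_York') AS posted_eastern
        FROM posts
        WHERE DATE(CONVERT_TZ(posted_utc, 'UTC', 'America/New_York')) = '2010-05-20';
        -- The short date from the query string is compared against the *converted*
        -- date, so no posted-time info is needed on the application side.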

    Read the article

  • Disable animations when ending search in iPhone

    - by camilo
    Hi. A quicky: is there a way to dismiss the keyboard and the searchDisplayController without animation? I was able to do it when the user presses "Cancel", but when the user taps the black "window thingy" (the dimming overlay) above the search field (only visible while the user hasn't entered any text), the animation always occurs, even when I change the delegate methods. Is there a way to control this, or, as an alternative, to prevent the user from ending the search by tapping the black overlay? Thanks in advance.

    Read the article

  • MSSQL: Primary Key Schema Largely Guid but Sometimes Integer Types...

    - by Code Sherpa
    OK, this may be a silly question but... I have inherited a project and am tasked with going over the primary key relationships. The project largely uses Guids. I say "largely" because there are cases where tables use integral types to reflect enumerations. For example, dbo.MessageFolder has MessageFolderId of type int to reflect:

        public enum MessageFolderTypes
        {
            inbox = 1,
            sent = 2,
            trash = 3,
            etc...
        }

    This happens a lot. There are tables with primary keys of type int, which is unavoidable because of their reliance on enumerations, and tables with primary keys of type Guid, which reflect the primary key choice of the previous programmer. Should I care that the PK schema is spotty like this? It doesn't feel right, but does it really matter? If this could create a problem, how do I get around it? (I really can't move all PKs to type int without serious legwork, and I have never heard of enumerations with Guid values.) Thanks.
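
    One conventional middle ground (a suggestion, not something from the question) is to accept the split: enum-backed lookup tables keep int keys, seeded explicitly so the values can never drift from the enum, while transactional tables keep their Guids. A sketch in SQL Server 2008+ syntax:

        CREATE TABLE dbo.MessageFolder (
            MessageFolderId INT NOT NULL PRIMARY KEY,  -- must match MessageFolderTypes
            Name NVARCHAR(50) NOT NULL
        );
        INSERT INTO dbo.MessageFolder (MessageFolderId, Name)
        VALUES (1, 'inbox'), (2, 'sent'), (3, 'trash');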

    Read the article

  • is there a better way to write this frankenstein LINQ query that searches for values in a child table?

    - by MRV
    I have a table of Users and a one-to-many UserSkills table. I need to be able to search for users based on skills. This query takes a list of desired skills and searches for users who have those skills. I want to sort the users based on the number of desired skills they possess, so a user with only 1 of 3 desired skills will be further down the list than the user who has 3 of 3. I start with my comma-separated list of skill IDs that are being searched for:

        List<short> searchedSkillsRaw = skills.Value.Split(',').Select(i => short.Parse(i)).ToList();

    I then filter out only the types of users that are searchable:

        List<User> users = (from u in db.Users
                            where u.Verified == true && u.Level > 0 && u.Type == 1
                                  && (u.UserDetail.City == city.SelectedValue || u.UserDetail.City == null)
                            select u).ToList();

    and then comes the crazy part:

        var fUsers = from u in users
                     select new
                     {
                         u.Id,
                         u.FirstName,
                         u.LastName,
                         u.UserName,
                         UserPhone = u.UserDetail.Phone,
                         UserSkills = (from uskills in u.UserSkills
                                       join skillsJoin in configSkills
                                           on uskills.SkillId equals skillsJoin.ValueIdInt into tempSkills
                                       from skillsJoin in tempSkills.DefaultIfEmpty()
                                       where uskills.UserId == u.Id
                                       select new
                                       {
                                           SkillId = uskills.SkillId,
                                           SkillName = skillsJoin.Name,
                                           SkillNameFound = searchedSkillsRaw.Contains(uskills.SkillId)
                                       }),
                         UserSkillsFound = (from uskills in u.UserSkills
                                            where uskills.UserId == u.Id
                                                  && searchedSkillsRaw.Contains(uskills.SkillId)
                                            select uskills.UserId).Count()
                     } into userResults
                     where userResults.UserSkillsFound > 0
                     orderby userResults.UserSkillsFound descending
                     select userResults;

    and this works! But it seems super bloated and inefficient to me, especially the secondary part that counts the number of skills found. Thanks for any advice you can give. --r
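
    If the count-and-rank is the bloated part, that piece can be pushed entirely into SQL; a sketch of the set-based equivalent (table and column names follow the question, the skill ids are examples):

        SELECT u.Id, u.FirstName, u.LastName,
               COUNT(us.SkillId) AS SkillsFound      -- desired skills this user has
        FROM Users u
        JOIN UserSkills us ON us.UserId = u.Id
        WHERE us.SkillId IN (1, 5, 9)                -- the searched skill ids
          AND u.Verified = 1 AND u.Level > 0 AND u.Type = 1
        GROUP BY u.Id, u.FirstName, u.LastName
        ORDER BY SkillsFound DESC;                   -- best matches first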

    Read the article

  • alter mysqldump file before import

    - by julio
    Hi -- I have a mysqldump file created from an earlier version of a product that can't be imported into a new version of the product, since the db structure has changed slightly (mainly a column that was NOT NULL DEFAULT 0 becoming UNIQUE KEY DEFAULT NULL). If I just import the old dump file, it errors out, since the column full of default 0 values now breaks the UNIQUE constraint. It would be easy enough to either manually alter the mysqldump file, or import into a temp table, change it there, and copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of the product.) I'm thinking something like disabling key constraints for the import, then setting all values that = 0 to NULL, then re-enabling the key constraints. Is this possible? Any help appreciated.
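
    A repeatable pure-SQL version of the temp-table idea might look like this sketch for the mysql client (table and column names are placeholders; multiple NULLs do not collide in a MySQL UNIQUE index):

        SET FOREIGN_KEY_CHECKS = 0;
        SOURCE old_dump.sql;                               -- restores the old-shaped table
        ALTER TABLE mytable MODIFY mycol INT NULL;         -- allow NULL before cleanup
        UPDATE mytable SET mycol = NULL WHERE mycol = 0;   -- 0 was the old default
        ALTER TABLE mytable ADD UNIQUE KEY (mycol);        -- now safe to enforce
        SET FOREIGN_KEY_CHECKS = 1;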

    Read the article
