Search Results

Search found 4461 results on 179 pages for 'duplicate removal'.

  • Getting duplicate count when executing INSERT IGNORE via JDBC

    - by Nickolay Komar
    Is it possible to get the duplicate count when executing a MySQL "INSERT IGNORE" statement via JDBC? For example, when I execute an INSERT IGNORE statement on the mysql command line and there are duplicates, I get something like: Query OK, 0 rows affected (0.02 sec) Records: 1 Duplicates: 1 Warnings: 0 Note where it says "Duplicates: 1", indicating that there were duplicates that were ignored. Is it possible to get the same information when executing the query via JDBC? Thanks.
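
    The Records/Duplicates/Warnings line is MySQL's info string and is not part of the standard JDBC API, but the same number can be recovered in at least two ways: compare the number of rows you attempted with the update count (INSERT IGNORE counts only the rows it actually inserted), or issue SHOW WARNINGS on the same connection afterwards, since ignored duplicates are reported as warnings. A minimal sketch of the first approach, using a hypothetical subscribers(email) table:

      import java.sql.*;
      import java.util.List;

      public class InsertIgnoreCount {
          /** Inserts the given emails with INSERT IGNORE and returns how many were duplicates. */
          static int countDuplicates(Connection conn, List<String> emails) throws SQLException {
              int inserted = 0;
              try (PreparedStatement ps = conn.prepareStatement(
                      "INSERT IGNORE INTO subscribers (email) VALUES (?)")) {
                  for (String email : emails) {
                      ps.setString(1, email);
                      inserted += ps.executeUpdate();   // 0 when the row was ignored as a duplicate
                  }
              }
              return emails.size() - inserted;          // attempted minus actually inserted
          }
      }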

    Read the article

  • remove duplicate from string in PHP

    - by Adnan
    Hello, I am looking for the fastest way to remove duplicate values in a comma-separated string. My string looks like this: $str = 'one,two,one,five,seven,bag,tea'; I can do it by exploding the string into values and then comparing them, but I think it will be slow. What about preg_replace() - will it be faster? Has anyone done this with that function?
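
    For reference, PHP's explode(), array_unique(), implode() combination is the usual route here and is unlikely to be slow for strings of this size; a single preg_replace() with backreferences is possible but harder to get right. The split/dedupe/join idea itself, sketched in Java purely for illustration (it maps one-to-one onto those PHP functions):

      import java.util.LinkedHashSet;
      import java.util.Set;

      public class DedupeCsv {
          /** Removes duplicate comma-separated values, keeping the first occurrence of each. */
          static String dedupe(String csv) {
              Set<String> seen = new LinkedHashSet<>();   // preserves insertion order
              for (String value : csv.split(",")) {
                  seen.add(value.trim());                 // duplicates are silently dropped
              }
              return String.join(",", seen);
          }

          public static void main(String[] args) {
              System.out.println(dedupe("one,two,one,five,seven,bag,tea"));
              // prints: one,two,five,seven,bag,tea
          }
      }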

    Read the article

  • Remove duplicate characters and keep the unique ones

    - by manu
    How do I remove duplicate characters and keep only one copy of each? For example, my input is EFUAHUU UUUEUUUUH UJUJHHACDEFUCU and the expected output is EFUAH UEH UJHACDEF I came across perl -pe 's/$1//g while /(.).*\1/' which is wonderful, but it removes even the single occurrence of the character from the output. Can anyone help? Thanks in advance, Manjeet
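
    The one-liner fails because s/$1//g deletes every occurrence of the repeated character, including its first appearance. The rule actually wanted is "keep the first occurrence of each character in a line"; a sketch of that rule in Java (the question itself is shell/Perl, so treat this purely as illustration):

      import java.util.LinkedHashSet;
      import java.util.Set;

      public class UniqueChars {
          /** Keeps only the first occurrence of each character in the line. */
          static String firstOccurrences(String line) {
              Set<Character> seen = new LinkedHashSet<>();
              StringBuilder out = new StringBuilder();
              for (char c : line.toCharArray()) {
                  if (seen.add(c)) {          // add() returns false for characters already seen
                      out.append(c);
                  }
              }
              return out.toString();
          }

          public static void main(String[] args) {
              System.out.println(firstOccurrences("EFUAHUU"));        // EFUAH
              System.out.println(firstOccurrences("UUUEUUUUH"));      // UEH
              System.out.println(firstOccurrences("UJUJHHACDEFUCU")); // UJHACDEF
          }
      }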

    Read the article

  • changing existing duplicate entries in mysql

    - by Mladen
    Sorry for the (probably) noob question, but I'm new at this stuff. I have a table with a 'position' column, and when inserting a new row I want to shift the positions of the existing rows accordingly. For example, if I add a row with position 4 and there is already a row at 4, that row should become 5, 5 should shift to 6, and so on... Also, is there a way to get the highest value for a column besides testing it in every row via PHP?
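
    Both parts map to plain SQL: an UPDATE that shifts everything at or above the new position before the insert, and MAX() for the highest value. A sketch using hypothetical items(name, position) names, shown through JDBC for concreteness; the same two statements work unchanged from PHP:

      import java.sql.*;

      public class PositionShift {
          /** Makes room at 'position' and inserts the new row there, in one transaction. */
          static void insertAt(Connection conn, int position, String name) throws SQLException {
              conn.setAutoCommit(false);
              try (PreparedStatement shift = conn.prepareStatement(
                       "UPDATE items SET position = position + 1 WHERE position >= ?");
                   PreparedStatement insert = conn.prepareStatement(
                       "INSERT INTO items (name, position) VALUES (?, ?)")) {
                  shift.setInt(1, position);
                  shift.executeUpdate();        // 4 becomes 5, 5 becomes 6, and so on
                  insert.setString(1, name);
                  insert.setInt(2, position);
                  insert.executeUpdate();
                  conn.commit();
              } catch (SQLException e) {
                  conn.rollback();
                  throw e;
              } finally {
                  conn.setAutoCommit(true);
              }
          }

          /** Highest position without scanning every row in application code. */
          static int maxPosition(Connection conn) throws SQLException {
              try (Statement st = conn.createStatement();
                   ResultSet rs = st.executeQuery("SELECT MAX(position) FROM items")) {
                  rs.next();
                  return rs.getInt(1);
              }
          }
      }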

    Read the article

  • Why Duplicate the “Release” configuration to “Distribution”?

    - by Horace Ho
    In the Apple guide, there is a step before building the App Store version: Open the Xcode project and Duplicate the “Release” configuration in the Configurations pane of the project's Info panel. Rename this new configuration “Distribution”. Why is this step needed? Can I skip it and use the “Release” configuration to build the final version for the App Store?

    Read the article

  • Java multithreading - Avoid duplicate request processing

    - by seawaves
    I have the following multi-threaded scenario: requests arrive at a method, and I want to avoid duplicate processing of concurrent requests, as multiple similar requests might be waiting to be processed in a blocked state. I used a Hashtable to keep track of processed requests, but it will create a memory leak. How should I keep track of processed requests and avoid reprocessing the same requests, which may be blocked?
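
    One common way to do this without a map that grows forever is to track only the keys that are currently in flight and release each key when its processing finishes, successfully or not. A minimal sketch, assuming a hypothetical String request key:

      import java.util.Set;
      import java.util.concurrent.ConcurrentHashMap;

      public class RequestDeduplicator {
          // Keys currently being processed; entries are removed when processing ends,
          // so the set does not grow without bound.
          private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

          public void handle(String requestKey, Runnable work) {
              // add() is atomic: only the first caller for a given key gets true.
              if (!inFlight.add(requestKey)) {
                  return;   // a duplicate of this request is already being processed
              }
              try {
                  work.run();
              } finally {
                  inFlight.remove(requestKey);   // always release, even on failure
              }
          }
      }

    If already-completed requests also need to be remembered for a while, a cache with time-based expiry is the usual replacement for a plain Hashtable, since it bounds memory by design.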

    Read the article

  • Removing "duplicate objects"

    - by keruilin
    Let's say I have an array of objects of the same class, with two attributes of concern here: name and created_at. How do I find objects with the same name (considered dups) in the array, and then delete the duplicate records in the database? The object with the most recent created_at date is the one that must be deleted.
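
    This reads like Ruby/ActiveRecord, where the deletes themselves would go through the model, but the selection rule (group by name, keep the oldest, delete the rest) is language-agnostic. A sketch of that rule in Java, using a hypothetical Item record:

      import java.time.Instant;
      import java.util.Comparator;
      import java.util.List;
      import java.util.stream.Collectors;

      record Item(long id, String name, Instant createdAt) {}

      class DuplicateFinder {
          /** Returns the items to delete: for each name, everything except the oldest one. */
          static List<Item> duplicatesToDelete(List<Item> items) {
              return items.stream()
                      .collect(Collectors.groupingBy(Item::name))             // group by name
                      .values().stream()
                      .filter(group -> group.size() > 1)                      // only names with duplicates
                      .flatMap(group -> group.stream()
                              .sorted(Comparator.comparing(Item::createdAt))  // oldest first
                              .skip(1))                                       // keep the oldest, delete the rest
                      .collect(Collectors.toList());
          }
      }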

    Read the article

  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do I get a error message like this: installArchives() failed: Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 189751 files and directories currently installed.) Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... 
Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ... Errors were encountered while processing: /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb /var/cache/apt/archives/oneconf_0.2.8.1_all.deb /var/cache/apt/archives/software-center_5.2.2.2_all.deb Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ... dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place This has been going on for two weeks now and I cannot get any updates. Any help would be great.

    Read the article

  • Maven 3 - duplicate declaration of version

    - by Taylor Leese
    I just upgraded to m2eclipse version 0.10 which includes an embedded Maven 3 and I'm now getting the following error when trying to run a build. I've already set Maven-Installations to my Maven 2 installation, but it had no effect. How do I resolve this? [INFO] Scanning for projects... [ERROR] The build could not read 1 project -> [Help 1] [ERROR] The project com.stuff:sutff-web:0.0.1-SNAPSHOT (C:\development\taylor\stuff\pom.xml) has 1 error [ERROR] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.appengine:appengine-api-labs:jar -> duplicate declaration of version ${gae.version} [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException

    Read the article

  • Xcode duplicate symbol _main

    - by Prithvi Raj
    I'm getting the following error in Xcode 3.2.1 on Snow Leopard 10.6.2 whenever I try to compile any iPhone application generated by Appcelerator's Titanium. However, the build error only appears when I select iPhone Simulator in the architecture menu; if I select iPhone Device, I am able to run the app on my device. Further, the iPhone Simulator launches successfully and executes the program directly from the Titanium environment, which uses Xcode to build. Why is this happening? ld: duplicate symbol _main in Resources/libTitanium.a(main.o) and /Users/prithviraj/Documents/project/Final/build/iphone/build/Final.build/Debug-iphonesimulator/Final.build/Objects-normal/i386/main.o collect2: ld returned 1 exit status Command /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.2 failed with exit code 1

    Read the article

  • mySQL auto increment problem: Duplicate entry '4294967295' for key 1

    - by Josh
    I have a table of emails. The last record in there for the auto-increment id is 3780, which is a legit record. Any new record I insert now goes in right there. However, in my logs I have the occasional: Query FAIL: INSERT INTO mail.messages (timestamp_queue) VALUES (:time); Array ( [0] => 23000 [1] => 1062 [2] => Duplicate entry '4294967295' for key 1 ) Somehow, the auto-increment jumped up to the unsigned INT max of 4294967295. Why on god's green earth would it jump up so high? I have no inserts with an id field. SHOW TABLE STATUS for that table now reads Auto_increment: 4294967296. How could something like this occur? I realize the id field should perhaps be a BIGINT, but my worry is that this thing somehow jumps back up. Josh

    Read the article

  • use of EntityManagerFactory causing duplicate primary key exceptions

    - by bradd
    Hey guys, my goal is to create an EntityManager using properties that depend on which database is in use. I've seen something like this done in all my Google searches (I made the code more basic for the purpose of this question): @PersistenceUnit private EntityManagerFactory emf; private EntityManager em; private Properties props; @PostConstruct public void createEntityManager(){ //if oracle set oracle properties else set postgres properties emf = Persistence.createEntityManagerFactory("app-x"); em = emf.createEntityManager(props); } This works: I can load Oracle or Postgres properties successfully and I can SELECT from either database. HOWEVER, I am running into issues when doing INSERT statements. Whenever an INSERT is done I get a duplicate primary key exception... every time! Can anyone shed some light on why this may be happening? Thanks -Brad
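
    It is hard to say from the snippet alone why the duplicates appear, but one thing worth checking is how the factory and the managers are scoped: the conventional JPA pattern is a single application-wide EntityManagerFactory and a short-lived EntityManager per unit of work, closed when that work is done. A minimal sketch of that pattern (resource-local transactions assumed), using the app-x persistence unit from the question and database-specific properties passed to the factory:

      import java.util.Properties;
      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      public final class PersistenceHelper {
          // One factory for the whole application; it is expensive to create and safe to share.
          private static final EntityManagerFactory EMF =
                  Persistence.createEntityManagerFactory("app-x", loadProperties());

          private static Properties loadProperties() {
              Properties props = new Properties();
              // Hypothetical: choose Oracle or Postgres settings here, as in the question.
              return props;
          }

          /** Each unit of work gets its own EntityManager and closes it afterwards. */
          public static void inTransaction(java.util.function.Consumer<EntityManager> work) {
              EntityManager em = EMF.createEntityManager();
              try {
                  em.getTransaction().begin();
                  work.accept(em);
                  em.getTransaction().commit();
              } catch (RuntimeException e) {
                  if (em.getTransaction().isActive()) {
                      em.getTransaction().rollback();
                  }
                  throw e;
              } finally {
                  em.close();
              }
          }
      }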

    Read the article

  • False Duplicate Key Error in Entity Framework Application

    - by ProfK
    I have an ASP.NET application using an entity framework model. In an import routine, with the code below, I get a "Cannot insert duplicate key" exception for AccountNum on the SaveChanges call, but when execution stops for the exception, I can query the database for the apparently duplicated field, and no prior record exists. using (var ents = new PvmmsEntities()) { foreach (DataRow row in importedResources.Rows) { var empCode = row["EmployeeCode"].ToString(); try { var resource = ents.ActivationResources.FirstOrDefault(rs => rs.EmployeeCode == empCode); if (resource == null) { resource = new ActivationResources(); resource.EmployeeCode = empCode; ents.AddToActivationResources(resource); } resource.AccountNum = row["AccountNum"].ToString(); ents.SaveChanges(true); } catch(Exception ex) { } } }

    Read the article

  • SQL Server 2000 incorrect message: duplicate key with unique index

    - by iar
    Database server: SQL Server 2000 - 8.00.760 - SP3 - Standard Edition I have a table with 5 columns, into which I want to insert a large number (300,000) of records. There is a unique index on the combination of two columns. If I add the records with this unique index in place, SQL Server gives this error message: Msg 2601, Level 14, State 3, Line 1 Cannot insert duplicate key row in object 'TESTTABLE' with unique index 'test'. The statement has been terminated. (0 row(s) affected) However, if I delete the unique index and then add the records, I do not get any error when I create this unique index afterwards.

    Read the article

  • Finding duplicate values in a SQL table

    - by Alex
    It's easy to find duplicates with one field: SELECT name, COUNT(email) FROM users GROUP BY email HAVING ( COUNT(email) > 1 ) So if we have a table ID NAME EMAIL 1 John [email protected] 2 Sam [email protected] 3 Tom [email protected] 4 Bob [email protected] 5 Tom [email protected] This query will give us John, Sam, Tom, Tom because they all have the same e-mails. But what I want is to get duplicates with the same e-mails and names. I want to get Tom, Tom. I made a mistake and allowed duplicate name and e-mail values to be inserted. Now I need to remove/change the duplicates, but I need to find them first.

    Read the article

  • How to prevent GetOleDbSchemaTable from returning duplicate sheet names from Excel workbook

    - by Richard Bysouth
    Hi I have a function to return a DataView containing info on sheets in an Excel Workbook, as follows: Public Function GetSchemaInfo() As DataView Using connection As New OleDbConnection(GetConnectionString()) connection.Open() Dim schemaTable As DataTable = connection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, Nothing) connection.Close() Return New DataView(schemaTable) End Using End Function This works fine except that if the workbook has linked data (i.e. pulls its data from another workbook), duplicate sheet names are returned. For example, Workbook1 has a single worksheet, Sheet1. I get 2 rows in the DataView, with the TABLE_NAME field being "Sheet1$" and "Sheet1$_". OK, I could use a RowFilter, but wondered whether there was a better way or why this extra row is returned? thanks Richard

    Read the article

  • sql-server: how to select duplicate rows from a table?

    - by RedsDevils
    Hi all, I have the following table: CREATE TABLE TEST(ID TINYINT NULL, COL1 CHAR(1)) INSERT INTO TEST(ID,COL1) VALUES (1,'A') INSERT INTO TEST(ID,COL1) VALUES (2,'B') INSERT INTO TEST(ID,COL1) VALUES (1,'A') INSERT INTO TEST(ID,COL1) VALUES (1,'B') INSERT INTO TEST(ID,COL1) VALUES (1,'B') INSERT INTO TEST(ID,COL1) VALUES (2,'B') I would like to select the duplicate rows from that table. How can I select them? I tried the following: SELECT TEST.ID,TEST.COL1 FROM TEST WHERE TEST.ID IN (SELECT ID FROM TEST WHERE TEST.COL1 IN (SELECT COL1 FROM TEST WHERE TEST.ID IN (SELECT ID FROM TEST GROUP BY ID HAVING COUNT(*) > 1) GROUP BY COL1 HAVING COUNT(*) > 1) GROUP BY ID HAVING COUNT(*) > 1) Where is the error? Can you fix it? Thanks in advance!

    Read the article

  • mysql retrieve duplicate entry

    - by Cypher
    I have a huge text file that I'm importing into a MySQL database. The four important tables I'm using are: schools (id, name), variables (id, name), var_entries (id, var_id, data) with UNIQUE(var_id, data), and data (id, school_id, var_ent_id). I want every variable to have a unique set of values (hence the unique constraint), but I also want to associate the appropriate var_entry with a school in data. So my question is: is there a way to get the appropriate var_ent_id when a duplicate var_entry is added? I know I can do this by storing the values and using a strcmp, but it'd be messy.
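
    One widely used MySQL idiom for this is INSERT ... ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id), which makes the generated-key mechanism hand back the id of the already-existing row whenever the unique key is hit. A sketch using the table names from the question, written against JDBC here; the import script's own language would issue the same SQL:

      import java.sql.*;

      public class VarEntryUpsert {
          /** Inserts (varId, data) into var_entries, or returns the id of the existing duplicate. */
          static long getOrInsertVarEntry(Connection conn, long varId, String data) throws SQLException {
              String sql = "INSERT INTO var_entries (var_id, data) VALUES (?, ?) "
                         + "ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id)";
              try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
                  ps.setLong(1, varId);
                  ps.setString(2, data);
                  ps.executeUpdate();
                  try (ResultSet keys = ps.getGeneratedKeys()) {
                      keys.next();
                      return keys.getLong(1);   // id of the new row, or of the existing duplicate
                  }
              }
          }
      }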

    Read the article

  • merging two duplicate contacts/ColdFusion

    - by jil
    This has to do with data integrity. I maintain a ColdFusion database at a small shop that keeps addresses of different contacts. These contacts sometimes contain notes. When merging two duplicate contacts, one may have been created in 2002 and one in 2008. If the 2002 contact has notes prior to 2008, does it matter if you merge these contacts and keep the 2008 contact's ID number? Would that affect the data integrity or create any sort of issues with the notes earlier than 2008? I hope I've accurately described my scenario, as I am not familiar with the proper technical terms. I really appreciate the help!

    Read the article

  • Duplicate all rows in sql database table

    - by Andrew Welch
    I have a table called property which contains house details. I am creating a localised application, and I have a db table called propertylocalised. This table holds duplicates of the data plus a culture column, e.g. key culture propertyname 1 en helloproperty 1 fr bonjourproperty At the moment all my en-culture rows are inserted, but I want to duplicate all of those rows and, for every duplicated row, insert fr into culture. I obviously only want to do this once, for the purpose of setting up the localisation. Thanks Andy

    Read the article

  • Handling duplicate nodes in XML

    - by JYelton
    Scenario: I am parsing values from an XML file using C# and have the following method: private static string GetXMLNodeValue(XmlNode basenode, string strNodePath) { if (basenode.SelectSingleNode(strNodePath) != null) return (basenode.SelectSingleNode(strNodePath).InnerText); else return String.Empty; } To get a particular value from the XML file, I generally pass the root node and a path like "parentnode/item". I recently ran into an issue where two nodes at the same document level share the same name. Question: What is the best way to get the values for duplicate-named nodes distinctly? My thought was to load all values matching the node name into an array and then use the array index to refer to them. I'm not sure how to implement that, though. (I'm not well-versed in XML navigation.)

    Read the article
