Search Results

Search found 17140 results on 686 pages for 'records management'.

Page 61/686 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • iPhone Development - Query related records using CoreData

    - by Mustafa
    I have a case with three entities connected by one-to-many relationships: Entity A (relationship to Entity B), Entity B (relationships to Entity A and Entity C), and Entity C (relationship to Entity B). I have a reference to an Entity A object, and I want to fetch all the related Entity C records. How can I do that with the least amount of code? Edit: Here's another way to put it. Can we perform joins with Core Data? For example (and this is a very crude example), we have the following entity relationships: Grand Parent (1)---(m) Parent, Parent (1)---(m) Child. So if I have "Albert" the Grand Parent and I want to get all his grandchildren, how can I do that?

    Read the article

  • insert many records using ADO

    - by Salvador
    I am looking for the fastest way to insert many records at once (1000+) into a table using ADO. Option 1) using INSERT commands and parameters: ADODataSet1.CommandText := 'INSERT INTO .....'; ADODataSet1.Parameters.CreateParameter('myparam', ftString, pdInput, 12, ''); ADODataSet1.Open; Option 2) using TADOTable: AdoTable1.Insert; AdoTable1.FieldByName('myfield').Value := myvalue; ... AdoTable1.FieldByName('myfieldN').Value := myvalueN; AdoTable1.Post; Option 3) any suggestions? I am using Delphi 7, ADO and Oracle. Thanks in advance.

    Read the article

  • How to find N Consecutive records in a table using SQL

    - by user320587
    Hi, I have the following table definition with sample data. In the following table, Customer, Product and Date are key fields. Table One: Customer Product Date SALE X A 01/01/2010 YES X A 02/01/2010 YES X A 03/01/2010 NO X A 04/01/2010 NO X A 05/01/2010 YES X A 06/01/2010 NO X A 07/01/2010 NO X A 08/01/2010 NO X A 09/01/2010 YES X A 10/01/2010 YES X A 11/01/2010 NO X A 12/01/2010 YES In the above table, I need to find the runs of N or more consecutive records where there was no sale, i.e. the Sale value was 'NO'. For example, if N is 2, the result set would return the following: Customer Product Date SALE X A 03/01/2010 NO X A 04/01/2010 NO X A 06/01/2010 NO X A 07/01/2010 NO X A 08/01/2010 NO Can someone help me with a SQL query to get the desired results? I am using SQL Server 2005. I started playing with ROW_NUMBER() and PARTITION BY clauses but had no luck. Thanks for any help.
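
    A minimal gaps-and-islands sketch for SQL Server 2005, assuming the table is named dbo.[One] with the columns shown above and N = 2; rows in the same consecutive run share the difference of the two ROW_NUMBER() values:

        WITH numbered AS (
            SELECT Customer, Product, [Date], Sale,
                   ROW_NUMBER() OVER (PARTITION BY Customer, Product ORDER BY [Date])
                 - ROW_NUMBER() OVER (PARTITION BY Customer, Product, Sale ORDER BY [Date]) AS grp
            FROM dbo.[One]
        ),
        runs AS (
            -- keep only the no-sale runs that are at least N rows long
            SELECT Customer, Product, grp
            FROM numbered
            WHERE Sale = 'NO'
            GROUP BY Customer, Product, grp
            HAVING COUNT(*) >= 2      -- N
        )
        SELECT n.Customer, n.Product, n.[Date], n.Sale
        FROM numbered AS n
        JOIN runs AS r
          ON r.Customer = n.Customer AND r.Product = n.Product AND r.grp = n.grp
        WHERE n.Sale = 'NO'
        ORDER BY n.Customer, n.Product, n.[Date];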

    Read the article

  • Limit a user to view only associated records in rails

    - by trobrock
    I have an application with three models (Profile, SubModel, SubSubModel) chained together with has-many relationships. I am trying to limit a user, after logging in, to retrieving only the records that are associated with their Profile. I am very new to Rails, and this is what I had been trying in the Profile model: has_many :submodels, :conditions => {:profile_id => self.id} but this returns an empty data set when calling Profile.find_by_id(1).submodels. How else could I achieve what I am trying to do? Or should I handle this in the controller or view instead? I thought the model sounded well suited to handle this.

    Read the article

  • Select Statement to show missing records (Easy Question)

    - by Gerhard Weiss
    I need some T-SQL that will show missing records. Here is some sample data: Emp 1 01/01/2010 02/01/2010 04/01/2010 06/01/2010 Emp 2 02/01/2010 04/01/2010 05/01/2010 etc... I need to know that Emp 1 is missing 03/01/2010 05/01/2010 and Emp 2 is missing 01/01/2010 03/01/2010 06/01/2010. The range to check will start with today's date and go back 6 months. In this example, let's say today's date is 06/12/2010, so the range is going to be 01/01/2010 thru 06/01/2010. The day is always going to be the 1st in the data. Thanks a bunch. :) Gerhard Weiss, Secretary of Great Lakes Area .NET Users Group (GANG)
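
    A sketch of one common approach: build the six month-start dates with a recursive CTE and outer-join them to the data. The table and column names (dbo.EmpMonths with Emp and MonthDate) are hypothetical stand-ins for the real schema:

        -- Hypothetical source table: dbo.EmpMonths(Emp VARCHAR(10), MonthDate DATETIME)
        DECLARE @firstMonth DATETIME, @thisMonth DATETIME;
        SET @thisMonth  = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0);  -- 1st of the current month
        SET @firstMonth = DATEADD(MONTH, -5, @thisMonth);                    -- six month-starts including this one

        WITH months AS (
            SELECT @firstMonth AS MonthDate
            UNION ALL
            SELECT DATEADD(MONTH, 1, MonthDate) FROM months WHERE MonthDate < @thisMonth
        ),
        emps AS (
            SELECT DISTINCT Emp FROM dbo.EmpMonths
        )
        SELECT e.Emp, m.MonthDate AS MissingDate
        FROM emps AS e
        CROSS JOIN months AS m
        LEFT JOIN dbo.EmpMonths AS d
               ON d.Emp = e.Emp AND d.MonthDate = m.MonthDate
        WHERE d.Emp IS NULL            -- no matching row: this month is missing for this employee
        ORDER BY e.Emp, m.MonthDate;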

    Read the article

  • keep duplicate number records only - perl

    - by manu
    Hello, I have a text string containing some duplicate characters (e.g. FFGGHHJKL); these can be made unique by using a positive lookahead (Perl one-liner: perl -pe 's/(.)(?=.*?\1)//g'), so FFEEDDCCGG gives the output FEDCG. My question is: how do I make this work on numbers (e.g. 212 212 43 43 5689 6689 5689 71 81 should output 212 43 5689 6689 71 81)? Also, if we want only the duplicate records to be output from a file having n rows (212 212 43 43 5689 6689 5689 71 81 \n 66 66 67 68 69 69 69 71 71 52 ..\n .. .. \n... output == 212 212 43 43 5689 5689 \n 66 66 69 69 69 71 71), what should be done? Thanks and regards, Manu

    Read the article

  • Cross-reference between delphi records

    - by Paul-Jan
    Let's say I have a record TQuaternion and a record TVector. Quaternions have some methods with TVector parameters. On the other hand, TVector supports some operations that have TQuaternion parameters. Knowing that Delphi (Win32) does not allow for forward record declarations, how do I solve this elegantly? Using classes is not really an option here, because I really want to use operator overloading for this rare case where it actually makes good sense. For now I simply moved these particular methods out of the records and into separate functions, the good old-fashioned way. Better suggestions are most welcome.

    Read the article

  • Get records from Access table

    - by chianta
    In Access 2010 I need to use VBA to get the records in a table, process them and put them in a new table. Could you tell me how I can do this? Is there a way, similar to C#, to put the result of a query into something like a DataTable? I found an example of how to get the data: http://pastebin.com/bCtg20jp But it always fails on the first ADODB.Recordset statement. I checked the included references, and the library that I believe provides ADODB, "Microsoft Access 14.0 Object Library", is already included. Thanks

    Read the article

  • Comparing 2 mysql tables and updating only records that have changed

    - by Roland
    I have the following: two MySQL tables. I want to copy information that has changed from table A to table B. For example, if row 1, column 2 has changed in table A, I want to update only that column in table B. Table B is not the same as table A, but it has columns that also exist in A. The other solution I have is to just clear table B and replace its contents with table A; the problem with this is that the script could take longer to execute, since there are more than 10000 records. Any advice on which method would work best will be highly appreciated.
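
    A minimal sketch of a changed-rows-only update using MySQL's multi-table UPDATE syntax; the table and column names (table_a, table_b, shared key id, shared columns col1 and col2) are hypothetical:

        -- Update only the rows in table_b whose shared columns differ from table_a
        UPDATE table_b AS b
        JOIN table_a AS a ON a.id = b.id
        SET b.col1 = a.col1,
            b.col2 = a.col2
        WHERE NOT (b.col1 <=> a.col1)     -- <=> is NULL-safe, so NULL vs. value counts as a change
           OR NOT (b.col2 <=> a.col2);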

    Read the article

  • magic records being deleted

    - by chris
    I have customised osCommerce to pull in a CSV file of products; anything without an image, a proper description or a proper title gets removed. The import runs on a cron job, pulling information from a supplier. It hasn't run since yesterday, but a product has disappeared. Anyone who has used osCommerce will know that product information is stored over multiple tables, for example products, products_description and so on. The thing that has got me is that the information is deleted from the products table but not from the products_description table. The product being deleted is a manually input one which carries a special tag/prefix in the model field of the products table, and therefore shouldn't be touched at all. I am clueless as to what is going on. Are there MySQL integrity checks deleting records? Could there be another plugin working on osCommerce?

    Read the article

  • mysql to get depth of record, count parent and ancestor records

    - by Nate
    Hey all, say I have a post table containing the fields post_id and parent_post_id. I want to return every record in the post table with a count of the "depth" of the post. By depth, I mean how many parent and ancestor records exist. Take this data for example (post_id, parent_post_id): 1 null, 2 1, 3 1, 4 2, 5 4. The data represents this hierarchy: 1 |_ 2 | |_ 4 | |_ 5 |_ 3. The result of the query should be (post_id, depth): 1 0, 2 1, 3 1, 4 2, 5 3. Thanks in advance!
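
    A sketch using a recursive CTE, which assumes MySQL 8.0+ (older MySQL versions have no recursive CTEs, so the walk would need repeated self-joins or application code); the table is assumed to be post(post_id, parent_post_id):

        WITH RECURSIVE post_depth AS (
            SELECT post_id, parent_post_id, 0 AS depth
            FROM post
            WHERE parent_post_id IS NULL          -- roots have depth 0
            UNION ALL
            SELECT p.post_id, p.parent_post_id, d.depth + 1
            FROM post AS p
            JOIN post_depth AS d ON p.parent_post_id = d.post_id
        )
        SELECT post_id, depth
        FROM post_depth
        ORDER BY post_id;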

    Read the article

  • c# - pull records from database without timeout

    - by BhejaFry
    Hi folks, I have a SQL query with multiple joins that pulls data from a database for processing. It is supposed to run on a scheduled basis: on day 1 it might pull 500 records, on day 2 say 400. Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process. This is causing a timeout on the SQL query. What is the best way to handle this situation without causing a timeout, while gradually working through the backlog? TIA
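
    One common pattern is to pull the backlog in fixed-size batches rather than one big query. The sketch below assumes SQL Server and a hypothetical dbo.SourceTable with IsProcessed and CreatedAt columns:

        DECLARE @batch INT;
        SET @batch = 500;

        WHILE 1 = 1
        BEGIN
            ;WITH next_batch AS (
                SELECT TOP (@batch) *
                FROM dbo.SourceTable
                WHERE IsProcessed = 0
                ORDER BY CreatedAt
            )
            UPDATE next_batch
            SET IsProcessed = 1;          -- or copy the batch to a staging table for processing here

            IF @@ROWCOUNT = 0 BREAK;      -- backlog drained
        END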

    Read the article

  • Insert multiple records from a XML string differing on one parameter in SQL SERVER 2008

    - by Rohit
    Below is a query which inserts records into the SimpleDictationProfileMapping table after reading them from an XML string. Currently this query inserts a single record in which DictationCaptureProfileID is @dictationCaptureProfileId. Now I want to insert multiple rows in which DictationCaptureProfileID differs and the other 2 values stay the same. What I want to achieve by this is that, in case the parent changes, all child values should also change. INSERT INTO SimpleDictationProfileMapping ( DictationCaptureProfileID, DictationProfileMappingAttributeID, DictationProfileMappingAttributeValue ) SELECT @dictationCaptureProfileId, row.value('@attrId','varchar(max)'), row.value('@value', 'varchar(max)') FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row ) ; I want INSERT INTO SimpleDictationProfileMapping ( DictationCaptureProfileID OR (SELECT DictationCaptureProfileID FROM DictationCaptureProfile WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID), DictationProfileMappingAttributeID, DictationProfileMappingAttributeValue ) SELECT @dictationCaptureProfileId , row.value('@attrId','varchar(max)'), row.value('@value', 'varchar(max)') FROM @simpleDictationCaptureProfileMappings.nodes ('/simpleMappingAtribute/attribute') AS d ( row ) ; Please tell me how to achieve this.
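
    A sketch of one way to read the request: cross-join the shredded XML attributes with every matching DictationCaptureProfileID from the parent lookup, so one row is inserted per capture profile. This assumes the intent is to fan the same attribute values out to all profiles under @systemDictationCaptureProfileID:

        INSERT INTO SimpleDictationProfileMapping
            ( DictationCaptureProfileID,
              DictationProfileMappingAttributeID,
              DictationProfileMappingAttributeValue )
        SELECT dcp.DictationCaptureProfileID,
               d.row.value('@attrId', 'varchar(max)'),
               d.row.value('@value',  'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row )
        CROSS JOIN ( SELECT DictationCaptureProfileID
                     FROM DictationCaptureProfile
                     WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID ) AS dcp;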

    Read the article

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is: SELECT table1.term, table2.keyword FROM table1 INNER JOIN table2 ON table1.term LIKE CONCAT('%', table2.keyword, '%'); This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes: as for server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8G of RAM, fine-tuned for the MySQL workload.
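
    If the keywords are whole words (an assumption; the LIKE above also matches substrings), one sketch of an alternative is to tokenize the terms once into a word table so the match becomes an indexed equality join instead of a non-indexable '%...%' LIKE. Here table1 is assumed to have a numeric primary key id, and the tokenizing is done by whatever loads table1:

        CREATE TABLE term_word (
            term_id INT UNSIGNED NOT NULL,   -- refers to table1.id (assumed primary key)
            word    VARCHAR(64)  NOT NULL,
            KEY ix_word (word)
        ) ENGINE=MYISAM;

        -- one row per word of each term is inserted at load time, then:
        SELECT t1.term, t2.keyword
        FROM term_word AS tw
        JOIN table2    AS t2 ON t2.keyword = tw.word
        JOIN table1    AS t1 ON t1.id = tw.term_id;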

    Read the article

  • sql server 2005 - return single row when 2 records in right table

    - by Peanut
    Hi, I have two related SQL Server tables, TableA and TableB. ***TableA - Columns*** TableA_ID INT VALUE VARCHAR(100) ***TableB - Columns*** TableB_ID INT TableA_ID INT VALUE VARCHAR(100) For every single record in TableA there are always 2 records in TableB; therefore TableA has a one-to-many relationship with TableB. How could I write a single SQL statement to join these tables and return one row for each row in TableA that includes: a column for the VALUE column of the first related row in TableB, and a column for the VALUE column of the second related row in TableB? Thanks.
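
    A sketch for SQL Server 2005: number the two child rows per parent with ROW_NUMBER(), then pivot them into columns (ordering by TableB_ID is an assumption about what "first" and "second" mean):

        WITH b AS (
            SELECT TableA_ID, [VALUE],
                   ROW_NUMBER() OVER (PARTITION BY TableA_ID ORDER BY TableB_ID) AS rn
            FROM TableB
        )
        SELECT a.TableA_ID,
               a.[VALUE]                                    AS A_Value,
               MAX(CASE WHEN b.rn = 1 THEN b.[VALUE] END)   AS B_Value1,
               MAX(CASE WHEN b.rn = 2 THEN b.[VALUE] END)   AS B_Value2
        FROM TableA AS a
        JOIN b ON b.TableA_ID = a.TableA_ID
        GROUP BY a.TableA_ID, a.[VALUE];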

    Read the article

  • Reading records from Excel PivotCache

    - by hcpremium
    I have an Excel workbook which contains a PivotCache that I would like to use as a data source. var file = @"Foo.xls"; var excel = new Excel.Application(); var workbook = excel.Workbooks.Open(file); Excel.PivotCache cache = null; foreach (Excel.PivotCache pivotCache in workbook.PivotCaches()) { if (...) { cache = pivotCache; } } var records = cache.Recordset; The last statement throws an exception (Exception from HRESULT: 0x800A03EC). How can I access the PivotCache? I tried it through OLE DB first, but with no success...

    Read the article

  • Rails/mysql SUM distinct records - optimization

    - by pepernik
    Hey. How would you optimize this SQL? SELECT SUM(tmp.cost) FROM ( SELECT DISTINCT clients.id as client, countries.credits_cost AS cost FROM countries INNER JOIN clients ON clients.country_id = countries.id INNER JOIN clients_groups ON clients_groups.client_id=clients.id WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9) GROUP BY clients.id ) AS tmp; I'm using this example as part of my Ruby on Rails project. Note that my nested SQL (tmp) can have more than 10 million records. You can split it into multiple queries if the performance is better. Should I add any indexes to make it quicker (I have them on the IDs)?
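
    A possible rewrite, assuming each client belongs to exactly one country, is to drop the DISTINCT/GROUP BY subquery and sum the per-country cost once per distinct client in the selected groups; a composite index on clients_groups (group_id, client_id) should also help here:

        SELECT SUM(countries.credits_cost) AS total_cost
        FROM countries
        INNER JOIN clients ON clients.country_id = countries.id
        WHERE clients.id IN (
            SELECT DISTINCT clients_groups.client_id
            FROM clients_groups
            WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
        );
        -- On older MySQL versions, which plan IN subqueries poorly, joining against the
        -- same DISTINCT list as a derived table may perform better than the IN form.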

    Read the article

  • Ruby on Rails updating join table records

    - by Eef
    Hey, I have two models, User and Role, with a many-to-many relationship set up between them and a join table called roles_users. I have a form on a page with a list of roles; the user checks checkboxes and the form posts to the controller, which then updates the roles_users table. At the moment in my update method I am doing this, because I am not sure of a better way: role_ids = params[:role_ids] user.roles.clear role_ids.each do |role| user.roles << Role.find(role) end unless role_ids.nil? So I am clearing all the entries and then looping through all the role ids sent from the form via POST. I also noticed that if all the checkboxes are checked and the form is posted, it keeps adding duplicate records. Could anyone give some advice on a more efficient way of doing this?

    Read the article

  • Filter records based on Date Range + ASP.NET + Regex + Javascript

    - by ASIF
    Hi, I need to filter data based on a date range. My table has a Process Date field. I need to filter the records and display those in the range FromDate to ToDate. How do I write a function in VB.NET which can help me filter the data? Protected Shared Function ObjectInRange(ByRef obj As Object, ByVal str1 As String, ByVal str2 As String) As Boolean Dim inRange = False For Each prop As PropertyInfo In obj.GetType().GetProperties() Dim propVal = prop.GetValue(obj, Nothing) If propVal Is Nothing Then Continue For End If Dim propValString = Convert.ToString(propVal) If Regex....WHAT GOES HERE? Then inRange = True Exit For End If Next Return inRange End Function Am I on the right track?

    Read the article

  • PostgreSQL: keep a certain number of records in a table

    - by Alexander Farber
    Hello, I have an SQL table holding the last hands received by a player in a card game. The hand is represented by an integer (32 bits == 32 cards): create table pref_hand ( id varchar(32) references pref_users, hand integer not NULL check (hand > 0), stamp timestamp default current_timestamp ); As the players are playing constantly, that data isn't important (just a gimmick to be displayed on player profile pages), and I don't want my database to grow too quickly, I'd like to keep only up to 10 records per player id. So I'm trying to declare this PL/pgSQL procedure: create or replace function pref_update_game(_id varchar, _hand integer) returns void as $BODY$ begin delete from pref_hand offset 10 where id=_id order by stamp; insert into pref_hand (id, hand) values (_id, _hand); end; $BODY$ language plpgsql; but unfortunately this fails with: ERROR: syntax error at or near "offset" because DELETE doesn't support OFFSET. Does anybody have a better idea here? Thank you! Alex
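
    A sketch of the usual workaround: since DELETE has no ORDER BY/OFFSET, select the rows to keep in a subquery and delete everything else for that player. This version inserts the new hand first and then trims to the 10 most recent rows, keying on ctid:

        create or replace function pref_update_game(_id varchar, _hand integer)
        returns void as $BODY$
        begin
            insert into pref_hand (id, hand) values (_id, _hand);

            -- keep only the 10 most recent hands for this player
            delete from pref_hand
            where id = _id
              and ctid not in (
                  select ctid
                  from pref_hand
                  where id = _id
                  order by stamp desc
                  limit 10
              );
        end;
        $BODY$ language plpgsql;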

    Read the article

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows: CREATE TABLE text_page ( text VARCHAR(255), page_id INT UNSIGNED ) ENGINE=MYISAM DEFAULT CHARSET=ascii The requirement is to create an index over the column text. The table size is about 34G. I have tried to create the index with the following statement: ALTER TABLE text_page ADD KEY ix_text (text) After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to be updated, inserted into, or deleted from. The reason for creating an index on the column text is that this kind of SQL query will be frequently executed: SELECT page_id FROM text_page WHERE text = ?
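
    Two sketches that sometimes help for a static MyISAM table like this: give the index build a larger sort buffer, and/or index only a prefix of the column, which MySQL can still use for the equality lookup at the cost of re-checking rows whose prefixes collide. The 1G and 32-character figures are illustrative assumptions, not recommendations:

        -- larger in-memory sort area for the index build (size it to the machine)
        SET SESSION myisam_sort_buffer_size = 1024 * 1024 * 1024;

        -- prefix index: much smaller to build and store than indexing all 255 characters
        ALTER TABLE text_page ADD KEY ix_text (text(32));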

    Read the article

  • Get records based on child condition

    - by Shawn Mclean
    In LINQ to Entities: how do I get the records (including both child and parent) based on a condition of the child in a one-to-many relationship? My structure is set up as follows: GetResources() returns a list of Resources. GetResources().ResourceNames is the child, which is an entity collection. A property of each record of this child is Name. I'd like to construct something like this: return (from p in repository.GetResources() where p.ResourceNames.Exist(r => r.Name.Contains(text, StringComparison.CurrentCultureIgnoreCase)) select p).ToList(); but of course, Exist doesn't exist. Thanks.

    Read the article

  • How to search for duplicate values in a huge text file having around Half Million records

    - by Shibu
    I have an input txt file which has data in the form of records (each row is a record and represents, more or less, a DB table row) and I need to find duplicate values. For example: Rec1: ACCOUNT_NBR_1*NAME_1*VALUE_1 Rec2: ACCOUNT_NBR_2*NAME_2*VALUE_2 Rec3: ACCOUNT_NBR_1*NAME_3*VALUE_3 In the above set, Rec1 and Rec3 are considered to be duplicates, as their ACCOUNT NUMBERS are the same (ACCOUNT_NBR_1). Note: the input file shown above is a delimited file (the delimiter being *); however, the file type can also be a fixed-length file where each column starts and ends at specified positions. I am currently doing this with the following logic: loop through each ACCOUNT NUMBER; loop through each line of the txt file and check whether it is repeated; if repeated, record it in a hashtable. And I am using the 'Pattern' and 'BufferedReader' Java APIs to perform the above task. But since it is taking a long time, I would like to know a better way of handling it. Thanks, Shibu

    Read the article

  • Dreamweaver recordset filter - Display all records as default

    - by Drew
    I am trying to create a simple search form to filter the results in a dynamic table. The search form is on the same page as the results and posts to itself. I get the search string from the POST variable. It is working, but I can't figure out how to make displaying all results the default. Dreamweaver automatically sets the default value to -1, and therefore no results are displayed on the initial load. How do I change this to display ALL records by default and apply the filter only if a search string is defined?

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when it is converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard the rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.

    Read the article
