Search Results

Search found 5233 results on 210 pages for 'a records'.

  • c# - pull records from database without timeout

    - by BhejaFry
    Hi folks, I have a SQL query with multiple joins that pulls data from a database for processing. It is supposed to run on a schedule: on day 1 it might pull 500 records, on day 2 say 400. Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process, and that volume makes the SQL query time out. What is the best way to handle this situation without hitting the timeout, while gradually working off the backlog? TIA
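
    One common approach is to pull the backlog in fixed-size batches so each query stays small no matter how far behind the service has fallen. A minimal T-SQL sketch; the table and column names here are assumptions, not from the question:

        DECLARE @batch INT;
        SET @batch = 500;

        -- pull one manageable slice of the backlog
        SELECT TOP (@batch) r.id, r.payload
        FROM dbo.PendingRecords r
        WHERE r.processed = 0
        ORDER BY r.created_at;

        -- mark the returned rows as processed, then repeat until nothing comes back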

  • Insert multiple records from an XML string, differing on one parameter, in SQL Server 2008

    - by Rohit
    Below is a query which inserts records into the SimpleDictationProfileMapping table after reading them from an XML string. Currently it inserts a single record in which DictationCaptureProfileID is @dictationCaptureProfileId. Now I want to insert multiple rows in which @dictationCaptureProfileId differs while the other two values stay the same. What I want to achieve is that when the parent changes, all child values change as well.

        INSERT INTO SimpleDictationProfileMapping
            ( DictationCaptureProfileID,
              DictationProfileMappingAttributeID,
              DictationProfileMappingAttributeValue )
        SELECT @dictationCaptureProfileId,
               row.value('@attrId', 'varchar(max)'),
               row.value('@value', 'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row );

    What I want is for DictationCaptureProfileID to come not from the single parameter but from (SELECT DictationCaptureProfileID FROM DictationCaptureProfile WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID), with the other two columns unchanged. Please tell me how to achieve this.
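
    Since that subquery can return several profile IDs, one hedged way to get one inserted row per ID is to cross join the XML rows against the filtered profile table (the table and column names below all come from the question itself):

        INSERT INTO SimpleDictationProfileMapping
            ( DictationCaptureProfileID,
              DictationProfileMappingAttributeID,
              DictationProfileMappingAttributeValue )
        SELECT p.DictationCaptureProfileID,
               row.value('@attrId', 'varchar(max)'),
               row.value('@value', 'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row )
        CROSS JOIN DictationCaptureProfile AS p
        WHERE p.SystemDictationCaptureProfileID = @systemDictationCaptureProfileID;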

  • mysql - filtering a list against keywords, both list and keywords > 20 million records

    - by threecheeseopera
    I have two tables, both having more than 20 million records; table1 is a list of terms, and table2 is a list of keywords that may or may not appear in those terms. I need to identify the terms that contain a keyword. My current strategy is:

        SELECT table1.term, table2.keyword
        FROM table1
        INNER JOIN table2
            ON table1.term LIKE CONCAT('%', table2.keyword, '%');

    This is not working; it takes f o r e v e r. It's not the server (see notes). How might I rewrite this so that it runs in under a day? Notes on server optimization: both tables are MyISAM and have unique indexes on the matching fields; the MyISAM key buffer is greater than the sum of both index file sizes, and it is not even being fully taxed (key_blocks_unused is ... large); the server is a dual-Xeon 2U beast with fast SAS drives and 8G of RAM, fine-tuned for the MySQL workload.
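
    A LIKE pattern with a leading wildcard can never use a B-tree index, so this join effectively rescans one table once per row of the other. If the keywords are whole words rather than arbitrary substrings, a FULLTEXT index is one hedged alternative; MySQL's AGAINST() only accepts a literal string, so the application would loop over the keywords:

        ALTER TABLE table1 ADD FULLTEXT INDEX ft_term (term);

        -- run once per keyword (or batch of keywords) from the application side
        SELECT term
        FROM table1
        WHERE MATCH(term) AGAINST ('+somekeyword' IN BOOLEAN MODE);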

  • sql server 2005 - return single row when 2 records in right table

    - by Peanut
    Hi, I have two related SQL Server tables, TableA and TableB.

        TableA - Columns             TableB - Columns
        TableA_ID  INT               TableB_ID  INT
        VALUE      VARCHAR(100)      TableA_ID  INT
                                     VALUE      VARCHAR(100)

    For every record in TableA there are always exactly two records in TableB, so TableA has a one-to-many relationship with TableB. How could I write a single SQL statement that joins these tables and returns one row for each row in TableA, including: a column for the VALUE of the first related row in TableB, and a column for the VALUE of the second related row in TableB? Thanks.
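
    One hedged sketch uses ROW_NUMBER() (available in SQL Server 2005) to label the two child rows, then pivots them with conditional aggregation; ordering the children by TableB_ID is an assumption:

        SELECT a.TableA_ID,
               a.VALUE,
               MAX(CASE WHEN b.rn = 1 THEN b.VALUE END) AS FirstValue,
               MAX(CASE WHEN b.rn = 2 THEN b.VALUE END) AS SecondValue
        FROM TableA a
        INNER JOIN (
            SELECT TableA_ID, VALUE,
                   ROW_NUMBER() OVER (PARTITION BY TableA_ID
                                      ORDER BY TableB_ID) AS rn
            FROM TableB
        ) b ON b.TableA_ID = a.TableA_ID
        GROUP BY a.TableA_ID, a.VALUE;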

  • Reading records from Excel PivotCache

    - by hcpremium
    I have an Excel workbook which contains a PivotCache I would like to use as a data source.

        var file = @"Foo.xls";
        var excel = new Excel.Application();
        var workbook = excel.Workbooks.Open(file);

        Excel.PivotCache cache = null;
        foreach (Excel.PivotCache pivotCache in workbook.PivotCaches())
        {
            if (...)
            {
                cache = pivotCache;
            }
        }

        var records = cache.Recordset;

    The last command throws an exception (Exception from HRESULT: 0x800A03EC). How can I access the PivotCache? I tried it through OLE DB first, but no success...

  • Rails/mysql SUM distinct records - optimization

    - by pepernik
    Hey. How would you optimize this SQL?

        SELECT SUM(tmp.cost)
        FROM (
            SELECT DISTINCT clients.id AS client, countries.credits_cost AS cost
            FROM countries
            INNER JOIN clients ON clients.country_id = countries.id
            INNER JOIN clients_groups ON clients_groups.client_id = clients.id
            WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
            GROUP BY clients.id
        ) AS tmp;

    I'm using this as part of my Ruby on Rails project. Note that the nested query (tmp) can return more than 10 million records. You can split it into more than one SQL statement if that performs better. Should I add any indexes to make it quicker (I have them on the IDs)?
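
    DISTINCT is redundant next to GROUP BY clients.id: the grouping already yields one row per client, so dropping DISTINCT removes an extra deduplication pass over millions of rows. A hedged rewrite, plus a composite index (an assumption, in case only single-column indexes exist):

        SELECT SUM(tmp.cost)
        FROM (
            SELECT countries.credits_cost AS cost
            FROM countries
            INNER JOIN clients ON clients.country_id = countries.id
            INNER JOIN clients_groups ON clients_groups.client_id = clients.id
            WHERE clients_groups.group_id IN (1,2,3,4,5,6,7,8,9)
            GROUP BY clients.id
        ) AS tmp;

        -- lets the IN filter and the join read from a single index
        CREATE INDEX idx_cg_group_client ON clients_groups (group_id, client_id);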

  • Ruby on Rails updating join table records

    - by Eef
    Hey, I have two models, Users and Roles, with a many-to-many relationship between them and a join table called roles_users. A form on a page lists the roles as checkboxes; the user checks some and the form posts to the controller, which then updates the roles_users table. At the moment my update method does this, because I am not sure of a better way:

        role_ids = params[:role_ids]
        user.roles.clear
        role_ids.each do |role|
          user.roles << Role.find(role)
        end unless role_ids.nil?

    So I am clearing all the entries and then looping through all the role ids sent from the form via POST. I also noticed that if all the checkboxes are checked and the form is posted, it keeps adding duplicate records. Could anyone give some advice on a more efficient way of doing this?

  • Filter records based on Date Range + ASP.NET + Regex + Javascript

    - by ASIF
    Hi, I need to filter data based on a date range. My table has a Process Date field, and I need to filter the records and display those in the range FromDate to ToDate. How do I write a function in VB.NET which can help me filter the data?

        Protected Shared Function ObjectInRange(ByRef obj As Object, ByVal str1 As String, ByVal str2 As String) As Boolean
            Dim inRange = False
            For Each prop As PropertyInfo In obj.GetType().GetProperties()
                Dim propVal = prop.GetValue(obj, Nothing)
                If propVal Is Nothing Then
                    Continue For
                End If
                Dim propValString = Convert.ToString(propVal)
                If Regex....WHAT GOES HERE? Then
                    inRange = True
                    Exit For
                End If
            Next
            Return inRange
        End Function

    Am I on the right track?
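
    A regular expression is an awkward fit for a range test: dates compare reliably as dates, not as strings. If the records come from a database table anyway, one hedged alternative is to filter in the query itself before the data ever reaches the code (the table and column names here are assumptions):

        SELECT *
        FROM ProcessLog
        WHERE ProcessDate BETWEEN @FromDate AND @ToDate;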

  • PostgreSQL: keep a certain number of records in a table

    - by Alexander Farber
    Hello, I have an SQL table holding the last hands received by a player in a card game. The hand is represented by an integer (32 bits == 32 cards):

        create table pref_hand (
            id varchar(32) references pref_users,
            hand integer not NULL check (hand > 0),
            stamp timestamp default current_timestamp
        );

    The players are playing constantly, that data isn't important (just a gimmick to be displayed on player profile pages), and I don't want my database to grow too quickly, so I'd like to keep only up to 10 records per player id. So I'm trying to declare this PL/pgSQL procedure:

        create or replace function pref_update_game(_id varchar, _hand integer) returns void as $BODY$
        begin
            delete from pref_hand
            offset 10
            where id=_id
            order by stamp;

            insert into pref_hand (id, hand)
            values (_id, _hand);
        end;
        $BODY$ language plpgsql;

    but unfortunately this fails with:

        ERROR: syntax error at or near "offset"

    because delete doesn't support offset. Does anybody have a better idea here? Thank you! Alex
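
    DELETE accepts neither ORDER BY nor OFFSET, but a subquery inside its WHERE clause can use both. A hedged sketch using PostgreSQL's row identifier ctid, keeping the nine newest rows so the insert that follows tops the player back up to ten:

        delete from pref_hand
        where id = _id
          and ctid in (
              select ctid
              from pref_hand
              where id = _id
              order by stamp desc
              offset 9
          );

        insert into pref_hand (id, hand) values (_id, _hand);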

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34G. I tried to create the index with the following statement:

        ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to see updates, inserts or deletes. The reason for creating an index on the text column is that this kind of query would be executed frequently:

        SELECT page_id
        FROM text_page
        WHERE text = ?
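
    For pure equality lookups a prefix index is one hedged option: it is far smaller to build and store, and MySQL re-checks the full column value on the rows the index finds, so results stay exact. Enlarging the MyISAM sort buffer for the session may also let the build sort keys in memory instead of falling back to the much slower keycache repair. The 32-character prefix below is a guess and should be tuned for selectivity:

        SET SESSION myisam_sort_buffer_size = 1024 * 1024 * 1024;
        ALTER TABLE text_page ADD KEY ix_text (text(32));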

  • Dreamweaver recordset filter - Display all records as default

    - by Drew
    I am trying to create a simple search form to filter the results in a dynamic table. The search form is on the same page as the results and posts to itself; I get the search string from the POST variable. It is working, but I can't figure out how to make the default value display all results. Dreamweaver automatically sets the default value to -1, so no results are displayed on the initial load. How do I change this to display ALL records by default and filter only when a search string is defined?
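
    If the recordset filter wraps the parameter in LIKE wildcards, one hedged fix is to change the default value from -1 to an empty string, so the initial load matches every row; the table and column names below are assumptions:

        SELECT * FROM products
        WHERE name LIKE CONCAT('%', @search, '%');
        -- with @search defaulting to '', the pattern collapses to '%'
        -- and every record comes back; a real search term still narrows it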

  • Get records based on child condition

    - by Shawn Mclean
    In LINQ to Entities: how do I get the records (including both child and parent) based on a condition on the child in a one-to-many relationship? My structure is set up as follows: GetResources() returns a list of Resources; GetResources().ResourceNames is the child, an entity collection; and each record of this child has a Name property. I'd like to construct something like this:

        return (from p in repository.GetResources()
                where p.ResourceNames.Exist(r => r.Name.Contains(text, StringComparison.CurrentCultureIgnoreCase))
                select p).ToList();

    but of course, Exist doesn't exist. Thanks.
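
    The LINQ operator with this meaning is Any(), which LINQ to Entities translates into an EXISTS subquery. A hedged sketch of the SQL shape being asked for, with assumed table and column names:

        SELECT p.*
        FROM Resources p
        WHERE EXISTS (
            SELECT 1
            FROM ResourceNames r
            WHERE r.ResourceId = p.Id
              AND r.Name LIKE '%' + @text + '%'
        );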

  • How to search for duplicate values in a huge text file with around half a million records

    - by Shibu
    I have an input txt file which has data in the form of records (each row is a record and represents, more or less, a row of a DB table) and I need to find duplicate values. For example:

        Rec1: ACCOUNT_NBR_1*NAME_1*VALUE_1
        Rec2: ACCOUNT_NBR_2*NAME_2*VALUE_2
        Rec3: ACCOUNT_NBR_1*NAME_3*VALUE_3

    In the above set, Rec1 and Rec3 are considered duplicates, as the account numbers are the same (ACCOUNT_NBR_1). Note: the input file shown above is a delimited file (the delimiter being *); however, the file can also be a fixed-length file in which each column starts and ends at specified positions. I am currently doing this with the following logic:

        Loop thru each ACCOUNT NUMBER
            Loop thru each line of the txt file and check if this is repeated.
            If repeated, record the same in a hashtable.
        End
        End

    And I am using the 'Pattern' & 'BufferedReader' Java APIs to perform the above task. But since it is taking a long time, I would like to know a better way of handling it. Thanks, Shibu
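
    The nested loop makes this O(n²) over the file; a single pass that keeps every account number seen so far in a hash set and flags repeats is O(n). Equivalently, if loading the file into a staging table is an option, the duplicate check becomes one grouped query; a hedged sketch with assumed names:

        SELECT account_nbr, COUNT(*) AS occurrences
        FROM staging_records
        GROUP BY account_nbr
        HAVING COUNT(*) > 1;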

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.

  • storing a huge number of records in the classic ASP cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the application cache object. But that's not the killer: before storing the recordset, the code converts the ADO stream to XML and then stores that. This conversion of the huge recordset to XML spikes the CPU and causes all kinds of havoc for users while it's happening. And unfortunately we do this XML conversion to read the cache a lot, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously need caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from the cache on every read and update?

  • Improve performance writing 10 million records to text file using windows service

    - by user1039583
    I'm fetching more than 10 million records from a database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL.

        using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
        {
            BufferedStream bStream = new BufferedStream(fStream);
            TextWriter writer = new StreamWriter(bStream);

            for (int i = 0; i < 100000000; i++)
            {
                writer.WriteLine(i);
            }

            writer.Flush();  // empty the writer's buffer into the buffered stream first
            bStream.Flush();
            fStream.Flush();
        }

  • Storing the records from a datatable in a CSV file

    - by Harikrishna
    I have a datatable and I am displaying its values in the datagridview with the help of this code:

        dataGridView1.ColumnCount = TableWithOnlyFixedColumns.Columns.Count;
        dataGridView1.RowCount = TableWithOnlyFixedColumns.Rows.Count;

        for (int i = 0; i < dataGridView1.RowCount; i++)
        {
            for (int j = 0; j < dataGridView1.ColumnCount; j++)
            {
                dataGridView1[j, i].Value = TableWithOnlyFixedColumns.Rows[i][j].ToString();
            }
        }

        TableExtractedFromFile.Clear();
        TableWithOnlyFixedColumns.Clear();

    Now I want to save the records in the datatable to a CSV file. How can I do that?

  • PHP: Doctrine: order joined records

    - by Sebastian Bechtel
    Hi, I'm new to Doctrine and I have a problem with the sorting of joined records. An example: I have an Article model which is associated with a Source model in a 1 <- n relationship. The Source model has an integer property called 'position'. Now I want to fetch an article with its sources ordered by position. My DQL looks like this:

        $q = Doctrine_Query::create()
            ->select('a.title, s.content')
            ->from('Article a')
            ->leftJoin('a.Source s')
            ->where('a.id = ?')
            ->orderBy('s.position');

    The result doesn't change if I edit the position. Best regards, Sebastian
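
    For reference, the SQL this DQL is expected to produce looks roughly like the hedged sketch below (the table and foreign-key names are assumptions). If running it directly returns rows in the right order, the problem likely lies in how the hydrated collection is read or cached rather than in the query itself:

        SELECT a.title, s.content
        FROM article a
        LEFT JOIN source s ON s.article_id = a.id
        WHERE a.id = ?
        ORDER BY s.position;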

  • Return no records if FindKey results in False?

    - by jwilfong
    Using TDataSet.FindKey you can locate records. When it returns True, the dataset's cursor is positioned on the found record. When it returns False, the cursor is not moved, so data-aware components keep displaying the data of whatever record was current before FindKey was issued. How can I code the False case so that an empty record is shown instead?

        if not tblSomeTable.FindKey([SomeSearchData]) then
        begin
          < code to return empty or move data cursor to neutral position >
        end;

    Thanks, John

  • get n records at a time from a temporary table

    - by Claudiu
    I have a temporary table with about 1 million entries; it stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries such that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table has just one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:

        CREATE TEMP TABLE tmptmp AS
        SELECT ##autonumber somehow##, id
        FROM ....  -- complicated query

    then I can do:

        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

    etc... How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
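
    On PostgreSQL 8.4 or later, row_number() is one way to fill in the ##autonumber somehow## part. A hedged, self-contained sketch; generate_series stands in for the real (elided) source query:

        CREATE TEMP TABLE tmptmp AS
        SELECT row_number() OVER (ORDER BY id) AS autonumber, id
        FROM (SELECT generate_series(1, 1000000) AS id) big;

        -- fetch one 1000-row page at a time, shifting the range each pass
        SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000;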

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run on a host using cron; it indexes all the records in a database table, and the index is later used both for the front end of the site and for back-end operations. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below: first a select query is built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to balance the number of items being indexed at a time and avoid iterating over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, then starts again after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1) x | y        z        2) x | y        z
           ------       ---         ------       ---
           1 | a        a           1 | a        a
           1 | b        b           1 | b        b
           2 | a                    1 | c
           2 | b                    2 | a
           2 | c                    2 | b
                                    2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In case 1), the expected result is 1; if c is added to the second table, then the expected result is 1, 2. In case 2), the expected result is no record, since neither of the subsets in the first table matches the subset in the second table; if c is added to the second table, then the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
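
    This is the classic relational-division problem. A hedged sketch that counts matches per x instead of comparing set by set (it relies on the rows being distinct, as stated): a left join looks for a z partner for each y, and any x with an unmatched y fails the HAVING test.

        SELECT t1.x
        FROM table1 t1
        LEFT JOIN table2 t2 ON t2.z = t1.y
        GROUP BY t1.x
        HAVING COUNT(t2.z) = COUNT(*);  -- no NULLs means every y was found in z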

  • manipulating 15+ million records in mysql with php?

    - by Nithish
    Hey, I got a user table containing 15+ million records, and in the registration function I wish to check whether the username already exists. I added an index on the username column, but when I run the query "select count(uid) from users where username='webdev'", it keeps loading a blank screen and finally hangs. I'm doing this on my localhost with PHP 5 and MySQL 5. So suggest me some technique to handle this situation. Is MongoDB a good alternative for handling this process on our local machine? Thanks, Nithish.
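
    An existence check doesn't need to count every matching row: with LIMIT 1 the server stops at the first hit, and a unique index both enforces one row per username and turns the lookup into a single index seek. A hedged sketch:

        -- enforce uniqueness and make the lookup an index seek
        ALTER TABLE users ADD UNIQUE INDEX ux_username (username);

        -- returns a row if the name is taken, nothing otherwise
        SELECT 1 FROM users WHERE username = 'webdev' LIMIT 1;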

  • Get top 'n' records by report_id

    - by Skudd
    I have a simple view in my MSSQL database. It consists of the following fields:

        report_id INT
        ym        VARCHAR       -- YYYY-MM
        keyword   VARCHAR(MAX)
        visits    INT

    I can easily get the top 10 keyword hits with the following query:

        SELECT TOP 10 *
        FROM top_keywords
        WHERE ym BETWEEN '2010-05' AND '2010-05'
        ORDER BY visits DESC

    Now where it gets tricky is that I have to get the top 10 records for each report_id in the given date range (ym BETWEEN @start_date AND @end_date). How would I go about getting the top 10 for each report_id? I've stumbled across suggestions involving the use of ROW_NUMBER() and RANK(), but have been vastly unsuccessful in their implementation.
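
    ROW_NUMBER() partitioned by report_id is indeed the usual shape for top-n-per-group; a hedged sketch:

        SELECT report_id, ym, keyword, visits
        FROM (
            SELECT report_id, ym, keyword, visits,
                   ROW_NUMBER() OVER (PARTITION BY report_id
                                      ORDER BY visits DESC) AS rn
            FROM top_keywords
            WHERE ym BETWEEN @start_date AND @end_date
        ) ranked
        WHERE rn <= 10
        ORDER BY report_id, rn;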

  • SQL Server 2008 Delete Records from Self-Referencing Table in correct order

    - by KTrace
    I need to delete a subset of records from a self-referencing table in SQL Server 2008. I am trying the following, but it does not like the ORDER BY:

        WITH SelfReferencingTable (ID, depth) AS
        (
            SELECT t.id, 0 AS [depth]
            FROM dbo.Table t
            WHERE t.parentItemID IS NULL
              AND t.ColumnA = '123'
            UNION ALL
            SELECT t.ID, srt.depth + 1
            FROM dbo.Table t
            INNER JOIN SelfReferencingTable srt ON t.parentItemID = srt.id
            WHERE t.ColumnA = '123'
        )
        DELETE y
        FROM dbo.Table y
        JOIN SelfReferencingTable x ON x.ID = y.id
        ORDER BY x.depth DESC

    Any ideas why this isn't working?
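
    DELETE simply doesn't accept an ORDER BY clause, which is why SQL Server rejects the statement. A hedged workaround under the same schema assumptions: materialize the hierarchy and its depth into a temp table once, then delete one level per pass, deepest first, so children are always removed before their parents:

        WITH SelfReferencingTable (ID, depth) AS
        (
            SELECT t.ID, 0
            FROM dbo.[Table] t
            WHERE t.parentItemID IS NULL AND t.ColumnA = '123'
            UNION ALL
            SELECT t.ID, srt.depth + 1
            FROM dbo.[Table] t
            INNER JOIN SelfReferencingTable srt ON t.parentItemID = srt.ID
            WHERE t.ColumnA = '123'
        )
        SELECT ID, depth INTO #doomed FROM SelfReferencingTable;

        DECLARE @depth INT;
        SELECT @depth = MAX(depth) FROM #doomed;

        WHILE @depth IS NOT NULL
        BEGIN
            DELETE y
            FROM dbo.[Table] y
            INNER JOIN #doomed x ON x.ID = y.ID
            WHERE x.depth = @depth;

            SELECT @depth = MAX(depth) FROM #doomed WHERE depth < @depth;
        END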
