Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). Each record has 25 columns. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM. With that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the postgres config:

        Indexes: no
        fsync: false
        logging: disabled

    Other information:

        - Tablespace: RAM
        - Number of columns in one row: 25 (mostly integers)
        - CPU: 4 cores, 2.5 GHz
        - RAM: 48 GB

    I am wondering why a single insert query is taking around 0.2 msec on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
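
    Batched INSERTs, even inside one transaction, still pay per-statement overhead; COPY streams the whole batch in one round trip. The question doesn't name a client language, so this is only a minimal sketch using Python and psycopg2, with a made-up three-column table standing in for the real 25-column one:

        import io
        import psycopg2

        # Sample batch; real code would stream 100,000-row batches.
        rows = [(i, i * 2, "name-%d" % i) for i in range(1000)]

        conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
        cur = conn.cursor()

        # Serialize the batch into an in-memory tab-separated buffer.
        buf = io.StringIO()
        for row in rows:
            buf.write("\t".join(str(v) for v in row) + "\n")
        buf.seek(0)

        # COPY loads the whole buffer at once, avoiding the per-statement
        # parse/plan cost that caps multi-INSERT throughput.
        cur.copy_from(buf, "records", columns=("id", "val", "name"))
        conn.commit()

    If the ~0.2 ms per row persists even with COPY, the bottleneck is more likely index maintenance or client-side serialization than storage.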

    Read the article

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

        dbo.UserLogs
        --------------------------------------
        Id | UserId | Date | Name | P1 | Dirty
        --------------------------------------

    There can be several records per UserId (even in the millions). I have a clustered index on the Date column and query this table very frequently over time ranges. The column Dirty is non-nullable and can take only 0 or 1, so I have no indexes on Dirty. I have several million records in this table, and in one particular case my application needs to query this table for all UserId values that have at least one record marked dirty. I tried this query:

        select distinct(UserId) from UserLogs where Dirty=1

    I have 10 million records in total and this takes about 10 minutes to run; I want it to run much faster than that. (I am able to query this table on the Date column in less than a minute.) Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error:

        DBD::Sybase::st fetchrow_arrayref failed: OpenClient message:
        LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99)
        Server , database
        Message String: WARNING! Some character(s) could not be converted
        into client's character set. Unconverted bytes were changed to
        question marks ('?').

    This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control of this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records?
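
    The substitution happens when FreeTDS converts server characters into the client charset configured in freetds.conf. A hedged sketch of the relevant entry (section name and host are placeholders; `tds version` and `client charset` are standard freetds.conf settings):

        [myserver]
            host = sqlserver.example.com
            port = 1433
            tds version = 8.0
            # Convert server characters to UTF-8 instead of replacing
            # unmappable bytes with '?'.
            client charset = UTF-8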

    Read the article

  • batch update SharePoint data

    - by Faiz
    Hi, I need to import user profile information into a SharePoint list. The user profile information will also be updated in the list periodically, so I think I should use a timer job to update it. Importing for the first time works without any issues, but can you suggest the best way to handle updates? The update should update existing records in the list, mark records as orphaned when they have been deleted from the user profile, and add new user profile records to the list. Thanks

    Read the article

  • Problem with Efficient Gridview paging without datasource control

    - by Ronnie Overby
    I am trying to do efficient paging with a GridView without using a datasource control. By efficient, I mean I retrieve only the records that I intend to show. I am trying to use the PagerTemplate to build my pager functionality. In short, the problem is that if I bind only the records that I intend to show on the current page, the GridView doesn't render its pager template, so I don't get the paging controls. It's almost as if I MUST bind more records than I intend to show on a given page, which is not something I want to do.

    Read the article

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python as we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be manually added regularly. My question is this: would it be faster to have a single file (500k-1Mb) and have my application open, loop through, find, and close the file, OR would it be faster to have the records separated and named using some appropriate convention so that the application could simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as suggestions. Thanks very much in advance for your time, Dan
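
    Since the application is currently in Python, the trade-off is easy to measure on representative data. A minimal sketch, assuming newline-delimited records whose first '|'-separated field is the key, versus one file per record named after the key (paths and layout are made up):

        import os
        import time

        # Build the same 10,000 records both ways: one big file, and one
        # small file per record (named by key).
        os.makedirs("records", exist_ok=True)
        with open("records.txt", "w") as big:
            for i in range(10000):
                line = "key%d|field1|field2\n" % i
                big.write(line)
                with open("records/key%d.rec" % i, "w") as small:
                    small.write(line)

        def find_in_single_file(key):
            # Scan the big file line by line until the key matches.
            with open("records.txt") as f:
                for line in f:
                    if line.split("|", 1)[0] == key:
                        return line
            return None

        def find_in_record_dir(key):
            # One record per file: no scan, just one open().
            with open("records/%s.rec" % key) as f:
                return f.read()

        for fn in (find_in_single_file, find_in_record_dir):
            start = time.perf_counter()
            fn("key9999")  # worst case for the scan
            print(fn.__name__, time.perf_counter() - start)

    For files in the 500k-1Mb range the single-file scan often holds up well, since the OS reads it in a few sequential I/Os while many small files cost an open/close per lookup; but only a measurement on the real data settles it.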

    Read the article

  • How to send to view $data and $data2 at same time?

    - by artmania
    Hi friends, I use CodeIgniter, and my issue is how to send both $data and $data_cat2 to the view at the same time:

        $data['records'] = $this->gallery_model->get_all(1);
        $data_cat2['records'] = $this->gallery_model->get_all(2);
        $this->load->view('gallery/index.php', $data);

    I tried adding another load->view for $data_cat2, and it just repeated the whole page :/ Thanks, regards!!! Appreciate the help!

    Sorry for the silly question, I just sorted it! I feel ashamed to have asked:

        $data['records'] = $this->gallery_model->get_all(1);
        $data['records_cat2'] = $this->gallery_model->get_all(2);

    ...and then use records_cat2 in the view file.

    Read the article

  • "SELECT TOP", "LEFT OUTER JOIN", "ORDER BY" gives extra rows

    - by Codesleuth
    I have the following Access query I'm running through OLE DB in .NET:

        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
              ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblRegion.Region

    There are 431 records within tblClient that have RegionCode set to NULL. For some reason, the query above returns all 431 of these records instead of the first 25. If I change the query to ORDER BY tblClient.Client (the name of the client), like so:

        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
              ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblClient.Client

    I get the expected result set of 25 records, showing a mixture of region names and NULL values. Why does the TOP clause not work when ordering by a field retrieved through a LEFT OUTER JOIN?

    Read the article

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it, implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous SP uses the inserted data to check several conditions and update other tables. This ran fine for the last month, with the SP executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints has too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (average 60% CPU utilization). Where do I need to look? I could re-create the database without any problem, but I don't think that is a good way to resolve this.

    Read the article

  • I need a groovy criteria to get all the elements after i make sort on nullable inner object

    - by user1773876
    I have two domain classes, IpPatient and Ward, as shown below:

        class IpPatient {
            String ipRegNo
            Ward ward
            static constraints = {
                ward nullable: true
                ipRegNo nullable: false
            }
        }

        class Ward {
            String name
            static constraints = {
                name nullable: true
            }
        }

    Now I would like to create criteria like:

        def criteria = IpPatient.createCriteria()
        return criteria.list(max: max, offset: offset) {
            order("ward.name", "asc")
            createAlias('ward', 'ward', CriteriaSpecification.LEFT_JOIN)
        }

    At present the IpPatient table has 13 records, 8 of which have no ward, since ward can be null. When I sort by ward name I get only the 5 records that have a ward. I need a criteria query that returns all the elements when I sort on a nullable inner object.

    Read the article

  • question about frequency of updating access

    - by I__
    I have a table in an Access database. This Access database is used on a regular basis, basically from 9-5. Someone else has a copy of this exact table. Sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access database table with the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5000 records. Would it severely lock up the table for a few seconds every hour? I would like to publicly apologize for my rude comment to David Fenton.
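
    At 5000 records, the usual shape of such a sync is a keyed diff: load both tables keyed by primary key, compute the inserts/updates/deletes, and apply only those. A minimal sketch (Python used for brevity, with dicts standing in for the two tables):

        def diff_tables(local, remote):
            # Return the inserts, updates, and deletes needed to make
            # `local` match `remote`; both are {primary_key: row} dicts.
            inserts = [(k, remote[k]) for k in remote.keys() - local.keys()]
            deletes = sorted(local.keys() - remote.keys())
            updates = [(k, remote[k]) for k in remote.keys() & local.keys()
                       if remote[k] != local[k]]
            return inserts, updates, deletes

        local = {1: ("john", "NY"), 2: ("mary", "LA")}
        remote = {1: ("john", "SF"), 3: ("carol", "TX")}
        print(diff_tables(local, remote))
        # ([(3, ('carol', 'TX'))], [(1, ('john', 'SF'))], [2])

    Applying only the changed rows keeps the write window short; reading 5000 rows each hour for the comparison is cheap.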

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name and address records in an Access table (2003). I need to iterate through the table and update details with the output from a 3rd-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to move to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET datasets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.

    Read the article

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show that the records appear to be processed in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

    Read the article

  • Getting past Salesforce trigger governors

    - by Jake
    I'm trying to write an "after update" trigger that does a batch update on all child records of the record that has just been updated. This needs to be able to handle 15k+ child records at a time. Unfortunately, the limit appears to be 100, which is so far below my needs it's not even close to acceptable. I haven't tried splitting the records into batches of 100 each, since this will still put me at a cap of 10k updates per trigger execution. (Maybe I could just daisy-chain triggers together? ugh.) Does anyone know what series of hoops I can jump through to overcome yet another ridiculously short-sighted limitation by this awful development "platform"?

    Read the article

  • SQL query for getting data in two fields from one column.

    - by AmiT
    I have a table [Tbl1] containing two fields: ID as int and TextValue as nvarchar(max). Suppose there are 7 records. I need a result set that has two columns, Text1 and Text2. Text1 should have the first 4 records and Text2 should have the remaining 3 records.

        [Tbl1]
        ID | TextValue
        1  | Apple
        2  | Mango
        3  | Orange
        4  | Pineapple
        5  | Banana
        6  | Grapes
        7  | Sapota

    Now, the result set should have:

        Text1     | Text2
        Apple     | Banana
        Mango     | Grapes
        Orange    | Sapota
        Pineapple |

    Read the article

  • Good conventions for embedding schema of a flat file

    - by Ville Koskinen
    We receive lots of data as flat files: delimited, or just fixed-length records. It's sometimes hard to find out what the files actually contain. Are there any well-established practices for embedding the schema of the file at the beginning or the end of the file to make it self-explanatory? Just to get an idea, imagine something like this:

        <data name=test records=2 type=fixed>
        <field name=foo start=0 length=2 type=numeric>
        <field name=bar start=2 length=4 type=text>
        </data>
        11test
        12ing

    We would parse the xml in the beginning and use it for reading the records.
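
    Whatever convention is chosen, the reading side stays small. A sketch of a reader for exactly the header format imagined above (using a regex rather than an XML parser, since the unquoted attributes aren't valid XML):

        import re

        raw = "\n".join([
            "<data name=test records=2 type=fixed>",
            "<field name=foo start=0 length=2 type=numeric>",
            "<field name=bar start=2 length=4 type=text>",
            "</data>",
            "11test",
            "12ing",
        ])

        header, _, body = raw.partition("</data>\n")
        # Pull each field's name, offset, and length out of the header.
        fields = re.findall(r"<field name=(\w+) start=(\d+) length=(\d+)",
                            header)

        for line in body.splitlines():
            print({name: line[int(s):int(s) + int(l)]
                   for name, s, l in fields})
        # {'foo': '11', 'bar': 'test'}
        # {'foo': '12', 'bar': 'ing'}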

    Read the article

  • How to insert bulk data into mysql table from asp.net at once.

    - by kranthi
    Hi, I have a requirement to read an Excel sheet using ASP.NET/C# and insert all the records into a MySQL table. The Excel sheet consists of around 2000 rows and 50 columns. Currently, upon reading the Excel records, I insert them one by one into the MySQL table using a prepared statement, but it takes around 70 seconds because of the volume of data. I've also thought of creating a new DataRow, assigning values to each cell, adding the resulting DataRow to a DataTable, and finally calling dataadapter.Update(...), but that seems complex because with around 50 columns I'd have to assign 50 values to each DataRow. Could someone please suggest an alternative to improve the performance of the insertion? Thanks

    Read the article

  • Query joining in sql server 2005

    - by Domnic
    I have two queries:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT
        FROM GLAS_PDC_CHEQUES
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE
        ORDER BY PC_SL_ACNO

        SELECT COAD_PTY_FULL_NAME, PC_COMP_CODE, PC_SL_LDGR_CODE,
               PC_SL_ACNO, PC_DEPT_NO, PC_DOC_TYPE, PC_CHEQUE_NO,
               PC_BANK_AC_NO
        FROM GLAS_PTY_ADDRESS, GLAS_SBLGR_MASTERS, GLAS_PDC_CHEQUES
        WHERE COAD_COMP_CODE = '1'
          AND SLMA_COMP_CODE = COAD_COMP_CODE
          AND SLMA_ADDR_ID = COAD_ADDR_ID
          AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
          AND PC_COMP_CODE = SLMA_COMP_CODE
          AND SLMA_ACNO = PC_SL_ACNO
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)

    If I execute the first query alone I get 5 records. If I join the two queries like this:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT,
               COAD_PTY_FULL_NAME
        FROM GLAS_PDC_CHEQUES
        LEFT OUTER JOIN GLAS_SBLGR_MASTERS
            ON (SLMA_COMP_CODE = PC_COMP_CODE
                AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
                AND SLMA_ACNO = PC_SL_ACNO)
        LEFT OUTER JOIN GLAS_PTY_ADDRESS
            ON (SLMA_COMP_CODE = COAD_COMP_CODE
                AND SLMA_ADDR_ID = COAD_ADDR_ID)
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE, COAD_PTY_FULL_NAME
        ORDER BY PC_SL_ACNO

    then I get just 2 records. I need those 5 records to show up after the join. How can I do it?

    Read the article

  • How Do I Pull Info from String

    - by Russ Bradberry
    I am trying to pull statistics from a load that I run using bash. I have gotten to the point where I get the string I want; now from this I want to pull certain information that can vary. The string that gets returned is as follows:

        Records: 2910 Deleted: 0 Skipped: 0 Warnings: 0

    Each of the numbers can and will vary in length, but the overall structure will remain the same. What I want to do is get these numbers and load them into some bash variables, e.g.:

        RECORDS=??
        DELETED=??
        SKIPPED=??
        WARNINGS=??

    In regex I would do it like this:

        Records: (\d*?) Deleted: (\d*?) Skipped (\d*?) Warnings (\d*?)

    and use the 4 groups in my variables.
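
    Since the question is already posed in regex terms, here is the same extraction sketched in Python (the status line is hard-coded for illustration; a greedy \d+ suffices because each group ends at whitespace):

        import re

        status = "Records: 2910 Deleted: 0 Skipped: 0 Warnings: 0"

        m = re.search(
            r"Records: (\d+) Deleted: (\d+) Skipped: (\d+) Warnings: (\d+)",
            status)
        records, deleted, skipped, warnings = (int(g) for g in m.groups())
        print(records, deleted, skipped, warnings)  # 2910 0 0 0

    In bash itself the same idea works with the [[ $string =~ regex ]] test (using [0-9]+ rather than \d+, which POSIX regex lacks); the four groups then land in the BASH_REMATCH array.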

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    On Ruby on Rails, say the Actor model object is Tom Hanks, and the "has_many" fans association holds 20,000 Fan objects; then actor.fans gives an Array with 20,000 elements. Presumably the elements are not pre-populated with values, since otherwise fetching each Actor object from the DB would be extremely time-consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, then it won't be able to optimize performance with sequential reads, which can be faster on the hard disk because those records could be in nearby sectors or platter layers. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all the records sequentially and then process them in the random order. But how will RoR know whether I am touching only a few random elements or all elements in random order?

    Read the article

  • Parameter passing Vs Table Valued Parameters Vs XML to SQL 2008 from .Net Application

    - by Harryboy
    As we are working on an ASP.NET project, there are three ways one can update data in the database when multiple rows need to be inserted or updated. Let's assume we need to update employee education details (which could be 1, 3, 5, or 10 records). Methods to update the data:

    1. Pass values as parameters (traditional approach); if there are 10 records then 10 round trips are required.
    2. Pass the data as XML and write logic inside your stored procedure to extract it from the XML and update the table (only a single round trip required).
    3. Use table-valued parameters (only a single round trip required).

    Note: the data is available as a List, so I need to convert it to XML or another format if I need to pass it. There are a number of places in the entire application where we need to update data in bulk (or multiple records). I just need your suggestions on:

    1. Which method will be faster (please mention if there are other overheads)
    2. Manageability or testability concerns with any approach
    3. Any other bottleneck or issue with any of the approaches (serialization/deserialization concerns, or limits on the size of the data passed)
    4. Any other method you would suggest for the same operations

    Thanks

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields; shared data like date and description are relegated to the batch table.) I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records are used on their own. Are there possible pitfalls down the road if I have "overlapping" vo classes?
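
    For the first option, splicing each joined row into the two existing vos keeps one authoritative class per table. A sketch of the shape this takes (Python used for illustration since the question doesn't name a language; field names follow the description above):

        from dataclasses import dataclass

        @dataclass
        class Batch:
            batch_id: int
            date: str
            description: str

        @dataclass
        class Transaction:
            batch_id: int
            account_id: int
            amount: float

        def split_row(row):
            # One joined batch+transaction row becomes two vo objects.
            batch = Batch(row["batch_id"], row["date"], row["description"])
            txn = Transaction(row["batch_id"], row["account_id"],
                              row["amount"])
            return batch, txn

        row = {"batch_id": 7, "date": "2010-05-01", "description": "rent",
               "account_id": 101, "amount": -250.00}
        print(split_row(row))

    One argument for this shape: it avoids a third "overlapping" class whose fields can drift from the two tables, at the small cost of rebuilding each Batch once per joined row.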

    Read the article

  • mysql left outer join

    - by tirso
    Hi to all. I have two tables, employee and timecard. The employee table has fields employee_id, firstname, middlename, lastname, and the timecard table has fields employee_id, time-in, time-out, tc_date_transaction. I want to select all employee records which share an employee_id with timecard and whose date equals the current date. If there are no records matching the current date, then return the employee records anyway, just without time-in, time-out, and tc_date_transaction. I have a query like this:

        SELECT * FROM employee
        LEFT OUTER JOIN timecard ON employee.employee_id = timecard.employee_id
        WHERE tc_date_transaction = "17/06/2010";

    The result should look like this:

        employee_id, firstname, middlename, lastname, time-in, time-out, tc_date_transaction
        1, john, t, cruz, 08:00, 05:00, 17/06/2010
        2, mary, j, von, null, null, null

    Any help would be greatly appreciated. Thanks in advance.

    Read the article

  • Python MySQLdb LOAD LOCAL INFILE problems

    - by belvoir
    The problem is a simple one. When I execute the following, I get different results depending on whether I run it from the MySQL console or from inside a Python script using MySQLdb:

        LOAD DATA LOCAL INFILE '/tmp/source.csv'
        INTO TABLE test
        FIELDS TERMINATED BY '|'
        IGNORE 1 LINES;

    The console gives the following results:

        Records: 35002 Deleted: 0 Skipped: 0 Warnings: 0

    Python (via .info()) returns the following:

        Records: 34977 Deleted: 0 Skipped: 0 Warnings: 8

    So in summary: same source file, same SQL request, different results. From the console I can SHOW WARNINGS and get a better handle on which records are causing the problems and why, but from Python I can't identify how to do this or, more importantly, what the cause of the problem could be. Any suggestions? MySQL Server '5.1.41-3ubuntu12.1', Python '2.6.5', tables are MyISAM.
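
    SHOW WARNINGS is itself just a query, so it can be run over the same MySQLdb connection right after the load. A minimal sketch (connection parameters are placeholders; local_infile=1 enables LOAD DATA LOCAL for the connection):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                               db="test", local_infile=1)
        cur = conn.cursor()
        cur.execute("LOAD DATA LOCAL INFILE '/tmp/source.csv' "
                    "INTO TABLE test FIELDS TERMINATED BY '|' "
                    "IGNORE 1 LINES")

        # Fetch the same (Level, Code, Message) rows the console shows,
        # one per record the load complained about.
        cur.execute("SHOW WARNINGS")
        for level, code, message in cur.fetchall():
            print("%s %s %s" % (level, code, message))
        conn.commit()

    The messages usually point at the mismatch; line-ending or escaping differences that make the client and the console parse the file differently are common culprits.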

    Read the article

  • problem parsing JSON Strings

    - by blacktooth
        var records = JSON.parse(JsonString);
        for (var x = 0; x < records.result.length; x++) {
            var record = records.result[x];
            ht_text += "<b><p>" + (x + 1) + " "
                + record.EMPID + " "
                + record.LOCNAME + " "
                + record.DEPTNAME + " "
                + record.CUSTNAME
                + "<br/><br/><div class='slide'>"
                + record.REPORT
                + "</div></p></b><br/>";
        }

    The above code works fine when JsonString contains an array of entities, but it fails for a single entity: result is not identified as an array. What's wrong with it? http://pastebin.com/hgyWw5hd

    Read the article
