Search Results

Search found 6017 results on 241 pages for 'universal records managem'.

Page 55/241

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process that handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation for this, but our demo programs show each record being processed in sequence rather than in parallel when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?
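
    The throughput arithmetic is worth making explicit: a few thousand records per hour at up to a minute each means roughly 30-60 records must be in flight at all times. Note too that a WF ParallelActivity is generally understood to interleave its branches on a single thread, only yielding real concurrency when branches block on asynchronous activities, which matches the sequential behaviour described. As a frame of reference, here is a minimal sketch of the bounded-worker pattern in Python (process_record is a hypothetical stand-in for the real per-record work):

        import time
        from concurrent.futures import ThreadPoolExecutor

        def process_record(record_id):
            # Stand-in for the external web service call that dominates each
            # record's processing time (up to a minute in the real system).
            time.sleep(0.01)
            return record_id

        records = range(3000)  # roughly one hour's worth of records

        # 3000 records/hour at ~1 minute each needs ~50 records in flight,
        # hence a pool of ~50 workers that each block on I/O.
        with ThreadPoolExecutor(max_workers=50) as pool:
            for result in pool.map(process_record, records):
                pass  # persist/aggregate results here

    In WF terms this corresponds to one workflow instance per record, with the host throttling how many run at once, rather than one instance fanning out over parallel activities.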

    Read the article

  • SQL query for splitting one column of data into two result columns

    - by AmiT
    I have a table [Tbl1] containing two fields: ID as int and TextValue as nvarchar(max). Suppose there are 7 records. I need a result set that has two columns, Text1 and Text2. Text1 should contain the first 4 records and Text2 the remaining 3.

        [Tbl1]
        ID | TextValue
        1  | Apple
        2  | Mango
        3  | Orange
        4  | Pineapple
        5  | Banana
        6  | Grapes
        7  | Sapota

    The result set should look like this:

        Text1     | Text2
        Apple     | Banana
        Mango     | Grapes
        Orange    | Sapota
        Pineapple |
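
    A hedged sketch of one way to do this on SQL Server 2005: number the rows, compute the midpoint, and left-join the first half to the second half. Python is used only as a host for the SQL; the connection object is a placeholder for any DB-API connection.

        # Pair row n with row n + ceil(total/2): rows 1-4 land in Text1 and
        # rows 5-7 (plus a trailing NULL) land in Text2.
        SPLIT_SQL = """
        WITH numbered AS (
            SELECT TextValue,
                   ROW_NUMBER() OVER (ORDER BY ID) AS rn,
                   (COUNT(*) OVER () + 1) / 2     AS half
            FROM Tbl1
        )
        SELECT a.TextValue AS Text1, b.TextValue AS Text2
        FROM numbered AS a
        LEFT JOIN numbered AS b ON b.rn = a.rn + a.half
        WHERE a.rn <= a.half
        ORDER BY a.rn;
        """

        def fetch_split(conn):
            cur = conn.cursor()
            cur.execute(SPLIT_SQL)
            return cur.fetchall()  # [('Apple', 'Banana'), ..., ('Pineapple', None)]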

    Read the article

  • Query joining in SQL Server 2005

    - by Domnic
    I have two queries:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT
        FROM GLAS_PDC_CHEQUES
        WHERE PC_COMP_CODE = '1' AND PC_DISCD IS NULL
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE
        ORDER BY PC_SL_ACNO

    and

        SELECT COAD_PTY_FULL_NAME, PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO,
               PC_DEPT_NO, PC_DOC_TYPE, PC_CHEQUE_NO, PC_BANK_AC_NO
        FROM GLAS_PTY_ADDRESS, GLAS_SBLGR_MASTERS, GLAS_PDC_CHEQUES
        WHERE COAD_COMP_CODE = '1'
          AND SLMA_COMP_CODE = COAD_COMP_CODE
          AND SLMA_ADDR_ID = COAD_ADDR_ID
          AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
          AND PC_COMP_CODE = SLMA_COMP_CODE
          AND SLMA_ACNO = PC_SL_ACNO
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)

    If I execute the first query alone I get 5 records. If I join the two queries like this:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT,
               COAD_PTY_FULL_NAME
        FROM GLAS_PDC_CHEQUES
        LEFT OUTER JOIN GLAS_SBLGR_MASTERS
          ON (SLMA_COMP_CODE = PC_COMP_CODE
              AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
              AND SLMA_ACNO = PC_SL_ACNO)
        LEFT OUTER JOIN GLAS_PTY_ADDRESS
          ON (SLMA_COMP_CODE = COAD_COMP_CODE
              AND SLMA_ADDR_ID = COAD_ADDR_ID)
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE, COAD_PTY_FULL_NAME
        ORDER BY PC_SL_ACNO

    then I get only 2 records. I need those 5 records to appear after the join. How can I do it?
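
    The usual culprit here: the WHERE clause tests SLMA_LDGRCTL_YEAR, a column of the left-joined GLAS_SBLGR_MASTERS table. For cheque rows with no ledger match that column is NULL, the predicate fails, and the LEFT JOIN silently behaves like an INNER JOIN. Moving the predicate into the ON clause should preserve all 5 left-side rows; a sketch (Python only hosts the SQL):

        # The year filter moves from WHERE into the join condition so that
        # unmatched GLAS_PDC_CHEQUES rows survive the LEFT JOIN.
        FIXED_SQL = """
        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT,
               COAD_PTY_FULL_NAME
        FROM GLAS_PDC_CHEQUES
        LEFT OUTER JOIN GLAS_SBLGR_MASTERS
          ON  SLMA_COMP_CODE = PC_COMP_CODE
          AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
          AND SLMA_ACNO = PC_SL_ACNO
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)
        LEFT OUTER JOIN GLAS_PTY_ADDRESS
          ON  SLMA_COMP_CODE = COAD_COMP_CODE
          AND SLMA_ADDR_ID = COAD_ADDR_ID
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE, COAD_PTY_FULL_NAME
        ORDER BY PC_SL_ACNO
        """
        # cursor.execute(FIXED_SQL) against the same database should return 5 rows.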

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    On Ruby on Rails, say the Actor model object is Tom Hanks and the "has_many" fans association holds 20,000 Fan objects; actor.fans then gives an array with 20,000 elements. Presumably the elements are not pre-populated with values, since otherwise loading each Actor object from the DB would be extremely time-consuming. So is it on a "need to know" basis: does it pull data when I access actor.fans[500], and pull data again when I access actor.fans[0]? If it jumps from record to record, it cannot optimize performance with sequential reads, which can be faster on the hard disk because those records could sit in nearby sectors. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements. How will RoR know whether I am touching only a few random elements or all elements in random order?

    Read the article

  • How to bulk insert data into a MySQL table from ASP.NET

    - by kranthi
    Hi, I have a requirement to read an Excel sheet using ASP.NET/C# and insert all the records into a MySQL table. The sheet consists of around 2000 rows and 50 columns. Currently, upon reading the Excel records, I insert them one by one using a prepared statement, but it takes around 70 seconds because of the volume of data. I have also thought of creating a new DataRow, assigning values to each cell, adding the resulting DataRow to a DataTable, and finally calling DataAdapter.Update(...), but that seems complex because with 50 columns I would have to assign 50 values per row. Could someone please suggest an alternative to improve the performance of the insertion? Thanks
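
    Row-at-a-time inserts pay one network round trip per row, which is where most of the 70 seconds go; batching many rows into one multi-row INSERT, or using LOAD DATA INFILE, removes that overhead. The question's host language is C#, but the fix lives in the SQL, so here is a hedged sketch using Python's MySQLdb, whose executemany() batches a parameterized INSERT into multi-row statements; table and column names are invented for illustration:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                               db="mydb")  # placeholder credentials
        cur = conn.cursor()

        # `rows` would be ~2000 tuples of 50 values each, read from the sheet;
        # a short fake batch stands in here.
        rows = [(1, "alice", "2010-05-01"),
                (2, "bob", "2010-05-02")]

        cur.executemany(
            "INSERT INTO sheet_import (id, name, imported_on) VALUES (%s, %s, %s)",
            rows,
        )
        conn.commit()

    In ADO.NET the same effect comes from building one INSERT ... VALUES (...), (...), ... statement, or from MySqlBulkLoader over a temporary CSV, rather than from the DataRow-per-cell approach.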

    Read the article

  • Good conventions for embedding schema of a flat file

    - by Ville Koskinen
    We receive lots of data as flat files: delimited, or just fixed-length records. It's sometimes hard to find out what the files actually contain. Are there any well-established practices for embedding the schema of the file at the beginning or the end of the file, to make the file self-explanatory? Just to get an idea, imagine something like this:

        <data name=test records=2 type=fixed>
        <field name=foo start=0 length=2 type=numeric>
        <field name=bar start=2 length=4 type=text>
        </data>
        11test
        12ing

    We would parse the XML at the beginning and use it for reading the records.
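
    For what it's worth, there is no single dominant convention; the nearest established practices are a header record, a sidecar schema file, or a container format such as Apache Avro, which stores its schema inside the data file. If you control both producer and consumer, the header-plus-records layout above parses easily. A minimal sketch, assuming the header lines match the question's informal syntax exactly:

        import re
        from io import StringIO

        FIELD_RE = re.compile(r'<field name=(\w+) start=(\d+) length=(\d+) type=(\w+)>')

        def read_self_describing(stream):
            # Header: collect field definitions until the closing </data> tag.
            fields = []
            for line in stream:
                if line.startswith('</data>'):
                    break
                m = FIELD_RE.match(line)
                if m:
                    name, start, length, ftype = m.groups()
                    fields.append((name, int(start), int(length), ftype))
            # Body: slice each fixed-length record according to the header.
            records = []
            for line in stream:
                line = line.rstrip('\n')
                rec = {name: (int(line[s:s+l]) if t == 'numeric'
                              else line[s:s+l].strip())
                       for name, s, l, t in fields}
                records.append(rec)
            return records

        sample = ('<data name=test records=2 type=fixed>\n'
                  '<field name=foo start=0 length=2 type=numeric>\n'
                  '<field name=bar start=2 length=4 type=text>\n'
                  '</data>\n'
                  '11test\n'
                  '12ing\n')
        print(read_self_describing(StringIO(sample)))
        # [{'foo': 11, 'bar': 'test'}, {'foo': 12, 'bar': 'ing'}]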

    Read the article

  • How do I pull info from a string?

    - by Russ Bradberry
    I am trying to pull dynamic values from a load that I run using bash. I have gotten to the point where I get the string I want; now I want to pull out certain information that can vary. The string that gets returned is as follows:

        Records: 2910  Deleted: 0  Skipped: 0  Warnings: 0

    Each of the numbers can and will vary in length, but the overall structure will remain the same. What I want to do is get these numbers and load them into bash variables, i.e.:

        RECORDS=??
        DELETED=??
        SKIPPED=??
        WARNINGS=??

    In regex I would do it like this:

        Records: (\d*?) Deleted: (\d*?) Skipped (\d*?) Warnings (\d*?)

    and use the 4 groups in my variables.
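
    For reference, the capture-group idea works as described. In bash itself, [[ $line =~ $re ]] fills the BASH_REMATCH array, so RECORDS=${BASH_REMATCH[1]} and so on. A quick demonstration of the grouping logic, written in Python to match the other sketches on this page:

        import re

        line = "Records: 2910  Deleted: 0  Skipped: 0  Warnings: 0"

        m = re.search(r"Records:\s*(\d+)\s+Deleted:\s*(\d+)"
                      r"\s+Skipped:\s*(\d+)\s+Warnings:\s*(\d+)", line)
        if m:
            records, deleted, skipped, warnings = map(int, m.groups())
            print(records, deleted, skipped, warnings)  # 2910 0 0 0

    Greedy \d+ is simpler than the lazy (\d*?) from the question and avoids needless backtracking, though both match here.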

    Read the article

  • MySQL LEFT OUTER JOIN

    - by tirso
    Hi to all. I have two tables, employee and timecard. The employee table has fields employee_id, firstname, middlename, lastname; the timecard table has fields employee_id, time-in, time-out, tc_date_transaction. I want to select all employee records that share an employee_id with timecard where the date equals the current date. If there are no records for the current date, then I still want the employee records returned, just without time-in, time-out and tc_date_transaction. I have a query like this:

        SELECT * FROM employee
        LEFT OUTER JOIN timecard
          ON employee.employee_id = timecard.employee_id
        WHERE tc_date_transaction = "17/06/2010";

    The result should look like this:

        employee_id, firstname, middlename, lastname, time-in, time-out, tc_date_transaction
        1, john, t, cruz, 08:00, 05:00, 17/06/2010
        2, mary, j, von, null, null, null

    Any help would be greatly appreciated. Thanks in advance.
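
    This is the same LEFT JOIN pitfall as the SQL Server question above: the WHERE clause tests tc_date_transaction, which is NULL for employees without a timecard, so those rows are filtered out. Moving the date test into the ON clause keeps every employee. A sketch, with Python as the host and the date passed as a bound parameter (if tc_date_transaction is a true DATE column, pass '2010-06-17'; a string column must be matched in its stored format):

        # Employees always survive the join; timecard columns come back NULL
        # when there is no card for the requested date.
        QUERY = """
        SELECT e.employee_id, e.firstname, e.middlename, e.lastname,
               t.`time-in`, t.`time-out`, t.tc_date_transaction
        FROM employee AS e
        LEFT OUTER JOIN timecard AS t
          ON  t.employee_id = e.employee_id
          AND t.tc_date_transaction = %s
        """

        def timecards_for_day(conn, day):  # day, e.g. "2010-06-17"
            cur = conn.cursor()
            cur.execute(QUERY, (day,))
            return cur.fetchall()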

    Read the article

  • Problem parsing JSON strings

    - by blacktooth
        var records = JSON.parse(JsonString);
        for (var x = 0; x < records.result.length; x++) {
            var record = records.result[x];
            ht_text += "<b><p>" + (x + 1) + " "
                + record.EMPID + " "
                + record.LOCNAME + " "
                + record.DEPTNAME + " "
                + record.CUSTNAME
                + "<br/><br/><div class='slide'>"
                + record.REPORT
                + "</div></p></b><br/>";
        }

    The above code works fine when JsonString contains an array of entities, but fails for a single entity: result is not identified as an array. What's wrong with it? http://pastebin.com/hgyWw5hd
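
    The likely cause is on the producing side: many serializers emit a single entity as a bare object ({...}) rather than a one-element array ([{...}]), and a bare object has no .length. The usual fix is to normalize after parsing; in JavaScript that is a check such as Array.isArray(result) (or result instanceof Array in older engines) followed by wrapping. The same idea sketched in Python, to match the other examples here:

        import json

        def parse_records(json_string):
            payload = json.loads(json_string)
            result = payload["result"]
            # Normalize: a single entity arrives as a dict, many as a list.
            if not isinstance(result, list):
                result = [result]
            return result

        print(parse_records('{"result": [{"EMPID": 1}, {"EMPID": 2}]}'))
        print(parse_records('{"result": {"EMPID": 1}}'))  # still a list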

    Read the article

  • Parameter passing vs. table-valued parameters vs. XML to SQL Server 2008 from a .NET application

    - by Harryboy
    We are working on an ASP.NET project where there are three ways to push data to the database when multiple rows need to be inserted or updated. Let's assume we need to update employee education details (which could be 1, 3, 5 or 10 records). The candidate methods:

    1. Pass values as parameters (the traditional approach); if 10 records are there, then 10 round trips are required.
    2. Pass the data as XML and write logic inside the stored procedure to extract it and update the table (only a single round trip required).
    3. Use table-valued parameters (only a single round trip required).

    Note: the data is available as a List, so I need to convert it to XML or another format before passing it. There are a number of places in the entire application where we need to update data in bulk (multiple records). I just need your suggestions on: which method will be faster (please mention any other overheads); manageability or testability concerns with any approach; other bottlenecks or issues with any of the approaches (serialization/deserialization concerns, or limits on the size of the data passed); and any other method you would suggest for the same operations. Thanks

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields; shared data like date and description are relegated to the batch table.) I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records. Are there possible pitfalls down the road if I have "overlapping" vo classes?

    Read the article

  • Python MySQLdb LOAD LOCAL INFILE problems

    - by belvoir
    The problem is a simple one. When I execute the following, I get different results depending on whether I run it from the MySQL console or from inside a Python script using MySQLdb:

        LOAD DATA LOCAL INFILE '/tmp/source.csv'
        INTO TABLE test
        FIELDS TERMINATED BY '|'
        IGNORE 1 LINES;

    The console gives the following results:

        Records: 35002  Deleted: 0  Skipped: 0  Warnings: 0

    Python (via .info()) returns the following:

        Records: 34977  Deleted: 0  Skipped: 0  Warnings: 8

    So in summary: same source file, same SQL request, different results. From the console I can SHOW WARNINGS and get a better handle on which records are causing the problems and why, but from Python I can't identify how to do this, or more importantly what the cause of the problem could be. Any suggestions? MySQL server 5.1.41-3ubuntu12.1, Python 2.6.5, tables are MyISAM.
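
    SHOW WARNINGS is ordinary SQL, so the same diagnostics are available through any MySQLdb cursor right after the load; a sketch (credentials are placeholders):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret",
                               db="mydb", local_infile=1)  # flag needed by some builds
        cur = conn.cursor()
        cur.execute("""
            LOAD DATA LOCAL INFILE '/tmp/source.csv'
            INTO TABLE test
            FIELDS TERMINATED BY '|'
            IGNORE 1 LINES
        """)
        print("rows loaded:", cur.rowcount)

        cur.execute("SHOW WARNINGS")  # (level, code, message) per warning
        for level, code, message in cur.fetchall():
            print(level, code, message)

    As for the discrepancy itself, one common culprit when counts differ between console and client is line-ending handling (e.g. a file with \r\n endings needing LINES TERMINATED BY '\r\n'), but the warning messages should say precisely which rows were mangled.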

    Read the article

  • adding custom fields dynamically to a model

    - by pankajbhageria
    I have a model called List which has many records:

        class List
          has_many :records
        end

        class Record
        end

    The Record table has 2 permanent fields: name and email. Besides these 2 fields, each Record can have 'n' custom fields per List. For example, for list1 I add address (text) and dob (date) as custom fields. Then, while adding records to list1, each record can have values for address and dob. Is there any ActiveRecord plugin which provides this type of functionality? Or else, could you share your thoughts on how to model this? Thanks in advance, Pankaj

    Read the article

  • Optimal join order for LEFT JOINs

    - by Ram
    I have 3 tables: Table1 (1,020,690 records), Table2 (289,425 records) and Table3 (83,692 records). I have something like this:

        SELECT *  /* OK, fine: SELECT * is bad when not all columns are needed; this is just an example */
        FROM Table1 T1
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

        SELECT *
        FROM Table1 T1
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows two merge joins for both queries. For the first query, the first merge is T1 with T2, then with T3; for the second query, the first merge is T1 with T3, then with T2. Both queries take about the same time (roughly 40 seconds), although Query 1 sometimes takes a couple of seconds longer. So my question is: does the join order matter?

    Read the article

  • how to have separate keys per record in mongo_mapper + Rails

    - by Vitaly Kushner
    When I'm adding a record in MongoDB I can specify whatever keys I want and it will store them in the db. The problem is that it will remember those keys for the next record I insert. So, for example, if I do the following:

        Product.create :foo => 123
        Product.create :bar => 456

    I get a :foo => nil field in the 2nd record. This is definitely not a limitation of MongoDB itself, since if I restart the Rails console and create yet another record with a different set of columns, it will not add the columns from the first 2 records. So it seems like MongoMapper remembers all the keys used and inserts them all into all records, even if values are not provided. The question is obviously: how do I disable this crazy attribute explosion? Basically I want only the 'permanent' keys that I specify in the model to be in every record, and all the 'extra' attributes to be specified per record, without messing up subsequent records.

    Read the article

  • Perl: calculating a delta of years from a date

    - by Spiros
    Hello, I am trying to figure out a way to calculate the year of birth for records that give the age, to two decimals, at a given date - in Perl. To illustrate, consider these two records:

        date          age at date
        25 Nov 2005   74.23
        21 Jan 2007   75.38

    What I want to do is get the year of birth based on those records - it should be, in theory, consistent. The problem is that when I try to derive it by calculating the difference between the year in the date field and the age, I run into rounding errors that make the results look wrong while they are in fact correct. I have tried some "clever" combinations of int() and sprintf() to round things up, but to no avail. I have looked at Date::Calc but can't see anything I can use. P.S. As many dates are pre-1970, I unfortunately cannot use the UNIX epoch for this.
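
    One approach that sidesteps the rounding trap: express the observation date as a fractional year (Date::Calc's Day_of_Year gives the needed day number in Perl), subtract the fractional age, and floor once at the end instead of rounding intermediate values. A sketch of the arithmetic, in Python for consistency with the other examples here:

        import math
        from datetime import date

        def birth_year(on, age):
            # Fractional year of the observation date, then subtract the age;
            # floor once so two-decimal noise stays inside the same year.
            frac = on.year + (on.timetuple().tm_yday - 1) / 365.25
            return math.floor(frac - age)

        print(birth_year(date(2005, 11, 25), 74.23))  # 1931
        print(birth_year(date(2007, 1, 21), 75.38))   # 1931

    Both sample records agree on 1931 (the fractional results are 1931.67 and 1931.68). If an age was rounded right at a year boundary the floor can still flip, so cross-checking the candidate year across a person's records remains a sensible safeguard.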

    Read the article

  • Scrolling a div programmatically using JavaScript

    - by SARAVAN
    Hi, I have a jqGrid which is embedded in a div. I am deleting records from the grid and reloading it using grid.Trigger('reload'). The width of the grid is considerable, so it has a horizontal scroll bar. I had scrolled to the end of the grid before deleting records. Each time I delete records and reload the grid, the column headers and their values are slightly misaligned; when I move the scroll bar back to its original position, or just move it slightly, they align properly. So I thought it would be better to move the scroll bar to its initial position when the grid reloads. How can a scroll bar be moved programmatically using JavaScript? Or is there any other way to solve my problem?

    Read the article

  • How to order results based on number of search term matches?

    - by Travis
    I am using the following tables in MySQL to describe records that can have multiple search tags associated with them:

        TABLE records          (ID, title, desc)
        TABLE searchTags       (ID, name)
        TABLE recordSearchTags (recordID, searchTagID)

    To select records based on arbitrary search input, I have a statement that looks something like this:

        SELECT recordID FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
           OR searchTags.name LIKE CONCAT('%','$search2','%')
           OR searchTags.name LIKE CONCAT('%','$search3','%')
           OR searchTags.name LIKE CONCAT('%','$search4','%');

    I'd like to order this result set so that rows matching more search terms appear before rows matching fewer. For example, a row that matches all 4 search terms should be at the top of the list; a row matching only 2 terms somewhere in the middle; and a row matching a single term at the end. Any suggestions on the best way to do this? Thanks!
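
    A common approach is to turn each LIKE test into a 0/1 value (MySQL boolean expressions evaluate to 0 or 1), sum them per record, and sort on the sum. A sketch, with Python hosting the SQL and the four $searchN values passed as bound parameters (the literal % wildcards are doubled because MySQLdb uses %s placeholders):

        QUERY = """
        SELECT rst.recordID,
               SUM( (st.name LIKE CONCAT('%%', %s, '%%'))
                  + (st.name LIKE CONCAT('%%', %s, '%%'))
                  + (st.name LIKE CONCAT('%%', %s, '%%'))
                  + (st.name LIKE CONCAT('%%', %s, '%%')) ) AS score
        FROM recordSearchTags AS rst
        JOIN searchTags AS st ON st.ID = rst.searchTagID
        GROUP BY rst.recordID
        HAVING score > 0
        ORDER BY score DESC
        """

        def ranked_record_ids(conn, terms):  # terms: the four search strings
            cur = conn.cursor()
            cur.execute(QUERY, terms)
            return cur.fetchall()  # [(recordID, score), ...] best matches first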

    Read the article

  • Oracle SQL Update query takes days to update

    - by B Senthil Kumar
    I am trying to update records in a target table based on records coming in from a source. If an incoming record is present in the target table I update it there; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records; the target table is partitioned on a calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log, the code is perfectly fine, but the update takes a very long time (more than 5 days for one million records). Any suggestions as to what can be done in this scenario to improve the performance?
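
    Row-by-row upserts against a 46-million-row table are dominated by per-row index lookups and round trips. The usual remedy is set-based: land the incoming million rows in a staging table, then apply a single MERGE. A hedged sketch of the statement (staging table, key and column names are all invented; Python only hosts the SQL):

        # One set-based statement replaces a million individual updates.
        MERGE_SQL = """
        MERGE INTO target_table t
        USING stage_incoming s
           ON (t.record_key = s.record_key AND t.calendar_key = s.calendar_key)
        WHEN MATCHED THEN
            UPDATE SET t.amount = s.amount, t.updated_on = s.updated_on
        WHEN NOT MATCHED THEN
            INSERT (record_key, calendar_key, amount, updated_on)
            VALUES (s.record_key, s.calendar_key, s.amount, s.updated_on)
        """

    In Informatica terms that often means a bulk load into the staging target plus the MERGE as post-session SQL, instead of an update-strategy transformation touching one row at a time. Including the partitioning calendar key in the match condition also lets Oracle prune to the relevant partitions.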

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy, and I think SQL Server can do this; but with all the talk about BigTable, CouchDB and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile, or are all subject to change (along with Cassandra itself). I've looked into many available NoSQL data stores, but can't find one that meets my needs as a robust, production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose, and why?

    Read the article

  • JSF session issue

    - by user234194
    I have got a situation where I have a list of, say, 10,000 records. I am using a datatable with paging (10 records per page). I put the list in the session like this:

        facesContext........put("mylist", mylist);

    and in the getter for mylist I have:

        public List<MyClass> getMyList() {
            if (mylist == null) {
                mylist = (List<MyClass>) FacesContext......getSessionMap().get("mylist");
            }
            return mylist;
        }

    Now the problem is that whenever I click the paging button to go to the second page, only the first records are displayed. I know I am missing something, and I have a few questions: Is this way of putting the list in the session correct? Is this how I should be retrieving the list in my case? Thanks in advance...

    Read the article

  • Problem processing large data sets using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. On the servlet side, the response is written back to the applet with a PrintWriter:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15,000 records, so out.println() gets executed about 15,000 times. The problem is that when the applet gets the response from the servlet, it takes about 15 minutes to process the records. I placed System.out.println calls, and processing pauses at around 5,000 records, then after 15 minutes it continues and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.

    Read the article

  • MapReduce results seem limited to 100?

    - by user1813867
    I'm playing around with map-reduce in MongoDB and Python, and I've run into a strange limitation. I'm just trying to count the number of "book" records. It works when there are fewer than 100 records, but when the count goes over 100 it resets for some reason. Here is my MR code and some sample outputs:

        var M = function () {
            book = this.book;
            emit(book, { count: 1 });
        }

        var R = function (key, values) {
            var sum = 0;
            values.forEach(function (x) {
                sum += 1;
            });
            var result = { count: sum };
            return result;
        }

    MR output when the record count is 99:

        {u'_id': u'superiors', u'value': {u'count': 99}}

    MR output when the record count is 101:

        {u'_id': u'superiors', u'value': {u'count': 2.0}}

    Any ideas?
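
    This is the re-reduce contract at work: MongoDB may call reduce repeatedly, feeding it chunks of values (around 100 at a time) that include the outputs of earlier reduce passes. A reduce function must therefore treat each incoming value as a possibly partial count; summing 1 per value discards the partial counts, which is exactly why 101 records collapse to {count: 2.0}: the final reduce saw two values, the partial sum for the first chunk and the leftover record. The fix is sum += x.count. A sketch through pymongo, whose map_reduce() takes the JavaScript functions as bson Code objects (collection and output names assumed):

        from bson.code import Code
        from pymongo import MongoClient

        db = MongoClient().mydb  # placeholder connection/database

        mapper = Code("function () { emit(this.book, { count: 1 }); }")

        # Re-reduce safe: values may already be partial sums, so add x.count.
        reducer = Code("""
            function (key, values) {
                var sum = 0;
                values.forEach(function (x) { sum += x.count; });
                return { count: sum };
            }
        """)

        result = db.books.map_reduce(mapper, reducer, "book_counts")
        for doc in result.find():
            print(doc)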

    Read the article

  • PHP scope question

    - by Dan
    Hi, I'm trying to loop through an array of records (staff members); in this loop, I call a function which returns another array of records (appointments for each staff member):

        foreach ($staffmembers as $staffmember) {
            $staffmember['appointments'] = get_staffmember_appointments_for_day($staffmember);
            // print_r($staffmember['appointments']) works fine
        }

    This works OK; however, later on in the script, I need to loop through the records again, this time making use of the appointment arrays, but they are unavailable:

        foreach ($staffmembers as $staffmember) {
            // do some other stuff
            // print_r($staffmember['appointments']) no longer does anything
        }

    Normally, I would call the function from the first loop inside the second, but this loop is already nested within two others, which would cause the same SQL query to be run 168 times. Can anyone suggest a workaround? Any advice would be greatly appreciated. Thanks

    Read the article
