Search Results

Search found 5233 results on 210 pages for 'a records'.


  • Good conventions for embedding schema of a flat file

    - by Ville Koskinen
    We receive lots of data as flat files: delimited or just fixed-length records. It's sometimes hard to find out what the files actually contain. Are there any well-established practices for embedding the schema of the file at its beginning or end, to make the file self-explanatory? Just to get an idea, imagine something like this:

        <data name=test records=2 type=fixed>
        <field name=foo start=0 length=2 type=numeric>
        <field name=bar start=2 length=4 type=text>
        </data>
        11test
        12ing

    We would parse the XML at the beginning and use it for reading the records.
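
    A minimal sketch of how such a self-describing file could be consumed, assuming the header format from the example above; the function name and the regex-based parsing are illustrative, not an established convention:

        import re

        def read_self_describing(path):
            # The header in the example is XML-like but unquoted, so a plain regex
            # is used here rather than a strict XML parser.
            with open(path) as f:
                lines = f.read().splitlines()
            fields, data_start = [], 0
            for i, line in enumerate(lines):
                m = re.match(r'<field name=(\w+) start=(\d+) length=(\d+) type=(\w+)>', line)
                if m:
                    fields.append((m.group(1), int(m.group(2)), int(m.group(3))))
                elif line.strip() == '</data>':
                    data_start = i + 1
                    break
            # Slice each data line according to the declared start/length of every field.
            return [{name: line[start:start + length] for name, start, length in fields}
                    for line in lines[data_start:]]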

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    In Ruby on Rails, say the Actor model object is Tom Hanks and the "has_many" association has 20,000 Fan objects; then actor.fans gives an Array with 20,000 elements. Presumably the elements are not pre-populated with values? Otherwise, getting each Actor object from the DB could be extremely time consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it won't be able to optimize performance by doing a sequential read, which can be faster on the hard disk because those records could be in nearby sectors / platter layers. For example, if the program touches 2 random elements, it will be faster just to read those 2 records; but if it touches all elements in random order, it may be faster to read all the records sequentially and then process the random elements. But how will RoR know whether I am touching only a few random elements or all of them?
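
    A rough sketch of the behaviour as of Rails 2.x-era ActiveRecord, using the Actor/Fan models from the question (the finder options are illustrative): the association is not fetched element by element, it is loaded all at once on first access.

        actor = Actor.find(1)

        # First access runs one query and materialises the whole collection:
        fans = actor.fans        # SELECT * FROM fans WHERE fans.actor_id = 1
        fans[500]                # plain Array indexing from here on, no further queries

        # To touch only a couple of rows without loading 20,000 objects,
        # scope the query on the association instead:
        some_fans = actor.fans.find(:all, :limit => 2, :offset => 500)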

    Read the article

  • How Do I Pull Info from a String

    - by Russ Bradberry
    I am trying to pull dynamic values out of a load that I run using bash. I have gotten to the point where I get the string I want; now I want to pull certain information out of it, which can vary. The string that gets returned is as follows:

        Records: 2910  Deleted: 0  Skipped: 0  Warnings: 0

    Each of the numbers can and will vary in length, but the overall structure will remain the same. What I want to do is get these numbers and load them into bash variables, i.e. RECORDS=??, DELETED=??, SKIPPED=??, WARNINGS=??. In regex I would do it like this: Records: (\d*?) Deleted: (\d*?) Skipped: (\d*?) Warnings: (\d*?) and use the 4 groups in my variables.
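
    A minimal sketch of one way to do this with bash's built-in regex matching, assuming the status line is already in a variable (the variable names follow the question; everything else is illustrative):

        STATUS="Records: 2910  Deleted: 0  Skipped: 0  Warnings: 0"

        # Keep the pattern in a variable so the spaces need no escaping.
        re='Records: +([0-9]+) +Deleted: +([0-9]+) +Skipped: +([0-9]+) +Warnings: +([0-9]+)'

        if [[ $STATUS =~ $re ]]; then
            RECORDS=${BASH_REMATCH[1]}
            DELETED=${BASH_REMATCH[2]}
            SKIPPED=${BASH_REMATCH[3]}
            WARNINGS=${BASH_REMATCH[4]}
        fi

        echo "records=$RECORDS deleted=$DELETED skipped=$SKIPPED warnings=$WARNINGS"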

    Read the article

  • Parameter passing Vs Table Valued Parameters Vs XML to SQL 2008 from .Net Application

    - by Harryboy
    We are working on an ASP.NET project, and there are three ways to update data in the database when multiple rows need to be inserted or updated. Let's assume we need to update employee education details (which could be 1, 3, 5 or 10 records). Methods to update the data: (1) pass the values as parameters (the traditional approach); if there are 10 records, 10 round trips are required; (2) pass the data as XML and write logic inside your stored procedure to pull the data out of the XML and update the table (only a single round trip required); (3) use table-valued parameters (only a single round trip required). Note: the data is available as a List, so I need to convert it to XML or some other format if I need to pass it. There are a number of places in the application where we need to update data in bulk (multiple records). I just need your suggestions on: which method will be faster (please mention if there are other overheads); manageability or testability concerns with any approach; any other bottleneck or issue with any of the approaches (serialization/deserialization concerns, or limits on the size of the data passed); any other method you would suggest for the same operations. Thanks

    Read the article

  • mysql left outer join

    - by tirso
    Hi to all. I have two tables, employee and timecard. The employee table has fields employee_id, firstname, middlename, lastname, and the timecard table has fields employee_id, time-in, time-out, tc_date_transaction. I want to select all employee records which have the same employee_id as a timecard and whose date equals the current date. If there are no records matching the current date, then still return the employee records, just without time-in, time-out and tc_date_transaction. I have a query like this:

        SELECT * FROM employee
        LEFT OUTER JOIN timecard ON employee.employee_id = timecard.employee_id
        WHERE tc_date_transaction = "17/06/2010";

    The result should look like this:

        employee_id, firstname, middlename, lastname, time-in, time-out, tc_date_transaction
        1, john, t, cruz, 08:00, 05:00, 17/06/2010
        2, mary, j, von, null, null, null

    Any help would be greatly appreciated. Thanks in advance
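
    For what it's worth, a common reason a query like this collapses back into an inner join is that the WHERE clause filters on a column from the outer-joined table, which discards the NULL rows. A sketch of one fix, using the table and column names from the question (the date literal format depends on the actual column type):

        SELECT e.employee_id, e.firstname, e.middlename, e.lastname,
               t.`time-in`, t.`time-out`, t.tc_date_transaction
        FROM employee e
        LEFT OUTER JOIN timecard t
               ON  e.employee_id = t.employee_id
               -- putting the date filter in the ON clause keeps employees
               -- without a matching timecard in the result (with NULLs)
               AND t.tc_date_transaction = '17/06/2010';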

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields, and shared data like date and description are relegated to the batch table.) I've been using basic VO-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate VO objects, or should I create a "shared" VO that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records are used on their own. Are there possible pitfalls down the road if I have "overlapping" VO classes?

    Read the article

  • problem parsing JSON Strings

    - by blacktooth
        var records = JSON.parse(JsonString);
        for (var x = 0; x < records.result.length; x++) {
            var record = records.result[x];
            ht_text += "<b><p>" + (x+1) + " "
                     + record.EMPID + " "
                     + record.LOCNAME + " "
                     + record.DEPTNAME + " "
                     + record.CUSTNAME
                     + "<br/><br/><div class='slide'>"
                     + record.REPORT
                     + "</div></p></b><br/>";
        }

    The above code works fine when JsonString contains an array of entities, but fails for a single entity: result is not identified as an array. What's wrong with it? http://pastebin.com/hgyWw5hd
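
    A minimal sketch of one common workaround, assuming the server sometimes returns a single object instead of a one-element array: normalise the parsed value before looping.

        var records = JSON.parse(JsonString);
        var result = records.result;

        // If a single entity came back as a bare object, wrap it so the
        // existing loop can treat both cases the same way.
        if (!(result instanceof Array)) {      // Array.isArray(result) on newer engines
            result = [result];
        }

        for (var x = 0; x < result.length; x++) {
            var record = result[x];
            // ... build ht_text exactly as before ...
        }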

    Read the article

  • adding custom fields dynamically to a model

    - by pankajbhageria
    I have a model called List which has many Records:

        class List
          has_many :records
        end

        class Record
        end

    The Record table has 2 permanent fields: name, email. Besides these 2 fields, for each List a Record can have 'n' custom fields. For example: for list1 I add address (text) and dob (date) as custom fields. Then while adding records to list1, each record can have values for address and dob. Is there any ActiveRecord plugin which provides this type of functionality? Or else, could you share your thoughts on how to model this? Thanks in advance, Pankaj
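
    One way this is often modelled is an entity–attribute–value layout; a minimal sketch (all model and column names below are illustrative, not from the question):

        class List < ActiveRecord::Base
          has_many :records
          has_many :custom_fields            # columns: name, field_type (e.g. "address", "text")
        end

        class CustomField < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values
        end

        class Record < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values            # one value row per custom field
        end

        class CustomValue < ActiveRecord::Base
          belongs_to :record
          belongs_to :custom_field           # columns: custom_field_id, record_id, value
        end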

    Read the article

  • Python MySQLdb LOAD LOCAL INFILE problems

    - by belvoir
    The problem is a simple one. When I execute the following, I get different results depending on whether I run it from the MySQL console or from inside a Python script using MySQLdb:

        LOAD DATA LOCAL INFILE '/tmp/source.csv'
        INTO TABLE test
        FIELDS TERMINATED BY '|'
        IGNORE 1 LINES;

    The console gives the following results: Records: 35002  Deleted: 0  Skipped: 0  Warnings: 0. Python (via .info()) returns the following: Records: 34977  Deleted: 0  Skipped: 0  Warnings: 8. So in summary: same source file, same SQL request, different results. From the console I can SHOW WARNINGS and get a better handle on which records are causing the problems and why, but from Python I can't identify how to do this, or more importantly what the cause of the problem could be. Any suggestions? MySQL Server '5.1.41-3ubuntu12.1', Python '2.6.5', tables are MyISAM.
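
    A minimal sketch of pulling the warnings back from the same Python session with MySQLdb (the connection parameters are illustrative); SHOW WARNINGS only reports on the most recent statement, so it has to run on the same connection straight after the load:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret",
                               db="mydb", local_infile=1)
        cur = conn.cursor()
        cur.execute("""LOAD DATA LOCAL INFILE '/tmp/source.csv'
                       INTO TABLE test
                       FIELDS TERMINATED BY '|'
                       IGNORE 1 LINES""")
        cur.execute("SHOW WARNINGS")            # still the same statement context
        for level, code, message in cur.fetchall():
            print level, code, message          # Python 2.x print statement
        conn.commit()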

    Read the article

  • how to have separate keys per record in mongo_mapper + Rails

    - by Vitaly Kushner
    When I add a record in MongoDB I can specify whatever keys I want and it will store them in the db. The problem is that it will remember those keys for the next time I insert another record. So, for example, if I do Product.create :foo => 123 and then Product.create :bar => 456, I get a :foo => nil field in the 2nd record. This is definitely not a limitation of MongoDB itself, since if I restart the Rails console and create yet another record with a different set of columns, it will not add the columns from the first 2 records. So it seems like MongoMapper remembers all the keys used and inserts them all into all records, even if values are not provided. The question is obviously: how do I disable this crazy attribute explosion? Basically I want only the 'permanent' keys that I specify in the model to be in every record, and all the 'extra' attributes to be specified per record without messing up subsequent records.

    Read the article

  • Optimal order (of joins) for left join

    - by Ram
    I have 3 tables: Table1 (with 1,020,690 records), Table2 (with 289,425 records) and Table3 (with 83,692 records). I have something like this:

        SELECT * FROM Table1 T1
        /* OK, fine, SELECT * is bad when not all columns are needed; this is just an example */
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

        SELECT * FROM Table1 T1
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows me that it uses 2 merge joins for both queries. For the first query, the first merge is of T1 and T2, and then with T3. For the second query, the first merge is of T1 and T3, and then with T2. Both queries take about the same time (approx. 40 seconds), or sometimes Query 1 takes a couple of seconds longer. So my question is: does the join order matter?

    Read the article

  • Perl: calculating a delta of years from a date

    - by Spiros
    Hello, I am trying to figure out a way to calculate the year of birth for records, given the age to two decimals at a given date, in Perl. To illustrate, consider these two records:

        date, age at date
        25 Nov 2005, 74.23
        21 Jan 2007, 75.38

    What I want to do is get the year of birth based on those records; it should, in theory, be consistent. The problem is that when I try to derive it by calculating the difference between the year in the date field and the age, I run into rounding errors that make the results look wrong while they are in fact correct. I have tried using some "clever" combinations of int() or sprintf() to round things up, but to no avail. I have looked at Date::Calc but can't see anything I can use. P.S. As many dates are pre-1970, I unfortunately cannot use the UNIX epoch for this.
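
    A minimal sketch of one way to sidestep the rounding problem by working in fractional years throughout (the month-offset table and the function name are illustrative; day-level precision is plenty when only the year is needed):

        use strict;
        use warnings;
        use POSIX qw(floor);

        # Day-of-year offset at the start of each month (non-leap year is close enough).
        my %doy = (Jan => 0,   Feb => 31,  Mar => 59,  Apr => 90,  May => 120, Jun => 151,
                   Jul => 181, Aug => 212, Sep => 243, Oct => 273, Nov => 304, Dec => 334);

        sub birth_year {
            my ($day, $mon, $year, $age) = @_;
            my $frac = ($doy{$mon} + $day) / 365.25;   # how far into the year the date falls
            return floor($year + $frac - $age);        # subtract the decimal age, keep the year
        }

        print birth_year(25, 'Nov', 2005, 74.23), "\n";   # 1931
        print birth_year(21, 'Jan', 2007, 75.38), "\n";   # 1931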

    Read the article

  • Scrolling a div programmatically using, say, JavaScript

    - by SARAVAN
    Hi, I have a jqGrid which is embedded in a div. I am deleting records from the grid and reloading the grid using grid.Trigger('reload'). The width of the grid is considerably large, so it has a scroll bar. Now, I had scrolled to the end of the grid horizontally before deleting records. Each time I delete records and reload the grid, the column headers and their values are slightly misaligned. When I move the scroll bar back to its original position, or just move it slightly, they are aligned properly. So I thought it would be better to move the scroll bar back to its initial position when the grid reloads. How can a scroll bar be moved programmatically using JavaScript? Or is there any other way to solve my problem?
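
    A minimal sketch of resetting the scroll position after the reload; the container id is hypothetical and should be whichever div actually owns the scroll bar:

        // Plain DOM: scrollLeft/scrollTop are writable pixel offsets.
        var container = document.getElementById('gridContainer');   // hypothetical id
        container.scrollLeft = 0;    // back to the far left
        container.scrollTop  = 0;    // and, if needed, back to the top

        // The same thing with jQuery (already present when jqGrid is used):
        // $('#gridContainer').scrollLeft(0).scrollTop(0);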

    Read the article

  • How to order results based on number of search term matches?

    - by Travis
    I am using the following tables in MySQL to describe records that can have multiple search tags associated with them:

        TABLE records: ID, title, desc
        TABLE searchTags: ID, name
        TABLE recordSearchTags: recordID, searchTagID

    To SELECT records based on arbitrary search input, I have a statement that looks sort of like this:

        SELECT recordID FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
           OR searchTags.name LIKE CONCAT('%','$search2','%')
           OR searchTags.name LIKE CONCAT('%','$search3','%')
           OR searchTags.name LIKE CONCAT('%','$search4','%');

    I'd like to ORDER this result set so that rows that match more search terms are displayed before rows that match fewer. For example, if a row matches all 4 search terms, it will be at the top of the list; a row that matches only 2 search terms will be somewhere in the middle; and a row that matches just one search term will be at the end. Any suggestions on the best way to do this? Thanks!
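
    A sketch of one MySQL-specific way to do it, relying on the fact that a comparison evaluates to 0 or 1, so the matched terms can simply be counted per record (table and placeholder names follow the question):

        SELECT recordSearchTags.recordID,
               MAX(searchTags.name LIKE CONCAT('%','$search1','%'))
             + MAX(searchTags.name LIKE CONCAT('%','$search2','%'))
             + MAX(searchTags.name LIKE CONCAT('%','$search3','%'))
             + MAX(searchTags.name LIKE CONCAT('%','$search4','%')) AS matches
        FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        GROUP BY recordSearchTags.recordID
        HAVING matches > 0
        ORDER BY matches DESC;      -- rows matching the most terms come first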

    Read the article

  • Oracle SQL Update query takes days to update

    - by B Senthil Kumar
    I am trying to update records in a target table based on records coming in from a source. For instance, if an incoming record is present in the target table I update it there, otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on a calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log, the Informatica code seems perfectly fine, but it's the update that takes a long time (more than 5 days to update one million records). Any suggestions as to what can be done in this scenario to improve the performance?

    Read the article

  • JSF session issue

    - by user234194
    I have got a situation where I have a list of, say, 10,000 records. I am using a datatable with paging (10 records per page). I wanted to put that list in the session as:

        facesContext........put("mylist", mylist);

    And in the getter for mylist I have:

        public List<MyClass> getMyList() {
            if (mylist == null) {
                mylist = (List<MyClass>) FacesContext......getSessionMap().get("mylist");
            }
            return mylist;
        }

    Now the problem is that whenever I click on the paging button to go to the second page, only the first records are displayed. I know I am missing something, and I have a few questions: Is this way of putting the list in the session correct? Is this the way I should be retrieving the list in my case? Thanks in advance...

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many available NoSQL data stores, but can't find one that meets my needs as a robust, production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. On the servlet side, a PrintWriter is used to write the response back to the applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15,000 records, so out.println() gets executed about 15,000 times. The problem is that when the applet gets the response from the servlet, it takes about 15 minutes to process the records. I placed System.out.println calls and saw that processing pauses at around 5,000 records, then after 15 minutes it continues and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.

    Read the article

  • MapReduce results seem limited to 100?

    - by user1813867
    I'm playing around with map-reduce in MongoDB and Python and I've run into a strange limitation. I'm just trying to count the number of "book" records. It works when there are fewer than 100 records, but when it goes over 100 records the count resets for some reason. Here is my MR code and some sample outputs:

        var M = function () {
            book = this.book;
            emit(book, {count : 1});
        }

        var R = function (key, values) {
            var sum = 0;
            values.forEach(function(x) {
                sum += 1;
            });
            var result = { count : sum };
            return result;
        }

    MR output when the record count is 99: {u'_id': u'superiors', u'value': {u'count': 99}}
    MR output when the record count is 101: {u'_id': u'superiors', u'value': {u'count': 2.0}}
    Any ideas?
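
    For context, MongoDB may call reduce more than once per key, feeding previously reduced output back in as values (re-reduce kicks in once a key has more than roughly 100 emitted values), so the reduce function has to add up the partial counts rather than counting elements. A sketch of the usual fix:

        var R = function (key, values) {
            var sum = 0;
            values.forEach(function (x) {
                sum += x.count;   // add the partial count, not 1, so re-reduce stays correct
            });
            return { count: sum };
        };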

    Read the article

  • PHP scope question

    - by Dan
    Hi, I'm looping through an array of records (staff members); in this loop, I call a function which returns another array of records (appointments for each staff member):

        foreach ($staffmembers as $staffmember) {
            $staffmember['appointments'] = get_staffmember_appointments_for_day($staffmember);
            // print_r($staffmember['appointments']) works fine
        }

    This is working OK; however, later on in the script, I need to loop through the records again, this time making use of the appointments arrays, but they are unavailable:

        foreach ($staffmembers as $staffmember) {
            // do some other stuff
            // print_r($staffmember['appointments']) no longer does anything
        }

    Normally, I would call the function from the first loop within the second, but this loop is already nested within two others, which would cause the same SQL query to be run 168 times. Can anyone suggest a workaround? Any advice would be greatly appreciated. Thanks
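
    A likely explanation, with a minimal sketch of a fix: foreach iterates over a copy of each element, so the assignment inside the first loop never touches $staffmembers itself. Iterating by reference (or assigning back by key) keeps the appointments around for the second loop:

        <?php
        // Iterate by reference so the assignment modifies the original array.
        foreach ($staffmembers as &$staffmember) {
            $staffmember['appointments'] = get_staffmember_appointments_for_day($staffmember);
        }
        unset($staffmember);   // break the reference so later loops behave normally

        foreach ($staffmembers as $staffmember) {
            // do some other stuff
            print_r($staffmember['appointments']);   // now populated
        }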

    Read the article

  • Strange SQL query result from MySQL and from PHP mysqli_query

    - by qinHaiXiang
    This is the query echoed from the PHP web page:

        SELECT DISTINCT FT.file_type_name AS type, FT.file_type_en AS tp, FT.file_type_id AS fti,
               MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE) AS score
        FROM movie AS M, file_type AS FT
        WHERE MATCH (keywords) AGAINST ('words <2' IN BOOLEAN MODE)
          AND M.type_cn = FT.file_type_id
        HAVING score >= 1
        ORDER BY FT.file_type_order;

    I am running the above query in the MySQL tool HeidiSQL and get only two rows, whose scores are 1.66666 and 2. If I remove the HAVING clause I get three rows, with one score less than 1. But the same query run from PHP's mysqli_query() returns all three records, and the one whose score was less than 1 now has a score of 1. What is the problem? Any tips would be appreciated. Thank you very much!

    Read the article

  • UIPickerView and empty core data array

    - by Mark
    I have a view controller showing items from a Core Data entity. I also have a table view listing records from the same entity. The table is editable, and the user could remove all the records. When this happens, the view controller holding the picker view bombs because it's looking for records in an empty array. How to prevent this? I'm assuming I need to do something different at objectAtIndex:row...

        #pragma mark PickerView Section

        - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView {
            return 1; // returns the number of columns to display
        }

        - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
            return [profiles count]; // returns the number of rows
        }

        - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
            // Display the profiles we've fetched on the picker
            Profiles *prof = [profiles objectAtIndex:row];
            return prof.profilename;
        }

        // If the user chooses from the pickerview
        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            selectedProfile = [[profiles objectAtIndex:row] valueForKey:@"profilename"];
        }
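
    A minimal sketch of one way to guard against the empty array, using the profiles array from the question: report a single placeholder row when there is nothing to display, and skip indexing/selection in that case (the placeholder text is illustrative):

        - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
            // Always report at least one row so the picker has something to draw.
            return [profiles count] > 0 ? [profiles count] : 1;
        }

        - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
            if ([profiles count] == 0) {
                return @"No profiles";                 // placeholder row, nothing to index into
            }
            Profiles *prof = [profiles objectAtIndex:row];
            return prof.profilename;
        }

        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            if ([profiles count] == 0) {
                return;                                // ignore selection of the placeholder
            }
            selectedProfile = [[profiles objectAtIndex:row] valueForKey:@"profilename"];
        }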

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load all of those records into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?

    Read the article

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQLServer 2008R2 that has ntext columns. The stored proc is timing out and running the query directly takes a very long time. The initial query was a straight "delete from y where x = z" and I've also tried running it in batches of 1000 with transactions but it is still slow and timing out in a stored proc. The majority of the records in the table will not be deleted each time (it's not just a once-off query but will be run other times). The ntext columns are not used in the where clause and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks

    Read the article
