Search Results

Search found 14354 results on 575 pages for 'existing records'.

Page 90/575 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • adding custom fields dynamically to a model

    - by pankajbhageria
    I have a model called List which has many records:

        class List
          has_many :records
        end

        class Record
        end

    The Record table has two permanent fields: name and email. Besides these two, each List can define 'n' custom fields for its records. For example: for list1 I add address (text) and dob (date) as custom fields. Then, while adding records to list1, each record can have values for address and dob. Is there an ActiveRecord plugin that provides this type of functionality? If not, could you share your thoughts on how to model this? Thanks in advance, Pankaj
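    One well-known approach, sketched below, is an entity-attribute-value (EAV) layout: a custom_fields table scoped to the list and a custom_values table scoped to the record. All class, table, and column names here are hypothetical, not from an existing plugin.

        # EAV sketch: each List defines CustomFields, and each Record
        # stores one CustomValue per field it fills in.
        class List < ActiveRecord::Base
          has_many :records
          has_many :custom_fields      # e.g. :name => "address", :field_type => "text"
        end

        class CustomField < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values
        end

        class Record < ActiveRecord::Base
          belongs_to :list
          has_many :custom_values

          # Fetch this record's value for a named custom field.
          def custom(field_name)
            field = list.custom_fields.find_by_name(field_name)
            field && custom_values.find_by_custom_field_id(field.id).try(:value)
          end
        end

        class CustomValue < ActiveRecord::Base
          belongs_to :record
          belongs_to :custom_field     # value stored as a string; cast via field_type
        end

    The trade-off: no schema changes per list, but every custom-field read costs an extra join compared to a real column.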

    Read the article

  • how to have separate keys per record in mongo_mapper + Rails

    - by Vitaly Kushner
    When I'm adding a record in MongoDB I can specify whatever keys I want and it will store them in the db. The problem is that it will remember those keys for the next time I insert another record. So, for example, if I do the following:

        Product.create :foo => 123
        Product.create :bar => 456

    I get a :foo => nil field in the second record. This is definitely not a limitation of MongoDB itself, since if I restart the Rails console and create yet another record with a different set of columns, it will not add the columns from the first two records. So it seems like MongoMapper remembers all the keys used and inserts them all into all records, even when values are not provided. The question is, obviously: how do I disable this crazy attribute explosion? Basically I want only the 'permanent' keys that I specify in the model to be in every record, with all the 'extra' attributes specified per record, without messing up subsequent records.
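    A minimal sketch of one workaround, assuming the extras can live under a single container key rather than as top-level attributes (the extras name is made up):

        # Keep the permanent keys typed, and funnel per-record extras into
        # one Hash key so they never become class-wide dynamic keys.
        class Product
          include MongoMapper::Document

          key :name,   String     # permanent field
          key :extras, Hash       # per-record attributes
        end

        Product.create(:name => 'a', :extras => { :foo => 123 })
        Product.create(:name => 'b', :extras => { :bar => 456 })  # no :foo here

    Whether this fits depends on whether the extra attributes need to be queried as first-class fields.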

    Read the article

  • Perl: calculating a delta of years from a date

    - by Spiros
    Hello, I am trying to figure out a way to calculate the year of birth for records, given the age to two decimals at a given date - in Perl. To illustrate, consider these two records:

        date, age at date
        25 Nov 2005, 74.23
        21 Jan 2007, 75.38

    What I want to do is get the year of birth based on those records - it should, in theory, be consistent. The problem is that when I try to derive it by subtracting the age from the year in the date field, I run into rounding errors that make the results look wrong while they are in fact correct. I have tried some "clever" combinations of int() and sprintf() to round things up, but to no avail. I have looked at Date::Calc but can't see anything I can use. P.S. As many dates are pre-1970, I unfortunately cannot use the UNIX epoch for this.
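    A sketch of one way that avoids epoch arithmetic entirely: convert the observation date to a decimal year, subtract the decimal age, and truncate. Time::Piece ships with core Perl and parses the date itself.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Time::Piece;

        sub year_of_birth {
            my ($date, $age) = @_;                 # e.g. ('25 Nov 2005', 74.23)
            my $t = Time::Piece->strptime($date, '%d %b %Y');
            my $decimal_year = $t->year + $t->yday / 365.25;
            return int($decimal_year - $age);      # truncate to the birth year
        }

        print year_of_birth('25 Nov 2005', 74.23), "\n";   # 1931
        print year_of_birth('21 Jan 2007', 75.38), "\n";   # 1931

    Both sample records land on the same year, which is exactly the consistency check the data should satisfy.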

    Read the article

  • Most optimal order (of joins) for left join

    - by Ram
    I have 3 tables: Table1 (1,020,690 records), Table2 (289,425 records), and Table3 (83,692 records). I have something like this:

        SELECT * FROM Table1 T1   /* OK, fine, SELECT * is bad when not all columns are needed; this is just an example */
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

        SELECT * FROM Table1 T1
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows me that both joins use a merge join. For the first query, the first merge is T1 with T2, then with T3. For the second query, the first merge is T1 with T3, then with T2. Both queries take about the same time (roughly 40 seconds), though sometimes Query 1 takes a couple of seconds longer. So my question is: does the join order matter?
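    For what it's worth, a quick way to test whether the optimizer's choice matters is to pin the written order and compare plans and timings; a sketch, assuming this is SQL Server (the merge-join plans suggest it):

        -- Force the joins to run exactly in the order written,
        -- then compare against the unhinted plan.
        SELECT *
        FROM Table1 T1
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        OPTION (FORCE ORDER);

    If the hinted timings match the unhinted ones in both orders, the optimizer is already normalizing the join order for you.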

    Read the article

  • Scrolling a div programmatically using, say, JavaScript

    - by SARAVAN
    Hi, I have a jqGrid which is embedded in a div. I am deleting records from the grid and reloading it using grid.Trigger('reload'). The width of the grid is considerably large, so it has a scroll bar. Now, I had scrolled horizontally to the end of the grid before deleting records. Each time I delete records and reload the grid, the column headers and their values are slightly misaligned. When I move the scroll bar back to its original position, or just move it slightly, they align properly. So I thought it would be better to move the scroll bar to its initial position when the grid reloads. How can a scroll bar be moved programmatically using JavaScript? Or is there any other way to solve my problem?
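    A minimal sketch: capture the wrapper div's scrollLeft before the reload and restore (or zero) it afterwards. The 'gridbox' id is hypothetical - use whatever div actually wraps the grid.

        // Scroll positions are plain DOM properties on the scrolling element.
        var gridBox = document.getElementById('gridbox');
        var savedScrollLeft = gridBox.scrollLeft;   // capture before reload

        // ... delete rows and trigger the grid reload here ...

        gridBox.scrollLeft = savedScrollLeft;       // restore, or set to 0 for the start

    If the reload is asynchronous, the restore belongs in the grid's load-complete callback rather than immediately after the trigger.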

    Read the article

  • How to order results based on number of search term matches?

    - by Travis
    I am using the following tables in MySQL to describe records that can have multiple searchtags associated with them:

        TABLE records
        ID | title | desc

        TABLE searchTags
        ID | name

        TABLE recordSearchTags
        recordID | searchTagID

    To SELECT records based on arbitrary search input, I have a statement that looks sort of like this:

        SELECT recordID FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
           OR searchTags.name LIKE CONCAT('%','$search2','%')
           OR searchTags.name LIKE CONCAT('%','$search3','%')
           OR searchTags.name LIKE CONCAT('%','$search4','%');

    I'd like to ORDER this result set so that rows that match more search terms are displayed in front of rows that match fewer search terms. For example, if a row matches all 4 search terms, it will be at the top of the list. A row that matches only 2 search terms will be somewhere in the middle. And a row that matches just one search term will be at the end. Any suggestions on the best way to do this? Thanks!
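    A sketch of one MySQL-flavored approach: boolean comparisons evaluate to 0 or 1 in MySQL, so each LIKE can score a point, summed per record:

        SELECT recordID,
               SUM( (searchTags.name LIKE CONCAT('%','$search1','%'))
                  + (searchTags.name LIKE CONCAT('%','$search2','%'))
                  + (searchTags.name LIKE CONCAT('%','$search3','%'))
                  + (searchTags.name LIKE CONCAT('%','$search4','%')) ) AS hits
        FROM recordSearchTags
        LEFT JOIN searchTags ON recordSearchTags.searchTagID = searchTags.ID
        WHERE searchTags.name LIKE CONCAT('%','$search1','%')
           OR searchTags.name LIKE CONCAT('%','$search2','%')
           OR searchTags.name LIKE CONCAT('%','$search3','%')
           OR searchTags.name LIKE CONCAT('%','$search4','%')
        GROUP BY recordID
        ORDER BY hits DESC;

    The GROUP BY also folds duplicate recordIDs that match through several tags into a single, higher-scoring row.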

    Read the article

  • Oracle SQL Update query takes days to update

    - by B Senthil Kumar
    I am trying to update records in the target table based on the records coming in from the source. For instance, if an incoming record is present in the target table, I update it in the target; otherwise, I simply insert. I have over one million records in my source, while my target has 46 million records. The target table is partitioned based on a calendar key. I implement this whole logic using Informatica. I find that the Informatica code is perfectly fine, looking at the Informatica session log, but it's the update that takes a long time (more than 5 days to update one million records). Any suggestions as to what can be done in this scenario to improve the performance?
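    One set-based alternative, sketched with hypothetical table and column names: collapse the row-by-row insert/update into a single Oracle MERGE, which typically outperforms per-record updates at this volume:

        -- Upsert the whole source set in one statement.
        MERGE INTO target_table t
        USING source_table s
           ON (t.record_key = s.record_key)
        WHEN MATCHED THEN
          UPDATE SET t.amount = s.amount, t.updated_on = SYSDATE
        WHEN NOT MATCHED THEN
          INSERT (record_key, amount, updated_on)
          VALUES (s.record_key, s.amount, SYSDATE);

    Also worth checking: an index on the target's match key, and whether the updates scatter across all partitions of the calendar-keyed table.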

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups for all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, and the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many of the available NoSQL data stores, but can't find one that meets my needs as a robust production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose, and why?
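    For reference, a sketch of what the SQL Server-only layout might look like - clustering on the GUID keeps all rows for one GUID contiguous, which serves the "all records with the same GUID" lookup; names are hypothetical:

        CREATE TABLE Records
        (
            RecordGuid UNIQUEIDENTIFIER NOT NULL,
            RecordId   BIGINT IDENTITY  NOT NULL,
            Payload    VARBINARY(75)    NOT NULL,  -- the ~50-75 byte blob
            CONSTRAINT PK_Records
                PRIMARY KEY CLUSTERED (RecordGuid, RecordId)
        );

    The usual caveat with random GUIDs as the leading cluster key is page splits on insert, which is exactly the trade-off the NoSQL options sidestep with log-structured writes.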

    Read the article

  • JSF session issue

    - by user234194
    I have got a situation where I have a list of records, say 10,000. I am using a datatable with paging (10 records per page). I wanted to put that list in the session, as:

        facesContext........put("mylist", mylist);

    And in the getter of mylist, I have:

        public List<MyClass> getMyList() {
            if (mylist == null) {
                mylist = (List<MyClass>) FacesContext......getSessionMap().get("mylist");
            }
            return mylist;
        }

    Now the problem is that whenever I click on a paging button to go to the second page, only the first records are displayed. I know I am missing something, and I have a few questions: Is this way of putting the list in the session correct? Is this the way I should be calling the list in my case? Thanks in advance...
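    A sketch of a common alternative, assuming JSF managed beans are available: keep the list on a session-scoped bean instead of pushing it into the session map by hand, so the same instance backs every paging request. All names here are hypothetical.

        import java.util.List;

        // Registered as a session-scoped managed bean in faces-config.xml,
        // so JSF keeps one instance per user session.
        public class MyListBean {
            private List<MyClass> myList;

            public List<MyClass> getMyList() {
                if (myList == null) {
                    myList = loadMyList();   // fetched once, reused across pages
                }
                return myList;
            }

            private List<MyClass> loadMyList() {
                return MyClassDao.findAll(); // hypothetical DAO call
            }
        }

    With this in place, the datatable's value binding always sees the same list, and the pager's row offsets stay meaningful between requests.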

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. On the servlet, it's using the PrintWriter to write the response back to the applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15,000 records, so out.println() gets executed about 15,000 times. The problem is that when the applet gets the response from the servlet, it takes about 15 minutes to process the records. I placed System.out.println's, and processing pauses at around 5,000, then after 15 minutes it continues and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute. So it seems that the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.
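    A sketch of the first thing worth ruling out - unbuffered, line-at-a-time reading on the applet side. Reading through a large BufferedReader keeps per-record overhead flat; the class name and URL handling are hypothetical:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;
        import java.util.ArrayList;
        import java.util.List;

        public class ResponseReader {
            // Read the pipe-delimited response through a 64 KB buffer.
            public static List<String[]> read(String servletUrl) throws Exception {
                URLConnection conn = new URL(servletUrl).openConnection();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()), 64 * 1024);
                List<String[]> rows = new ArrayList<String[]>();
                String line;
                while ((line = in.readLine()) != null) {
                    rows.add(line.split("\\|"));   // Field1|Field2|...|Field10
                }
                in.close();
                return rows;
            }
        }

    If the applet also updates a GUI component per record, batching those updates is the other usual culprit.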

    Read the article

  • MapReduce results seem limited to 100?

    - by user1813867
    I'm playing around with MapReduce in MongoDB and Python, and I've run into a strange limitation. I'm just trying to count the number of "book" records. It works when there are fewer than 100 records, but when it goes over 100 records the count resets for some reason. Here is my MR code and some sample outputs:

        var M = function () {
            book = this.book;
            emit(book, {count : 1});
        }

        var R = function (key, values) {
            var sum = 0;
            values.forEach(function(x) {
                sum += 1;
            });
            var result = { count : sum };
            return result;
        }

    MR output when the record count is 99:

        {u'_id': u'superiors', u'value': {u'count': 99}}

    MR output when the record count is 101:

        {u'_id': u'superiors', u'value': {u'count': 2.0}}

    Any ideas?
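    For what it's worth, this matches a well-known MapReduce contract: MongoDB may call reduce repeatedly on its own partial outputs (re-reduce), feeding values in batches - which lines up with the 100-record threshold observed here. The reducer therefore has to consume the counts it previously emitted rather than counting array entries. A sketch of a re-reduce-safe version:

        var R = function (key, values) {
            var sum = 0;
            values.forEach(function (x) {
                sum += x.count;   // not "sum += 1": x may already be a partial total
            });
            return { count : sum };
        }

    That also explains the sample numbers: 101 documents reduce first to a partial {count: 100}, then a second pass sees two values and, with sum += 1, reports 2.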

    Read the article

  • PHP scope question

    - by Dan
    Hi, I'm trying to loop through an array of records (staff members). In this loop, I call a function which returns another array of records (appointments for each staff member):

        foreach ($staffmembers as $staffmember) {
            $staffmember['appointments'] = get_staffmember_appointments_for_day($staffmember);
            // print_r($staffmember['appointments']) works fine
        }

    This is working OK; however, later on in the script, I need to loop through the records again, this time making use of the appointment arrays, but they are unavailable:

        foreach ($staffmembers as $staffmember) {
            // do some other stuff
            // print_r($staffmember['appointments']) no longer does anything
        }

    Normally, I would call the function from the first loop inside the second, but this loop is already nested within two others, which would cause the same SQL query to be run 168 times. Can anyone suggest a workaround? Any advice would be greatly appreciated. Thanks
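    A sketch of the usual explanation and fix: foreach iterates over a copy of each element by default, so writes to $staffmember are discarded. Iterating by reference (or assigning by key) makes the change stick:

        // Note the &: $staffmember now aliases the array element itself.
        foreach ($staffmembers as &$staffmember) {
            $staffmember['appointments'] = get_staffmember_appointments_for_day($staffmember);
        }
        unset($staffmember);   // important: break the reference after the loop

        // The second loop now sees the stored appointments.
        foreach ($staffmembers as $staffmember) {
            print_r($staffmember['appointments']);
        }

    The unset() matters: leaving the reference dangling and then reusing $staffmember in a later foreach quietly overwrites the last element.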

    Read the article

  • Strange SQL query result from MySQL and from PHP mysqli_query

    - by qinHaiXiang
    This is the query, as echoed from the PHP web page:

        SELECT DISTINCT FT.file_type_name AS type, FT.file_type_en AS tp, FT.file_type_id AS fti,
               MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE) AS score
        FROM movie AS M, file_type AS FT
        WHERE MATCH (keywords) AGAINST ('words <2' IN BOOLEAN MODE)
          AND M.type_cn = FT.file_type_id
        HAVING score >= 1
        ORDER BY FT.file_type_order;

    I am running the above query in the MySQL tool HeidiSQL and get only two rows, whose scores are 1.66666 and 2. If I remove the HAVING clause, I get three rows, one with a score less than 1. But the same query run through PHP's mysqli_query() returned all three records, and the one whose score was less than 1 became 1. What is the problem? Any tips would be appreciated. Thank you very much!!
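    A sketch of a first debugging step, assuming the echoed query might not be what MySQL actually receives - in rendered HTML a bare "<2" can be swallowed as a tag start, so the on-screen echo can differ from the real string:

        // $link: your existing mysqli connection.
        $sql = "SELECT DISTINCT FT.file_type_name AS type, "
             . "MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE) AS score "
             . "FROM movie AS M, file_type AS FT "
             . "WHERE MATCH(keywords) AGAINST ('words <2' IN BOOLEAN MODE) "
             . "AND M.type_cn = FT.file_type_id "
             . "HAVING score >= 1 ORDER BY FT.file_type_order";

        error_log($sql);               // logs the exact string sent to MySQL
        echo htmlspecialchars($sql);   // renders '<' literally in the page
        $result = mysqli_query($link, $sql);

    If the logged string differs from what HeidiSQL ran (a lost '<2', different quoting), that would account for the diverging scores.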

    Read the article

  • UIPickerView and empty core data array

    - by Mark
    I have a view controller showing items from a Core Data entity. I also have a table view listing records from the same entity. The table is editable, and the user could remove all the records. When this happens, the view controller holding the picker view bombs, because it's looking for records in an empty array. How do I prevent this? I'm assuming I need to do something different at objectAtIndex:row...

        # pragma mark PickerView Section

        - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView {
            return 1;  // returns the number of columns to display
        }

        - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
            return [profiles count];  // returns the number of rows
        }

        - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
            // Display the profiles we've fetched on the picker
            Profiles *prof = [profiles objectAtIndex:row];
            return prof.profilename;
        }

        // If the user chooses from the pickerview
        - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
            selectedProfile = [[profiles objectAtIndex:row] valueForKey:@"profilename"];
        }
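    A sketch of one way to guard the empty case: report a single placeholder row when the array is empty, and never index into it:

        - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
            // Show one placeholder row when there are no profiles.
            return [profiles count] > 0 ? [profiles count] : 1;
        }

        - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
            if ([profiles count] == 0) {
                return @"No profiles";   // placeholder title, no array access
            }
            Profiles *prof = [profiles objectAtIndex:row];
            return prof.profilename;
        }

    didSelectRow: needs the same [profiles count] check before calling objectAtIndex:row.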

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load those records into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it?
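    A minimal sketch of the in-memory variant, with hypothetical table and column names: one query per surname, then an index by normalized first name for the per-candidate matching (placeholder style as in MySQLdb/psycopg2):

        from collections import defaultdict

        def load_by_last_name(cursor, last_name):
            """One round trip: fetch every customer with this surname."""
            cursor.execute(
                "SELECT id, first_name, last_name FROM customers WHERE last_name = %s",
                (last_name,),
            )
            by_first = defaultdict(list)
            for row_id, first, last in cursor.fetchall():
                by_first[first.strip().lower()].append(row_id)
            return by_first

        # candidates = load_by_last_name(cur, "Smith")
        # john_ids = candidates.get("john", [])  # then score with other data points

    For a few thousand Smiths this is comfortably feasible in Python, and the win over repeated queries grows with the number of transactions checked per surname.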

    Read the article

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records, via a stored procedure, from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight "delete from y where x = z", and I've also tried running it in batches of 1000 with transactions, but it is still slow and timing out in the stored proc. The majority of the records in the table will not be deleted each time (it's not just a one-off query; it will be run at other times too). The ntext columns are not used in the WHERE clause, and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks
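    For reference, a sketch of the bounded-batch pattern with a supporting index; x, y, and z follow the question's naming, and the index is an assumption worth verifying:

        -- Let each batch seek on the filter column instead of scanning
        -- (and dragging LOB pages) on every iteration.
        CREATE INDEX IX_y_x ON y (x);

        DECLARE @z int = 42;   -- hypothetical filter value
        WHILE 1 = 1
        BEGIN
            DELETE TOP (1000) FROM y WHERE x = @z;
            IF @@ROWCOUNT = 0 BREAK;
        END

    If batching is already in place and still slow, the time is likely going into LOB deallocation; checking wait stats during the delete would confirm where it stalls.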

    Read the article

  • Can Parallel.ForEach be used safely with CloudTableQuery

    - by knightpfhor
    I have a reasonable number of records in an Azure table that I'm attempting to do some one-time data encryption on. I thought that I could speed things up by using Parallel.ForEach. Also, because there are more than 1K records and I don't want to mess around with continuation tokens myself, I'm using a CloudTableQuery to get my enumerator. My problem is that some of my records have been double-encrypted, and I realised that I'm not sure how thread-safe the enumerator returned by CloudTableQuery.Execute() is. Has anyone else out there had any experience with this combination?
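    One defensive sketch: materialize the query on a single thread, then parallelize over the in-memory list, so the CloudTableQuery enumerator (and its continuation-token handling) is never shared across threads. EncryptRecord stands in for the real per-record work:

        using System.Linq;
        using System.Threading.Tasks;

        // Single-threaded enumeration: continuation tokens are consumed here.
        var records = query.Execute().ToList();

        // Each item is handed to exactly one worker.
        Parallel.ForEach(records, record =>
        {
            EncryptRecord(record);   // hypothetical encrypt-and-update call
        });

    This trades memory for safety; for very large tables, processing one downloaded page at a time gives the same isolation with bounded memory.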

    Read the article

  • In SQL Server, can multiple inserts be replaced with a single insert that takes an XML parameter?

    - by Mayo
    So I have an existing ASP.NET solution that uses LINQ-to-SQL to insert data into SQL Server (5 tables, 110k records total). I had read in the past that XML could be passed as a parameter to SQL Server, but my Google searches turn up results that store the XML directly into a table. I would rather take that XML parameter and insert its nodes as records. Is this possible? How is it done (i.e. how is the XML parameter used to insert records in T-SQL, and how should the XML be formatted)? Note: I'm researching other options like SQL bulk copy, and I know that SSIS would be a good alternative. I want to know if this XML approach is feasible.
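    It is possible - SQL Server 2005+ can shred an XML parameter into rows with nodes() and value(). A sketch with hypothetical table, element, and procedure names:

        CREATE PROCEDURE InsertPeople @payload XML
        AS
        BEGIN
            INSERT INTO People (Name, Age)
            SELECT p.value('(Name)[1]', 'NVARCHAR(100)'),
                   p.value('(Age)[1]',  'INT')
            FROM @payload.nodes('/People/Person') AS t(p);
        END

        -- Matching payload shape:
        -- <People>
        --   <Person><Name>Ann</Name><Age>30</Age></Person>
        --   <Person><Name>Bob</Name><Age>41</Age></Person>
        -- </People>

    For 110k records, a table-valued parameter or SqlBulkCopy will usually beat XML shredding, but the XML route is perfectly workable.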

    Read the article

  • Need help to format the result page after searching

    - by kshama
    Hi, I have built a small text-based search engine on RoR which displays the records containing a specified search word. Since a few of the records have more than 1000 words, I have truncated each result to 200 characters. My view file search.html.erb looks like this:

        <% @results_with_ranks.each do |result| -%>
          <% content_id = rtable.find(result[0]).content_id %>
          <% content = Content.find(content_id) %>
          <%= truncate content.body, :length => 200 %><br/>
          <p> Record id <%= content.id %></p>
          <hr style="color:blue">
        <% end -%>

    I want to provide an option so that whenever any truncated record is selected, its entire body is displayed. I also want to paginate the result page, displaying some fixed number of records per page. Can anybody help me in doing this? Thanks in advance.
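    A sketch of the two pieces: a link from each truncated result to its show page (which renders the full body), and pagination via the will_paginate plugin - assuming @results_with_ranks is converted to a paginated collection in the controller:

        <% @results_with_ranks.each do |result| -%>
          <% content = Content.find(rtable.find(result[0]).content_id) %>
          <%= truncate content.body, :length => 200 %>
          <%= link_to 'Read more', content %><br/>   <%# routes to contents#show %>
          <p> Record id <%= content.id %></p>
          <hr style="color:blue">
        <% end -%>
        <%= will_paginate @results_with_ranks %>

    The show action then just does Content.find(params[:id]) and renders content.body untruncated.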

    Read the article

  • How do I keep a count of undefined strings within a loop using PHP?

    - by mike
    I'm using a loop within a loop to generate keyword combinations and find the ones that have been used the most. My outside loop queries a list of keywords (let's use "chicago" as our first keyword; 3 records were found). The inside loop finds all the records in the "posts" table where keyword = "chicago". Within this loop, I need to generate strings based on info I find in the database, which would look something like "chicago bulls", "chicago bears", "chicago cubs", etc. I know how to do everything up until this point, but how do I temporarily hold these generated strings and count how many times each one has been found within the 3 records?
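    A sketch using an associative array as the running tally; get_posts_for() and the 'term' column stand in for the real query and field:

        $counts = array();
        foreach ($keywords as $keyword) {                  // e.g. "chicago"
            foreach (get_posts_for($keyword) as $post) {   // rows from "posts"
                $phrase = $keyword . ' ' . $post['term'];  // e.g. "chicago bulls"
                if (!isset($counts[$phrase])) {
                    $counts[$phrase] = 0;                  // first sighting
                }
                $counts[$phrase]++;
            }
        }
        arsort($counts);   // highest counts first, keys preserved

    The isset() guard is what avoids the "undefined index" notice the title alludes to.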

    Read the article

  • How to host a naked domain on a CDN?

    - by rjw79
    If I have a domain that I wish to serve "naked", e.g. http://examp.le/, and efficiently with a CDN, what are my options? The issue is that the CDNs I looked at all want you to use a CNAME so that they can do geo-IP lookups. CNAMEs are not meant to be served at the same level as other records, and this apparently breaks some DNS resolvers; you at least need SOA and MX records at the same level for a naked domain. The only solutions are: having A records in your own DNS, thus skipping the geo-IP, or finding a CDN who will allow delegation of the whole domain so they can do geo-IP things for the A record directly. I've tried googling and can't find any CDN who offers this. Any ideas? I looked closely at Amazon CloudFront and Rackspace Cloud Files, but I couldn't work it out for those.
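    For what it's worth, a zone-file sketch of the usual compromise - the apex stays on plain A records pointing at a small redirect host, and only www rides the CDN's CNAME; all hostnames and addresses here are made up:

        ; Apex: CNAME is not allowed next to SOA/MX, so use A records.
        examp.le.       300  IN  A      192.0.2.10            ; 301-redirects to www
        www.examp.le.   300  IN  CNAME  cdn1234.example-cdn.net.

    The redirect host answers every apex request with a 301 to http://www.examp.le/..., so users land on the CDN-served name after one extra round trip.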

    Read the article

  • Implementing a 'many-to-many' database

    - by Raven Dreamer
    Greetings, Stack Overflow. In my database, I already have one table, 'contacts', that contains records of individual people. I also have several other tables which represent "skill sets", each containing records denoting a particular skill.

    1) Am I correct in plotting this as a "many-to-many" relationship? (Each contact can have multiple skill sets, and each skill set can belong to multiple contacts.)
    2) I'm new to databases - do I want to link the tables?
    3) Is there a way to implement this in my program (C# + Windows Forms) such that, for any given record in the 'contacts' table, either the names of all associated 'skill set' tables or all the 'skill' records associated with the contact could be retrieved?

    (The database is located on SQL Server Express 2008.)
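    Yes to 1), and for 2) the standard link is a junction table rather than one table per skill set. A sketch with hypothetical names, queryable from C# like any other SELECT:

        CREATE TABLE skills (
            skill_id   INT IDENTITY PRIMARY KEY,
            skill_name NVARCHAR(100) NOT NULL
        );

        CREATE TABLE contact_skills (        -- the many-to-many link
            contact_id INT NOT NULL REFERENCES contacts (contact_id),
            skill_id   INT NOT NULL REFERENCES skills (skill_id),
            PRIMARY KEY (contact_id, skill_id)
        );

        -- All skills for one contact (question 3):
        SELECT s.skill_name
        FROM contact_skills cs
        JOIN skills s ON s.skill_id = cs.skill_id
        WHERE cs.contact_id = 42;

    Folding the separate per-skill-set tables into one skills table (with a skill_set column if the grouping matters) is what makes the single junction table possible.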

    Read the article

  • Is there a way to split the results of a select query into two equal halves?

    - by Matthias
    I'd like to have a query returning two result sets, each of which holds exactly half of all records matching a certain criterion. I tried using TOP 50 PERCENT in conjunction with an ORDER BY, but if the number of records in the table is odd, one record shows up in both result sets. Example: I've got a simple table with TheID (PK) and TheValue (varchar(10)) fields and 5 records. Skip the WHERE clause for now.

        SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID asc

    results in the selected IDs 1, 2, 3.

        SELECT TOP 50 PERCENT * FROM TheTable ORDER BY TheID desc

    results in the selected IDs 3, 4, 5. 3 is a dup. In real life, of course, the queries are fairly complicated, with a ton of WHERE clauses and subqueries.
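    A sketch of one way around the overlap: NTILE(2) assigns every row to exactly one half, so an odd count puts the extra row in the first half instead of duplicating it:

        WITH numbered AS (
            SELECT *, NTILE(2) OVER (ORDER BY TheID ASC) AS half
            FROM TheTable
            -- the complicated WHERE clauses go here, once
        )
        SELECT * FROM numbered WHERE half = 1;   -- IDs 1, 2, 3
        -- ...and the same CTE with WHERE half = 2 for IDs 4, 5

    Because the filtering lives in the CTE, both halves are guaranteed to partition the same matched set.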

    Read the article

  • mysqli::query returns true on SELECT statement

    - by Travis Pessetto
    I have an application that reads, in one of its classes:

        public function __construct()
        {
            global $config;
            // Establish a connection to the database and get the result set
            $this->db = new Database("localhost", $config["dbuser"], $config["dbpass"], "student");
            $this->records = $this->db->query("SELECT * FROM major") or die("ERROR: " . $this->db->error);
            echo "<pre>" . var_dump($this->records) . "</pre>";
        }

    My problem is that var_dump shows that $this->records is a boolean. I've read the documentation and see that a SELECT query should return a result set. This is the only query used by the application. Any ideas where I am going wrong?
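    A sketch of how to narrow it down - plain mysqli returns a mysqli_result for SELECT, and true only for statements like INSERT, so if a custom Database wrapper sits in between, it's worth checking what its query() actually returns:

        $db = new mysqli('localhost', $config['dbuser'], $config['dbpass'], 'student');
        $result = $db->query('SELECT * FROM major');

        if ($result instanceof mysqli_result) {
            while ($row = $result->fetch_assoc()) {
                print_r($row);             // the rows really are coming back
            }
        } else {
            die('ERROR: ' . $db->error);   // query() returned false
        }

    If this direct version yields a mysqli_result, the wrapper's query() is likely returning its own success flag instead of the result object.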

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >