Search Results

Search found 42428 results on 1698 pages for 'database query'.

Page 493/1698

  • What is the most efficient procedure for implementing a sortable ajax list on the backend?

    - by HenryL
    The most common method is to assign a sequential order field for each item in the list and do an update that maintains the sequence with every ajax sort operation. Unfortunately, this requires an update to each item of the list every time someone sorts. This is fine for small lists, but what's the best way to implement sorting for larger lists that are constantly updated? I am looking for something that minimizes DB IO.
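
    One way to minimize the DB IO, sketched below as a hedged example rather than a drop-in implementation: store the order field with gaps between values (or use fractional keys), so a single drag-and-drop rewrites only the moved row, and fall back to a full renumbering only when a gap runs out. The items table, the position column, and the GAP size are invented names; conn is a sqlite3-style DB-API connection (the ? placeholders would change with another driver).

      # Hypothetical sketch of gap-based ordering keys; all table/column names are invented.
      GAP = 1024   # initial spacing between consecutive position values

      def _pos(conn, item_id):
          if item_id is None:
              return None
          return conn.execute("SELECT position FROM items WHERE id = ?",
                              (item_id,)).fetchone()[0]

      def move_item(conn, item_id, prev_id=None, next_id=None):
          """Drop item_id between its new neighbours, updating only that one row."""
          prev_pos, next_pos = _pos(conn, prev_id), _pos(conn, next_id)
          if prev_pos is None and next_pos is None:
              new_pos = GAP                                # only item in the list
          elif prev_pos is None:
              new_pos = next_pos - GAP                     # moved to the top
          elif next_pos is None:
              new_pos = prev_pos + GAP                     # moved to the bottom
          else:
              new_pos = (prev_pos + next_pos) // 2         # midpoint of the gap
              if new_pos in (prev_pos, next_pos):          # gap exhausted
                  renumber(conn)                           # rare full rewrite
                  return move_item(conn, item_id, prev_id, next_id)
          conn.execute("UPDATE items SET position = ? WHERE id = ?",
                       (new_pos, item_id))
          conn.commit()

      def renumber(conn):
          """Restore the gaps; only needed occasionally, not on every sort."""
          rows = conn.execute("SELECT id FROM items ORDER BY position").fetchall()
          for i, (row_id,) in enumerate(rows, start=1):
              conn.execute("UPDATE items SET position = ? WHERE id = ?",
                           (i * GAP, row_id))
          conn.commit()

    The amortized cost is one row update per sort operation; the full renumbering only happens when the same neighbourhood has been split repeatedly.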

    Read the article

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got an entity named Character with a self-relation named charRel, and I'm filling it in various steps: 1) fill the attributes of the entity, 2) fill the Character relation (charRel). I'm feeding the contents by pulling the data from an XML file; the feeding code is this:

      curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
      NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);
      NSArray *chunks = [curStr componentsSeparatedByString:@","];
      for (NSString *relId in chunks) {
          NSLog(@"Associating %@ with id %@", [currentCharacter valueForKey:@"character_id"], relId);
          NSFetchRequest *request = [[NSFetchRequest alloc] init];
          NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId];
          [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext]]];
          [request setPredicate:predicate];
          NSError *error = nil;
          NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error];
          // error handling code
          if (error != nil) {
              NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]);
          } else if ([results count] > 0) {
              Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better!
              [currentCharacter addCharRelObject:relatedChar];
              // VICE VERSA RELATIONS
              NSArray *charRels = [relatedChar valueForKey:@"charRel"];
              BOOL alreadyRelated = NO;
              for (Character *charRel in charRels) {
                  if ([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                      alreadyRelated = YES;
                      break;
                  }
              }
              if (!alreadyRelated) {
                  NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]);
                  [relatedChar addCharRelObject:currentCharacter];
              }
          } else {
              NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
          }
          [request release];
      }
      NSLog(@"\t\t### TOTAL OF RELATIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);
      error = nil;
      /* SAVE THE CONTEXT */
      if (![managedObjectContext save:&error]) {
          NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
          NSArray *detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
          if (detailedErrors != nil && [detailedErrors count] > 0) {
              for (NSError *detailedError in detailedErrors) {
                  NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]);
              }
          } else {
              NSLog(@" %@", [error userInfo]);
          }
      }

    At this point, when I print out the values of currentCharacter, everything looks perfect; every relation is in its place. For example, in this log we can clearly see that this element has got 3 items in charRel:

      <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: {
          catRel = "<relationship fault: 0x9a29db0 'catRel'>";
          charRel = (
              "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
              "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
              "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
          );
          "character_id" = 254;
          examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
          meaning = "\n Left";
          pinyin = "\n zu\U01d2";
          "pronunciation_it" = "\n zu\U01d2";
          strokenumber = 5;
          text = "\n \n <p>The most ancient form of this symbol";
          unicodevalue = "\n \U5de6";
      })

    Then, when I need to retrieve this item, I perform a fetch like this:

      // at first I get the single Character record
      NSFetchRequest *request = [[NSFetchRequest alloc] init];
      NSError *error;
      NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
      [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context]];
      [request setPredicate:predicate];
      NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    When, for instance, I print out the contents of charRel with NSLog:

      NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
      NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

      Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    I hope I made myself clear. This thing is driving me insane; I've googled all over the world, but I couldn't find a solution (and that makes me think it's an issue related to bad coding somehow :P). Thank you in advance, guys. k

    Read the article

  • Can somebody suggest a good source for IMS?

    - by Raja Reddy
    I would like to learn how to work with IMS; can somebody suggest a good resource? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC DB2 and QMF, so anything that explains the advantages and disadvantages of IMS compared to DB2 would be really helpful. Thanks in advance for your help.

    Read the article

  • What is faster in MySQL: WHERE subquery = 0 or a NOT IN list?

    - by Nicolas Manzini
    Hello, I was wondering which is better in MySQL. I have a SELECT query that excludes every entry associated with a banned userID. Currently I have a subquery clause in the WHERE statement that goes like:

      AND (SELECT COUNT(*)
           FROM TheBlackListTable
           WHERE userID = userList.ID
             AND blackListedID = :userID2) = 0

    which accepts every userID not present in TheBlackListTable. Would it be faster to first retrieve all banned IDs in a previous request and replace the previous clause with:

      AND creatorID NOT IN listOfBannedID

    Thank you!
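
    A third formulation that may be worth benchmarking alongside the two above, shown only as query text inside a Python module; the alias userList, the parameter placeholder style, and the index advice are assumptions on my part rather than anything from the question.

      # Sketch only: the surrounding SELECT is not shown, and the %(viewer)s
      # placeholder style is an assumption about the client library in use.
      COUNT_CLAUSE = """
          AND (SELECT COUNT(*)
                 FROM TheBlackListTable b
                WHERE b.userID = userList.ID
                  AND b.blackListedID = %(viewer)s) = 0
      """

      NOT_EXISTS_CLAUSE = """
          AND NOT EXISTS (SELECT 1
                            FROM TheBlackListTable b
                           WHERE b.userID = userList.ID
                             AND b.blackListedID = %(viewer)s)
      """

      # NOT EXISTS can stop at the first matching blacklist row instead of
      # counting them all, and it avoids the NULL pitfalls of NOT IN; with an
      # index on (userID, blackListedID) all three variants deserve an EXPLAIN.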

    Read the article

  • Sqlite and Python -- return a dictionary using fetchone()?

    - by AndrewO
    I'm using sqlite3 in Python 2.5. I've created a table that looks like this:

      create table votes (
          bill text,
          senator_id text,
          vote text
      )

    I'm accessing it with something like this:

      v_cur.execute("select * from votes")
      row = v_cur.fetchone()
      bill = row[0]
      senator_id = row[1]
      vote = row[2]

    What I'd like to be able to do is have fetchone (or some other method) return a dictionary rather than a list, so that I can refer to the fields by name rather than position. For example:

      bill = row['bill']
      senator_id = row['senator_id']
      vote = row['vote']

    I know you can do this with MySQL, but does anyone know how to do it with SQLite? Thanks!!!
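
    A sketch of the usual answer for the standard-library driver: set the connection's row_factory to sqlite3.Row, which gives rows that can be indexed by column name (and which, I believe, is available in Python 2.5's sqlite3 as well). The in-memory database and sample row below are only there to make the example self-contained.

      import sqlite3

      conn = sqlite3.connect(":memory:")          # use your real database file instead
      conn.row_factory = sqlite3.Row              # rows become name-addressable
      conn.execute("create table votes (bill text, senator_id text, vote text)")
      conn.execute("insert into votes values ('HR1', 'S042', 'yea')")

      cur = conn.cursor()
      cur.execute("select * from votes")
      row = cur.fetchone()

      bill       = row["bill"]          # access by name instead of position
      senator_id = row["senator_id"]
      vote       = row["vote"]

    sqlite3.Row also keeps positional access and exposes keys(), so the existing code does not have to change all at once; an actual dict can be had with dict(row).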

    Read the article

  • Deploying and hosting scala in the cloud?

    - by TiansHUo
    I am starting a web app with scalability as one of the top priorities. What would be the benefits of a Cassandra + Scala + Lift stack vs. the traditional LAMP stack on the cloud, since, from what I've read (please correct me), the cloud itself is scalable? I have never seen anyone deploy Scala on the cloud before. Is it worth the effort to learn the platform? Is it ready for production use?

    Read the article

  • MySQL table structure for thumbs up & down in a comments system?

    - by Axel
    Hello, I already created a table for comments, but I want to add a thumbs up/down feature for comments like Digg and YouTube. I use PHP & MySQL, and I'm wondering what the best table schema is to implement that, so that comments with many likes end up at the top. This is my current comments table: comments(id, user, article, comment, stamp). Note: only registered users will be able to vote, so there is no need to restrict votes by IP. Thanks
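
    A sketch of one common schema for this, written against sqlite3 so it runs as-is; in MySQL the same shape applies, with the UNIQUE key enforcing one vote per registered user. Every name except the asker's comments columns is invented.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE comments (
              id      INTEGER PRIMARY KEY,
              user    TEXT, article INTEGER, comment TEXT, stamp TEXT
          );
          CREATE TABLE comment_votes (
              comment_id INTEGER NOT NULL REFERENCES comments(id),
              user       TEXT    NOT NULL,
              vote       INTEGER NOT NULL CHECK (vote IN (-1, 1)),  -- down / up
              UNIQUE (comment_id, user)          -- one vote per registered user
          );
      """)

      # Comments with the best score first; COALESCE covers comments with no votes yet.
      RANKED_SQL = """
          SELECT c.id, c.comment, COALESCE(SUM(v.vote), 0) AS score
            FROM comments c
            LEFT JOIN comment_votes v ON v.comment_id = c.id
           WHERE c.article = ?
           GROUP BY c.id, c.comment
           ORDER BY score DESC, c.stamp
      """
      rows = conn.execute(RANKED_SQL, (1,)).fetchall()

    If the comments table gets large, a cached score column on comments (incremented or decremented on each vote) avoids the aggregate on every page view; that is a denormalization trade-off, not a requirement.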

    Read the article

  • T-SQL MERGE - finding out which action it took

    - by IanC
    I need to know if a MERGE statement performed an INSERT. In my scenario, the insert is either 0 or 1 rows. Test code:

      DECLARE @t table (C1 int, C2 int)
      DECLARE @C1 INT, @C2 INT
      set @c1 = 1
      set @c2 = 1

      MERGE @t as tgt
      USING (SELECT @C1, @C2) AS src (C1, C2)
      ON (tgt.C1 = src.C1)
      WHEN MATCHED AND tgt.C2 != src.C2 THEN
          UPDATE SET tgt.C2 = src.C2
      WHEN NOT MATCHED BY TARGET THEN
          INSERT VALUES (src.C1, src.C2)
      OUTPUT deleted.*, $action, inserted.*;

      SELECT inserted.*

    The last line doesn't compile (there is no scope for inserted, unlike in a trigger). I can't get access to $action or the OUTPUT rows. Actually, I don't want any OUTPUT metadata. How can I do this?
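
    A hedged sketch of one way to read $action from the client side, assuming a permanent table rather than the @t test table variable and an existing pyodbc connection to SQL Server: the OUTPUT clause (without INTO) should come back as an ordinary result set, so the action can be fetched like query rows. The table dbo.T, its PRIMARY KEY, and the function name are invented.

      import pyodbc

      # conn = pyodbc.connect("DSN=my_sql_server")   # placeholder connection

      MERGE_SQL = """
      SET NOCOUNT ON;
      MERGE dbo.T AS tgt
      USING (SELECT ? AS C1, ? AS C2) AS src
      ON (tgt.C1 = src.C1)
      WHEN MATCHED AND tgt.C2 <> src.C2 THEN
          UPDATE SET tgt.C2 = src.C2
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (C1, C2) VALUES (src.C1, src.C2)
      OUTPUT $action AS merge_action, inserted.C1, inserted.C2;
      """

      def upsert_was_insert(conn, c1, c2):
          """True if the MERGE inserted a row, False if it updated or touched nothing."""
          cur = conn.cursor()
          cur.execute(MERGE_SQL, (c1, c2))
          rows = cur.fetchall()          # one OUTPUT row per affected target row
          conn.commit()
          return any(row.merge_action == 'INSERT' for row in rows)

    If streaming the OUTPUT rows to the client is not acceptable, the usual T-SQL-only fallback is OUTPUT $action INTO a table variable followed by a SELECT from that variable in the same batch.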

    Read the article

  • top-k selection/merge

    - by tcurdt
    I have n sorted lists. These lists are quite long (300,000+ tuples). Selecting the top 10 of each individual list is of course trivial: they are right at the head of the list. Where it gets more interesting is when I want the top 10 across all the sorted lists. The question is whether there is an algorithm to compute the combined top 10, in the correct order, while cutting off the long tail of the lists. The goal is to reduce the required space. And if there is: how does one find the limit where it is safe to cut? Note: the actual counts are not important, only the order is.
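
    If the combined ranking uses the same ordering that each list is already sorted by, then only the first 10 entries of any single list can possibly appear in the combined top 10, which answers where it is safe to cut. A minimal Python sketch of that idea, assuming the inputs are plain sorted lists:

      import heapq
      from itertools import islice

      def top_k(sorted_lists, k=10):
          """Combined top-k of several individually sorted lists."""
          # Cutting each list at k is safe: nothing past position k can
          # possibly reach the combined top k.
          heads = (lst[:k] for lst in sorted_lists)
          merged = heapq.merge(*heads)     # lazy k-way merge, preserves order
          return list(islice(merged, k))

    So the required space drops from n lists of 300,000+ entries to n lists of 10, plus the small merge heap.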

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

      CREATE TABLE IF NOT EXISTS `driver_tx` (
        `tx_id` int(10) unsigned zerofill NOT NULL,
        `driver_id` int(11) NOT NULL,
        `reservation_id` int(11) default NULL,
        `reservation_item_id` int(11) default NULL,
        PRIMARY KEY (`tx_id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could apply to an individual item on the reservation or to the reservation overall. Therefore I require either reservation_id or reservation_item_id to be NULL. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to NULL. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom

    Read the article

  • (Oracle) How to get the total number of results when using a pagination query?

    - by BestPractices
    I am using Oracle 10g and the following paradigm to get a page of 15 results at a time (so that when the user is looking at page 2 of a search result, they see records 16-30):

      select * from (
          select rownum rnum, a.*
          from (my_query) a
          where rownum <= 30
      ) where rnum > 15;

    Right now I'm having to run a separate SQL statement that does a "select count" on "my_query" in order to get the total number of results for my_query (so that I can show it to the user and use it to figure out the total number of pages, etc.). Is there any way to get the total number of results without doing this via a second query, i.e. by getting it from the above query? I've tried adding "max(rownum)", but it doesn't seem to work (I get an error [ORA-01747] that seems to indicate it doesn't like me having the keyword ROWNUM in the GROUP BY). My rationale for wanting to get this from the original query rather than doing it in a separate SQL statement is that "my_query" is an expensive query, so I'd rather not run it twice (once to get the count, and once to get the page of data) if I don't have to; but whatever solution I can come up with to get the number of results from within a single query (and at the same time get the page of data I need) should not add much, if any, additional overhead, if possible. Please advise. Here is exactly what I'm trying to do, for which I receive an ORA-01747 error because I believe it doesn't like me having ROWNUM in the GROUP BY. Note: if there is another solution that doesn't use max(ROWNUM) but something else, that is perfectly fine too. This solution was just my first thought as to what might work.

      SELECT *
      FROM (SELECT r.*, ROWNUM RNUM, max(ROWNUM)
            FROM (SELECT t0.ABC_SEQ_ID AS c0, t0.FIRST_NAME, t0.LAST_NAME, t1.SCORE
                  FROM ABC t0, XYZ t1
                  WHERE (t0.XYZ_ID = 751) AND t0.XYZ_ID = t1.XYZ_ID
                  ORDER BY t0.RANK ASC) r
            WHERE ROWNUM <= 30
            GROUP BY r.*, ROWNUM)
      WHERE RNUM > 15
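
    One option that I believe avoids the second query on 10g is an analytic COUNT(*) OVER () added to the innermost query, so every row of the page carries the total result count. A sketch using the cx_Oracle driver follows; the bind names and turning the literal 751 into a parameter are my additions, and the column position used in fetch_page() depends on this exact SELECT list.

      import cx_Oracle

      # conn = cx_Oracle.connect("user/password@host/service")   # placeholder

      PAGE_SQL = """
      SELECT *
        FROM (SELECT r.*, ROWNUM rnum
                FROM (SELECT t0.ABC_SEQ_ID AS c0,
                             t0.FIRST_NAME,
                             t0.LAST_NAME,
                             t1.SCORE,
                             COUNT(*) OVER () AS total_rows
                        FROM ABC t0, XYZ t1
                       WHERE t0.XYZ_ID = :xyz_id
                         AND t0.XYZ_ID = t1.XYZ_ID
                       ORDER BY t0.RANK ASC) r
               WHERE ROWNUM <= :hi)
       WHERE rnum > :lo
      """

      def fetch_page(conn, xyz_id, lo, hi):
          cur = conn.cursor()
          cur.execute(PAGE_SQL, xyz_id=xyz_id, lo=lo, hi=hi)
          rows = cur.fetchall()
          total = rows[0][-2] if rows else 0   # TOTAL_ROWS column (RNUM is last)
          return rows, total

    The analytic count is evaluated over the full result of the inner query before the ROWNUM filter, so it reports the total even though only one page of rows comes back; whether the optimizer makes this cheaper than two separate statements is worth verifying against the real my_query.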

    Read the article

  • Get the last checked checkboxes...

    - by Sara
    Hi everyone, I'm not sure how to accomplish this; the issue has been confusing me for a few days. I have a form that updates a user record in MySQL when a checkbox is checked. Now, this is how my form does this:

      if (isset($_POST['Update'])) {
          $paymentr = $_POST['paymentr']; //put checkboxes array into variable
          $paymentr2 = implode(', ', $paymentr); //implode array for mysql

          $query = "UPDATE transactions SET paymentreceived=NULL";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentdate='0000-00-00'";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentreceived='Yes' WHERE id IN ($paymentr2)";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentdate=NOW() WHERE id IN ($paymentr2)";
          $result = mysql_query($query);

          foreach ($paymentr as $v) { //should collect last updated records and put them into variable for emailing
              $query = "SELECT id, refid, affid FROM transactions WHERE id = '$v'";
              $result = mysql_query($query) or die("Query Failed: ".mysql_errno()." - ".mysql_error()."<BR>\n$query<BR>\n");
              $trans = mysql_fetch_array($result, MYSQL_ASSOC);
              $transactions .= '<br>User ID:'.$trans['id'].' -- '.$trans['refid'].' -- '.$trans['affid'].'<br>';
          }
      }

    Unfortunately, it then updates ALL the user records with the latest date, which is not what I want it to do. The alternative I thought of was, via JavaScript, giving the checkbox a value that would be dynamically updated when the user selects it; then only THOSE checkboxes would be put into the array. Is this possible? Is there a better solution? I'm not even sure I could wrap my brain around how to do that WITH JavaScript. Does the answer perhaps lie in how my MySQL code is written? Thanks - I sincerely appreciate it!!!

    Read the article

  • What is the correct way to increment a field making up part of a composite key?

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). Therefore, for example, the attributes (as a very cut-down version) might look like this:

      A[aPK, SomeFields]
        1:M  B[bPK, aFK, SomeFields]
               1:M  C[cPK, bFK, aFK, SomeFields]

    As data, this could look like:

      A[aPK, SomeFields]:
      1, Foo
      2, Bar

      B[bPK, aFK, SomeFields]:
      1, 1, FooData1
      2, 1, FooData2
      1, 2, BarData1
      2, 2, BarData2

      C[cPK, bFK, aFK, SomeFields]:
      1, 1, 1, FooData1More
      2, 1, 1, FooData1More
      1, 2, 1, FooData2More
      2, 2, 1, FooData2More
      1, 1, 2, BarData1More
      2, 1, 2, BarData1More
      1, 2, 2, BarData2More
      2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS, and I'm looking for the best way to increment the leftmost column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea it is part of a composite key. I also don't want to use any aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As a side note which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.

    Read the article

  • Building a wiki-like data model in Rails

    - by lillq
    I have a data model in which I would like to have an item with a description that can be edited. I would also like to keep track of all edits to the item. I am running into issues with my current strategy, which is:

      class Item < ActiveRecord::Base
        has_one :current_edit, :class_name => "Edit", :foreign_key => "current_edit_id"
        has_many :edits
      end

      class Edit < ActiveRecord::Base
        belongs_to :item
      end

    Can the Item have multiple associations to the same class like this? I was thinking that I should switch to keeping track of the edit version in the Edit object and then just sorting the has_many relationship based on this version.

    Read the article

  • Rails advanced queries with join and sum calculation

    - by Dustin Brewer
    I have two models: companies and expenses. Companies have many expenses, and expenses belong to companies. My expense model has an 'amount' column. I was wondering if there is a way to perform a find based on a date range and the amount column of the expenses, something like the top 3 companies by total expense amount over a 7-day period. I've tried for the better part of the day to get this to work; I've attempted joins, chaining named scopes, raw SQL, etc., and I'm not having any luck. Thanks for the help.
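
    For reference, a sketch of the SQL that the ActiveRecord call ultimately needs to produce, written against sqlite3 so it runs on its own; the company_id and spent_on column names are assumptions about the schema.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE expenses  (id INTEGER PRIMARY KEY, company_id INTEGER,
                                  amount REAL, spent_on TEXT);
      """)

      # Top 3 companies by total expense amount over the last 7 days.
      TOP3_SQL = """
          SELECT c.id, c.name, SUM(e.amount) AS total_amount
            FROM companies c
            JOIN expenses  e ON e.company_id = c.id
           WHERE e.spent_on >= date('now', '-7 days')
           GROUP BY c.id, c.name
           ORDER BY total_amount DESC
           LIMIT 3
      """
      top3 = conn.execute(TOP3_SQL).fetchall()

    The key point is that the sum, the date filter, and the limit all live in the same grouped query; in Rails that means building one relation (or using find_by_sql with the raw statement) rather than summing per company in Ruby.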

    Read the article

  • How to figure out which records have been deleted in an efficient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (change data capture?) into db2. I have only read access to the tables, so I can't create any table or trigger in the Oracle db1. The challenge I am facing is: how to detect record deletion in Oracle? The solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains two tables, namely old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:

      SELECT old_data.id
      FROM old_data
      WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)

    I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.
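
    Since only the key columns matter for detecting deletions, one cheaper variant is to skip the NOT IN query entirely and diff the two key sets inside the ETL process itself. A minimal sketch, assuming the key snapshots fit in memory; the snapshot file name is made up.

      def load_snapshot(path):
          """Primary keys seen on the previous run (empty set on the first run)."""
          try:
              with open(path) as f:
                  return {line.strip() for line in f if line.strip()}
          except FileNotFoundError:
              return set()

      def save_snapshot(path, keys):
          with open(path, "w") as f:
              f.writelines(k + "\n" for k in sorted(keys))

      def detect_deletes(current_keys, snapshot_path="old_keys.txt"):
          """Keys that were present last run but are gone now."""
          old_keys = load_snapshot(snapshot_path)
          deleted = old_keys - set(current_keys)
          save_snapshot(snapshot_path, set(current_keys))
          return deleted

    With a hash-based set the diff is linear in the number of keys, and the embedded database (or the NOT IN anti-join) is no longer needed for this step.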

    Read the article

  • ResultSet and aggregation

    - by kachanov
    OK, I admit my situation is special. There is a data system that supports SQL-92 and a JDBC interface. However, the SQL requests are pretty expensive, and in my application I need to retrieve the same data multiple times and aggregate it ("group by") on different fields to show different dimensions of the same data. For example, on one screen I have three tables that show the same set of records but aggregated by City (1st grid), by Population (2nd grid), and by number of babies (3rd grid). This amounts to 3 SQL queries (which is very slow), UNLESS any of you can suggest an idea, or a library from Apache Commons or Google Code, so that I can select all the records into a ResultSet once and get 3 arrays of data grouped by different fields from that single ResultSet. Am I missing some obvious and unexpected solution to this problem?
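
    The question is about Java and JDBC, so the snippet below is only an illustration of the idea in Python; the same single pass and per-field accumulation would look much the same with a HashMap. The field names and the SUM semantics are assumptions.

      from collections import defaultdict

      def aggregate(rows, key_field, value_field):
          """One in-memory GROUP BY key_field, SUM(value_field)."""
          totals = defaultdict(float)
          for row in rows:              # rows: records fetched once from the ResultSet
              totals[row[key_field]] += row[value_field]
          return dict(totals)

      rows = [
          {"city": "Oslo",   "population": 700000, "babies": 3, "amount": 10.0},
          {"city": "Oslo",   "population": 700000, "babies": 5, "amount": 2.5},
          {"city": "Bergen", "population": 290000, "babies": 1, "amount": 4.0},
      ]
      by_city   = aggregate(rows, "city", "amount")      # {'Oslo': 12.5, 'Bergen': 4.0}
      by_babies = aggregate(rows, "babies", "amount")    # {3: 10.0, 5: 2.5, 1: 4.0}

    Running the expensive SQL once and building the three groupings this way trades two extra round trips for one cheap in-memory pass per grid.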

    Read the article

  • select for update with ruby oci8

    - by ash34
    How do I do a 'select for update' and then 'update' the row using ruby-oci8? I have two fields, counter1 and counter2, in a table which has only one record. I want to select the values from this table and then increment them, locking the row using SELECT FOR UPDATE. Thanks.
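
    The question asks about ruby-oci8; the sketch below only illustrates the statement flow (SELECT ... FOR UPDATE, then UPDATE, then COMMIT) using Python's cx_Oracle, and the table name counters is invented. The same sequence of statements should carry over to oci8, with the lock held from the SELECT until the commit or rollback.

      import cx_Oracle

      # conn = cx_Oracle.connect("user/password@host/service")   # placeholder

      def increment_counters(conn):
          cur = conn.cursor()
          # Locks the single row until commit/rollback.
          cur.execute("SELECT counter1, counter2 FROM counters FOR UPDATE")
          c1, c2 = cur.fetchone()
          cur.execute("UPDATE counters SET counter1 = :1, counter2 = :2",
                      (c1 + 1, c2 + 1))
          conn.commit()        # releases the lock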

    Read the article

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in the format:

      {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    Each entry is one line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example. However, even for this I would like a better solution, because I want the response as fast as possible. The next part is more complicated, because I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I will probably have to convert this file to a SQL db (probably SQLite, because it will be a desktop app), but I wanted to ask whether better solutions exist.
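
    A sketch of the SQLite route mentioned above: load each JSON line once, index the two fields that get filtered on, and keep the raw JSON for everything else. File names, the table name, and the choice of indexed columns are assumptions.

      import json
      import sqlite3

      def build_index(log_path="actions.log", db_path="actions.db"):
          conn = sqlite3.connect(db_path)
          conn.executescript("""
              CREATE TABLE IF NOT EXISTS actions (
                  user_ip     TEXT,
                  action_type TEXT,
                  raw_json    TEXT
              );
              CREATE INDEX IF NOT EXISTS idx_actions_ip_type
                  ON actions (user_ip, action_type);
          """)
          with open(log_path) as f:
              for line in f:
                  line = line.strip()
                  if not line:
                      continue
                  d = json.loads(line)
                  conn.execute("INSERT INTO actions VALUES (?, ?, ?)",
                               (d.get("user_ip"), d.get("action_type"), line))
          conn.commit()
          return conn

    The indexed ip/type lookup narrows things down quickly, and the some_key test inside action_data can then be done in Python on the raw_json of the few matching rows (or, depending on the SQLite version available, with an expression index).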

    Read the article
