Search Results

Search found 1011 results on 41 pages for 'dirty henry'.

  • What is the easiest way to get the property value from a passed lambda expression in an extension method?

    - by Andrew Siemer
    I am writing a dirty little extension method for HtmlHelper so that I can say something like HtmlHelper.WysiwygFor(lambda) and display the CKEditor. I have this working currently, but it seems a bit more cumbersome than I would prefer. I am hoping that there is a more straightforward way of doing this. Here is what I have so far:

        public static MvcHtmlString WysiwygFor<TModel, TProperty>(this HtmlHelper<TModel> helper, Expression<Func<TModel, TProperty>> expression)
        {
            return MvcHtmlString.Create(string.Concat(
                "<textarea class=\"ckeditor\" cols=\"80\" id=\"", expression.MemberName(),
                "\" name=\"editor1\" rows=\"10\">", GetValue(helper, expression), "</textarea>"));
        }

        private static string GetValue<TModel, TProperty>(HtmlHelper<TModel> helper, Expression<Func<TModel, TProperty>> expression)
        {
            MemberExpression body = (MemberExpression)expression.Body;
            string propertyName = body.Member.Name;
            TModel model = helper.ViewData.Model;
            string value = typeof(TModel).GetProperty(propertyName).GetValue(model, null).ToString();
            return value;
        }

        private static string MemberName<T, V>(this Expression<Func<T, V>> expression)
        {
            var memberExpression = expression.Body as MemberExpression;
            if (memberExpression == null)
                throw new InvalidOperationException("Expression must be a member expression");
            return memberExpression.Member.Name;
        }

    Thanks!

  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between tables' columns. Now, I've found the tables ALL_CONS_COLS and USER_CONSTRAINTS, which hold information about constraints. If I were to manually create constraints by inserting into these tables, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have any alternative suggestions, as this sounds VERY dirty and error-prone to begin with?

    Current modus operandi:

    1. Create a new column in each table for the PK;
    2. Generate a GUID for this PK;
    3. Create a new column in each table for the FKs;
    4. Fetch the GUID associated with the FK;
    ....... done so far ......
    5. Add a new constraint based on the old one;
    6. Remove the old constraint;
    7. Rename the new columns.

    This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently: the client wants to change the key structure from int to GUID on a live database. What's the best way to approach this?

  • PHP array value becomes blank. What is going on?

    - by Michael Bruce
    I have written a web page that works fine except for some weird behavior. The code below gets all expected values and populates them correctly except for $v->data["quick_phone_id"] and $v->data["quick_email_id"], which are integers. Those values come out blank in the string I am creating. The value for $v->data["id"] is another integer and works as expected. My only clue is that when I uncomment the commented-out line, the code works properly. So I'm guessing this has to do with referencing getting broken for the array. Any ideas? I'd like to fix my code and my PHP knowledge.

        $contacts = ContactInfo::loadMyContacts($userId);
        $sb = new StringBuilder();
        $idx = 0;
        //$vals = "vals: ".$contacts[0]->data["quick_phone_id"];
        $sb->append(' dataRows = [');
        foreach ($contacts as $k => $v) {
            $sb->append('{ id:"'.strval($v->data["id"]).'",');
            $sb->append('url:"edit_contact.php?id='.$v->data["id"].'",');
            $sb->append('gname:"'.$v->data["given_name"].'",');
            $sb->append('fname:"'.$v->data["family_name"].'",');
            $sb->append('phone1id2:"'.strval($v->data["quick_phone_id"]).'",');
            $sb->append('phone1type:"'.$v->data["quick_phone_type"].'",');
            $sb->append('phone1:"'.$v->data["quick_phone"].'",');
            $sb->append("email1id2:'".strval($v->data["quick_email_id"])."',");
            $sb->append('email1type:"'.$v->data["quick_email_type"].'",');
            $sb->append("email1:'".$v->data["quick_email"]."',");
            $sb->append("dirty:false },\n");
        }
        $sb->append('];');

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId.

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?"

    The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported, and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support; i.e. browser/mobile clients sync commands to the master db every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local uuids (luids) to server uuids (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values.

    However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server db with luids rather than the suids the server is using.

    My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it? Is there a more efficient, simpler way?

    I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)" and someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work. Thanks, Saimon
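
    To make the server-side remapping concrete, here is a minimal sketch in Java; the class and field names are hypothetical, invented purely for illustration, and the question itself is language-agnostic:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical incoming command from a client: a todo whose foreign
        // key (listId) may still be a client-local uuid (luid).
        class TodoCommand {
            String id;      // luid of the todo itself
            String listId;  // luid or suid of the parent list
        }

        class SyncSession {
            // Per-client mapping of local uuids to server uuids, built up
            // as this client's records are inserted on the master.
            private final Map<String, String> luidToSuid = new HashMap<>();

            void register(String luid, String suid) {
                luidToSuid.put(luid, suid);
            }

            // Rewrite any id that still refers to a luid, so the master db
            // only ever stores suids in foreign key columns.
            String resolve(String id) {
                return luidToSuid.getOrDefault(id, id);
            }

            void apply(TodoCommand cmd) {
                cmd.listId = resolve(cmd.listId);
                // ... insert into the master db, then register(cmd.id, <new suid>)
                // and include the luid->suid pair in the map returned to the client.
            }
        }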

  • How do I serialise a graph in Java without getting StackOverflowException?

    - by Tim Cooper
    I have a graph structure in Java ("graph" as in "edges and nodes") and I'm attempting to serialise it. However, I get "StackOverflowException", despite significantly increasing the JVM stack size. I did some googling, and apparently this is a well-known limitation of Java serialisation: it doesn't work for deeply nested object graphs such as long linked lists. It uses a stack record for each link in the chain, and it doesn't do anything clever such as a breadth-first traversal, so you very quickly get a stack overflow. The recommended solution is to customise the serialisation code by overriding readObject() and writeObject(), but this seems a little complex to me.

    (It may or may not be relevant, but I'm storing a bunch of fields on each edge in the graph, so I have a class JuNode which contains a member ArrayList<JuEdge> links; i.e. there are 2 classes involved, rather than plain object references from one node to another. It shouldn't matter for the purposes of the question.)

    My question is threefold:

    (a) Why don't the implementors of Java rectify this limitation, or are they already working on it? (I can't believe I'm the first person to ever want to serialise a graph in Java.)
    (b) Is there a better way? Is there some drop-in alternative to the default serialisation classes that does it in a cleverer way?
    (c) If my best option is to get my hands dirty with low-level code, does someone have an example of graph serialisation Java source code that I can use to learn how to do it?
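
    For (c), one common workaround is to flatten the graph into a node list plus an edge list of index pairs before handing it to the default mechanism, so serialisation never recurses through the links. A rough sketch, with the JuNode/JuEdge shapes assumed from the description above (and assuming every edge endpoint appears in the node list):

        import java.io.*;
        import java.util.*;

        // Assumed shapes, inferred from the question.
        class JuNode implements Serializable {
            transient List<JuEdge> links = new ArrayList<>(); // not serialised directly
        }
        class JuEdge implements Serializable {
            transient JuNode from, to;  // rebuilt from indices on read
            int fromIndex, toIndex;     // flat, recursion-free representation
        }

        class GraphIO {
            // Write nodes plus edges-as-index-pairs; recursion depth stays constant.
            static void write(ObjectOutputStream out, List<JuNode> nodes) throws IOException {
                Map<JuNode, Integer> index = new IdentityHashMap<>();
                for (int i = 0; i < nodes.size(); i++) index.put(nodes.get(i), i);
                List<JuEdge> edges = new ArrayList<>();
                for (JuNode n : nodes)
                    for (JuEdge e : n.links) {
                        e.fromIndex = index.get(e.from);
                        e.toIndex = index.get(e.to);
                        edges.add(e);
                    }
                out.writeObject(new ArrayList<>(nodes));
                out.writeObject(edges);
            }

            @SuppressWarnings("unchecked")
            static List<JuNode> read(ObjectInputStream in) throws IOException, ClassNotFoundException {
                List<JuNode> nodes = (List<JuNode>) in.readObject();
                List<JuEdge> edges = (List<JuEdge>) in.readObject();
                for (JuNode n : nodes) n.links = new ArrayList<>();
                for (JuEdge e : edges) {
                    e.from = nodes.get(e.fromIndex);
                    e.to = nodes.get(e.toIndex);
                    e.from.links.add(e);
                }
                return nodes;
            }
        }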

  • Options for displaying OG groups a node is published for on node page?

    - by Erik Töyrä
    What I want: I have several OG groups in which content can be published. I would like to display which OG groups a node has been published to when viewing the node page, as in "This page is published for: Department A, Department B." The code snippet below shows the data I have in the $node object in node.tpl.php. This data is generated by the OG module.

    Extracted data from $node:

        ...
        [og_groups] => Array
            (
                [993] => 993
                [2078] => 2078
            )
        [og_groups_both] => Array
            (
                [993] => Department A
                [2078] => Department B
            )
        ...

    I know I could loop through the og_groups_both array in node.tpl.php and generate the output from there, but it feels like a quite dirty solution. The ideal solution would be to have a $og_groups variable in node.tpl.php, similar to how $submitted is used in node.tpl.php (see below).

    Example of how $submitted is used:

        <?php if ($submitted): ?>
            <div class="submitted"><?php print $submitted; ?></div>
        <?php endif; ?>

    Should I use hook_load() in a custom module to insert the new variable $og_groups into $node? What options do I have, and which solution would you recommend?

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing, and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information, at the cost of my code becoming more complex.

    One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code, and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-)

    I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project, and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi All, let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to implement my application from the ground up with this in mind.

    I have decided to tackle this problem by using the Microsoft Message Queue and performing inserts asynchronously as time permits. However, I quickly ran into a problem. Certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction - one that still hasn't been inserted).

    The way I decided to tackle this problem is by abstracting the MessageQueue and combining it into my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario (i.e. essentially dirty reads and such) and have concluded that for my purposes I can control these issues.)

    However, this is where things get a little nasty... I've worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue? One where I can minimize the duplication between the SQL queries and the MessageQueue queries. I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with Predicates, which so far is pretty smelly.

    Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any of their own ideas about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered... Thanks again.
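
    As a rough illustration of the "one predicate, two stores" shape being described, here is a sketch written in Java rather than .NET, with invented type names, since the pattern itself is platform-neutral (in the actual stack, LINQ expressions would play the role of the shared predicate):

        import java.util.*;
        import java.util.function.Predicate;
        import java.util.stream.*;

        // Hypothetical record type shared by the queue and the database layer.
        record Transaction(String id, double amount, boolean committed) {}

        interface TransactionSource {
            Stream<Transaction> query(Predicate<Transaction> where);
        }

        // The facade the rest of the app sees: one logical dataset, even though
        // some rows are still sitting in the message queue awaiting insert.
        class TransactionStore {
            private final TransactionSource db;     // already-inserted rows
            private final TransactionSource queue;  // pending inserts

            TransactionStore(TransactionSource db, TransactionSource queue) {
                this.db = db;
                this.queue = queue;
            }

            List<Transaction> query(Predicate<Transaction> where) {
                // Queue entries win over db rows with the same id (they are newer),
                // and the same predicate drives both sides, so the filtering
                // logic is written exactly once.
                Map<String, Transaction> merged = new LinkedHashMap<>();
                db.query(where).forEach(t -> merged.put(t.id(), t));
                queue.query(where).forEach(t -> merged.put(t.id(), t));
                return new ArrayList<>(merged.values());
            }
        }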

  • How do I update a webpage with the progress of a server-side task?

    - by Jim B
    Hi everyone, I'm working on a web project that takes the results from a survey-type application and runs them through a bunch of calculations to come up with some recommended suggestions for the user. Now, this calculation might take a minute or so, and I'd like to be able to give the user some update on its progress.

    Obviously, the quick and dirty solution would be to put up a message along the lines of "Please wait while we calculate your recommendations" with a spinning gear type graphic (or whatever, you get the point...). Once the task completes, I'd redirect to the results page. However, I'd like to do something a little more flashy. Maybe something along the lines of a progress bar, and even prompt the user with what's going on in the background. For example, give them a progress bar with some text that says "Now processing suggestion 3 of 15: Multi-Vitamin".

    Any suggestions on how I could set this up? One way I'm thinking of doing it is to write the progress of the calculation method to the HttpContext, and slap up an update panel and timer that would show/refresh this info. I've also considered building a web service/method and then polling that at some interval. Has anybody done something similar to this before? What worked for you? Thanks again! ~Jim
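
    Whichever flavor is used, the core of the polling approach is just a shared progress record keyed by task, updated by the worker and read by whatever endpoint the page polls. A minimal sketch, written in Java purely for illustration (the original context is ASP.NET, where HttpContext or a web method would hold this state):

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Shared between the worker thread doing the calculation and the
        // handler that answers the client's periodic status requests.
        class ProgressTracker {
            private static final ConcurrentMap<String, String> status = new ConcurrentHashMap<>();

            // Called from the calculation loop, e.g. after each suggestion.
            static void update(String taskId, int done, int total, String step) {
                status.put(taskId, "Now processing suggestion " + done + " of " + total + ": " + step);
            }

            // Called by the status endpoint the page polls (e.g. every second);
            // the page updates its progress bar from this string.
            static String poll(String taskId) {
                return status.getOrDefault(taskId, "Queued...");
            }

            static void finish(String taskId) {
                status.put(taskId, "done");  // client sees this and redirects to results
            }
        }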

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.

    But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information. I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1. Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
    2. The page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3. The page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.

  • jQuery Validation plugin: disable validation for specified submit buttons when there is a submitHandler

    - by ccppjava
    OK, I am using the umbraco forum module, and on the comment form it uses the jQuery Validation plugin. The problem is that I have added a search button on the same page using a UserControl, and the search submit button triggers the comment form's submission validation.

    I have done some research and added the 'cancel' CSS class to the button. This bypasses the first validation issue; however, it still falls into the 'submitHandler'. I have read the source code and found a way to detect whether the search button triggered the submission; however, there is not a nice way to bypass the handler. I am currently using an ugly way to do it: create JavaScript errors! I would like to know a nicer way to do the job. Many thanks!

    Btw, I am currently using:

        submitHandler: function (form) {
            $(form).find(':hidden').each(function () {
                var name = $(this).attr('name');
                if (name && name.indexOf('btnSearch') === name.length - 'btnSearch'.length) {
                    // this is a dirty hack to stop the validate plugin working for
                    // the button: deliberately invalid code that throws a script error
                    eval("var error = 1 2 ''")
                }
            });
            // original code from umbraco ignored here....
        }
        ...............

  • Python refuses text.replace() in one environment

    - by gx
    Hi fellow programmers, I've been mucking about with the following bit of dirty support code for a Pylons app, which works fine in a Python shell, in a separate Python file, or when running in paster. Now we've put the application online through mod_wsgi and Apache, and this specific piece of code stopped working completely. First off, the code itself:

        def fixStyle(self, text):
            t = text.replace('<p>', '<p style="%s">' % (STYLEDEF,))
            t = t.replace('class="wide"', 'style="width: 125px; %s"' % (DEFSTYLE,))
            t = t.replace('<td>', '<td style="%s">' % (STYLEDEF,))
            t = t.replace('<a ', '<a style="%s" ' % (LINKSTYLE,))
            return t

    It seems pretty straightforward, and to be honest, it is. So what happens when I put a piece of text in it? For example:

        <table><tr><td>Test!</td></tr></table>

    The output should be:

        <table><tr><td style="stuff-from-styledef">Test!</td></tr></table>

    and it is, on most systems. When we put it through the app on Apache/mod_wsgi, though, the following happens:

        <table><tr><td>Test!</td></tr></table>

    You guessed it. I'm currently at a loss and have no idea where to go next. Googling doesn't really work out, so I'm hoping on you guys to help out and perhaps point out a fundamental issue with using whatever-is-causing-this. If anything is missing I'll edit it in.

  • Socket select() Handling Abrupt Disconnections

    - by Genesis
    I am currently trying to fix a bug in a proxy server I have written, relating to the socket select() call. I am using the Poco C++ libraries (specifically SocketReactor) and the issue is actually in the Poco code, which may be a bug, but I have yet to receive any confirmation of this from them.

    What is happening is that whenever a connection abruptly terminates, the socket select() call returns immediately, which is what I believe it is meant to do. Anyway, it returns all of the disconnected sockets within the readable set of file descriptors, but the problem is that an exception, "Socket is not connected", is thrown when Poco tries to fire the onReadable event handler, which is where I would be putting the code to deal with this. Given that the exception is silently caught and the onReadable event is never fired, the select() call keeps returning immediately, resulting in an infinite loop in the SocketReactor.

    I was considering modifying the Poco code so that, rather than catching the exception silently, it fires a new event called onDisconnected or something like that, so that a cleanup can be performed. My question is: are there any elegant ways of determining whether a socket has closed abnormally using select() calls? I was thinking of using the exception message to determine when this has occurred, but this seems dirty to me.
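
    For reference, the usual shape of a select()-style loop treats end-of-stream or a failed read as the disconnect signal, and crucially deregisters the socket so select() stops reporting it forever. Sketched here with Java NIO purely to illustrate the pattern (this is not Poco code):

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        class SelectLoop {
            static void run(Selector selector) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (!key.isValid() || !key.isReadable()) continue;
                        SocketChannel ch = (SocketChannel) key.channel();
                        buf.clear();
                        int n;
                        try {
                            n = ch.read(buf);   // an abrupt reset surfaces here
                        } catch (IOException e) {
                            n = -1;             // treat a failed read like a disconnect
                        }
                        if (n < 0) {
                            key.cancel();       // stop select() reporting this socket
                            ch.close();
                            // onDisconnected-style cleanup goes here
                        } else {
                            // onReadable-style handling of n bytes goes here
                        }
                    }
                }
            }
        }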

  • How do I load PersistentDocuments into the same window

    - by Brad Stone
    I want to open NSPersistentDocuments and load them into the same window one at a time. I'm almost there but missing some steps. Hopefully someone can help me.

    I have a few saved documents on the hard drive. On launch my app opens to an untitled NSPersistentDocument and creates a separate NSWindowController. When I press the button to load file 1 off the hard drive, the data appears in the fields, but two things are wrong that I can see:

    1. Changing the data doesn't make the document dirty.
    2. Choosing save updates the persistent store (I know this because when I open the file again I see the changes), but I get an error: +entityForName: could not locate an NSManagedObjectModel for entity name 'Book'

    Here's my code, which is in the WindowController that was launched initially with the untitled document. This code isn't perfect. For example, I know I should processPendingChanges and save the current doc before I load the new one. This is test code to try to get over this hurdle:

        - (IBAction)newBookTwo:(id)sender {
            NSDocumentController *dc = [NSDocumentController sharedDocumentController];
            NSURL *url = [NSURL fileURLWithPath:[@"~/Desktop/File 2.binary" stringByExpandingTildeInPath]];
            NSError *error;
            MainWindowDocument *thisDoc = [dc openDocumentWithContentsOfURL:url display:NO error:&error];
            [self setDocument:thisDoc];
            [self setManagedObjectContext:[thisDoc managedObjectContext]];
        }

    Thanks!

  • Learning PHP - start out using a framework or no?

    - by Kevin Torrent
    I've noticed a lot of jobs in my area for PHP. I've never used PHP before, and I figure that if I can get more opportunities by picking it up, then it might be a good idea. The problem is that PHP without any framework is ugly, and 99% of the time it's really bad code. All the tutorials and books I've seen are really lousy - they never show any kind of good programming practice, only the quick and dirty way of doing things. I'm afraid that trying to learn PHP this way will just imprint these bad practices in my head and make me waste time later trying to unlearn them.

    I've used C# in the past, so I'm familiar with OOP and software design patterns and similar. Should I be trying to learn PHP by using one of the better-known frameworks for it? I've looked at CakePHP, Symfony and the Zend Framework so far; Zend seems to be the most flexible without being too constraining, unlike Cake and Symfony (although Symfony seemed less constraining than CakePHP, which is trying too hard to be Ruby on Rails), but many tutorials for Zend I've seen assume you already know PHP and want to learn to use the framework.

    What would be my best opportunity for learning PHP - but learning GOOD PHP that uses real software engineering techniques instead of spaghetti code? It seems all the PHP books and resources either assume you are just using raw PHP and therefore showcase bad practices, or that you already know PHP and therefore don't even touch on parts of the language.

  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing.

    But in my case, the typical user will spend many, many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally awaiting synchronization with a remote web service.

    I've read other pieces of advice where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.

  • why does setting stderr=subprocess.STDOUT fix a subprocess.check_output call?

    - by ShankarG
    I have a Python script running on a small server that is called in three different ways: from within another Python script, by cron, or by gammu-smsd (an SMS daemon that comes with the wonderful mobile utility gammu). The script is for maintenance and contained the following kludge to measure used space on the system (presumably this is possible from within Python, but this was quick and dirty):

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"], shell=True)[0:-1]

    Oddly enough, this line would only fail when the script was called by a shell script running from gammu-smsd. The line would fail with a CalledProcessError exception saying "returned exit status 2", even though the output attribute of the CalledProcessError object contained the correct output. The only command in the sequence of shell commands that would give such an error status is awk, with status 2 indicating a fatal error.

    If the Python script with this line was called by cron, by another Python script, or from the command line, the line would work fine. I broke my head trying to fix the environment for the script, thinking this must be the problem. Finally, though, I put in stderr=subprocess.STDOUT, like so:

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"],
            stderr=subprocess.STDOUT, shell=True)[0:-1]

    This was a debug measure to help me figure out if some output was coming on stderr. But after this the script started working, even when called from gammu-smsd! Why might this be the case? I ask for future reference when using subprocess...

  • Search and Matching algorithm

    - by Tony
    Hello everyone. I am trying to come up with an algorithm to do the following: I have a total of 12 cells that I need to fill until the program stops. I have 3 rows, and each row has 4 columns. As an example, let me illustrate this as seating in an airplane. You have 3 rows, each row has 4 columns, and you have window/aisle seats. Each row has a window seat, an aisle seat, an aisle seat and a window seat (|WA AW|, just like the seat arrangement in an airplane).

    At each iteration (a different set of passengers), there will be some number of passengers (between 1 and 12), and I need to seat each group as close together as possible. And I do this for the next group (each iteration) until the program stops (it will stop when I am done with every group).

    For example, say I have 3 passengers: A, B, and C. A wants a window seat, B wants an aisle seat, and C wants a window seat. Assuming that all 12 seats are available, I could place them like |A# BC| or |CB #A|, and mark the seats dirty (so I don't pick the same seats again for the next passengers). Then I do the same for the next group (iteration).

    I am not sure if this is the right forum, but if somebody can advise me how I should accomplish this, I would really appreciate it. Thanks.
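
    One greedy way to attack this, sketched in Java (the layout encoding and the 'W'/'A' preference flags are assumptions made for illustration, not part of the question): try each row in turn and take the first row that still has enough free seats of each requested type, marking the chosen seats dirty. On an empty cabin, seatGroup(new char[]{'W','A','W'}) yields row 0, columns 0, 1 and 3, matching the |A# BC|-style placement above.

        import java.util.*;

        // 3 rows x 4 columns; columns 0 and 3 are window seats, 1 and 2 aisle seats.
        class Cabin {
            private final boolean[][] taken = new boolean[3][4];

            private static boolean isWindow(int col) { return col == 0 || col == 3; }

            // Greedy: put the whole group in the first row that has enough free
            // seats of each requested type ('W' or 'A'); returns seats or null.
            List<int[]> seatGroup(char[] prefs) {
                int needW = 0, needA = 0;
                for (char p : prefs) { if (p == 'W') needW++; else needA++; }
                for (int row = 0; row < 3; row++) {
                    int freeW = 0, freeA = 0;
                    for (int col = 0; col < 4; col++)
                        if (!taken[row][col]) { if (isWindow(col)) freeW++; else freeA++; }
                    if (freeW < needW || freeA < needA) continue;
                    List<int[]> seats = new ArrayList<>();
                    for (char p : prefs) {
                        for (int col = 0; col < 4; col++) {
                            if (!taken[row][col] && (isWindow(col) == (p == 'W'))) {
                                taken[row][col] = true;          // mark the seat dirty
                                seats.add(new int[]{row, col});
                                break;
                            }
                        }
                    }
                    return seats;
                }
                return null; // no single row fits; a fuller version would split the group
            }
        }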

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t
            WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match another row. Right now, I am de-duping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating.

    The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such de-duping? This de-duping will be throw-away code, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows, i = current_row_no.
            Iterate over rows i+1 to last_row:
                if (col1 matches        // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be:

        A=(1,1,1,1,1+2)
        C=(2,1,1,1,1)    [notice that B has been kicked out]
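
    In the quick-and-dirty spirit the question asks for, one obvious restructuring is to replace the nested scan with a single pass that keys each row by its first four columns and aggregates column five in a hash map. A sketch, assuming tab-delimited input/output paths as program arguments, well-formed rows, and that the distinct keys fit in memory (if they don't, an external sort on the first four columns followed by one merging pass is the usual fallback):

        import java.io.*;
        import java.util.*;

        class Dedup {
            public static void main(String[] args) throws IOException {
                // Key = first four tab-separated columns; value = aggregated col 5.
                // LinkedHashMap keeps first-seen order, matching the example output.
                Map<String, StringBuilder> rows = new LinkedHashMap<>();
                try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int cut = ordinalIndexOf(line, '\t', 4); // end of 4th column
                        String key = line.substring(0, cut);
                        String col5 = line.substring(cut + 1);
                        rows.merge(key, new StringBuilder(col5),
                                   (a, b) -> a.append('+').append(b)); // aggregate col 5
                    }
                }
                try (PrintWriter out = new PrintWriter(new FileWriter(args[1]))) {
                    rows.forEach((k, v) -> out.println(k + '\t' + v));
                }
            }

            // Index of the n-th occurrence of c in s (1-based).
            private static int ordinalIndexOf(String s, char c, int n) {
                int pos = -1;
                while (n-- > 0) pos = s.indexOf(c, pos + 1);
                return pos;
            }
        }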

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open-source projects. I have forked an application from GitHub and added two new features. The first feature has been requested frequently elsewhere; I have added it, and the code and implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to fully implement it properly, or to a level that could be considered a worthwhile contribution to the main project.

    How should the versioning work? Do I just bump up my version numbers care-free and push to my master branch? It is annoying trying to work out which version is running, modified or original, as both have the same version number. But will it be confusing when, months later, my GitHub page has a version number the same as the original but both are actually completely different? (I have made pull requests etc., but that is not the context of my question.)

    The project I have forked uses Ruby jeweler, so it has a versioning format of:

        Jeweler tracks the version of your project. It assumes you will be using a
        version in the format x.y.z. x is the 'major' version, y is the 'minor'
        version, and z is the patch version.

    Is this standard for other projects/languages too? Are my changes patches? Thanks

  • client-side data storage and retrieval with html and javascript

    - by pedalpete
    I'm building what I am hoping to be a fairly simple, quick and dirty demo app. So far, I've managed to build a bunch of components using only HTML and JavaScript. I know that eventually I'll hook up a db, but at this point I'm just trying to show off some functionality. On one page, a user can select a bunch of other users (like friends). Then they go to a separate HTML page where there is some sorting info based on the selected users.

    So my first attempt was to put the selected-users object into a cookie and retrieve the cookie on the second page. Unfortunately, if the user changed their selection, the cookie wasn't getting updated, and my searches on StackOverflow seemed to say that deleting and updating cookies is unreliable. I tried:

        function updateCookie(updatedUserList) {
            jQuery.cookie('userList', null);
            jQuery.cookie('userList', updatedUserList);
        }

    but though it set the cookie to null, it wouldn't update it with the second value. So I decided to put the selected-users object into a form. Unfortunately, it looks like I can't retrieve the contents of the form on the client side, only on the server side.

    Is there another way to do this? I've worked in PHP and Rails, but I'm trying to do this quickly and simply before building it out into something larger, and am trying to avoid any server-side processing for now, which I have managed to do up to this point.

  • Clean solution to this ruby iterator trickiness?

    - by mstksg
        k = [1,2,3,4,5]
        for n in k
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

        # Result:  1 2 4 5, then [1,3,4,5]
        # Desired: 1 2 3 4 5, then [1,3,4,5]

    The same effect happens with the other array iterator, k.each:

        k = [1,2,3,4,5]
        k.each do |n|
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

    has the same output. The reason this is happening is pretty clear: Ruby doesn't actually iterate through the objects stored in the array, but rather just turns it into a plain array-index iterator, starting at index 0 and incrementing the index each time until it's done. But when you delete an item, it still increments the index, so it never evaluates the same index twice, which I want it to. This might not be exactly what's happening, but it's the best explanation I can think of.

    Is there a clean way to do this? Is there already a built-in iterator that can do this? Or will I have to dirty it up and write an array-index iterator myself, one that doesn't increment when an item is deleted?
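
    For comparison, this is exactly the situation other languages' iterators are designed around: in Java, mutating a list inside a for-each loop fails fast with ConcurrentModificationException, and the iterator's own remove() is the sanctioned escape hatch. A tiny sketch of that pattern (in Ruby itself, iterating over a copy with k.dup.each, or using k.delete_if, are the usual clean routes):

        import java.util.*;

        class IterateAndDelete {
            public static void main(String[] args) {
                List<Integer> k = new ArrayList<>(List.of(1, 2, 3, 4, 5));

                // The supported pattern: remove through the iterator itself,
                // so the cursor stays consistent and no element is skipped.
                for (Iterator<Integer> it = k.iterator(); it.hasNext(); ) {
                    int n = it.next();
                    System.out.println(n);      // visits 1, 2, 3, 4, 5
                    if (n == 2) it.remove();    // safe structural modification
                }
                System.out.println(k);          // [1, 3, 4, 5]
            }
        }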

  • How do I determine whether calculation was completed, or detect interrupted calculation?

    - by BenTobin
    I have a rather large workbook that takes a really long time to calculate. It used to be quite a challenge to get it to calculate all the way through, since Excel is so eager to silently abort calculation if you so much as look at it. To help alleviate the problem, I created some VBA code to initiate the calculation, which is kicked off by a form; the result is that it is not quite as easy to interrupt the calculation process, but it is still possible. (I can easily do this by clicking the close X on the form, but I imagine there are other ways.)

    Rather than taking more steps to try to make it harder to interrupt calculation, I'd like to have the code detect whether calculation is complete, so it can notify the user rather than just blindly forging on into the rest of the steps in my code. So far, I can't find any way to do that.

    I've seen references to Application.CalculationState, but the value is xlDone after I interrupt calculation, even if I interrupt the calculation after a few seconds (it normally takes around an hour). I can't think of a way to do this by checking the value of cells, since I don't know which one is calculated last. I see that there is a way to mark cells as "dirty", but I haven't been able to find a way to check the dirtiness of a cell, and I don't know if that's even the right path to take, since I'd likely have to check every cell in every sheet. The act of interrupting calculation does not raise an error, so my On Error doesn't get triggered. Is there anything I'm missing? Any ideas?
