Search Results

Search found 7204 results on 289 pages for 'almost dead'.


  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long, with 30-80 different controls for data entry.

    I am looking for a free software solution that would let me automate the process of filling in these forms, either by designing a script or through a UI. The other requirement is that I can't use any browser but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automated Windows form testing, since AutoHotkey's API lets you reference controls on the Windows form directly. However, AutoHotkey has no similar support for web forms (everything is just one big "InternetExplorer" control).

    While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner (http://forge.mysql.com/projects/project.php?id=214), it's almost exactly the sort of thing I'm looking for, except that Spawner generates dummy SQL data rather than dummy web form data. But I won't be picky; I've got a really short deadline to meet and had this thrust into my lap just today. ;)

    Edit 1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls currently have no tab index (a bug), and some controls trigger a page reload after modification, resulting in inconsistent control focus (tabbing gets screwed up). Our application makes heavy use of page reloads to populate fields (select a location and it auto-populates a city, for example).
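
    Since IE6 exposes a COM automation interface, one low-tech option is to drive the browser from a script and address fields by element ID, which sidesteps both the tab-index bug and the focus-stealing reloads. A minimal sketch in Python via pywin32; the URL and element IDs are hypothetical stand-ins, not taken from the question:

        import time
        import win32com.client

        ie = win32com.client.Dispatch("InternetExplorer.Application")
        ie.Visible = True
        ie.Navigate("http://intranet/bigform.aspx")  # hypothetical form URL

        # Wait out the initial load; repeat this after any postback-style reload.
        while ie.Busy or ie.ReadyState != 4:  # 4 == READYSTATE_COMPLETE
            time.sleep(0.25)

        doc = ie.Document
        doc.getElementById("txtLocation").value = "Springfield"  # hypothetical IDs
        doc.getElementById("txtCity").value = "Springfield"
        doc.getElementById("btnSubmit").click()

    Because fields are set by ID rather than by simulated keystrokes, a page reload between steps only requires repeating the Busy/ReadyState wait before touching the next control.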

    Read the article

  • Override ActiveRecord#save, Method Alias? Trying to mixin functionality into save method...

    - by viatropos
    Here's the situation: I have a User model and two modules for authentication: Oauth and Openid. Both of them override ActiveRecord#save, and both have a fair share of implementation logic. Given that I can tell when the user is trying to log in via Oauth vs. Openid, but that both modules have overridden save, how do I "finally" override save so that I can conditionally call one of the modules' implementations of it? Here is the basic structure of what I'm describing:

        module UsesOauth
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with Oauth!"
              end

              def save_with_oauth
                save
              end
            end
          end
        end

        module UsesOpenid
          def self.included(base)
            base.class_eval do
              def save
                puts "Saving with OpenID!"
              end

              def save_with_openid
                save
              end
            end
          end
        end

        module Sequencer
          def save
            if using_oauth?
              save_with_oauth
            elsif using_openid?
              save_with_openid
            else
              super
            end
          end
        end

        class User < ActiveRecord::Base
          include UsesOauth
          include UsesOpenid
          include Sequencer
        end

    I was thinking about using alias_method, but that got too complicated, because I might have 1 or 2 more similar modules. I also tried using those save_with_oauth methods (shown above), which almost works. The only thing missing is that I also need to call ActiveRecord::Base#save (the super method), something like this:

        def save_with_oauth
          # do this and that
          super.save # the rest
        end

    But I'm not allowed to do that in Ruby. Any ideas for a clever solution to this?
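
    The shape of the problem here, wrapping bookkeeping around save and then falling through to the base implementation, is what method-resolution-order chaining does. A sketch of the idea in Python's mixin style (an analogy to the module chain above, not ActiveRecord code):

        class Base:
            def save(self):
                print("base save")  # stands in for ActiveRecord::Base#save

        class OauthMixin:
            def save(self):
                print("oauth bookkeeping")  # "do this and that"
                super().save()              # fall through to the next save in the chain

        class User(OauthMixin, Base):
            pass

        User().save()  # -> oauth bookkeeping, then base save

    In Ruby the analogous fall-through is a bare super (no receiver) inside an included module's save; writing super.save fails because super already invokes the ancestor's save and returns its result, so .save is then called on that return value.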

    Read the article

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance?

    I've been one of the maintainers of the perlfaq (on GitHub), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we switched to git, we've been a bit lazy about converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate.

    Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something I could change, I think.

    What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

    - Creating patches works, but it's also a lot of work to manage.
    - Git submodules seem to only work for pulling in the entire external repo.
    - I haven't found something like svn's file externals, though that wouldn't work in both directions anyway.
    - I'd love to just fetch objects from one repo and cherry-pick them in the other.

    What's a good way to manage this?
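
    Short of sharing history, git format-patch / git am is the closest thing to moving commits between unrelated repos: it preserves authorship and commit messages, though not the commit ids themselves. A sketch of scripting the round trip, with placeholder paths and revision ranges:

        import glob
        import subprocess

        # Export the perlfaq commits since the last sync as mailbox patches.
        # "HEAD~5" and the repo paths are placeholders for illustration.
        subprocess.run(
            ["git", "format-patch", "-o", "/tmp/faq-patches", "HEAD~5",
             "--", "perlfaq1.pod"],
            cwd="/path/to/perlfaq", check=True)

        # Replay them in the perl repo, re-rooting the files under pod/
        # because the two repos keep the shared files at different paths.
        patches = sorted(glob.glob("/tmp/faq-patches/*.patch"))
        subprocess.run(["git", "am", "--directory=pod"] + patches,
                       cwd="/path/to/perl", check=True)

    Running the same steps in the opposite direction picks up fixes made on the perl side, which addresses the overwriting problem even though the two histories never share ids.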

    Read the article

  • What is the correct way to handle an object which is an instance of a class in apache.xerces?

    - by Roman
    Preface: I'm working on a docx parser for Java. The docx format is based on XML. When I read a document, its parts are unmarshalled (with JAXB), and I get a tree of elements based on the XML markup.

    Almost the problem: Some elements (the ones at very deep XML levels) are returned not as the specific classes from the docx spec (i.e. CTStyle, CTDrawing, CTInline, etc.) but as Object. Those objects are in fact instances of Xerces classes, e.g. ElementNSImpl.

    Problem: How should I handle these objects? The simplest approach is:

        CTGraphicData gData = getGraphicData();
        Object obj = gData.getAny().get(0);
        ElementNSImpl element = (ElementNSImpl) obj;

    But that doesn't seem like a good solution. I've never worked with Xerces directly. What is the better way to do this cast? (If anyone could also give me a tip about the right way to iterate through the nodes, that would be great.)

    Read the article

  • What is the best way to translate this recursive python method into Java?

    - by Simucal
    In another question I was provided with a great answer involving generating certain sets for the Chinese Postman Problem. The answer provided was:

        def get_pairs(s):
            if not s:
                yield []
            else:
                i = min(s)
                for j in s - set([i]):
                    for r in get_pairs(s - set([i, j])):
                        yield [(i, j)] + r

        for x in get_pairs(set([1, 2, 3, 4, 5, 6])):
            print x

    This will output the desired result of:

        [(1, 2), (3, 4), (5, 6)]
        [(1, 2), (3, 5), (4, 6)]
        [(1, 2), (3, 6), (4, 5)]
        [(1, 3), (2, 4), (5, 6)]
        [(1, 3), (2, 5), (4, 6)]
        [(1, 3), (2, 6), (4, 5)]
        [(1, 4), (2, 3), (5, 6)]
        [(1, 4), (2, 5), (3, 6)]
        [(1, 4), (2, 6), (3, 5)]
        [(1, 5), (2, 3), (4, 6)]
        [(1, 5), (2, 4), (3, 6)]
        [(1, 5), (2, 6), (3, 4)]
        [(1, 6), (2, 3), (4, 5)]
        [(1, 6), (2, 4), (3, 5)]
        [(1, 6), (2, 5), (3, 4)]

    This really shows off the expressiveness of Python, because it is almost exactly how I would write the pseudocode for the algorithm. I especially like the usage of yield and the way that sets are treated as first-class citizens. However, therein lies my problem. What would be the best way to:

    1. Duplicate the functionality of the yield construct in Java? Would it be best to maintain a list and append my partial results to it? How would you handle the yield keyword?
    2. Deal with the sets? I know I could use one of the Java collections that implements the Set interface and then use things like removeAll() to give me a set difference. Is that what you would do?

    Ultimately, I'm looking to reduce this method to as concise and straightforward a form as possible in Java. I'm thinking the return type of the Java version will likely be a list of int arrays or something similar. How would you handle the situations above when converting this method to Java?
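
    A useful intermediate step is to rewrite the generator without yield, accumulating the pairings into a list; this version maps almost line for line onto a Java method (my restructuring, not part of the original answer):

        def get_pairs(s):
            # List-returning version: the shape a direct Java port would take,
            # e.g. List<List<int[]>> getPairs(Set<Integer> s).
            if not s:
                return [[]]
            results = []
            i = min(s)
            for j in s - {i}:
                for r in get_pairs(s - {i, j}):
                    results.append([(i, j)] + r)
            return results

        for x in get_pairs({1, 2, 3, 4, 5, 6}):
            print(x)

    The trade-off is that the list version materialises all pairings up front, whereas yield produces them lazily; a truly lazy Java equivalent means hand-writing an Iterator over the pairings.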

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) is spent in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch, so maybe there's a better way to do it.

    Now, I would just conclude that the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right-clicked it, picked Export, and saved it as XML. Then I right-clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                    adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way, but I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Which SCM/VCS cope well with moving text between files?

    - by pfctdayelise
    Our VCS is playing havoc with our project at work, because it does some awful merging when we move information across files. The scenario is this: you have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might be listed under both B, bucket, and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time.

    For example, user A may have taken the B file and elaborated on the "kick the bucket" entry, while user B took the B and K files and moved the "kick the bucket" entry to the K file. Whichever order those changes get committed in, the VCS will probably lose entries and fail to "figure out" that an entry has been moved.

    (These entries are later automatically converted to an SQL database, but they are kept in a "human friendly" form for working on them, with lots of comments, examples, etc. So it is not acceptable to say "make your users enter SQL directly".)

    It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that can cope with this. Or a better merge algorithm? Or otherwise, maybe someone can suggest a better workflow or file arrangement to try to avoid this problem?

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster.

    My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and, understandably, less if I memcpy() with both cores simultaneously). While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second per cycle, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step.

    But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
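
    For a quick first-order measurement on any given box, one big buffer copy gets within shouting distance of what memcpy() sees. A rough probe (a sketch, not a substitute for a real benchmark suite such as STREAM):

        import time

        SIZE = 256 * 1024 * 1024  # 256 MB: large enough that caches don't help
        src = bytearray(SIZE)
        dst = bytearray(SIZE)

        start = time.perf_counter()
        dst[:] = src  # one memcpy-style block copy
        elapsed = time.perf_counter() - start

        # The copy reads SIZE bytes and writes SIZE bytes; this quotes read bandwidth.
        print("%.2f GB/s" % (SIZE / elapsed / 1e9))

    Run it a few times and take the best figure, since the first pass also pays for page faults on the freshly allocated buffers.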

    Read the article

  • Can addition of an ActionListener be short? Can I add arguments to the actionPerformed?

    - by Roman
    I have a big table containing a button in each cell. These buttons are very similar and do almost the same thing. If I add an action listener to every button in this way:

        tmp.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent evt) {
                proposition = proposition + action;
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        updatePropositionPanel();
                    }
                });
            }
        });

    then every action listener differs from all the others only by the value of action. (proposition and updatePropositionPanel are a field and a method of the class.)

    First I thought I could make it shorter by not using inner classes, so I decided to write a new ActionListener class. But then I realized that in this case proposition would not be visible to instances of that class. Then I decided to add the actionPerformed method to the current class and call addActionListener(this). But then I realized that I don't know how to give arguments to the actionPerformed method. So, how does this work? Can I add an action listener in a short and elegant way?
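
    The language-independent trick is to fix the varying value at the moment the callback is constructed, instead of trying to pass it to the handler later. A sketch of that idea in Python (an analogy only; the Swing equivalents would be one listener class taking the action in its constructor, or setActionCommand plus evt.getActionCommand()):

        from functools import partial

        def on_click(action, state):
            # 'action' was baked in when the callback was created;
            # 'state' plays the role of the shared proposition field.
            state["proposition"] += action

        state = {"proposition": ""}
        callbacks = [partial(on_click, str(n), state) for n in range(3)]

        for cb in callbacks:  # simulate three different buttons being clicked
            cb()
        print(state["proposition"])  # -> "012"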

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy, or at least elegant, way to prevent duplicate entries in polymorphic has_many :through associations? I've got two models, stories and links, that can be tagged. I'm making a conscious decision not to use a plugin here: I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp.

    To see what my question is getting at, if I run the following in the console (assuming the story and tag objects already exist in the database)

        s = Story.find_by_id(1)
        t = Tag.find_by_id(1)
        s.tags << t
        s.tags << t

    my taggings join table will have two entries added to it, each with exactly the same data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem proper to me. So, in an attempt to prevent this from happening, I added the following to my Tagging model:

        before_validation :validate_uniqueness

        def validate_uniqueness
          taggings = Tagging.find(:all, :conditions => {
            :tag_id => self.tag_id,
            :taggable_id => self.taggable_id,
            :taggable_type => self.taggable_type
          })
          if !taggings.empty?
            return false
          end
          return true
        end

    And it works almost as intended, but if I attempt to add a duplicate tag to a story or link, I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list, it calls the save! method (rather than save, sans !), which raises exceptions if something goes wrong instead of just returning false. That isn't quite what I want to happen. I suppose I could surround any attempt to add new tags with a try/catch, but that goes against the idea that you shouldn't expect your exceptions, and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want is to silently not save the object to the database because a duplicate exists?

    Read the article

  • ASP.NET NamingContainer naming convention

    - by EOLeary
    The Background

    Hello! I'm working on a project in which the client has required a lot of things to happen on a single page, and this has resulted in a rather large blob of HTML being rendered out to the client browser. The main issue is with input tags (where the runat="server" attribute is set); these tend to cause a drastic increase in markup size due to validation, UpdatePanel triggers, viewstate, and the control markup itself.

    I've done what I can to reduce the number of triggers I'm using, I'm compressing the viewstate (to something like 8% of the original viewstate size), I've gotten rid of a lot of ASP.NET validators and rolled my own, and I've been using ClientIdMode to reduce the length of the ID attributes of many ASP.NET elements. All of this combined significantly reduces the amount of HTML being sent to the client (for example, going from 2 megabytes for a request down to 500-600 kb; these are HUGE pages, mind you).

    The Issue

    One area I've been having trouble reducing is simply the auto-generated 'name' attribute of input elements:

        <input name="ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText"
               type="text" value="Investigation Week 1" maxlength="100"
               id="_taskTree_0__taskDetails_0__detailList_0__row_0__descriptionText_0"
               style="width:170px;">

    As you can see above, the name attribute is 139 out of 297 characters; that's almost 50% of the tag markup taken up by that HUGE name. Does anyone have any ideas on how to stick a hook in somewhere in ASP.NET where I can translate these or generate them differently? Say, instead of ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText, it could be a GUID like 0x0AEED4B6445A11E08F873606E0D72085, which is 105 characters shorter. Any help would be greatly appreciated!

    Read the article

  • forks in C - exercise

    - by Zka
    I'm trying to review and learn more advanced uses and options when cutting trees with forks in the jungle of C. But foolishly, I've found an example that should be very easy (I have worked with forks before and even written some code), yet I can't understand it fully. Here it comes:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            if (fork() == 0) {
                if (fork() == 0) {
                    printf("3");
                } else if ((wait(NULL)) > 0) {
                    printf("2");
                }
            } else {
                if (fork() == 0) {
                    printf("1");
                    exit(0);
                }
                if (fork() == 0) {
                    printf("4");
                }
            }
            printf("0");
            return 0;
        }

    Possible solutions are:

        3201040
        3104200
        1040302
        4321000
        4030201
        1403020

    where answers 2, 5 and 6 are correct. First of all, shouldn't there be four zeroes in the output? Second, how does one arrive at the solution at all? I've been doing this on paper for almost an hour and I'm not even close to understanding why the given solutions are more correct than the false ones (except for nr 3, since it can't end with 2, as a 0 must follow). Anyone with his forks in check who can offer a good explanation?

    Read the article

  • Linq duplicate removal with a twist

    - by Danthar
    I've got a list that contains all the status items of each order. The problem I have is that I need to remove all the items whose status/logdate combination is not the highest. For example:

        var inputs = new List<StatusItem>();

        // note that the 3rd argument is simply a modifier that adds that many
        // seconds to the current datetime, to make testing easier
        inputs.Add(new StatusItem(123, 30, 1));
        inputs.Add(new StatusItem(123, 40, 2));
        inputs.Add(new StatusItem(123, 50, 3));
        inputs.Add(new StatusItem(123, 40, 4));
        inputs.Add(new StatusItem(123, 50, 5));
        inputs.Add(new StatusItem(100, 20, 6));
        inputs.Add(new StatusItem(100, 30, 7));
        inputs.Add(new StatusItem(100, 20, 8));
        inputs.Add(new StatusItem(100, 30, 9));
        inputs.Add(new StatusItem(100, 40, 10));
        inputs.Add(new StatusItem(100, 50, 11));
        inputs.Add(new StatusItem(100, 40, 12));

        var l = from i in inputs
                group i by i.internalId into cg
                select
                    from s in cg
                    group s by s.statusId into sg
                    select sg.OrderByDescending(n => n.date).First();

    This creates a list that returns me the following:

        order 123 status 30 date 4/9/2010 6:44:21 PM
        order 123 status 40 date 4/9/2010 6:44:24 PM
        order 123 status 50 date 4/9/2010 6:44:25 PM
        order 100 status 20 date 4/9/2010 6:44:28 PM
        order 100 status 30 date 4/9/2010 6:44:29 PM
        order 100 status 40 date 4/9/2010 6:44:32 PM
        order 100 status 50 date 4/9/2010 6:44:31 PM

    This is ALMOST correct. However, the last line, which has status 50, needs to be filtered out as well, because it was overruled by status 40 later in the history list. You can tell by the fact that its date is earlier than that of the "last" status item with status 40. I was hoping someone could give me some pointers, because I'm stuck.

    Read the article

  • PHP Hashtable array optimisation.

    - by hiprakhar
    I made a PHP app that was taking about ~0.0070 sec to execute. Then I added a hashtable array with about 2000 values, and suddenly the execution time went up to ~0.0700 sec, almost 10 times the previous value. I tried commenting out the part where I search inside the hashtable array (with the array still left defined), and the execution time remains about ~0.0500 sec. The array is something like:

        $subjectinfo = array(
            'TPT753' => 'Industrial Training',
            'TPT801' => 'High Polymeric Engineering',
            'TPT802' => 'Corrosion Engineering',
            'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
            'TPT851' => 'Project');

    Is there any way to optimize this part? I cannot use a database, as I am running this app on Google App Engine, which is still not supporting the JDO database for PHP.

    Some more code from the app:

        function getsubjectinfo($name) {
            $subjectinfo = array(
                'TPT753' => 'Industrial Training',
                'TPT801' => 'High Polymeric Engineering',
                'TPT802' => 'Corrosion Engineering',
                'TPT803' => 'Decorative ,Industrial And High Performance Coatings',
                'TPT851' => 'Project');
            $name = str_replace("-", "", $name);
            $name = str_replace(" ", "", $name);
            if (isset($subjectinfo["$name"]))
                return "(".$subjectinfo["$name"].")";
            else
                return "";
        }

    Then I use the following statement 2-3 times in the app:

        echo $key." ".$this->getsubjectinfo($key)
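
    Note that the 2000-entry array literal inside getsubjectinfo() is rebuilt on every call, which by itself can account for the slowdown even when the lookup is commented out. Constructing the table once and reusing it is the usual fix; the same pattern sketched in Python (hypothetical names, not the app's real code):

        # Built once at module load, not on every lookup.
        SUBJECT_INFO = {
            "TPT753": "Industrial Training",
            "TPT801": "High Polymeric Engineering",
            # ... roughly 2000 more entries
        }

        def get_subject_info(name):
            key = name.replace("-", "").replace(" ", "")
            info = SUBJECT_INFO.get(key)
            return "(%s)" % info if info else ""

    In PHP the equivalent is declaring the array as static inside the function (or moving it to a class constant), so it is initialised once per request instead of once per call.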

    Read the article

  • paypal express checkout integration in asp classic

    - by Noam Smadja
    I have been trying to figure this out for almost a week now, with no success. I am coding in ASP Classic and would love some help. I am following the steps from the PayPal wizard here: https://www.paypal-labs.com/integrationwizard/ecpaypal/code.php

    I am collecting all the information on my website; I just want to pass it to PayPal when the buyer clicks checkout. So I pointed the checkout form to expresscheckout.asp, as instructed by the wizard. But when I click on the PayPal button, I get a white page with nothing on it. No errors, no nothing. It just hangs there on expresscheckout.asp.

    The shopping cart page:

        ..code showing a review of the shopping cart..
        ..I saved the total amount into SESSION("Payment_Amount")..

        <form action='cart/expresscheckout.asp' METHOD='POST'>
            <input type='image' name='submit'
                   src='https://www.paypal.com/en_US/i/btn/btn_xpressCheckout.gif'
                   border='0' align='top' alt='Check out with PayPal'/>
        </form>

    Read the article

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work shrugging" I mean busy processors pushing excess work towards less-loaded neighbours, rather than idle processors pulling work from busy neighbours ("work stealing").

    I think the general scalability should be the same for both strategies. However, I believe it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do than to have idle processors spinning, or waking periodically to speculatively poll all their neighbours for possible work. Anyway, a quick google didn't turn up anything under the heading of "work shrugging" or similar, so any pointers to prior art and the jargon for this strategy would be welcome.

    Clarification/Confession

    In more detail: by "work shrugging" I actually envisage the work-submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (chosen for data/code locality) to decide whether a near neighbour should be given the new work instead, because it doesn't have as much work to do. I am talking about an atomic read of the immediate (typically 2 to 4) neighbours' estimated queue lengths here. I do not think this is any more coupling than is implied by the thieves polling and stealing from their neighbours - just much less often; or rather, only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies.) Thanks.

    Read the article

  • How can I use the currently displayed node to filter a block-level view on that node's page?

    - by Deane
    I have a parent/child relationship set up via Node Reference. A Child record can have a Parent record selected from a Node Reference field (this is optional; I can have parent-less Children as well). I've created a Views block to appear on the Parent pages, below the content. It's going to show a table of all the Child nodes for that Parent. The problem is that right now it shows every Child node; I need to filter it to just the Parent being displayed.

    What I need is to add a filter to this View that effectively says, "Only show the Child nodes that are assigned to the Parent being displayed on this page." So, somehow I need to get the nid of the currently displayed node (which will always be a Parent when this block is displayed) and use that in a filter in my View. How exactly can I do this?

    (Initially I used an attachment view for this, as this page instructs: I created a page view to display the Parent, then an attachment view to display all the Children, and attached that under the page view. This worked, but it was almost absurdly complicated to set up, and it was undesirable for a number of other reasons, primarily that my Parent then had two dedicated URLs: its own node-level page and the similar page created by the view.)

    Using Drupal 6.15.

    Read the article

  • Why does the OnDeserialization not fire for XML Deserialization?

    - by Jonathan
    I have a problem which I have been bashing my head against for the better part of three hours. I am almost certain that I've missed something blindingly obvious...

    I have a simple XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <WeightStore xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Records>
            <Record actual="150" date="2010-05-01T00:00:00" />
            <Record actual="155" date="2010-05-02T00:00:00" />
          </Records>
        </WeightStore>

    I have a simple class structure:

        [Serializable]
        public class Record
        {
            [XmlAttribute("actual")]
            public double weight { get; set; }

            [XmlAttribute("date")]
            public DateTime date { get; set; }

            [XmlIgnore]
            public double trend { get; set; }
        }

        [Serializable]
        [XmlRoot("WeightStore")]
        public class SimpleWeightStore
        {
            [XmlArrayAttribute("Records")]
            private List<Record> records = new List<Record>();

            public List<Record> Records
            {
                get { return records; }
            }

            [OnDeserialized()]
            public void OnDeserialized_Method(StreamingContext context)
            {
                // This code never gets called
                Console.WriteLine("OnDeserialized");
            }
        }

    I am using these in both the calling code and the class files:

        using System.Xml.Serialization;
        using System.Runtime.Serialization;

    And I have some calling code:

        SimpleWeightStore weight_store_reload = new SimpleWeightStore();
        TextReader reader = new StringReader(xml);
        XmlSerializer deserializer = new XmlSerializer(weight_store.GetType());
        weight_store_reload = (SimpleWeightStore)deserializer.Deserialize(reader);

    The problem is that I am expecting OnDeserialized_Method to get called, and it isn't. I suspect it might have something to do with the fact that it's XML deserialization rather than runtime deserialization, and perhaps I am using the wrong attribute name, but I can't find out what it might be. Any ideas, folks?

    Read the article

  • jquery slideDown menu

    - by Elliott
    Hi, I have a link which, once hovered over, makes a box below slide down; once the hover is removed, it slides back up. I have it working almost how I want, although in IE8 (haven't tested other IE versions yet) the text in the box which slides down is not centered. In Firefox, the corners are not smooth when it slides down, but they are in IE. Live test here. Code:

        <script type="text/javascript">
        $(document).ready(function() {
            $('#nav_top').corners('5px');
            $('#nav_bottom').hide();
            $('#nav_bottom').corners();

            $('#link').hover(function() {
                $('#nav_top').corners('10px top');
                $('#nav_bottom').stop(true, true).slideDown();
                $('#nav_bottom').slideDown("slow");
            }, function () {
                $('#nav_bottom').slideUp("fast");
            });
        });
        </script>

        <div id="nav_top">
            <a href="javascript:void(0);" id="link">Hover</a>
            <div id="nav_bottom">
                <br />Stuff
            </div>
        </div>
        </div>

    Any advice? Also, is it better to use .hover or .mouseenter / .mouseleave? Thanks

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are a quite some answers existing on this question. However, I found none of them really bringing it to the point. Some argue that a cycle is (almost) the same as a strongly connected components (s. http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402) , so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back-edges (s. boost graph documentation on file dependencies). I now would like to have some suggestions on whether all cycles in a graph can be detected via DFS and checking for back-edges? My opinion is that it indeed could work that way as DFS-VISIT (s. pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex exhibits a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participant nodes exist (e.g. triangles, rectangles etc.), additional work has to be done to discriminate the acutal "shape" of each cycle.

    Read the article

  • MYSQL fetch 10 posts, each w/ vote count, sorted by vote count, limited by where clause on posts

    - by nibblebot
    I want to fetch a set of Posts with the vote count listed, sorted by vote count, e.g.:

        Post 1 - Post Body blah blah - Votes: 500
        Post 2 - Post Body blah blah - Votes: 400
        Post 3 - Post Body blah blah - Votes: 300
        Post 4 - Post Body blah blah - Votes: 200

    I have 2 tables:

        Posts - columns: id, body, is_hidden
        Votes - columns: id, post_id, vote_type_id

    Here is the query I've tried:

        SELECT p.*, v.yes_count
        FROM posts p
        LEFT JOIN (
            SELECT post_id, vote_type_id, COUNT(1) AS yes_count
            FROM votes
            WHERE (vote_type_id = 1)
            GROUP BY post_id
            ORDER BY yes_count DESC
            LIMIT 0, 10
        ) v ON v.post_id = p.id
        WHERE (p.is_hidden = 0)
        ORDER BY yes_count DESC
        LIMIT 0, 10

    Correctness: The above query almost works. The subselect includes votes for posts that have is_hidden = 1, so when I left join it to posts, if a hidden post is in the top 10 (ranked by votes), I can end up with records with NULL in the yes_count field.

    Performance: I have ~50k posts and ~500k votes. On my dev machine, the above query runs in 0.4 sec. I'd like to stay at or below this execution time.

    Indexes: I have an index on the Votes table that covers the fields vote_type_id and post_id.

    EXPLAIN:

        id  select_type  table       type  possible_keys  key         key_len  ref   rows    Extra
        1   PRIMARY      p           ALL   NULL           NULL        NULL     NULL  45985   Using where; Using temporary; Using filesort
        1   PRIMARY      <derived2>  ALL   NULL           NULL        NULL     NULL  10
        2   DERIVED      votes       ref   VotingPost     VotingPost  4              319881  Using where; Using index; Using temporary; Using filesort

    Read the article

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send one of our clients a data file, with data from the database, that he/she needs to translate. Most of the time this export is CSV or XLS. Most of the time we create a CSV dump with phpMyAdmin and get an XLS file in return with the translated data. The problem is that the data is usually UTF-8, and every time the file comes back as XLS and we load the data into MySQL again, we end up with UTF-8 problems: characters not being displayed properly, etc. We've already double-checked everything in MySQL, from my.cnf to column character sets, and everything is correctly set to UTF-8.

    My question is not how to fix the encoding issue, since that's been solved, but how we would best proceed in the future when handling this situation. What export format should we hand over? How should we import (just MySQL LOAD DATA INFILE, or our own processing scripts)? What is the general consensus on how to handle this?

    We would like to continue using Excel if possible, since that's the format almost everybody expects, including our clients' translation agencies. Our clients' ease of use is the most important factor here, without it overloading us with major issues each time. The best of both worlds :)
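
    One detail that often decides the Excel round trip is the byte-order mark: Excel assumes a legacy codepage for a plain UTF-8 CSV, but honours the encoding when the file starts with a BOM. A sketch of an export written that way in Python (the column names are made up):

        import csv

        rows = [{"id": 1, "text": "café, naïve, Größe"}]  # sample UTF-8 data

        # "utf-8-sig" prepends the BOM that tells Excel the file is UTF-8.
        with open("translations.csv", "w", encoding="utf-8-sig", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["id", "text"])
            writer.writeheader()
            writer.writerows(rows)

    On the way back in, having the agency save from Excel as "Unicode Text" (UTF-16, tab-separated) and converting to UTF-8 before LOAD DATA INFILE tends to be more reliable than trusting Excel's own CSV encoding.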

    Read the article

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1 GB MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. those where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program.

    Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of the SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based on a string, but the text file is way too big. It times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
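
    Since the problem is the size of the intermediate file rather than the parsing itself, one approach is to stream mysqlbinlog's output through a line filter so the full dump never touches disk. A sketch in Python (the binlog file name and match string are placeholders):

        import subprocess

        proc = subprocess.Popen(
            ["mysqlbinlog", "--short-form", "binlog.000001"],  # placeholder name
            stdout=subprocess.PIPE,
            universal_newlines=True)

        with open("matches.sql", "w") as out:
            for line in proc.stdout:  # holds one line in memory at a time
                if line.startswith("INSERT INTO table SET field1="):
                    out.write(line)
        proc.wait()

    On a Unix box, the one-line equivalent is piping mysqlbinlog straight into grep, which likewise never materialises the multi-hundred-megabyte file.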

    Read the article

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point.

        def compute_nearest_points(lat, lon, nPoints=5):
            """Find the nearest N points, given the input coordinates."""
            points = session.query(PointIndex).all()
            oldNearest = []
            newNearest = []
            for n in xrange(nPoints):
                oldNearest.append(PointDistance(None, None, None, 99999.0, 99999.0))
                newNearest.append(obj2)
                # This is almost certainly an inappropriate use of deepcopy
                # but how SHOULD I be doing this?!?!

            for point in points:
                distance = compute_spherical_law_of_cosines(lat, lon, point.avg_lat, point.avg_lon)
                k = 0
                for p in oldNearest:
                    if distance < p.distance:
                        newNearest[k] = PointDistance(point.point, point.kana, point.english,
                                                      point.avg_lat, point.avg_lon, distance=distance)
                        break
                    else:
                        newNearest[k] = deepcopy(oldNearest[k])
                    k += 1
                for j in range(k, nPoints - 1):
                    newNearest[j + 1] = deepcopy(oldNearest[j])
                oldNearest = deepcopy(newNearest)

            # We're done, now print the result
            for point in oldNearest:
                print point.station, point.english, point.distance
            return

    I initially wrote this in C, using the exact same approach, and it works fine there; it is basically instantaneous for nPoints <= 100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff.

    I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references (I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now, with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty and now takes several seconds to do the same job.

    This seems like a pretty common job, but I'm clearly not doing it the Pythonic way. How should I be doing this so that I still get correct results but don't have to include deepcopy everywhere?
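
    For the underlying "keep the nearest N" task, the copying can be avoided entirely: heapq selects the N best items by a key function without ever mutating or aliasing list slots. A sketch against the same names used in the question:

        import heapq

        def compute_nearest_points(lat, lon, nPoints=5):
            points = session.query(PointIndex).all()
            # nsmallest keeps references to the nPoints closest rows only;
            # nothing is copied, deeply or otherwise.
            return heapq.nsmallest(
                nPoints, points,
                key=lambda p: compute_spherical_law_of_cosines(
                    lat, lon, p.avg_lat, p.avg_lon))

    The deepcopy calls in the original are compensating for rebinding list slots to objects that are still referenced from the other list; selecting into a fresh list sidesteps that aliasing problem altogether.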

    Read the article

  • DDD and MVC: Difference between 'Model' and 'Entity'

    - by Nathan Loding
    I'm seriously confused about the concept of the 'Model' in MVC. Most frameworks that exist today put the Model between the Controller and the database, and the Model acts almost like a database abstraction layer. The concept of 'Fat Model, Skinny Controller' is lost as the Controller starts doing more and more logic.

    In DDD, there is also the concept of a Domain Entity, which has a unique identity. As I understand it, a user is a good example of an Entity (unique user id, for instance). The Entity has a life cycle: its values can change throughout the course of the action, and then it's saved or discarded.

    The Entity I describe above is what I thought the Model was supposed to be in MVC. How off base am I?

    To clutter things more, you throw in other patterns, such as the Repository pattern (maybe putting a Service in there). It's pretty clear how the Repository would interact with an Entity -- but how does it interact with a Model? Controllers can have multiple Models, which makes it seem like a Model is less a "database table" than a unique Entity.

    So, in very rough terms, which is better? No "Model", really:

        class MyController {
            public function index() {
                $repo = new PostRepository();
                $posts = $repo->findAllByDateRange('within 30 days');
                foreach ($posts as $post) {
                    echo $post->Author;
                }
            }
        }

    Or this, which has a Model as the DAO?

        class MyController {
            public function index() {
                $model = new PostModel(); // maybe this returns a PostRepository?
                $posts = $model->findAllByDateRange('within 30 days');
                while ($posts->getNext()) {
                    echo $posts->Post->Author;
                }
            }
        }

    Neither of those examples does quite what I was describing above. I'm clearly lost. Any input?

    Read the article
