Search Results

Search found 2610 results on 105 pages for 'dna sequence'.

Page 87/105 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • Idiomatic way to build a custom structure from XML zipper in Clojure

    - by Checkers
    Say, I'm parsing an RSS feed and want to extract a subset of information from it. (def feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)) I can get links and titles separately: (xml-> feed :channel :item :link text) (xml-> feed :channel :item :title text) However, I can't figure out a way to extract them at the same time without traversing the zipper more than once, e.g. (let [feed (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)] (zipmap (xml-> feed :channel :item :link text) (xml-> feed :channel :item :title text))) ...or a variation thereof, involving mapping multiple sequences to a function that incrementally builds a map with, say, assoc. Not only do I have to traverse the sequence multiple times, the sequences also have separate states, so elements must be "aligned", so to speak. That is, in a more complex case than RSS, a sub-element may be missing from a particular element, making one of the sequences shorter by one (there are no gaps), so the result may actually be incorrect. Is there a better way, or is this, in fact, the way you do it in Clojure?
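    A minimal sketch of one way to avoid the double traversal, assuming the same xml->/text helpers used above (xml1-> is their single-result variant): walk to the :item locations once, then read both :link and :title from each item, which also keeps the pairs aligned when a sub-element is missing.

    ```clojure
    ;; sketch: one pass over the items; link and title come from the same loc,
    ;; so a missing sub-element yields nil instead of shifting the whole sequence
    (let [feed  (-> "http://..." clojure.xml/parse clojure.zip/xml-zip)
          items (xml-> feed :channel :item)]
      (into {} (for [item items]
                 [(xml1-> item :link text)
                  (xml1-> item :title text)])))
    ```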

    Read the article

  • Flash CS4 [AS3]: Playing Card Deck Array

    - by Ben
    I am looking to make a card game in Flash CS4 using AS3 and I am stuck on the very first step. I have created graphics for a standard 52 card deck of playing cards and imported them into the library in Flash and then proceeded to convert them all to Movie Clips. I have also used the linkage to make them available in the code. The movie clips and the linkage are named in sequence, as in the Ace of Clubs would be C1, two of Diamonds is called D2, Jack of Spades is S11. (C = Clubs, D = Diamonds, S = Spades, H = Hearts and numbers 1 through 13 are the card values. 1 being Ace, 11 being Jack, 12 being Queen, 13 being King). As far as I know my next step would be to arrange the cards into an array. This is the part that I am having problems with. Can someone please point me in the right direction, what would be the best way to do this. Could you provide me with a bit of sample code as well? I have had a look at few tutorials online but they are all telling me different things, some are incomplete and the rest...well...they're just cr*ppy. Thanks in advance! Ben
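    A rough sketch of one way to build the deck array from those linkage names, assuming the classes are exported exactly as C1-C13, D1-D13, S1-S13, H1-H13: getDefinitionByName (flash.utils) lets the code look each class up by its string name, and a simple Fisher-Yates pass shuffles the result.

    ```actionscript
    import flash.utils.getDefinitionByName;

    var deck:Array = [];
    var suits:Array = ["C", "D", "S", "H"];
    for each (var suit:String in suits) {
        for (var value:int = 1; value <= 13; value++) {
            // "C1", "D12", ... must match the linkage class names exactly
            var CardClass:Class = getDefinitionByName(suit + value) as Class;
            deck.push(new CardClass());
        }
    }

    // shuffle in place before dealing
    for (var i:int = deck.length - 1; i > 0; i--) {
        var j:int = Math.floor(Math.random() * (i + 1));
        var tmp:Object = deck[i];
        deck[i] = deck[j];
        deck[j] = tmp;
    }
    ```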

    Read the article

  • Implementing scroll view that is much larger than the screen view with random images

    - by pulegium
    What I'm trying to do is to implement something like the fruit machine scroll view. Basically I have a sequence of images (randomly generated on the fly) and I want to allow the users to scroll through this line. There can fit approx 10 images on the screen, but the line is virtually unlimited. My question is what would be the best approach in implementing this? So far I've thought of having a UIImageView going across the screen (with width equal to the sum of 10 images) and the image associated with it would be a combination of 12 or so images, with two images falling out of the visible area, this would allow for smooth scrolling. If the user scrolls further, then I would reconstruct the image associated with the view so the new images are appended and the old one's are discarded. This image reconstruction business sounds bit complicated, so I was wondering if there's a more logical way to implement this. There's one more thing, I want to have two lines crossing each other with images, bit like conveyor belts crossing. If that makes any sense... Bit like below: V1 V2 H1 H2 H3 H4 H5 V4 V5 So if the vertical belt is moved it'd be like: V2 H3 H1 H2 V4 H4 H5 V5 V6 H1-H5, V1-V6 being automatically generated images. I'm not asking for the implementation code, just the thoughts on the principles how to implement this. Thanks!! :)

    Read the article

  • How to push a new feature to a central Mercurial repo?

    - by Sly
    I'm assigned the development of a feature for a project. I'm going to work on that feature for several days over a period of a few weeks. I'll clone the central repo. Then I'm going to work locally for 3 weeks. I'll commit my progress to my repo several times during that process. When I'm done, I'm going to pull/merge/commit before I push. What is the right way to push my feature as a single changeset to the central repo? I don't want to push 14 "work in progress" changesets and 1 "merged" changeset to the central repo. I want other collaborators on the project to see only one changeset with a significant commit message (such as "Implemented feature ABC"). I'm new to Mercurial and DVCS, so don't hesitate to provide guidance if you think I'm not approaching this the right way. <My own answer> So far I came up with a way of reducing 15 changesets to 2 changesets. Suppose changesets 10 to 24 are "work in progress" changesets. I can 'hg collapse -r 10:24 -m "Implemented feature ABC"' (14 changesets collapsed into 1). Then, I must 'hg pull' + 'hg merge' + 'hg commit -m "Merged with most recent changes"'. But now I'm stuck with 2 changesets. I can no longer 'hg collapse', because pull/merge/commit broke my changeset sequence. Of course 2 changesets is better than 15, but still, I'd rather have 1 changeset. </My own answer>
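    One possible sequence, sketched on the assumption that the bundled rebase extension is enabled and that the work-in-progress changesets are 10:24 as in the example: pull first, then rebase the feature changesets onto the freshly pulled head with --collapse, which folds them into a single changeset and avoids a separate merge changeset altogether.

    ```sh
    hg pull                                  # bring in the upstream changes
    hg rebase --source 10 --dest tip --collapse
    # after the pull, tip is the upstream head; Mercurial prompts here for the
    # message of the single collapsed changeset, e.g. "Implemented feature ABC"
    hg push
    ```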

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • Read attributes of MSBuild custom tasks via events in the Logger

    - by gt
    I am trying to write a MSBuild logger module which logs information when receiving TaskStarted events about the Task and its parameters. The build is run with the command: MSBuild.exe /logger:MyLogger.dll build.xml Within the build.xml is a sequence of tasks, most of which have been custom written to compile a (C++ or C#) solution, and are accessed with the following custom Task: <DoCompile Desc="Building MyProject 1" Param1="$(Param1Value)" /> <DoCompile Desc="Building MyProject 2" Param1="$(Param1Value)" /> <!-- etc --> The custom build task DoCompile is defined as: public class DoCompile : Microsoft.Build.Utilities.Task { [Required] public string Description { set { _description = value; } } // ... more code here ... } Whilst the build is running, as each task starts, the logger module receives IEventSource.TaskStarted events, subscribed to as follows: public class MyLogger : Microsoft.Build.Utilities.Logger { public override void Initialize(Microsoft.Build.Framework.IEventSource eventSource) { eventSource.TaskStarted += taskStarted; } private void taskStarted(object sender, Microsoft.Build.Framework.TaskStartedEventArgs e) { // write e.TaskName, attributes and e.Timestamp to log file } } The problem I have is that in the taskStarted() method above, I want to be able to access the attributes of the task for which the event was fired. I only have access to the logger code and cannot change either the build.xml or the custom build tasks. Can anyone suggest a way I can do this?
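    Since only the logger can change, one hedged sketch (not the only approach) is to re-open the project file named in the event args and read the raw attributes of the matching task elements with LINQ to XML; TaskStartedEventArgs carries ProjectFile, and the attribute values come back unexpanded (e.g. $(Param1Value) rather than its value).

    ```csharp
    // sketch of a taskStarted handler for the MyLogger class above
    // (requires: using System; using System.Xml.Linq; using Microsoft.Build.Framework;)
    private void taskStarted(object sender, TaskStartedEventArgs e)
    {
        XDocument project = XDocument.Load(e.ProjectFile);   // the file that fired the event
        XNamespace ns = project.Root.Name.Namespace;
        foreach (XElement task in project.Descendants(ns + e.TaskName))
        {
            foreach (XAttribute attr in task.Attributes())
            {
                // write to the log file in the real logger; Console is just for the sketch
                Console.WriteLine("{0} {1}={2} ({3})",
                    e.TaskName, attr.Name, attr.Value, e.Timestamp);
            }
        }
    }
    ```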

    Read the article

  • Should I Solve this with Multithreading in Ruby?

    - by viatropos
    I have a strange case, here's the sequence of actions: User edits a document and hits save Application sends GET request to service Service sends POST request back to application in the middle of responding to the GET request Application, in the same state as when it made the GET request, responds to the POST request (sends document data) to service. Service sends data back to Application (responding to original GET request) Application handles the rest... The use case is this: I was thinking how can I make Yahoo Pipes POST data? Specifically, I want it to be able to update Google Docs when a user makes a change locally (on a custom editor). So user edits doc, makes GET request to Yahoo Pipes, Pipes makes a POST request back to App to get the document (Pipes can only make this type of POST request), App sends doc, Pipes formats data according to the Google API, Pipes responds to GET request with Google API formatted XML, App makes the post request. Theoretically, how would I accomplish this? It seems that I need to create a separate ruby Process for the GET request, and when Pipes sends the POST request, I find that process and send its output, then I'm stuck. This would cut out the need for a database for this particular case (I could save the stuff temporarily in a database, but that doesn't seem right). Any ideas? This would make it so I don't have to format things to the Google API in ruby, I could leave that to Pipes.

    Read the article

  • general learning methodology

    - by momo
    Just wanted to hear about the different general learning paths people embark on when learning a new language/framework. The one I currently use, which is how I learned bash and am currently learning Python, is: an instant-hacking tutorial (a very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc. and how they are generally used); an in-depth tutorial with good programming style, slightly topic-specific (e.g. Mark Pilgrim's Dive into Python); important topics for me personally are regex methods, file IO, and the ways the different data types are best utilized (I wrote a very primitive Bayesian spam filter using Python's dictionaries to keep track of word occurrences); and spaced repetition of syntax or short recipes (I use Anki, with questions like 'create a dictionary with filename and filesize metadata, human-readable' or simpler ones like 'match 0-3 occurrences of the letter M in a string', or 'return/create an iterator from two sequences'). The use of spaced repetition has been invaluable, and I credit it with the ease with which I can recall/create Python algorithms. However, I've recently started looking into Django, and I've found that spaced repetition, at least in my case, doesn't work very well for learning a framework; it works best with short code recipes (either that, or I should start looking into more basic Django framework tutorials). The problem I'm encountering is that framework programming is not only algorithms but actually learning the API, which can be quite complex since you have to learn all the methods, the modules, the places where they are stored, and the sequence in which things have to be done. For example, in Django, to start a project that deals with polls (from the Django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes that are present in the models module), edit the urls.py file, etc. I found that my spaced-repetition method didn't work very well for this type of learning, so I wanted to ask you guys what method(s) you use for learning the different frameworks/APIs.

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • dealing cards in Clojure

    - by Ralph
    I am trying to write a Spider Solitaire player as an exercise in learning Clojure. I am trying to figure out how to deal the cards. I have created (with the help of stackoverflow), a shuffled sequence of 104 cards from two standard decks. Each card is represented as a (defstruct card :rank :suit :face-up) The tableau for Spider will be represented as follows: (defstruct tableau :stacks :complete) where :stacks is a vector of card vectors, 4 of which contain 5 cards face down and 1 card face up, and 6 of which contain 4 cards face down and 1 card face up, for a total of 54 cards, and :complete is an (initially) empty vector of completed sets of ace-king (represented as, for example, king-hearts, for printing purposes). The remainder of the undealt deck should be saved in a ref (def deck (ref seq)) During the game, a tableau may contain, for example: (struct-map tableau :stacks [[AH 2C KS ...] [6D QH JS ...] ... ] :complete [KC KS]) where "AH" is a card containing {:rank :ace :suit :hearts :face-up false}, etc. How can I write a function to deal the stacks and then save the remainder in the ref?
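    A rough sketch of a deal function, assuming shuffled is the 104-card sequence and deck/tableau are the ref and structs described above (4 stacks of 6 cards and 6 stacks of 5, with the last card of each stack turned face up):

    ```clojure
    (defn deal [shuffled]
      (let [sizes  (concat (repeat 4 6) (repeat 6 5))   ; 4 stacks of 6, 6 stacks of 5 = 54 cards
            [dealt undealt] (split-at (reduce + sizes) shuffled)
            flip   (fn [stack]                          ; turn the last card of a stack face up
                     (conj (vec (butlast stack))
                           (assoc (last stack) :face-up true)))
            stacks (loop [cards dealt, sizes sizes, acc []]
                     (if (empty? sizes)
                       acc
                       (let [[stack more] (split-at (first sizes) cards)]
                         (recur more (rest sizes) (conj acc (flip stack))))))]
        (dosync (ref-set deck undealt))                 ; park the 50 undealt cards in the ref
        (struct-map tableau :stacks stacks :complete [])))
    ```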

    Read the article

  • Raw types and subtyping

    - by Dmitrii
    We have a generic class SomeClass<T>{ } We can write the line: SomeClass s = new SomeClass<String>(); It's OK, because the raw type is a supertype of the generic type. But SomeClass<String> s = new SomeClass(); is correct too. Why is it correct? I thought that type erasure happened before type checking, but that's wrong. From the Hacker's Guide to Javac: when the Java compiler is invoked with the default compile policy it performs the following passes: parse: reads a set of *.java source files and maps the resulting token sequence into AST nodes. enter: enters symbols for the definitions into the symbol table. process annotations: if requested, processes annotations found in the specified compilation units. attribute: attributes the syntax trees. This step includes name resolution, type checking and constant folding. flow: performs data flow analysis on the trees from the previous step. This includes checks for assignments and reachability. desugar: rewrites the AST and translates away some syntactic sugar. generate: generates source files or class files. Generics are syntactic sugar, hence type erasure happens in pass 6, after type checking, which happens in pass 4. I'm confused.

    Read the article

  • Trigger ad-hoc activity within a workflow

    - by Chris Taylor
    I am looking to use WF 4 to replace an existing workflow solution we have. One feature that is currently used in the existing workflow engine is the ability to cancel a current activity and loop back to a FlowSwitch-type activity. So, given the following crude workflow, we start at 'O' and, based on the input data, the workflow follows the path to 'A2', which is currently blocking on a bookmark waiting for input. ---------A1--\ | \ /\ \ O------- ---->--(A2)-------| ^ \/ / | | | / | | ---------A3--/ | | | |----------------------| However, in the meantime some out-of-band data comes in that means we should cancel 'A2' and return to the FlowSwitch to re-evaluate based on the new data. The question is, what is the best way to handle the out-of-band data that arrived? My initial guess is to have a Parallel activity with one branch waiting for out-of-band data and the other branch containing the workflow sequence described above. If data came in on the branch waiting for the out-of-band data, how would I cancel the current activity in the workflow and force it to return to the FlowSwitch? Or, of course, is there a better way to handle this? I have not actually done any work with the WF4 stuff, or WF3 for that matter, so I might be missing something obvious here.

    Read the article

  • Conversion failed when converting the varchar value to int

    - by onedaywhen
    Microsoft SQL Server 2008 (SP1), getting an unexpected 'Conversion failed' error. Not quite sure how to describe this problem, so below is a simple example. The CTE extracts the numeric portion of certain IDs using a search condition to ensure a numeric portion actually exists. The CTE is then used to find the lowest unused sequence number (kind of): CREATE TABLE IDs (ID CHAR(3) NOT NULL UNIQUE); INSERT INTO IDs (ID) VALUES ('A01'), ('A02'), ('A04'), ('ERR'); WITH ValidIDs (ID, seq) AS ( SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER) FROM IDs WHERE ID LIKE 'A[0-9][0-9]' ) SELECT MIN(V1.seq) + 1 AS next_seq FROM ValidIDs AS V1 WHERE NOT EXISTS ( SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1 ); The error is, 'Conversion failed when converting the varchar value 'RR' to data type int.' I can't understand why the value ID = 'ERR' should be being considered for conversion because the predicate ID LIKE 'A[0-9][0-9]' should have removed the invalid row from the resultset. When the base table is substituted with an equivalent CTE the problem goes away i.e. WITH IDs (ID) AS ( SELECT 'A01' UNION ALL SELECT 'A02' UNION ALL SELECT 'A04' UNION ALL SELECT 'ERR' ), ValidIDs (ID, seq) AS ( SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER) FROM IDs WHERE ID LIKE 'A[0-9][0-9]' ) SELECT MIN(V1.seq) + 1 AS next_seq FROM ValidIDs AS V1 WHERE NOT EXISTS ( SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1 ); Why would a base table cause this error? Is this a known issue? UPDATE @sgmoore: no, doing the filtering in one CTE and the casting in another CTE still results in the same error e.g. WITH FilteredIDs (ID) AS ( SELECT ID FROM IDs WHERE ID LIKE 'A[0-9][0-9]' ), ValidIDs (ID, seq) AS ( SELECT ID, CAST(RIGHT(ID, 2) AS INTEGER) FROM FilteredIDs ) SELECT MIN(V1.seq) + 1 AS next_seq FROM ValidIDs AS V1 WHERE NOT EXISTS ( SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1 );
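    One common workaround, sketched against the tables above: SQL Server gives no guarantee that the WHERE filter runs before the expressions in the SELECT list, so rows like 'ERR' can still reach the CAST; guarding the conversion with a CASE removes the dependency on evaluation order.

    ```sql
    WITH ValidIDs (ID, seq) AS (
        SELECT ID,
               CAST(CASE WHEN ID LIKE 'A[0-9][0-9]'
                         THEN RIGHT(ID, 2) END AS INTEGER)
        FROM IDs
        WHERE ID LIKE 'A[0-9][0-9]'
    )
    SELECT MIN(V1.seq) + 1 AS next_seq
    FROM ValidIDs AS V1
    WHERE NOT EXISTS (SELECT * FROM ValidIDs AS V2 WHERE V2.seq = V1.seq + 1);
    ```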

    Read the article

  • Clearly defined Rails routing problem - undefined method for Nil:NilClass

    - by sscirrus
    Guys and girls, I have been working on this problem for a while but still no joy. This is my second question within this general area, because the last question was getting too long and this is now more well-defined. Summary of the Problem: I am loading a page for my customers and I get error: undefined method 'name' for Nil:NilClass My Code #Link on views/users/show.html.erb: <%= link_to "Customer Account", :action => "home", :controller => "customers", :id => @user.user_type_id %> #Regular Route: map.connect 'customers/home/:id', :controller => 'customers', :action => 'home' #Rake Routes, first entry: /customers/home/:id :controller=>:"customers", :action=>"home" #Customers Controller: def home render :layout => 'home' @customer = Customer.find(params[:id]) @user = @current_user_session.user flash[:error] = "Customer not found" and return unless @customer @jobs = @customer.jobs end #views/customers/home.html.erb: <%= @customer.name %> I have absolutely no idea why this seemingly clear sequence of events is resulting in a NilClass. Search the console for Customer.find(2) returns the correct customer. What is this noob missing? Thank you very much.
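    A likely culprit, sketched rather than confirmed: render :layout => 'home' is the first statement of the action, so the template is rendered while @customer is still nil; assigning first and rendering last avoids the NilClass error.

    ```ruby
    # sketch of the reordered action
    def home
      @customer = Customer.find(params[:id])
      @user = @current_user_session.user
      flash[:error] = "Customer not found" and return unless @customer
      @jobs = @customer.jobs
      render :layout => 'home'
    end
    ```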

    Read the article

  • XML/RDF to Java Objects with XSD

    - by bmucklow
    So here's the scenario...I have an XSD file describing all the objects that I need. I can create the objects in Java using JAXB no problem. I have an XML/RDF file that I need to parse into those objects. What is the EASIEST way to do this? I have been looking into Jena and have played around with it, but can't see how to easily map the XML/RDF file to the XSD objects that were generated. Here is a snippet of the XSD file as well as the XML/RDF file: <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:a="http://langdale.com.au/2005/Message#" xmlns:sawsdl="http://www.w3.org/ns/sawsdl" targetNamespace="http://iec.ch/TC57/2007/profile#" elementFormDefault="qualified" attributeFormDefault="unqualified" xmlns="http://langdale.com.au/2005/Message#" xmlns:m="http://iec.ch/TC57/2007/profile#"> <xs:annotation/> <xs:element name="Profile" type="m:Profile"/> <xs:complexType name="Profile"> <xs:sequence> <xs:element name="Breaker" type="m:Breaker" minOccurs="0" maxOccurs="unbounded"/> And the XML/RDF: <!-- CIM XML Output For switch783:(295854688) --> <cim:Switch rdf:ID="Switch_295854688"> <cim:IdentifiedObject.mRID>Switch_295854688</cim:IdentifiedObject.mRID> <cim:IdentifiedObject.aliasName>Switch_295854688</cim:IdentifiedObject.aliasName> <cim:ConductingEquipment.phases rdf:resource="http://iec.ch/TC57/2009/CIM-schema-cim14#PhaseCode.ABC" /> <cim:Switch.circuit2>0001406</cim:Switch.circuit2> <cim:Equipment.Line rdf:resource="#Line_0001406" />

    Read the article

  • How to have an iCalendar (RFC 2445) repeat YEARLY with duration

    - by Todd Brooks
    I have been unsuccessful in formulating a RRULE that would allow an event as shown below: Repeats YEARLY, from first Sunday of April to last day of May, occuring on Monday, Wednesday and Friday, until forever. FREQ=YEARLY;BYMONTH=4;BYDAY=SU (gives me the first Sunday of April repeating yearly) and FREQ=YEARLY;BYMONTH=5;BYMONTHDAY=-1 (gives me the last day of May repeating yearly) But I can't figure out how to have the event repeat yearly between those dates for Monday, Wednesday and Friday. Suggestions? Update: Comments don't have enough space to respond to Chris' answer, so I am editing the question with further information. Unfortunately, no. I don't know if it is the DDay.iCal library I'm using, or what, but that doesn't work either. I've found that the date start can't be an ordinal date (first Sunday, etc.)..it has to be a specific date, which makes it difficult for my requirements. Even using multiple RRULE's it doesn't seem to work: BEGIN:VCALENDAR VERSION:2.0 PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN BEGIN:VEVENT CREATED:20090717T033307Z DTSTAMP:20090717T033307Z DTSTART:20090101T000000 RRULE:FREQ=YEARLY;WKST=SU;BYDAY=MO,WE,FR;BYMONTH=4,5 RRULE:FREQ=YEARLY;WKST=SU;BYDAY=1SU;BYMONTH=4 RRULE:FREQ=YEARLY;WKST=SU;BYMONTH=5;BYMONTHDAY=-1 SEQUENCE:0 UID:352ed9d4-04d0-4f06-a094-fab7165e5c74 END:VEVENT END:VCALENDAR That looks right on the face (I'm even starting the event on 1/1/2009), but when I start testing whether certain days are valid, I get incorrect results. For example, 4/1/2009 12:00:00 AM = True // Should be False 4/6/2009 12:00:00 AM = True 4/7/2009 12:00:00 AM = False 4/8/2009 12:00:00 AM = True 5/1/2009 12:00:00 AM = True 5/2/2009 12:00:00 AM = False 5/29/2009 12:00:00 AM = True 5/31/2009 12:00:00 AM = True // Should be False 6/1/2009 12:00:00 AM = False I'm using Douglas Day's DDay.iCal software, but I don't think it is a bug in that library. I think this might be a limitation in iCalendar (RFC 2445). Thoughts?

    Read the article

  • How to get associated URLRequest from Event.COMPLETE fired by URLLoader

    - by matt lohkamp
    So let's say we want to load some XML - var xmlURL:String = 'content.xml'; var xmlURLRequest:URLRequest = new URLRequest(xmlURL); var xmlURLLoader:URLLoader = new URLLoader(xmlURLRequest); xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{ trace('loaded',xmlURL); trace(XML(e.target.data)); }); If we need to know the source URL for that particular XML doc, we've got that variable to tell us, right? Now let's imagine that the xmlURL variable isn't around to help us - maybe we want to load 3 XML docs, named in sequence, and we want to use throwaway variables inside of a for-loop: for(var i:uint = 3; i > 0; i--){ var xmlURLLoader:URLLoader = new URLLoader(new URLRequest('content'+i+'.xml')); xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{ trace(e.target.src); // I wish this worked... trace(XML(e.target.data)); }); } Suddenly it's not so easy, right? I hate that you can't just say e.target.src or whatever - is there a good way to associate URLLoaders with the URL they loaded data from? Am I missing something? It feels unintuitive to me.
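    One workaround, as a sketch: wrap each load in a helper function so that every listener closes over its own copy of the URL (a Dictionary keyed by loader would work too, but the closure keeps it short).

    ```actionscript
    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    // the helper gives each listener its own closure over `url`,
    // so the source URL is still known when Event.COMPLETE fires
    function loadXML(url:String):void {
        var loader:URLLoader = new URLLoader(new URLRequest(url));
        loader.addEventListener(Event.COMPLETE, function(e:Event):void {
            trace('loaded', url);
            trace(XML(e.target.data));
        });
    }

    for (var i:uint = 3; i > 0; i--) {
        loadXML('content' + i + '.xml');
    }
    ```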

    Read the article

  • How to correctly waitFor() a saveScreenShot() end of execution.

    - by Alain
    Here is my full first working test: var expect = require('chai').expect; var assert = require('assert'); var webdriverjs = require('webdriverjs'); var client = {}; var webdriverOptions = { desiredCapabilities: { browserName: 'phantomjs' }, logLevel: 'verbose' }; describe('Test mysite', function(){ before(function() { client = webdriverjs.remote( webdriverOptions ); client.init(); }); var selector = "#mybodybody"; it('should see the correct title', function(done) { client.url('http://localhost/mysite/') .getTitle( function(err, title){ expect(err).to.be.null; assert.strictEqual(title, 'My title page' ); }) .waitFor( selector, 2000, function(){ client.saveScreenshot( "./ExtractScreen.png" ); }) .waitFor( selector, 7000, function(){ }) .call(done); }); after(function(done) { client.end(done); }); }); OK, it does not do much, but after working many hours to get the environment correctly set up, it passed. Now, the only way I got it working was by playing with the waitFor() method and adjusting the delays. It works, but I still do not understand how to reliably wait for a PNG file to be saved on disk. As I will have to deal with test ordering, I will eventually exit the test script before the file has been safely saved. So, how can I improve this screen-save sequence and avoid losing my screenshot? Thanks.

    Read the article

  • R : remove columns from dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve this question myself: the data frame has some properties as columns and each row represents one data set. I've done some sanitizing to this data frame (e.g. getting rid of datasets which are not to be included in the evaluation). (For whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a tsv; some of the properties have a sequence number like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all the sets left have much smaller numbers for n, but a property like "button.pressed.50" is still there and all remaining sets have an NA in that column. Actually it's a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns...)
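    A small sketch (the data frame is called df here purely for illustration): keep only the columns that contain at least one non-NA value.

    ```r
    # drop columns where every value is NA
    df <- df[, colSums(is.na(df)) < nrow(df)]

    # equivalent, perhaps more readable
    df <- df[, !sapply(df, function(x) all(is.na(x)))]
    ```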

    Read the article

  • Serializing and deserializing a map with key as string

    - by Grace K
    Hi! I am intending to serialize and deserialize a hashmap whose key is a string. From Josh Bloch's Effective Java, I understand the following. P.222 "For example, consider the case of a hash table. The physical representation is a sequence of hash buckets containing key-value entries. Which bucket an entry is placed in is a function of the hash code of the key, which is not, in general, guaranteed to be the same from JVM implementation to JVM implementation. In fact, it isn't even guaranteed to be the same from run to run on the same JVM implementation. Therefore accepting the default serialized form for a hash table would constitute a serious bug. Serializing and deserializing the hash table could yield an object whose invariants were seriously corrupt." My questions are: 1) In general, would overriding the equals and hashCode methods of the key class of the map resolve this issue, so the map can be correctly restored? 2) If my key is a String and the String class already overrides the hashCode() method, would I still have the problem described above? (I am seeing a bug which makes me think this is probably still a problem even though the key is a String with an overriding hashCode.) 3) Previously, I got around this issue by serializing an array of entries (key, value), and when deserializing I would reconstruct the map. I am wondering if there is a better approach. 4) If the answers to questions 1 and 2 are that I still can't be guaranteed, could someone explain why? If the hashCodes are the same, would they go to the same buckets across JVMs? Thanks, Grace

    Read the article

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table: @Entity public class SiteConfiguration extends ConfigurationSet { @ManyToMany @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId")) @JoinTable( name="SiteConfig_InstConfig", joinColumns = @JoinColumn(name="SiteConfigId"), inverseJoinColumns = @JoinColumn(name="InstallationConfigId") ) Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations = new HashMap<SiteTypeInstallation, InstallationConfiguration>(); ... } The underlying table (in Oracle 11g) is: Name Null Type ------------------------------ -------- ---------- SITECONFIGID NOT NULL NUMBER(19) SITETYPEINSTALLATIONID NOT NULL NUMBER(19) INSTALLATIONCONFIGID NOT NULL NUMBER(19) The key entity used to have a three-column primary key in the database, but is now redefined as: @Entity public class SiteTypeInstallation implements IdResolvable { @Id @GeneratedValue(generator="SiteTypeInstallationSeq", strategy= GenerationType.SEQUENCE) @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1) long id; @ManyToOne @JoinColumn(name="SiteTypeId") SiteType siteType; @ManyToOne @JoinColumn(name="InstalationRoleId") InstallationRole role; @ManyToOne @JoinColumn(name="InstallationTypeId") InstType type; ... } The table for this has a primary key 'Id' and foreign key constraints+indexes for each of the other columns: Name Null Type ------------------------------ -------- ---------- SITETYPEID NOT NULL NUMBER(19) INSTALLATIONROLEID NOT NULL NUMBER(19) INSTALLATIONTYPEID NOT NULL NUMBER(19) ID NOT NULL NUMBER(19) For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error: org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId]) If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id is clearly defining a simple key, and doubly confused why it picks exactly just those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars

    Read the article

  • How can I have sub-elements of a complex/mixed type with unrestricted order and count?

    - by mbmcavoy
    I am working with XML where some elements will contain text with additional markup. This is similar to this example at W3Schools. However, I need the markup tags to be able to appear in any order and possibly more than once. To modify their example for illustration: <letter> Dear Mr.<name>John Smith</name>. Your order <orderid>1032</orderid> will be shipped on <shipdate>2001-07-13</shipdate>. Thank you, <name>Bob Adams</name> </letter> None of the options presented by W3Schools (on the page following the linked example) allow this XML due to the second <name> element. Their explanation of the "indicators" and my testing are consistent. <xs:sequence> - violates the element order <xs:choice> - more than one kind of element is used. <xs:all> - maxOccurs is restricted to "1". This seems like it should be basic, after all, XHTML allows such things. How do I define my schema to allow this?
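    One sketch that appears to fit (the element names come from the example; the types are guesses): a mixed complex type whose content model is a repeating xs:choice, which allows the child elements in any order and any number of times, interleaved with text.

    ```xml
    <xs:element name="letter">
      <xs:complexType mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
          <xs:element name="name" type="xs:string"/>
          <xs:element name="orderid" type="xs:positiveInteger"/>
          <xs:element name="shipdate" type="xs:date"/>
        </xs:choice>
      </xs:complexType>
    </xs:element>
    ```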

    Read the article

  • Programming a jQuery slider

    - by Mirage
    I want to program the jQuery slider myself rather than use any plugin, but I want to know the basic idea. E.g. I have <ul> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> </ul> I want to show only three items horizontally at one time, and there will be arrows at the left and right ends. I know the jQuery basics, but I don't know what the steps should be. I mean, when I click on the right arrow, the leftmost div should slide out to the left and a new div should come in from the right. Any ideas on the sequence of what I need to do?
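    A minimal sketch of the sliding part (the #left / #right ids, the ul.slider class and the 200px item width are made up for illustration; the ul needs a fixed-width wrapper with overflow: hidden so only three items show at once):

    ```javascript
    var itemWidth = 200;

    $('#right').click(function () {
        // slide the strip one item to the left, then move the first item to the end
        $('ul.slider').animate({ marginLeft: -itemWidth }, 400, function () {
            $(this).css('margin-left', 0).append($(this).children('li:first'));
        });
    });

    $('#left').click(function () {
        // put the last item in front (off-screen), then slide it into view
        $('ul.slider').prepend($('ul.slider').children('li:last'))
                      .css('margin-left', -itemWidth)
                      .animate({ marginLeft: 0 }, 400);
    });
    ```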

    Read the article

  • Some clarification needed about synchronous versus asynchronous asio operations

    - by Old newbie
    As far as I know, the main difference between synchronous and asynchronous operations. I.e. write() or read() vs async_write() and async_read() is that the former, don't return until the operation finish -or error-, and the last ones, returns inmediately. Due the fact that the asynchronous operations are controlled by an io_service.run() that does not finish until the controlled operations has finalized. It seems to me that in sequencial operations as those involved in TCP/IP connections with protocols such as POP3, in which the operaton is a sequence such as: C: <connect> S: Ok. C: User... S: Ok. C: Password S: Ok. C: Command S: answer C: Command S: answer ... C: bye S: <close> The difference between synchronous/asynchronous opperatons does not make much sense. Of course, in both operations there is allways the risk that the program flow stops indefinitely by some circunstance -there the use of timers-, but I would like know some more authorized opinions in this matter. I must admit that the question is rather ill-defined, but I like hear some advices about when use one or other, because I've problems in debugging with MS Visual Studio, asynchronous SSL operations in a POP3 client in wich I'm working now -about some of who surely I would write here soon-, and sometimes think that perhaps is a bad idea use asynchronous in this. Not to say that I'm an absolute newbie with this librarys, that additionally to the difficult with the idioma, and some obscure concepts in the STL, must suffer the brevity of the asio documentation.

    Read the article

  • mysqldb interfaceError

    - by Johanna
    I have a very weird probleme with mysqldb (mysql module for python). I have a file with queries for inserting records in tables. If I call the functions from the file, it works just fine. But when I try to call one of the functions from another file it throws me a _mysql_exception.InterfaceError: (0, '') I really don't get what I'm doing wrong here.. I call the function from buildDB.py : import create create.newFormat("HD", 0,0,0) The function newFormat(..) is in create.py (imported) : from Database import Database db = Database() def newFormat(name, width=0, height=0, fps=0): format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('"+name+"',"+str(width)+","+str(height)+","+str(fps)+");" db.execute(format_query) And the class Databse is the following : import MySQLdb from MySQLdb.constants import FIELD_TYPE class Database(): def __init__(self): server = "localhost" login = "seq" password = "seqmanager" database = "Sequence" my_conv = { FIELD_TYPE.LONG: int } self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv) # self.cursor = self.conn.cursor() def close(self): self.conn.close() def execute(self, query): self.conn.query(query) (I put only relevant code) Traceback : Z:\sequenceManager\mysql>python buildDB.py D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWa rning: the sets module is deprecated from sets import ImmutableSet INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0 ,0,0); Traceback (most recent call last): File "buildDB.py", line 182, in <module> create.newFormat("HD") File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat db.execute(format_query) File "Z:\sequenceManager\mysql\Database.py", line 19, in execute self.conn.query(query) _mysql_exceptions.InterfaceError: (0, '') The warning has never been a problem before so I don't think it's related.

    Read the article
