Search Results

Search found 6020 results on 241 pages for 'valid'.

  • Email Collector / Implementation

    - by Tian
    I am implementing a simple RoR webpage that collects emails from visitors and stores them as objects. I'm using it as a mini-project to try RoR and BDD. I can think of three features for Cucumber:

    1. User submits a valid email address
    2. User submits an existing email address
    3. User submits an invalid email address

    My question is: for scenarios 2 and 3, is it better to handle this via the controller, or as methods in a class? Perhaps something that throws errors if an instance is instantiated in scenario 2 or 3? The implementation is below; I'd love to hear some code reviews in addition to answers to the questions above. Thanks!

    MODEL:

        class Contact < ActiveRecord::Base
          attr_accessor :email
        end

    VIEW:

        <h1>Welcome To My Experiment</h1>
        <p>Find me in app/views/welcome/index.html.erb</p>
        <%= flash[:notice] %>
        <% form_for @contact, :url => {:action => "index"} do |f| %>
          <%= f.label :email %><br />
          <%= f.text_field :email %>
          <%= submit_tag 'Submit' %>
        <% end %>

    CONTROLLER:

        class WelcomeController < ApplicationController
          def index
            @contact = Contact.new
            unless params[:contact].nil?
              @contact = Contact.create!(params[:contact])
              flash[:notice] = "Thank you for your interest, please check your mailbox for confirmation"
            end
          end
        end
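    For scenarios 2 and 3, model-level validations are one idiomatic option; a minimal sketch, assuming a Rails 2.x app (the format regex is illustrative only). One code-review note: attr_accessor :email defines plain Ruby accessors that shadow the database column, so the submitted value would never reach the table; ActiveRecord generates the column accessors itself.

        class Contact < ActiveRecord::Base
          validates_presence_of :email
          validates_uniqueness_of :email   # scenario 2: address already collected
          validates_format_of :email,      # scenario 3: malformed address
            :with => /\A[^@\s]+@[^@\s]+\z/
        end

    With validations in the model, the controller can call @contact.save and branch on the boolean result instead of rescuing the exception raised by create!.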

  • Session expiry times?

    - by user246114
    I've enabled sessions on my app:

        // appengine-web.xml
        <sessions-enabled>true</sessions-enabled>

    They seem to work when I load different pages under my domain. If I close the browser, however, it looks like the session is terminated: restarting the browser shows the last session is no longer available. That could be fine; I'm just wondering whether this is documented anywhere, so I can rely on it. I tried the following just to test whether we can tweak it:

        // in web.xml
        <session-config>
            <session-timeout>10</session-timeout>
        </session-config>

    and also:

        // in my servlet
        getThreadLocalRequest().getSession().setMaxInactiveInterval(60 * 5);

    but the behavior is the same: session data is no longer available after a browser restart. I looked at the stats for my project and I see data being used for "_ah_SESSION" objects. Are those the sessions from above? If so, shouldn't they be cleaned up since they're no longer valid? (Hopefully GAE takes care of that automatically?) Thanks
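    This matches standard servlet behavior rather than anything GAE-specific: the JSESSIONID cookie is issued without an Expires attribute, so the browser discards it on exit, and the timeout settings only bound server-side inactivity. A hedged sketch of the usual workaround, a persistent cookie of your own that you map back to stored state (the servlet and cookie names here are made up):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class RememberMeServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Unlike JSESSIONID, a cookie with an explicit max age
                // survives browser restarts.
                Cookie token = new Cookie("myapp-token", "some-opaque-token");
                token.setMaxAge(60 * 60 * 24 * 30); // 30 days, in seconds
                resp.addCookie(token);
            }
        }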

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want a custom copy command for copying entire sets of data that only makes sense for my program, rather than just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text.

    One solution I was thinking about is to catch the hotkey only when it "makes sense", but when you use some advanced custom selection it might be hard to tell whether your data is focused, whether text is selected, or both; there is a sketch of this idea below. Right now I am only using single keys as hotkeys, so just 'c' for the example above, and this seems to be what most other sites do too. The problem is that this doesn't work well once you have text input. Is this the best solution?

    To clarify, I'm talking about advanced webapps that behave more like normal programs, not just websites presenting information (though I think these guidelines would be valid in both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but if Ctrl+Tab, Alt+D or Ctrl+E stopped working I would be really annoyed, cough Flash cough.
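    One way to make "catch it only when it makes sense" concrete is to inspect the browser's text selection before claiming the combination; a sketch (appHasDataSelection and copySelectedDataSet are made-up application hooks):

        // Claim Ctrl+C only when the app's own data selection is active and
        // no ordinary text is selected; otherwise the browser copies text.
        document.addEventListener('keydown', function (e) {
            if (e.ctrlKey && e.keyCode === 67) { // 67 = 'C'
                var textSelected = window.getSelection &&
                                   String(window.getSelection()) !== '';
                if (appHasDataSelection() && !textSelected) {
                    copySelectedDataSet();
                    e.preventDefault(); // suppress the browser's default copy
                }
            }
        }, false);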

  • Python Post Upload JPEG to Server?

    - by iJames
    It seems like this answer has been provided a bunch of times, but despite all of it I'm still getting errors from the server, and I'm sure it has to do with my code. I've tried HTTP and HTTPConnection from httplib, and the two produce quite different terminal output in terms of formatting/encoding, so I'm not sure where the problem lies. Does anything stand out here? Or is there just a better way? This was pieced together from an ancient article because I really needed to understand the basis of creating the post: http://code.activestate.com/recipes/146306-http-client-to-post-using-multipartform-data/

    Note, the JPEG is supposed to be "unformatted". The pseudocode:

        boundary = "somerandomsetofchars"
        BOUNDARY = '--' + boundary
        CRLF = '\r\n'
        fields = [('aspecialkey', 'thevalueofthekey')]
        files = [('Image.Data', 'mypicture.jpg', '/users/home/me/mypicture.jpg')]
        bodylines = []
        for (key, value) in fields:
            bodylines.append(BOUNDARY)
            bodylines.append('Content-Disposition: form-data; name="%s"' % key)
            bodylines.append('')
            bodylines.append(value)
        for (key, filename, fileloc) in files:
            bodylines.append(BOUNDARY)
            bodylines.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename))
            bodylines.append('Content-Type: %s' % self.get_content_type(fileloc))
            bodylines.append('')
            bodylines.append(open(fileloc, 'r').read())
        bodylines.append(BOUNDARY + '--')
        bodylines.append('')
        #print bodylines
        content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
        body = CRLF.join(bodylines)
        #conn = httplib.HTTP("www.ahost.com")
        # In both this and below, the file part was garbling the rest of the body?!?
        conn = httplib.HTTPConnection("www.ahost.com")
        conn.putrequest('POST', "/myuploadlocation/uploadimage")
        headers = {
            'content-length': str(len(body)),
            'Content-Type': content_type,
            'User-Agent': 'myagent'
        }
        for headerkey in headers:
            conn.putheader(headerkey, headers[headerkey])
        conn.endheaders()
        conn.send(body)
        response = conn.getresponse()
        result = response.read()
        responseheaders = response.getheaders()

    Interestingly, the real code I've implemented seems to work and gets back valid responses, but the server tells me it can't find the image data. Maybe that's particular to the server, but I'm just trying to rule out that I'm doing something exceptionally stupid here. Or perhaps there are other, more efficient methodologies for doing this. I've not tried poster yet because I want to make sure I'm formatting the POST correctly first; I figure I can upgrade to poster once it's working, yes?
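    One hedged guess at why the server can't find the image data: the Content-Type header should carry the bare boundary token, while the delimiter lines in the body carry a leading '--'. The code above uses BOUNDARY ('--' + boundary) in both places, so a strict parser looks for '----somerandomsetofchars' in the body and never finds the file part. A sketch of the distinction:

        boundary     = "somerandomsetofchars"
        content_type = 'multipart/form-data; boundary=%s' % boundary  # bare token in the header
        delimiter    = '--' + boundary          # between parts, in the body
        closing      = '--' + boundary + '--'   # final delimiter
        # Opening the JPEG with open(fileloc, 'rb') is also safer: text mode
        # can mangle the binary data on some platforms.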

  • NSDate out of scope

    - by therealtkd
    Having problems with "out of scope" for an NSDate in an iPhone app. I have an interface defined like this:

        @interface MyObject : NSObject {
            NSMutableArray *array;
            BOOL checkThis;
            NSDate *nextDue;
        }

    Now in the implementation I have this:

        - (id)init {
            if ((self = [super init])) {
                checkThis = NO;
                array = [[NSMutableArray alloc] init];
                nextDue = [[NSDate date] retain];
                NSDate *testDate = [NSDate date];
            }
            return self;
        }

    If I trace through the init, before I actually assign the variables, checkThis shows as a boolean and array shows as pointer 0x0 because it hasn't been assigned, but nextDue shows as 'out of scope'. I don't understand why this one is out of scope when the other variables aren't. If I trace through the code until after the variables are assigned, array shows as correctly assigned, but nextDue is still out of scope. Interestingly, the testDate variable is assigned just fine, and the debugger shows it as a valid date.

    A further interesting point: if I hover over the testDate variable while debugging, it shows as an 'NSDate *' type, which I would expect since that's its definition. Yet nextDue, which to me is defined the same way, shows as an '_NSCFDate *'. Any googling I did on the subject said that the retain is the problem, but it's actually out of scope before I even try to assign the variable. However, in another class the same definition for NSDate works fine: it shows as nil before a value is assigned to it. Arghhh
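    For what it's worth, the '_NSCFDate *' type is expected: NSDate is a class cluster, so the concrete instance really is a private subclass. The debugger's "out of scope" label is often a stale-display artifact rather than a memory problem; a quick sanity check that bypasses the variable view:

        // If this prints a date after init runs, the ivar is fine and the
        // "out of scope" label is a debugger display quirk.
        NSLog(@"nextDue = %@", nextDue);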

  • Using authsmtp from a Grails server

    - by Simon
    This is quite a specific question, and I have had no luck on the Grails Nabble forum, so I thought I would post here. I am using the Grails mail plugin, but I think my question is a general one about using AuthSMTP as an email gateway from my server.

    I am having trouble sending mail from my app through AuthSMTP. I installed and configured the mail plugin and was originally using my ISP's SMTP server to send mail. However, when I deployed to AWS EC2 this failed because my elastic IP was blocked by the SMTP host. So I bought an AuthSMTP account, set up my server's email address as an accepted one at AuthSMTP, and changed my configuration in SecurityConfig.groovy to point to the AuthSMTP server I had been designated:

        mailHost = "mail.authsmtp.com"
        mailUsername = "myusername"
        mailPassword = "mypassword"
        mailProtocol = "smtp"
        mailFrom = "[email protected]"
        mailPort = 2525

    I'm just trying to get this to work locally before I deploy back up to AWS. Sending mail fails, and my log shows this exception:

        2010-02-13 10:59:44,218 [http-8080-1] ERROR service.EmailerService - Failed to send emails: Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail.
        org.springframework.mail.MailSendException; nested exception details (1) are:
        Failed message 1:
        com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail.
            at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1388)
            at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:959)
            at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:583)

    I'm a bit lost, since the username and password I provide in the configuration are definitely correct. A terse and not very helpful conversation with AuthSMTP support suggests that I need to MD5 and/or base64 encode my credentials before sending. So my question is in three parts:

    1. Any idea what's going on with the failure and why that message is appearing?
    2. How would I encode the credentials to pass to AuthSMTP, and how would I configure that for the mail plugin?
    3. Has anyone successfully connected and sent mail through AuthSMTP from the mail plugin, and specifically from AWS EC2?
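    The 513 reads like the AUTH step never happened at all. Spring's mail support sits on JavaMail, and JavaMail ignores the supplied username/password unless SMTP authentication is switched on, so no credentials ever reach the server. The MD5/base64 advice most likely refers to the AUTH mechanisms themselves (CRAM-MD5, LOGIN, PLAIN), which JavaMail encodes for you once auth is enabled. A sketch of the underlying property (how the Grails plugin of that era exposes extra JavaMail properties is an assumption to verify against its docs):

        import java.util.Properties;

        public class SmtpAuthCheck {
            public static void main(String[] args) {
                Properties props = new Properties();
                // Without this flag JavaMail never attempts AUTH, and the
                // server rejects MAIL FROM with a 513.
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.host", "mail.authsmtp.com");
                props.put("mail.smtp.port", "2525");
            }
        }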

  • Zend_Validate_Abstract custom validator not displaying correct error messages.

    - by Jeremy Dowell
    I have two text fields in a form, and I need to make sure that neither is empty and that they don't contain the same string. The custom validator I wrote extends Zend_Validate_Abstract and works correctly, in that it passes back the correct error codes: in this case either isEmpty or isMatch. However, the documentation says to use addErrorMessages to define the error messages to be displayed, so I have attached this to the form field:

        ->addErrorMessages(array("isEmpty" => "foo", "isMatch" => "bar"));

    According to everything I've read, if I return isEmpty from isValid() my error message should read "foo", and if I return isMatch it should read "bar". That is not what I'm running into, though. If isValid() returns false, then no matter what I set with $this->_error(), my error message displays "foo", or whatever I have at index [0] of the error messages array. If I don't define error messages at all, I just get the error code I passed back as the display, and I do get the proper one depending on what I returned. How do I catch the error code and display the correct error message in my form?

    The fix I have implemented, until I figure this out properly, is to pass back the full message as the error code from the custom validator. That works in this instance, but the error message is specific to this page and doesn't really allow for reuse of the code.

    Things I have already tried: validator chaining, so that my custom validator only checks for matches:

        ->setRequired("true")
        ->addValidator("NotEmpty")
        ->addErrorMessage("URL May Not Be Empty")
        ->addValidator([customValidator])
        ->addErrorMessage("X and Y urls may not be the same")

    But again, if either throws an error, the last error message to be set displays, regardless of what the error truly is. I'm not entirely sure where to go from here. Any suggestions?
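    A sketch of the usual ZF1 pattern: error codes map to message templates inside the validator itself, and $this->_error() picks the matching template, so each code carries its own text and the form-level addErrorMessage calls become unnecessary ("My_Validate_NotSame" and the 'other_url' context key are made-up names):

        class My_Validate_NotSame extends Zend_Validate_Abstract
        {
            const IS_EMPTY = 'isEmpty';
            const IS_MATCH = 'isMatch';

            protected $_messageTemplates = array(
                self::IS_EMPTY => 'foo',   // shown when _error(self::IS_EMPTY) fires
                self::IS_MATCH => 'bar',   // shown when _error(self::IS_MATCH) fires
            );

            public function isValid($value, $context = null)
            {
                if ('' === trim($value)) {
                    $this->_error(self::IS_EMPTY);
                    return false;
                }
                if (isset($context['other_url']) && $value === $context['other_url']) {
                    $this->_error(self::IS_MATCH);
                    return false;
                }
                return true;
            }
        }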

  • Confused about definition of a 'median' when constructing a kd-Tree

    - by user352636
    I'm trying to build a kd-tree for searching through a set of points, but am getting confused about the use of "median" in the Wikipedia article. For ease of use, the article states the pseudocode of kd-tree construction as:

        function kdtree (list of points pointList, int depth)
        {
            if pointList is empty
                return nil;
            else
            {
                // Select axis based on depth so that axis cycles through all valid values
                var int axis := depth mod k;

                // Sort point list and choose median as pivot element
                select median by axis from pointList;

                // Create node and construct subtrees
                var tree_node node;
                node.location := median;
                node.leftChild := kdtree(points in pointList before median, depth+1);
                node.rightChild := kdtree(points in pointList after median, depth+1);
                return node;
            }
        }

    I'm getting confused by the "select median..." line, simply because I'm not quite sure what the "right" way to apply a median is here. As far as I know, the median of an odd-sized (sorted) list of numbers is the middle element (for a list of 5 things, element number 3, or index 2 in a zero-based array), and the median of an even-sized list is the sum of the two middle elements divided by two (for a list of 6 things, the sum of elements 3 and 4, or 2 and 3 if zero-indexed, divided by 2). Surely that definition does not work here, though, since we are working with a distinct set of points? How then does one choose the correct median for an even-sized list, especially a list of length 2? I appreciate any and all help, thanks! -Stephen
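    A sketch of the usual reading for point sets: "median" here means the middle element after sorting along the axis, never an average of two points. For even-length lists either middle element yields a valid kd-tree; len(pts) // 2 is a common pick, selecting the upper middle (so a 2-point list puts the chosen point at the node and the other point in the left subtree):

        def median_point(points, axis):
            pts = sorted(points, key=lambda p: p[axis])
            return pts[len(pts) // 2]  # middle for odd lengths, upper middle for even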

  • std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

    - by fixermark
    I have a C++ application that I am porting to MacOS X (specifically, 10.6). The app makes heavy use of the C++ standard library and Boost. I recently observed some breakage in the app that I'm having difficulty understanding: the Boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:

        #include <locale>

        int main(int argc, char *argv[])
        {
            std::locale::global(std::locale(""));
            return 0;
        }

    This program fails when I run it through g++ and execute the resulting binary in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues, but I'm surprised to see this breakage in the default configuration. My questions are:

    1. Is this expected behavior for this code on MacOS 10.6?
    2. What would a proper workaround be? I can't really rewrite the function, because the version of the Boost libraries we are using executes this statement internally as part of the filesystem library.

    For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder), but not when Xcode runs it in Debug mode.

    Edit: the error given by the above code on 10.6.1 is:

        $ ./locale
        terminate called after throwing an instance of 'std::runtime_error'
          what():  locale::facet::_S_create_c_locale name not valid
        Abort trap
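    For what it's worth, this is a long-standing limitation of GCC's libstdc++ on Mac OS X: only the "C"/"POSIX" named locales are implemented, so std::locale("") throws whenever LANG names anything else. A hedged workaround at program startup, which sidesteps the throw rather than adding real UTF-8 locale support (it won't help if the throwing call is buried inside Boost, but it demonstrates the failure mode):

        #include <locale>
        #include <stdexcept>

        int main()
        {
            try {
                std::locale::global(std::locale(""));     // throws under en_US.UTF-8
            } catch (const std::runtime_error&) {
                std::locale::global(std::locale::classic()); // fall back to "C"
            }
            return 0;
        }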

  • Find all records in database that are within a certain distance of a set of lat and long points

    - by Mike L
    I've seen all the examples, and here's what I've got so far. My table is simple:

        schools
            School_ID
            lat
            long
            county
            extrainfo

    Here's my code:

        <?php
        $con = mysql_connect("xxx", "xxx", "xxx");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("xxx", $con);

        $latitude = "36.265541";
        $longitude = "-119.207153";
        $distance = "1"; // miles

        $qry = "SELECT *, (3958.75 * ACOS(SIN(" . $latitude . " / 57.2958)*SIN(lat / 57.2958)+COS(" . $latitude . " / 57.2958)*COS(lat / 57.2958)*COS(long / 57.2958 - " . $longitude . " / 57.2958))) as distance FROM schools WHERE (3958.75 * ACOS(SIN(" . $latitude . " / 57.2958)*SIN(lat / 57.2958)+COS(" . $latitude . " / 57.2958)*COS(lat / 57.2958)*COS(long / 57.2958 - " . $longitude . " / 57.2958))) <= " . $distance;

        $results = mysql_query($qry);
        if (mysql_num_rows($results) > 0) {
            while ($row = mysql_fetch_assoc($results)) {
                print_r($row);
            }
        }
        mysql_close($con);
        ?>

    But I get this error when I try to run it:

        Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource
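    One hedged observation: LONG is a reserved word in MySQL, so the bare column name makes the query fail, mysql_query() returns false, and mysql_num_rows() then complains about the invalid resource. Quoting the column with backticks (or renaming it to something like lng) plus an explicit error check surfaces the real problem:

        $results = mysql_query($qry);
        if ($results === false) {
            die('Query failed: ' . mysql_error()); // shows the underlying SQL error
        }
        // ...and in $qry write `long` (with backticks) wherever the column appears.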

  • Converting Makefile to Visual Studio Terminology Questions (First time using VS)

    - by Ukko
    I am an old Unix guy who is converting a makefile-based project over to Microsoft Visual Studio. I got tasked with this because I understand the makefile, which chokes VS's automatic import tools. I am sure there is a better way than what I am doing, but we are making things fit into the customer's environment, and that is driving my choices. So gmake is not a valid answer, even if it is the right one ;-)

    I just have a couple of questions on terminology that I think will be easy for an experienced (or junior) user to answer. Presently, a "make all" generates several executables and a shared library. How should I structure this? Is it one "solution" with multiple projects? There is a body of common code (say 50%) that is shared between the various executable targets but is not in a formal library, if that matters. I thought I could just set up the first executable and then add targets for the others, but that does not seem to work. I know I am working against the tool, so what is the right way?

    I am also using Visual C++ 2010 Express to try to do this, so that may be a problem if multiple targets are not supported without the full Visual C++ 2010 (insert superlative).

    Thanks; this is really one of those questions that should be answerable by a quick chat with the resident Windows developer at the water cooler. So I am asking at the virtual water cooler, and I'll also spring for a virtual frosty beverage after work.

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1 on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly; compiled modules have invalid pointers and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue.

    I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import()
        {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading.

    Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment?

    Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a    = " ++ show siglist_a
            putStrLn $ "siglist_b    = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a   " siglist_a
            peekSiglist "b   " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect output like this, with all pointer values identical and valid:

        $ runhaskell Main.hs
        siglist_a    = 0x00007f53a948fe00
        siglist_b    = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a   [2] = Just "Interrupt"
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), the output changes to:

        $ runhaskell Main.hs
        siglist_a    = 0x0000000040378918
        siglist_b    = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a   [2] = Nothing
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

  • Enumerating a string

    - by JamesB
    I have a status which is stored as a string of a set length, either in a file or a database, and I'm looking to enumerate the possible statuses. I have the following type to define them:

        Type
          TStatus = (fsNormal = Ord('N'), fsEditedOnScreen = Ord('O'),
                     fsMissing = Ord('M'), fsEstimated = Ord('E'),
                     fsSuspect = Ord('s'), fsSuspectFromOnScreen = Ord('o'),
                     fsSuspectMissing = Ord('m'), fsSuspectEstimated = Ord('e'));

    Firstly, is this really a good idea? Or should I have a separate const array storing the char conversions? That would mean more than one place to update.

    To convert a string to a status array I have the following, but how can I check whether a char is valid without looping through the enumeration?

        Function StrToStatus(Value: String): TStatusArray;
        var
          i: Integer;
        begin
          if Trim(Value) = '' then
          begin
            SetLength(Result, 0);
            Exit;
          end;
          SetLength(Result, Length(Value));
          for i := 1 to Length(Value) do
          begin
            Result[i] := TStatus(Value[i]); // I don't think this line is safe.
          end;
        end;

    AFAIK this should be fine for converting back again:

        Function StatusToStr(Value: TStatusArray): String;
        var
          i: Integer;
        begin
          for i := 0 to Length(Value) - 1 do
            Result := Result + Chr(Ord(Value[i]));
        end;

    I'm using Delphi 2007.
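    For the validity check, one hedged option in Delphi is a set of the legal characters, which gives an O(1) membership test and keeps a single place to update. Note also that dynamic arrays are zero-based, so the sketch below writes to Result[i - 1] where the original indexed from 1:

        const
          ValidStatusChars: set of Char = ['N', 'O', 'M', 'E', 's', 'o', 'm', 'e'];

        Function StrToStatus(Value: String): TStatusArray;
        var
          i: Integer;
        begin
          SetLength(Result, Length(Value));
          for i := 1 to Length(Value) do
          begin
            if Value[i] in ValidStatusChars then
              Result[i - 1] := TStatus(Value[i])
            else
              // Exception.CreateFmt needs SysUtils in the uses clause.
              raise Exception.CreateFmt('Invalid status character: %s', [Value[i]]);
          end;
        end;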

  • Detecting Singularities in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and, well, evaluates it.

    To create the line, I create a Path2D.Double instance and loop through the points on the line. I calculate as many points as the graph is wide (e.g. if the graph is 500px wide, I calculate 500 points) and then scale them to the graph's window. This works perfectly for almost any line, but not when dealing with singularities. If the graph encounters a domain error while calculating points (such as 1/0), it closes the shape in the Path2D.Double instance and starts a new line, so that the result looks mathematically correct.

    However, because of the way it scales, sometimes this renders correctly and sometimes it doesn't. When it doesn't, the asymptotic line itself gets drawn, because within those 500 points the calculation skipped over x = 2.0 in the equation 1 / (x-2) and only evaluated x = 1.98 and x = 2.04, which are perfectly valid in that equation. (In that case, I had widened the window by one unit on each side.)

    My question is: is there a way to deal with singularities using this method of scaling so that the resulting line looks mathematically correct? I have thought of a binary-search-like method where, if one calculated point is wildly far from the previous point, the code searches between those points for a domain error. I had trouble figuring out how to make it work in practice, however. Thank you for any help you may give!
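    A sketch of one common heuristic (not the only one): break the path whenever consecutive samples jump by more than a few view heights, which catches poles that land between sample points without needing to hit them exactly. Here toMathX(), f(), startNewSubpath() and lineTo() stand in for the calculator's own code:

        double prevY = f(toMathX(0));
        for (int i = 1; i < widthInPixels; i++) {
            double x = toMathX(i);
            double y = f(x);
            // NaN covers exact domain errors; the jump test covers near misses.
            if (Double.isNaN(y) || Math.abs(y - prevY) > 4 * viewHeight) {
                startNewSubpath(); // close the current shape, begin a new line
            } else {
                lineTo(x, y);
            }
            prevY = y;
        }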

  • How do I create JavaScript escape sequences in PHP?

    - by ordinarytoucan
    I'm looking for a way to create valid UTF-16 JavaScript escape sequences (including surrogate pairs) from within PHP. I'm using the code below to get code points from a UTF-8 encoded character. This works for JavaScript escape characters (e.g. '\u00E1' for 'á') until you get into the upper ranges, where you need surrogate pairs (e.g. U+1D715 comes out as '\u1D715' but should be '\uD835\uDF15'):

        function toOrdinal($chr)
        {
            if (ord($chr{0}) >= 0 && ord($chr{0}) <= 127) {
                return ord($chr{0});
            } elseif (ord($chr{0}) >= 192 && ord($chr{0}) <= 223) {
                return (ord($chr{0}) - 192) * 64
                     + (ord($chr{1}) - 128);
            } elseif (ord($chr{0}) >= 224 && ord($chr{0}) <= 239) {
                return (ord($chr{0}) - 224) * 4096
                     + (ord($chr{1}) - 128) * 64
                     + (ord($chr{2}) - 128);
            } elseif (ord($chr{0}) >= 240 && ord($chr{0}) <= 247) {
                return (ord($chr{0}) - 240) * 262144
                     + (ord($chr{1}) - 128) * 4096
                     + (ord($chr{2}) - 128) * 64
                     + (ord($chr{3}) - 128);
            } elseif (ord($chr{0}) >= 248 && ord($chr{0}) <= 251) {
                return (ord($chr{0}) - 248) * 16777216
                     + (ord($chr{1}) - 128) * 262144
                     + (ord($chr{2}) - 128) * 4096
                     + (ord($chr{3}) - 128) * 64
                     + (ord($chr{4}) - 128);
            } elseif (ord($chr{0}) >= 252 && ord($chr{0}) <= 253) {
                return (ord($chr{0}) - 252) * 1073741824
                     + (ord($chr{1}) - 128) * 16777216
                     + (ord($chr{2}) - 128) * 262144
                     + (ord($chr{3}) - 128) * 4096
                     + (ord($chr{4}) - 128) * 64
                     + (ord($chr{5}) - 128);
            }
        }

    How do I adapt this code to give me proper UTF-16 code units? Thanks!
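    A sketch of the standard surrogate-pair arithmetic for code points above 0xFFFF, applied on top of the ordinal the function above returns (the function name is made up). For U+1D715: 0x1D715 - 0x10000 = 0xD715, whose top ten bits give 0xD835 and bottom ten bits give 0xDF15, matching the expected pair:

        function toUtf16Escape($cp)
        {
            if ($cp < 0x10000) {
                return sprintf('\\u%04X', $cp);
            }
            $cp -= 0x10000;
            $high = 0xD800 + ($cp >> 10);     // top 10 bits -> high surrogate
            $low  = 0xDC00 + ($cp & 0x3FF);   // bottom 10 bits -> low surrogate
            return sprintf('\\u%04X\\u%04X', $high, $low);
        }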

  • VEMap and a GeoRSS feed(hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. It lives in its own domain, since a number of different applications have access to it. A web page on a different site has been created with an instance of a VEMap (Bing/Virtual Earth map object). VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question concerns "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error ("z is null"), which is the same error one receives when trying to access a remote feed. When I copy the feed into a local XML file (e.g. "feed.xml"), there is no error. The order of operations is currently: remote feed -> local handler -> VEMap import. If I'm overcomplicating this, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
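    A guess worth ruling out: since the static feed.xml works and the handler serving identical bytes does not, the difference may be the response headers, so declaring an XML content type in the proxy is the first thing to try. A minimal sketch of such a handler (the class name and feed URL are placeholders):

        using System.Net;
        using System.Web;

        public class GeoRssProxy : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/xml"; // look like a feed, not a page
                using (var client = new WebClient())
                {
                    context.Response.Write(
                        client.DownloadString("http://example.com/georss-feed"));
                }
            }
        }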

  • CSS styles are not applied to elements added to JavaFX component tree

    - by pazabo
    I have applied CSS styles to JavaFX components, and it looks like everything works except in one situation: when I add JavaFX components to the component tree on the fly, their CSS styles are not applied. For example, the following code:

        package test;

        import javafx.stage.Stage;
        import javafx.scene.Scene;
        import javafx.scene.shape.Rectangle;
        import javafx.scene.input.MouseEvent;
        import javafx.util.Math;
        import javafx.scene.paint.Color;

        function getRect(): Rectangle {
            return Rectangle {
                x: 230 * Math.random()
                y: 60 * Math.random()
                width: 20, height: 20
                styleClass: "abc"
            }
        }

        def stage: Stage = Stage {
            scene: Scene {
                width: 250, height: 80
                stylesheets: "{__DIR__}main.css"
                content: [
                    Rectangle {
                        x: 0, y: 0, width: 250, height: 80
                        fill: Color.WHITE
                        onMouseClicked: function (evt: MouseEvent): Void {
                            insert getRect() into stage.scene.content;
                        }
                    }
                    getRect()
                ]
            }
        }

    with the following stylesheet in main.css (both in the test package):

        .abc {
            fill: red;
        }

    displays a red square on a white background, but after clicking the main rectangle, black (not red) squares are added to the scene. I have noticed that:

    1. Components added dynamically look as if style information was not applied. If you set their style in JavaFX code instead, everything works fine.
    2. After changing the stylesheets property (so that it points to another valid stylesheet), the objects already added render properly.

    Does anyone know a solution to this problem? I could of course put all the properties into JavaFX code, or provide a duplicate of every stylesheet and switch stylesheets right after adding any component, but I would like to find an elegant solution. Thanks in advance.

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire lifespan of my application, as if it were an array of pointers in C++. The idea is that when my process starts, I can use a central factory to create all objects and then periodically validate/invalidate their state, so that my process only deals with valid instances and I never re-fetch information I've already fetched from the database. All my objects would basically be in one place: my collection.

    A nice consequence is avoiding database calls for data I already have. Even if I updated it after retrieval, it's still up to date, provided no other process updated it; that's a different concern, which I deal with using a timestamp field on the MSSQL server. I don't want to call new Customer("James Thomas") again if I've already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain, some in sync and others out of sync, and I'd like to keep only one copy per customer in my appdomain (per process would be even better).

    I can't use regular collections like List or ArrayList, because I cannot pass parameters by their real local reference to the existing Add() methods using ref, so that's no good, I think. So how can this be implemented, if it can be at all? A linked-list type of class with all methods taking ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, something like RefList<T>.Add(ref T obj)?

    Bottom line: I don't want to re-create an object if I've already created it during the application's lifetime, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the db). Are there alternatives?
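    What's described is essentially an identity map, and C# objects are already handled by reference, so a plain Dictionary keeps one shared instance per key with no ref/out plumbing. A sketch ("IdentityMap" is just an illustrative name):

        using System;
        using System.Collections.Generic;

        class IdentityMap<TKey, TValue> where TValue : class
        {
            private readonly Dictionary<TKey, TValue> cache =
                new Dictionary<TKey, TValue>();

            public TValue GetOrCreate(TKey key, Func<TValue> factory)
            {
                TValue value;
                if (!cache.TryGetValue(key, out value))
                {
                    value = factory();   // e.g. one database fetch per key
                    cache[key] = value;
                }
                return value;            // every caller shares this instance
            }
        }

    Usage would look something like customers.GetOrCreate(42, () => LoadCustomer(42)), where LoadCustomer is a hypothetical database call; every later lookup for key 42 returns the same object, which can then be invalidated or evicted explicitly.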

  • How can I build the Boost.Python example on Ubuntu 9.10?

    - by Gatlin
    I am using Ubuntu 9.10 beta, whose repositories contain Boost 1.38. I would like to build the hello-world example from the Boost.Python tutorial (http://www.boost.org/doc/libs/1%5F40%5F0/libs/python/doc/tutorial/doc/html/python/hello.html). I followed the instructions, found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output:

        Jamroot:18: in modules.load
        rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>.
        /usr/share/boost-build/build/project.jam:312: in load-jamfile
        /usr/share/boost-build/build/project.jam:68: in load
        /usr/share/boost-build/build/project.jam:170: in project.find
        /usr/share/boost-build/build-system.jam:248: in load
        /usr/share/boost-build/kernel/modules.jam:261: in import
        /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build
        /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope

    I do not know enough about Boost (this is an exploratory exercise for me) to understand why the python-extension rule in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question: if I were to just start a project willy-nilly in an arbitrary directory, how would I write my Jamroot?
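    A hedged guess: "rule python-extension unknown" usually means Boost.Build never loaded its Python support, which it only does when an interpreter is declared in the build configuration. Declaring one in ~/user-config.jam is the usual cure (the version and prefix below are examples; adjust to the installed Python):

        # ~/user-config.jam
        using python : 2.6 : /usr ;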

  • How to get error text in controller from BindingResult

    - by Mike
    I have a controller that returns JSON. It takes a form, which validates itself via Spring annotations. I can get the FieldError list from BindingResult, but the errors don't contain the text that a JSP would display in the errors tag. How can I get the error text to send back in JSON?

        @RequestMapping(method = RequestMethod.POST)
        public @ResponseBody JSONResponse submit(@Valid AnswerForm answerForm,
                BindingResult result, Model model, HttpServletRequest request,
                HttpServletResponse response) {
            if (result.hasErrors()) {
                response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
                JSONResponse r = new JSONResponse();
                r.setStatus(JSONResponseStatus.ERROR);
                // HOW DO I GET ERROR MESSAGES OUT OF BindingResult???
            } else {
                JSONResponse r = new JSONResponse();
                r.setStatus(JSONResponseStatus.OK);
                return r;
            }
        }

    The JSONResponse class is just a POJO:

        public class JSONResponse implements Serializable {
            private JSONResponseStatus status;
            private String error;
            private Map<String, String> errors;
            private Map<String, Object> data;
            // ...getters and setters...
        }

    Calling BindingResult.getAllErrors() returns an array of FieldError objects, but it doesn't contain the actual error messages.
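    A sketch of the hasErrors() branch (imports from org.springframework.validation and java.util assumed): with JSR-303 annotations the resolved text lands in getDefaultMessage(); if you use coded messages instead, each FieldError implements MessageSourceResolvable, so an injected MessageSource can resolve it against your message bundles:

        Map<String, String> errors = new HashMap<String, String>();
        for (FieldError fieldError : result.getFieldErrors()) {
            errors.put(fieldError.getField(), fieldError.getDefaultMessage());
        }
        r.setErrors(errors);
        return r;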

  • Reading and writing XML over an SslStream

    - by Mark
    I want to read and write XML data over an SslStream. The data (written and read) consists of objects serialized by an XmlSerializer. I have tried the following (some details left out for clarity!):

        TcpClient tcpClient = new TcpClient(server, port);
        SslStream sslStream = new SslStream(tcpClient.GetStream(), true,
            new RemoteCertificateValidationCallback(ValidateServerCertificate), null);
        sslStream.AuthenticateAsClient(server);

        XmlReader xmlReader = XmlReader.Create(sslStream, readerSettings);
        XmlWriter xmlWriter = XmlWriter.Create(sslStream, writerSettings);

        myClass c = new myClass();
        XmlSerializer serializer = new XmlSerializer(typeof(myClass));
        serializer.Serialize(xmlWriter, c);
        myClass c2 = (myClass)serializer.Deserialize(xmlReader);

    First of all, writing to the stream appears to succeed, but the XmlSerializer throws an error because of invalid XML: the first character read from the stream is a null character. (I have googled this problem and see a million people with the same issue, but no viable solution.) I can work around it by using a StreamReader to read everything into a string and then feeding that string into another stream the serializer can use. (Very dirty, but it works.)

    The second problem is that when I try to use the same SslStream to write a second request, I get no response from the reader, even though the server does send a valid XML response. So serializer.Serialize(xmlWriter, c3) works, but reading from the stream yields no results. I have tried several different classes that implement Stream (StreamReader, XmlTextReader, etc.).

    Does anyone have an idea how reading and writing XML data to and from an SslStream is supposed to work? Thanks in advance!
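    A hedged alternative to letting XmlReader scan the raw stream: frame each document yourself, so the reader never has to guess where one document ends and the next begins. Serialize to a MemoryStream, send a length prefix, then the bytes; the receiving side reads exactly that many bytes into a buffer before deserializing. A sketch of the write half, reusing the post's serializer, c and sslStream (using System, System.IO assumed):

        var ms = new MemoryStream();
        serializer.Serialize(ms, c);
        byte[] payload = ms.ToArray();
        byte[] prefix = BitConverter.GetBytes(payload.Length); // 4-byte length
        sslStream.Write(prefix, 0, prefix.Length);
        sslStream.Write(payload, 0, payload.Length);
        sslStream.Flush(); // push buffered bytes out before waiting on a reply

    The read half mirrors it: read 4 bytes, decode the length, read exactly that many bytes, wrap them in a MemoryStream, and hand that to Deserialize.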

  • SQL Server - Complex Dynamic Pivot columns

    - by user972255
    I have two tables, "Controls" and "ControlChilds".

    Parent table structure:

        Create table Controls(
            ProjectID Varchar(20) NOT NULL,
            ControlID INT NOT NULL,
            ControlCode Varchar(2) NOT NULL,
            ControlPoint Decimal NULL,
            ControlScore Decimal NULL,
            ControlValue Varchar(50)
        )

    Sample data:

        ProjectID | ControlID | ControlCode | ControlPoint | ControlScore | ControlValue
        P001      | 1         | A           | 30.44        | 65           | Invalid
        P001      | 2         | C           | 45.30        | 85           | Valid

    Child table structure:

        Create table ControlChilds(
            ControlID INT NOT NULL,
            ControlChildID INT NOT NULL,
            ControlChildValue Varchar(200) NULL
        )

    Sample data:

        ControlID | ControlChildID | ControlChildValue
        1         | 100            | Yes
        1         | 101            | No
        1         | 102            | NA
        1         | 103            | Others
        2         | 104            | Yes
        2         | 105            | SomeValue

    The output should be a single row per ProjectID, with all the Controls values first, followed by the child control values, named after the ControlCode (i.e. ControlCode_Child1, ControlCode_Child2, ...).

    I tried this PIVOT query, and I am able to get the ControlChilds values, but I don't know how to get the Controls values:

        DECLARE @cols AS NVARCHAR(MAX);
        DECLARE @query AS NVARCHAR(MAX);

        select @cols = STUFF((SELECT distinct ',' + QUOTENAME(ControlCode + '_Child'
                    + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25)))
                FROM Controls C
                INNER JOIN ControlChilds CC ON C.ControlID = CC.ControlID
                FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

        SELECT @query =
        'SELECT *
        FROM
        (
            SELECT (ControlCode + ''_Child''
                    + CAST(ROW_NUMBER() over(PARTITION BY ControlCode ORDER BY ControlChildID) AS Varchar(25))) As Code,
                   ControlChildValue
            FROM Controls AS C
            INNER JOIN ControlChilds AS CC ON C.ControlID = CC.ControlID
        ) AS t
        PIVOT
        (
            MAX(ControlChildValue)
            FOR Code IN( ' + @cols + ' )
        ) AS p;';

        execute(@query);

    The output I am getting contains only the pivoted child columns. Can anyone please help me with getting the Controls table values in front of each set of ControlChilds values?

  • I'm making a simulated tv

    - by Jam
    I need to make a TV that shows the user the channel and the volume, and shows whether or not the television is on. I have the majority of the code written, but for some reason the channels won't switch. I'm fairly unfamiliar with how properties work, and I think that's what my problem is. Help please.

        class Television(object):
            def __init__(self, __channel=1, volume=1, is_on=0):
                self.__channel = __channel
                self.volume = volume
                self.is_on = is_on

            def __str__(self):
                if self.is_on == 1:
                    print "The tv is on"
                    print self.__channel
                    print self.volume
                else:
                    print "The television is off."

            def toggle_power(self):
                if self.is_on == 1:
                    self.is_on = 0
                    return self.is_on
                if self.is_on == 0:
                    self.is_on = 1
                    return self.is_on

            def get_channel(self):
                return channel

            def set_channel(self, choice):
                if self.is_on == 1:
                    if choice >= 0 and choice <= 499:
                        channel = self.__channel
                    else:
                        print "Invalid channel!"
                else:
                    print "The television isn't on!"

            channel = property(get_channel, set_channel)

            def raise_volume(self, up=1):
                if self.is_on == 1:
                    self.volume += up
                    if self.volume >= 10:
                        self.volume = 10
                        print "Max volume!"
                else:
                    print "The television isn't on!"

            def lower_volume(self, down=1):
                if self.is_on == 1:
                    self.volume -= down
                    if self.volume <= 0:
                        self.volume = 0
                        print "Muted!"
                else:
                    print "The television isn't on!"

        def main():
            tv = Television()
            choice = None
            while choice != "0":
                print \
                """
                Television

                0 - Exit
                1 - Toggle Power
                2 - Change Channel
                3 - Raise Volume
                4 - Lower Volume
                """
                choice = raw_input("Choice: ")
                print
                if choice == "0":
                    print "Good-bye."
                elif choice == "1":
                    tv.toggle_power()
                    tv.__str__()
                elif choice == "2":
                    change = raw_input("What would you like to change the channel to?")
                    tv.set_channel(change)
                    tv.__str__()
                elif choice == "3":
                    tv.raise_volume()
                    tv.__str__()
                elif choice == "4":
                    tv.lower_volume()
                    tv.__str__()
                else:
                    print "\nSorry, but", choice, "isn't a valid choice."

        main()
        raw_input("Press enter to exit.")
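    The setter above assigns in the wrong direction (channel = self.__channel reads the old value into a throwaway local), and the getter returns an undefined name; on top of that, raw_input hands back a string, so the range check never behaves as intended. A hedged sketch of the channel property wired the usual way round:

        class Television(object):
            def __init__(self):
                self.__channel = 1
                self.is_on = 0

            def get_channel(self):
                return self.__channel

            def set_channel(self, choice):
                choice = int(choice)              # raw_input returns a string
                if self.is_on == 1 and 0 <= choice <= 499:
                    self.__channel = choice       # store into the attribute
                else:
                    print "Invalid channel or the TV is off!"

            channel = property(get_channel, set_channel)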

  • Using Rails, problem testing has_many relationship

    - by east
    The summary is that I have code that works when manually tested but doesn't do what I would expect in an automated test. Here are the details.

    I have two models, Payment and PaymentTransaction:

        class Payment
          ...
          has_many :transactions, :class_name => 'PaymentTransaction'

        class PaymentTransaction
          ...
          belongs_to :payment

    The PaymentTransaction is only created in a Payment model method, like so:

        def pay_up
          ...
          transactions.create!(params...)
          ...
        end

    I've manually tested this code, inspected the database, and everything works well. The failing automated test looks like this:

        def test_pay_up
          purchase = Payment.new(...)
          assert purchase.save
          assert_equal purchase.state, :initialized.to_s
          assert purchase.pay_up # this should create a new PaymentTransaction...
          assert_equal purchase.state, :succeeded.to_s
          assert_equal purchase.transactions.count, 1 # FAILS HERE; transactions is an empty array
        end

    If I step through the code, it's clear that the PaymentTransaction is created correctly (though I can't see it in the database, because everything runs inside a testing transaction). What I can't figure out is why transactions returns an empty array in the test when I know a valid PaymentTransaction is being created. Anybody have some suggestions? Thanks in advance, east
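    One hedged thing to try: if the association was loaded (empty) at some point before pay_up created the child rows, Rails serves the cached collection on later access; reloading the record forces a fresh query, which still sees rows created inside the test transaction:

        assert purchase.pay_up
        purchase.reload   # drop the cached (empty) association
        assert_equal 1, purchase.transactions.count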

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They both run Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. The two webapps are fighting over the session id:

    1. Open a Firefox tab and go to WebApp1.
    2. WebApp1's HTTP response has a Set-Cookie header with JSESSIONID=1.
    3. Firefox now sends a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
    4. Open a second Firefox tab and go to WebApp2.
    5. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false) I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a Set-Cookie header with JSESSIONID=20.
    6. Now WebApp2 has a working session, but WebApp1's session is gone. Going back to WebApp1 gives me a new session, blowing away WebApp2's session.
    7. Continue forever.

    So the sessions thrash between the two webapps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that's ugly and means hacking the new session handling into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?
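    For what it's worth, browsers scope cookies by host, not port, so the two servers genuinely share one JSESSIONID; that part is not a Jetty quirk. The per-webapp cookie name idea is the standard fix, and Jetty can do it in configuration rather than code. A sketch for Jetty 6, where the session manager honors a context init parameter (the parameter name is from memory; verify it against your Jetty version):

        <!-- web.xml of WebApp1; WebApp2 gets its own name -->
        <context-param>
          <param-name>org.mortbay.jetty.servlet.SessionCookie</param-name>
          <param-value>WEBAPP1SESSIONID</param-value>
        </context-param>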
