Search Results


  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at what tier in an application human-readable error information should be retrieved for display to a user.

    A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

        public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

        public class ClearUserAccountsResult : IResult
        {
            public List<Account> ClearedAccounts { get; private set; }
            public bool Success { get; private set; }   // implements IResult
            public string Message { get; private set; } // implements IResult, human readable

            // constructor implemented here to set the properties...
        }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized.

    However, this idea of returning human-readable information from the business tier has always been suspect to me, partly because what constitutes the public surface of the API may change over time, and it may be that the API will need to be reused by other API components in the future that do not need the human-readable string messages (for which looking them up from a database would be an expensive waste).

    I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here:

    1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.

    2) Often there are multiple legitimate modes of failure. If the business tier becomes too vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has three modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.

    I have thought about using failure enums as a substitute; they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns, rather than just a boolean, but it is not so good for serialization scenarios.

    Is there a well-worn pattern for this? What do people think? Thanks.
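
    As a minimal sketch of the enum-plus-result-object idea discussed above (all names here are invented for illustration, not from any particular framework): the business tier returns a serializable failure code, and only the UI side maps it to a localized, human-readable string.

        // Hedged sketch; every name below is hypothetical.
        public enum ClearUserAccountsError
        {
            None = 0,
            UserNotFound,
            AccountLocked,
            PendingTransactions
        }

        public class ClearUserAccountsResult
        {
            public bool Success { get; private set; }
            public ClearUserAccountsError Error { get; private set; } // serializes cleanly over WCF

            public ClearUserAccountsResult(bool success, ClearUserAccountsError error)
            {
                Success = success;
                Error = error;
            }
        }

        // UI layer: translate the code into display text (from a resx, database, etc.).
        public static class ErrorMessages
        {
            public static string For(ClearUserAccountsError error)
            {
                switch (error)
                {
                    case ClearUserAccountsError.UserNotFound:        return "The user could not be found.";
                    case ClearUserAccountsError.AccountLocked:       return "The account is locked.";
                    case ClearUserAccountsError.PendingTransactions: return "The account still has pending transactions.";
                    default:                                         return null;
                }
            }
        }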


  • Agile Documentation

    - by Nick Harrison
    We all know that one of the premises of the Agile Manifesto is to value Working Software over Comprehensive Documentation. This is a wonderful idea, and it takes a tremendous burden off of project implementations. I have seen as many projects fail because of the maintenance weight of the project documentation as I have for any other reason.

    But this goal, as important as it is, may not always be practical. Sometimes the client will simply insist on tedious documentation despite the arguments against it. This may be to calm a nervous client. This may be to satisfy an audit/compliance requirement. This may be a none-too-subtle attempt at sabotaging the project. OK, it is probably not an all-out attempt to sabotage the project, but it will probably feel that way.

    So what can we do to keep to the spirit of the Agile Manifesto but still meet the needs of the client wanting the documentation? This is a good question that I have been puzzling over lately! I hope to explore some possible answers more fully here. A common theme that my solutions are likely to follow is the same theme that I often follow when simplifying complex business logic: make it table driven!

    My thought is that the sought-after documentation could be a report or reports out of a metadata repository. Reports are much easier to maintain than hand-written documentation. Here are a few additional advantages that we can explore over time:

    - Reports can take advantage of the fact that different people have different needs and different format requirements.
    - Reports and the supporting metadata are more easily validated, and the validation can be automated.
    - If the application itself uses this metadata, then there never has to be a question as to whether or not the metadata is up to date. It is up to date, or the application would not work.
    - In many cases we should be able to automatically gather most of the metadata that we need using reflection, system tables, etc.

    I think that this will lower the total cost of ownership for the documentation and may provide something useful beyond having a pretty document to look at. What are your thoughts?
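
    To make the reflection point concrete, here is a minimal sketch (an illustration only, not a full metadata repository): dump every public type and its properties from an assembly, which a documentation report could then be generated from.

        using System;
        using System.Linq;
        using System.Reflection;

        public static class MetadataReport
        {
            // Writes one line per public property of every public type in the assembly.
            public static void Dump(Assembly assembly)
            {
                foreach (var type in assembly.GetTypes().Where(t => t.IsPublic))
                {
                    Console.WriteLine(type.FullName);
                    foreach (var property in type.GetProperties())
                    {
                        Console.WriteLine("    {0} : {1}", property.Name, property.PropertyType.Name);
                    }
                }
            }
        }

        // Usage (assembly name is hypothetical):
        // MetadataReport.Dump(Assembly.Load("MyBusinessLayer"));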


  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as such: there is a master branch only (no develop/release). I make a new branch when I want to do a feature, do lots of commits, and then when I'm done I merge that branch back into master and usually push it to remote.

    ...except I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem: my feature branch is merged and pushed into master and remote master, and then I realize that I am not really done with that feature: I have finishing touches I want to add, where the finishing touches may be cosmetic only, or may be significant, but they still belong to that one feature I just worked on.

    What I do now: currently, when I have extra after-the-fact commits like this, I solve the problem by rolling back my merge and re-merging my feature branch into master with the new commits, so that the git tree looks clean: one clean feature branch branched out of master and merged back into it. I then push --force my changes to origin. Since my origin doesn't see much traffic at the moment, I can almost count on things being safe, and I can even talk to the other dev if I have to coordinate. But I know it is not a good way to do this in general, as it rewrites what others may have already pulled, causing potential issues. And it did happen even with my dev, where git had to do an extra weird merge when our trees diverged.

    Other ways to solve this, which I deem not so great:

    - Make those extra commits on the master branch directly, be it a fast-forward merge or not. It doesn't make the tree look as pretty as my current way of solving this, but it doesn't rewrite history.
    - Wait. Maybe wait 24 hours and not push things to origin. That way I can rewrite things as I see fit. The con of this approach is the time wasted waiting, when people may be waiting for a fix now.
    - Make a "new" feature branch every time I realize I need to fix something extra. I may end up with things like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop that goal if it is not technically possible.


  • Cannot install "ATI/AMD proprietary FGLRX graphics driver" (SystemError)

    - by Fisherman John
    I have just installed a fresh copy of 12.04.1 64-bit. I formatted my PC completely and enabled updates during the installation. After the installation was complete, I went on updating my software. However, when I wanted to install the additional drivers using the Additional Drivers tool (namely "ATI/AMD proprietary FGLRX graphics driver"), it gave me this error:

        SystemError: E:Unable to correct problems, you have held broken packages.

    The same error shows up if I try installing the post-release updates driver. Installing the driver from the terminal results in this output:

        XXXXXX:~$ sudo apt-get install fglrx
        [sudo] password for XXXXX:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         fglrx : Depends: lib32gcc1 but it is not going to be installed
                 Depends: libc6-i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    This is what I get from "sudo apt-get update": http://pastebin.com/AWAtDXjY. But "sudo apt-get install fglrx" still gives me this error: http://pastebin.com/RYM55bVN, and "sudo apt-get -f install fglrx" gives me this error: http://pastebin.com/xxekajvP.

    Any help would be greatly appreciated. (Please note that I'm new to Linux, coming directly from Windows. I have tried Ubuntu twice or so before, but not for a long period of time. The drivers got installed smoothly the few times I've tried Ubuntu, but post-release updates never worked for me.)

    [I am going to work now, so I can only answer from my phone, and can't really test any new solutions you may give me until ~10 hours from now. Maybe more.]

    @stonedsquirrel: When I try to run that command, I get exactly the same error as above ("fglrx : Depends: lib32gcc1 but it is not going to be installed ... E: Unable to correct problems, you have held broken packages."). (I am Fisherman John, dunno how to login & thereby respond to your comment again.)


  • Advanced Regex: Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is how it looks (I've broken it into separate lines to make what it does more understandable):

         1  (?<outer>\()?
         2  (?<scheme>http(?<secure>s)?://)?
         3  (?<url>
         4      (?(scheme)
         5          (?:www\.)?
         6      |
         7          www\.
         8      )
         9      [a-z0-9]
        10      (?(outer)
        11          [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))
        12      |
        13          [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+
        14      )
        15  )
        16  (?<ending>(?(outer)\)))

    As you can see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd) that allow our localised URLs to be parsed as well. You can easily omit them if you'd like.

    Anyway, here's what it does (referring to line numbers):

    1 - detects whether the URL starts with an open brace (is contained inside braces) and stores it in the "outer" named capture group
    2 - checks if it starts with a URL scheme, also detecting whether the scheme is SSL or not
    3 - starts parsing the URL itself (it will be stored in the "url" named capture group)
    4-8 - an if statement that says: if "scheme" was present then the www. part is optional, otherwise it is mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www)
    9 - the first character after http:// or www. should be either a letter or a number (this could be extended to cover even more links, but I've decided not to because I can't think of a link that would start with some obscure character)
    10-14 - an if statement that says: if "outer" (braces) was present, capture everything up to the last closing brace, otherwise capture all
    15 - closes the named capture group for the URL
    16 - if an open brace was present, capture the closing brace as well and store it in the "ending" named capture group

    The first and last lines used to have \s* in them as well, so the user could also write an open brace and put a space inside before pasting the link.

    Anyway, my code that replaces links with actual anchor HTML elements looks exactly like this:

        value = Regex.Replace(
            value,
            @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))",
            "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}",
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);

    As you can see, I'm using the named capture groups to replace the link with an anchor tag:

        "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}"

    I could as well omit the http(s) part in the anchor display to make links look friendlier, but for now I've decided not to.

    Question: I would like my links to be replaced with shortened text as well. When a user pastes a very long link (for instance from Google Maps, which usually generates long links), I would like to shorten the visible part of the anchor tag. The link would still work, but the visible part of the anchor tag would be shortened to some number of characters. I could also append an ellipsis at the end if at all possible (and make things even more perfect). Does Regex.Replace() support replacement notations like this so that I can still use a single call? Something similar to what the string.Format() method does when you'd like to format values in a string (decimals, dates, etc.).
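
    (A hedged aside: the replacement-string syntax itself has no truncation notation that I'm aware of, but Regex.Replace() also has overloads that take a MatchEvaluator delegate, which allows arbitrary per-match logic in a single call. A minimal sketch, with an arbitrary 30-character limit:)

        value = Regex.Replace(
            value,
            urlPattern, // the same pattern string as above
            m =>
            {
                string url = "http" + m.Groups["secure"].Value + "://" + m.Groups["url"].Value;
                string text = url.Length > 30 ? url.Substring(0, 30) + "…" : url;
                return m.Groups["outer"].Value
                     + "<a href=\"" + url + "\">" + text + "</a>"
                     + m.Groups["ending"].Value;
            },
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);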


  • PHP SASL(PECL) sasl_server_init(app) works with CLI but not with ApacheModule

    - by ZokRadonh
    I have written a simple auth script so that web users can type in their username and password and my PHP script verifies them via SASL. The SASL library is initialized by the PHP function sasl_server_init("phpfoo"), so phpfoo.conf in /etc/sasl2/ is used.

    phpfoo.conf:

        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN
        log_level: 9

    So the SASL library now tries to connect to the saslauthd process via socket. The saslauthd command line looks like this:

        /usr/sbin/saslauthd -r -V -a pam -n 5

    So saslauthd uses PAM to authenticate. In the PHP script I have created the SASL connection with:

        sasl_server_new("php", null, "myRealm");

    The first argument is the service name, so PAM uses the file /etc/pam.d/php for further authentication information.

    /etc/pam.d/php:

        auth     required pam_mysql.so try_first_pass=0 config_file=/etc/pam.d/mysqlconf.nss
        account  required pam_permit.so
        session  required pam_permit.so

    mysqlconf.nss has all the information needed for a useful MySQL query against the user table. All of this works perfectly when I run the script from the command line:

        php ssasl.php

    But when I call the same script via web browser (PHP Apache module), I get a -20 return code (SASL_NOUSER), and in /var/log/messages there is:

        May 18 15:27:12 hostname httpd2-prefork: unable to open Berkeley db /etc/sasldb2: No such file or directory

    I do not use a Berkeley db for authentication with SASL at all; I think authentication using /etc/sasldb2 is the default setting. In my opinion it does not read my phpfoo.conf file: for some reason the PHP Apache module ignores the parameter in sasl_server_init("phpfoo").

    My first thought was that there is a permission issue. So back in the shell:

        su -s /bin/bash wwwrun
        php ssasl.php

    "Authentication successful". No file-permission issue.

    In the source of the SASL PHP extension we find:

        PHP_FUNCTION(sasl_server_init)
        {
            char *name;
            int name_len;

            if (zend_parse_parameters(1 TSRMLS_CC, "s", &name, &name_len) == FAILURE) {
                return;
            }

            if (sasl_server_init(NULL, name) != SASL_OK) {
                RETURN_FALSE;
            }

            RETURN_TRUE;
        }

    This is a simple pass-through of the string. Are there any differences between the PHP CLI and PHP Apache module versions that I am not aware of?

    Anyway, there are some interesting log entries when I run PHP in CLI mode:

        May 18 15:44:48 hostname php: SQL engine 'mysql' not supported
        May 18 15:44:48 hostname php: auxpropfunc error no mechanism available
        May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sqlite
        May 18 15:44:48 hostname php: sql_select option missing
        May 18 15:44:48 hostname php: auxpropfunc error no mechanism available
        May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sql

    These lines are followed by lines from saslauthd and PAM which indicate authentication success. (I do not get any of them in Apache module mode.) It looks like it is trying auxprop pwcheck before saslauthd. I have no other .conf file in /etc/sasl2. When I change the parameter of sasl_server_init to something else, I get the same error in CLI mode as in Apache module mode.


  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief.

    Rule 1: Data dependence among loads and stores is never violated.
    Rule 2: All stores have release semantics, i.e. no load or store may move after one.
    Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
    Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
    Rule 5: Loads and stores to the heap may never be introduced.
    Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.

    I'm attempting to understand these rules.

        x = y
        y = 0 // cannot move before the previous line, according to Rule 1

        x = y
        z = 0

        // equates to this sequence of loads and stores before possible re-ordering:
        load y
        store x
        load 0
        store z

    Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it will also see x == y. If y were volatile, then load 0 could not move before load y; otherwise it may. Volatile stores don't seem to have any special properties: no stores can be re-ordered with respect to each other (which is a very strong guarantee!). Full barriers are like a line in the sand which loads and stores cannot be moved over.

    I have no idea what Rule 5 means.

    I guess Rule 6 means that if you do:

        x = y
        x = z

    then it is possible for the CLR to delete both the load of y and the first store to x.

        x = y
        z = y

        // equates to this sequence of loads and stores before possible re-ordering:
        load y
        store x
        load y
        store z

        // could be re-ordered like this:
        load y
        load y
        store x
        store z

        // rule 6 applied means this is possible?
        load y
        store x // but don't pop y from stack (or first duplicate item on top of stack)
        store z

    What if y were volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to Rule 6, that's the only time they can be eliminated.

    So I think I understand all but Rule 5 here. Anyone want to enlighten me (or correct me, or add something to any of the above)?
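
    (For reference, a minimal sketch of the double-checked locking pattern that the last paragraph alludes to; this is standard C#, nothing specific to the post:)

        public sealed class LazySingleton
        {
            private static readonly object Sync = new object();
            private static volatile LazySingleton instance;

            public static LazySingleton Instance
            {
                get
                {
                    if (instance == null)            // first, unsynchronized load
                    {
                        lock (Sync)                  // full barrier: loads cannot cross it (Rule 4)
                        {
                            if (instance == null)    // second load; cannot be coalesced with the first
                                instance = new LazySingleton();
                        }
                    }
                    return instance;
                }
            }
        }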


  • 7u10: JavaFX packaging tools update

    - by igor
    The last few weeks were very busy here in Oracle. JavaOne 2012 is next week. Come see us there! Meanwhile I'd like to quickly update you on recent developments in the area of packaging tools. This is an area of ongoing development for the team, and we are continuing to refine and improve both the tools and the process. Thanks to everyone who shared experiences and suggestions with us. We are listening, and have fixed many of the reported issues. Please keep them coming as comments on the blog or (even better) file issues directly in JIRA. In this post I'll focus on several new packaging features added in JDK 7 update 10:

    - Self-Contained Applications: Select Java Runtime to bundle
    - Self-Contained Applications: Create Package without Java Runtime
    - Self-Contained Applications: Package non-JavaFX application
    - Option to disable proxy setup in the JavaFX launcher
    - Ability to specify codebase for WebStart application
    - Option to update existing jar file
    - Self-Contained Applications: Specify application icon
    - Self-Contained Applications: Pass parameters on the command line

    All these features, and a number of other important bug fixes, are available in the developer preview builds of JDK 7 update 10 (build 8 or later). Please give them a try and share your feedback!

    Self-Contained Applications: Select Java Runtime to bundle

    The packager tools in 7u6 assume the current JDK (based on the java.home property) is the source for the embedded runtime. This is a useful simplification for many scenarios, but there are cases where the ability to specify what to embed explicitly is handy. For example, an IDE may be using a fixed JDK to build the project, and this is not the version you want to bundle into your application. To make this more flexible we now allow you to specify the location of the base JDK explicitly. It is optional, and if you do not specify it then the current JDK will be used (i.e. this change is fully backward compatible).

    A new 'basedir' attribute was added to the <fx:platform> tag. Its value is the location of the JDK to be used. It is OK to point to either the JRE inside the JDK or the JDK top-level folder. However, it must be a JDK and not a JRE, as we need other JDK tools for proper packaging, and it must be a recent version of JDK that is bundled with JavaFX (i.e. Java 7 update 6 or later). Here are examples (<fx:platform> is part of the <fx:deploy> task):

        <fx:platform basedir="${java.home}"/>
        <fx:platform basedir="c:\tools\jdk7"/>

    Hint: this feature enables you to use the packaging tools from JDK 7 update 10 (and benefit from the bug fixes and other features described below) to create an application package with the bundled FCS version of JRE 7 update 6.

    Self-Contained Applications: Create Package without Java Runtime

    This may sound a bit puzzling at first glance. A package without an embedded Java Runtime is not really self-contained, and obviously will not help with:

    - Deployment on fresh systems. The JRE needs to be installed separately (and this step will require admin permissions).
    - Possible compatibility issues due to updates of the system runtime.

    However, these packages are much, much smaller in size. If download size matters, and you are confident that users have the recommended system JRE installed, then this may be a good option to consider if you want to improve the user experience for install and launch.

    Technically, this is implemented as an extension of the previous feature. Pass an empty string as the value of the 'basedir' attribute, and this will be treated as a request not to bundle the Java runtime, e.g.:
        <fx:platform basedir=""/>

    Self-Contained Applications: Package non-JavaFX application

    One of the popular questions people ask about self-contained applications: can I package my Java application as a self-contained application? Absolutely. This is true even for the tools shipped with JDK 7 update 6. Simply follow the steps for creating a package for a Swing application with integrated JavaFX content, and they will work even if your application does not use JavaFX. What's wrong with that? Well, there are a few caveats:

    - The bundle size is larger, because JavaFX is bundled whilst it is not really needed.
    - The main application jar needs to be packaged to comply with JavaFX packaging requirements (and this may be non-trivial to achieve in your existing build scripts).
    - The JavaFX application launcher may not work well with the startup logic of your application (for example, the launcher will initialize the networking stack, and this may void custom networking settings in your application code).

    In JDK 7 update 6, <fx:deploy> was updated to accept an arbitrary executable jar as input. The self-contained application package will be created preserving the input jar as-is, i.e. no JavaFX launcher will be embedded. This does not help with the first point above, but resolves the other two. More formally, the following assertions must be true for packaging to succeed:

    - the application can be launched as "java -jar YourApp.jar" from the command line
    - the mainClass attribute of <fx:application> refers to the application main class
    - <fx:resources> lists all resources needed for the application

    To give you an example, let's assume we need to create a bundle for an application consisting of 3 jars:

        dist/javamain.jar
        dist/lib/somelib.jar
        dist/morelibs/anotherlib.jar

    where javamain.jar has a manifest with:

        Main-Class: app.Main
        Class-Path: lib/somelib.jar morelibs/anotherlib.jar

    Here is sample ant code to package it:

        <target name="package-bundle">
            <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                     uri="javafx:com.sun.javafx.tools.ant"
                     classpath="${javafx.tools.ant.jar}"/>
            <fx:deploy nativeBundles="all" width="100" height="100"
                       outdir="native-packages/" outfile="MyJavaApp">
                <info title="Sample project" vendor="Me"
                      description="Test built from Java executable jar"/>
                <fx:application id="myapp" version="1.0" mainClass="app.Main" name="MyJavaApp"/>
                <fx:resources>
                    <fx:fileset dir="dist">
                        <include name="javamain.jar"/>
                        <include name="lib/somelib.jar"/>
                        <include name="morelibs/anotherlib.jar"/>
                    </fx:fileset>
                </fx:resources>
            </fx:deploy>
        </target>

    Option to disable proxy setup in the JavaFX launcher

    Since JavaFX 2.2 (part of JDK 7u6), properly packaged JavaFX applications have proxy settings initialized according to the Java Runtime configuration settings. This is handy for most applications accessing the network, with one exception. If your application explicitly sets networking properties (e.g. socksProxyHost), then they must be set before the networking stack is initialized. Proxy detection will initialize the networking stack, and therefore your custom settings will be ignored.

    One way to disable proxy setup by the embedded JavaFX launcher is to pass "-Djavafx.autoproxy.disable=true" on the command line. This is good for troubleshooting (proxy detection may cause significant startup time increases if the network is misconfigured) but not really user friendly. Now proxy setup will be disabled if the manifest of the main application jar has a "JavaFX-Feature-Proxy" entry with the value "None".
    Here is a simple example of adding this entry using the <fx:jar> task:

        <fx:jar destfile="dist/sampleapp.jar">
            <fx:application refid="myapp"/>
            <fx:resources refid="myresources"/>
            <fileset dir="build/classes"/>
            <manifest>
                <attribute name="JavaFX-Feature-Proxy" value="None"/>
            </manifest>
        </fx:jar>

    Ability to specify codebase for WebStart application

    JavaFX applications do not need to specify a codebase (i.e. the absolute location where the application code will be deployed) for most real-world deployment scenarios. This is convenient, as the application does not need to be modified when it is moved from the development to the deployment environment. However, some developers want to ensure that copies of their application's JNLP file will redirect to the master location. This is where a codebase is needed.

    To avoid the need to edit the JNLP file manually, the <fx:deploy> task now accepts an optional codebase attribute. If the attribute is not specified, the packager will generate the same no-codebase files as before. If a codebase value is explicitly specified, then the generated JNLP files (including the JNLP content embedded into the web page) will use it. Here is an example:

        <fx:deploy width="600" height="400"
                   outdir="Samples" codebase="http://localhost/codebaseTest"
                   outfile="TestApp">
            ....
        </fx:deploy>

    Option to update existing jar file

    JavaFX packaging procedures are optimized for new applications that can use ant or the command-line javafxpackager utility. This may lead to some redundant steps when you add it to your existing build process. One typical situation is that you might already have a build procedure that produces an executable jar file with a custom manifest. To properly package it as a JavaFX executable jar, you would need to unpack it and then use javafxpackager or <fx:jar> to create the jar again (and you need to make sure you pass all the important details from your custom manifest).

    We added an option to pass a jar file as input to javafxpackager and <fx:jar>. This simplifies the integration of the JavaFX packaging tools into an existing build process as a post-processing step. By the way, we are looking for ways to simplify this further. Please share your suggestions!

    On the technical side, this works as follows. Both <fx:jar> and javafxpackager will attempt to update an existing jar file if this is the only input file. The update process will add the JavaFX launcher classes and update the jar manifest with JavaFX attributes; your custom attributes will be preserved. The update can be performed in place, or the result may be saved to a different file. The Main-Class and Class-Path elements (if present) of the manifest of the input jar file will be used for the JavaFX application unless they are explicitly overridden in the packaging command you use, e.g. the mainClass attribute of <fx:application> (or -appclass in the javafxpackager case) overrides an existing Main-Class in the jar manifest. Note that the class specified in the Main-Class attribute can either extend JavaFX Application or provide a static main() method.
    Here are examples of updating a jar file using javafxpackager. Create a new JavaFX executable jar as a copy of a given jar file:

        javafxpackager -createjar -srcdir dist -srcfiles fish_proto.jar -outdir dist -outfile fish.jar

    Update an existing jar file to be a JavaFX executable jar, using test.Fish as the main application class:

        javafxpackager -createjar -srcdir dist -appclass test.Fish -srcfiles fish.jar -outdir dist -outfile fish.jar

    And here is an example of using <fx:jar> to create a new JavaFX executable jar from the existing fish_proto.jar:

        <fx:jar destfile="dist/fish.jar">
            <fileset dir="dist">
                <include name="fish_proto.jar"/>
            </fileset>
        </fx:jar>

    Self-Contained Applications: Specify application icon

    The only way to specify an application icon for a self-contained application using the tools in JDK 7 update 6 is to use drop-in resources. Now this bug is resolved, and you can also specify the icon using the <fx:icon> tag. Here is an example:

        <fx:deploy ...>
            <fx:info>
                <fx:icon href="default.png"/>
            </fx:info>
            ...
        </fx:deploy>

    A few things to keep in mind:

    - Only the default kind of icon is applicable to self-contained applications (as of now).
    - The icon should follow platform-specific rules for sizes and image format (e.g. .ico on Windows and .icns on Mac).

    Self-Contained Applications: Pass parameters on the command line

    JavaFX applications support two types of application parameters: named and unnamed (see the API for Application.Parameters). Static named parameters can be added to the application package using <fx:param>, and unnamed parameters can be added using <fx:argument>. They are applicable to all execution modes, including standalone applications. It is also possible to pass parameters to a JavaFX application from the web page that hosts it, using <fx:htmlParam>. Prior to JavaFX 2.2 this was only supported for embedded applications; starting from JavaFX 2.2, <fx:htmlParam> is applicable to WebStart applications as well. See the JavaFX deployment guide for more details on this.

    However, there was no way to pass dynamic parameters to a self-contained application. This has been improved, and now the native launchers will delegate parameters from the command line to the application code, i.e. to pass a parameter to the application you simply need to run it as "myapp.exe somevalue" and then use getParameters().getUnnamed().get(0) to get "somevalue".


  • Calling a running C# application from VBA

    - by Robert
    I have some VBA code that needs to talk to a running C# application. For what it's worth, the C# application runs as a service and exposes an interface via .NET remoting. I posted a question regarding a specific problem I'm having already (http://stackoverflow.com/questions/2556163/from-vb6-to-net-via-com-and-remoting-what-a-mess), but I think I may have my structure all wrong... so I'm taking a step back: what's the best way to go about doing this? One thing worth taking into account is that I want to call into the running application, not just call a precompiled DLL...
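
    (One common shape for this, sketched under assumptions: the class, interface, and URI below are all invented, and it presumes the service already publishes a well-known remoting object. A small COM-visible bridge is registered with regasm so VBA can CreateObject() it; the bridge then forwards calls to the running service.)

        using System;
        using System.Runtime.InteropServices;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        // Hypothetical shared interface, also implemented by the running service.
        public interface IMyService
        {
            string Ping();
        }

        [ComVisible(true)]
        [ClassInterface(ClassInterfaceType.AutoDual)] // simplest for VBA binding
        public class ServiceBridge
        {
            private readonly IMyService service;

            public ServiceBridge()
            {
                // Connect to the already-running service (URI is an assumption).
                ChannelServices.RegisterChannel(new TcpChannel(), false);
                service = (IMyService)Activator.GetObject(
                    typeof(IMyService), "tcp://localhost:8085/MyService");
            }

            public string Ping()
            {
                return service.Ping();
            }
        }

    After "regasm /codebase MyBridge.dll", VBA would reach it with something like Set bridge = CreateObject("MyBridge.ServiceBridge").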


  • How to Create MySQL Query to Find Related Posts from Multiple Tables?

    - by Robert Samuel White
    This is a complicated situation (for me) that I'm hopeful someone on here can help me with. I've done plenty of searching for a solution and have not been able to locate one. This is essentially my situation (I've trimmed it down, because if someone can help me create this query, I can take it from there):

        TABLE articles           (article_id, article_title)
        TABLE articles_tags      (row_id, article_id, tag_id)
        TABLE article_categories (row_id, article_id, category_id)

    All of the tables have article_id in common. I know what all of the tag_id and category_id rows are. What I want to do is return a list of all the articles that article_tags and article_categories MAY have in common, ordered by the number of common entries. For example:

        article1 - tags: tag1, tag2, tag3 - categories: cat1, cat2
        article2 - tags: tag2             - categories: cat1, cat2
        article3 - tags: tag1, tag3       - categories: cat1

    So if my article had "tag1" and "cat1 and cat2", it should return the articles in this order:

        article1 (tag1, cat1 and cat2 in common)
        article3 (tag1, cat1 in common)
        article2 (cat1 in common)

    Any help would genuinely be appreciated! Thank you!
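
    (A hedged sketch of one way to phrase this, using the schema above: union the tag matches and the category matches, then count per article. The IN lists are placeholders for the current article's tag and category IDs.)

        SELECT a.article_id, a.article_title, COUNT(*) AS common
        FROM (
            SELECT article_id FROM articles_tags      WHERE tag_id      IN (1)     -- e.g. tag1
            UNION ALL
            SELECT article_id FROM article_categories WHERE category_id IN (1, 2)  -- e.g. cat1, cat2
        ) AS matches
        JOIN articles a ON a.article_id = matches.article_id
        GROUP BY a.article_id, a.article_title
        ORDER BY common DESC;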


  • Serializing C# objects to DB

    - by Robert Koritnik
    I'm using a DB table with various different entities. This means that I can't have an arbitrary number of fields in it to save all kinds of different entities. Instead, I want to save just the most important fields (dates, reference IDs acting as foreign keys to various other tables, the most important text fields, etc.) plus an additional text field where I'd like to store more complete object data.

    The most obvious solution would be to use XML strings and store those. The second most obvious choice would be JSON, which is usually shorter and probably also faster to serialize/deserialize. But is it really? My objects also wouldn't need to be strictly serializable, because a JSON serializer is usually able to serialize anything, even anonymous objects, which may as well be used here.

    What would be the most optimal solution to this problem?
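
    (For scale, a minimal sketch of the JSON route using Json.NET; the "wide row" shape and all names here are invented for illustration, and 'invoice'/'Invoice' stand in for any entity type:)

        using System;
        using Newtonsoft.Json;

        public class EntityRow
        {
            public int Id { get; set; }
            public DateTime CreatedOn { get; set; }   // important, queryable column
            public string EntityType { get; set; }    // lets you pick the right type on the way out
            public string Payload { get; set; }       // the rest of the object, as JSON text
        }

        public class Invoice { public decimal Total { get; set; } }

        // Flatten any entity into the text column...
        var invoice = new Invoice { Total = 42m };
        var row = new EntityRow
        {
            CreatedOn = DateTime.UtcNow,
            EntityType = invoice.GetType().Name,
            Payload = JsonConvert.SerializeObject(invoice)
        };

        // ...and rehydrate it later, when the concrete type is known.
        var restored = JsonConvert.DeserializeObject<Invoice>(row.Payload);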


  • Drawing Directed Acyclic Graphs: Minimizing edge crossing?

    - by Robert Fraser
    Laying out the vertices of a DAG in tree form (i.e. vertices with no in-edges on top, vertices dependent only on those on the next level, etc.) is rather simple without graph drawing algorithms such as Efficient Sugiyama. However, is there a simple algorithm that does this while minimizing edge crossings? (For some graphs it may be impossible to completely eliminate edge crossings.) A picture says a thousand words: the original post showed two drawings of the same DAG, one without edge crossings and one with several, asking for an algorithm that would suggest the first layout instead of the second.

    EDIT: As the pictures suggest, a vertex's inputs are always on top and its outputs are always below, which is another barrier to just pasting in an existing layout algorithm.
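
    (For what it's worth, the classic cheap heuristic for this step in Sugiyama-style layered layout is barycenter ordering: sort each layer by the average position of each vertex's neighbours in the layer above, and repeat for a few passes. A minimal sketch, assuming vertices have already been assigned to layers:)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class CrossingReduction
        {
            // Reorders one layer to reduce crossings against the (already ordered) layer above.
            // parents maps each vertex to its neighbours in the layer above;
            // positionAbove maps each vertex in the upper layer to its index there.
            public static List<int> BarycenterOrder(
                List<int> layer,
                Dictionary<int, List<int>> parents,
                Dictionary<int, int> positionAbove)
            {
                return layer
                    .OrderBy(v => parents[v].Count == 0
                        ? 0.0
                        : parents[v].Average(p => (double)positionAbove[p]))
                    .ToList();
            }
        }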


  • Automatic VM deployment

    - by Robert Wilson
    I have an idea to streamline deployments of prototypes within our team using VMs. The idea is that a developer would be able to deploy their artifacts to Maven, then use a web interface to pull them onto a development VM for integration/regression testing. They would then be able to push those artifacts to a reference system, and finally onto production. I'm currently thinking of doing this myself using the vSphere Java API (http://vijava.sourceforge.net/) and some simple scripting to grab artifacts from the Maven repository and configuration from SVN, and then start up a JBoss server. It feels like the kind of thing that may already be available, though. Has anyone heard of something similar?


  • What is a good Very-High level UI framework for JavaScript?

    - by Robert Gould
    I need to write a temporary Web-based graphical front-end for a custom server system. In this case performance and scalability aren't issues, since at most 10 people may check the system simultaneously. Also, it should be PHP or Python (server) and JavaScript (client); I can't use Flex or Silverlight for very specific non-programming-related reasons. I know I could use YUI or jQuery, but I was wondering if there is something even more high-level that would, say, allow me to write such a little project within a few hours of work and be done with it. Basically I want to be as lazy as possible (this is throw-away code anyway) and get the job done in as little time as possible. Any suggestions?


  • How to handle model state errors in ajax-invoked controller action that returns a PartialView

    - by Robert Koritnik
    I have a POST controller action that returns a partial view. Everything seems really easy, but: I load it using $.ajax(), setting type as html. When my model validation fails, I thought I would just throw an error containing the model state errors, but then my reply always returns 500 Server Error. How can I report back the model state errors without returning JSON with some wrapper result? I would still like to return a partial view that I can directly append to some HTML element.

    Edit: I would also like to avoid returning an error partial view, because that would look like a success on the client. Having the client parse the result to see whether it's an actual success is prone to errors: designers may change the partial view output, and that alone would break the functionality. So I want to throw an exception, but with the correct error message returned to the ajax client.
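
    (One hedged way to do this in ASP.NET MVC; a sketch with placeholder action, model, and view names: keep returning the partial on success, but on validation failure set a 4xx status code and write the model state errors into the response body, so that $.ajax() invokes its error callback and the message is available in xhr.responseText.)

        using System.Linq;
        using System.Net;
        using System.Web.Mvc;

        public class CommentsController : Controller
        {
            [HttpPost]
            public ActionResult Add(CommentModel model)   // CommentModel is a placeholder
            {
                if (!ModelState.IsValid)
                {
                    Response.StatusCode = (int)HttpStatusCode.BadRequest; // makes the ajax error handler fire
                    Response.TrySkipIisCustomErrors = true;               // keep IIS from replacing the body

                    string message = string.Join("; ", ModelState.Values
                        .SelectMany(v => v.Errors)
                        .Select(e => e.ErrorMessage)
                        .ToArray());
                    return Content(message, "text/plain");
                }

                return PartialView("_Comment", model);    // view name is a placeholder
            }
        }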


  • What does Indy's HandleRedirect do?

    - by Robert Frank
    I'm having some trouble reading files with Indy from a site that has WordPress installed. It appears that the site is configured to redirect all hits to sitename/com/wordpress. Can I use HandleRedirect to turn that off, so I can read files from the root folder? What is the normal setting for this property? Are there any downsides to using it for this purpose?

    (Edit: it appears that my problem may be caused by Windows caching a file I've accessed before through Indy. I'm using fIDHTTP.Request.CacheControl := 'no-cache'; is that adequate?)


  • Cannot use scp on Mac OS X

    - by Robert
    Hi all, when I try to copy any file with scp on Mac OS X Snow Leopard from another machine, I get this error:

        scp [email protected]:/home/me/file.zip .
        Password:
        ...
        ---> Couldn't open /dev/null: Permission denied

    This is the output of "ls -l /dev/null":

        crw-rw-rw-  1 root  wheel    3,   2 May 14 14:10 /dev/null

    I am in the group wheel, and even if I do "sudo scp ..." it doesn't work. It's driving me crazy. Do you have any suggestions? Thanx!


  • Enterprise Library Validation Block - Should validation be placed on class or interface?

    - by Robert MacLean
    I am not sure of the best place to put validation (using the Enterprise Library Validation Block): should it be on the class or on the interface?

    Things that may affect it:

    - Validation rules would not be changed in classes which inherit from the interface.
    - Validation rules would not be changed in classes which inherit from the class.
    - Inheritance will occur from the class in most cases; I suspect some fringe cases will inherit from the interface (but I would try to avoid that).
    - The interface's main use is for DI, which will be done with the Unity block.


  • INSERT 0..n records into table 'A' based on content of table 'B' in MySql 5

    - by Robert Gowland
    Using MySQL 5, I have a task where I need to update one table based on the contents of another table. For example, I need to add 'A1' to table 'A' if table 'B' contains 'B1'; I need to add 'A2a' and 'A2b' to table 'A' if table 'B' contains 'B2'; etc. In our case, the value in table 'B' we're interested in is an enum. Right now I have a stored procedure containing a series of statements like:

        INSERT INTO A
        SELECT 'A1' FROM B WHERE B.Value = 'B1';
        -- repeat for 'B2' -> 'A2a'; 'B2' -> 'A2b'; 'B3' -> 'A3', etc.

    Is there a nicer, more DRY way of accomplishing this?

    Edit: There may be values in table 'B' that have no equivalent value for table 'A'.
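
    (A hedged sketch of a table-driven alternative, with an invented mapping table: store the B-to-A correspondence as rows, so a single joined INSERT replaces the repeated statements. B values with no mapping simply produce no rows, which also covers the edit above.)

        CREATE TABLE AB_Map (BValue VARCHAR(16), AValue VARCHAR(16));

        INSERT INTO AB_Map VALUES
            ('B1', 'A1'),
            ('B2', 'A2a'),
            ('B2', 'A2b'),
            ('B3', 'A3');

        INSERT INTO A
        SELECT m.AValue
        FROM B
        JOIN AB_Map m ON m.BValue = B.Value;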


  • LDOMs in Solaris 11 don't work

    - by Giovanni
    I just got a new Sun Fire T2000, installed Solaris 11, and was going to configure LDOMs. However, ldmd can't be started, and in turn ldm doesn't work:

        root@solaris11:~# ldm
        Failed to connect to logical domain manager: Connection refused
        root@solaris11:~# svcadm enable svc:/ldoms/ldmd:default
        root@solaris11:~# tail /var/svc/log/ldoms-ldmd\:default.log
        [ May 28 12:56:22 Enabled. ]
        [ May 28 12:56:22 Executing start method ("/opt/SUNWldm/bin/ldmd_start"). ]
        Disabling service because this domain is not a control domain
        [ May 28 12:56:22 Method "start" exited with status 0. ]
        [ May 28 12:56:22 Stopping because service disabled. ]
        [ May 28 12:56:22 Executing stop method (:kill). ]

    Why is this not a control domain? There is no other domain on the box (as far as I can tell). I have upgraded the firmware to the latest 6.7.12 and booted with reset_nvram; nothing helped.

        sc> showhost
        Sun-Fire-T2000 System Firmware 6.7.12  2011/07/06 20:03

        Host flash versions:
           OBP 4.30.4.d 2011/07/06 14:29
           Hypervisor 1.7.3.c 2010/07/09 15:14
           POST 4.30.4.b 2010/07/09 14:24

    What else should I do? Thanks!


  • OpenVPN - Ubuntu 10.04 - Client Can't Connect to Server - Linux Route Add Command Failed

    - by nicorellius
    I suppose this could be asked on Server Fault as well, but it is specific to the client, so I thought I'd start here. I have keys for access to an OpenVPN server already in place. I have used these keys to connect already, but using a Windows XP machine. On an Ubuntu machine, I installed OpenVPN and then configured the client.conf file so that I could run:

        sudo openvpn --config client.conf

    It seems correct, but I still can't connect, and I get these errors and lines of output:

        Mon May 31 14:34:57 2010 ERROR: Linux route add command failed: external program exited with error status: 7
        Mon May 31 14:34:57 2010 /sbin/route add -net 10.8.0.1 netmask 255.255.255.255 gw 10.8.0.17
        SIOCADDRT: File exists
        Mon May 31 14:34:57 2010 ERROR: Linux route add command failed: external program exited with error status: 7
        Mon May 31 14:34:57 2010 Initialization Sequence Completed

    I searched the net for forums and ideas and tried some file moving and renaming, but still ended up in the same place.


  • Java Spotlight Episode 77: Donald Smith on the OpenJDK and Java

    - by Roger Brinkley
    Tweet An interview with Donald Smith about Java and OpenJDK. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News Jersey 2.0 Milestone 2 available Oracle distribution of Eclipse (OEPE) now supports GlassFish 3.1.2 Oracle Linux 6 is now part of the certification matrix for 3.1.2 3rd part of Spring -> Java EE 6 article series published Joe Darcy - Repeating annotations in the works JEP 152: Crypto Operations with Network HSMs JEP 153: Launch JavaFX Applications OpenJDK bug database: Status update OpenJDK Governing Board 2012 Election: Results jtreg update March 2012 Take Two: Comparing JVMs on ARM/Linux The OpenJDK group at Oracle is growing App bundler project now open Events April 4-5, JavaOne Japan, Tokyo, Japan April 11, Cleveland JUG, Cleveland, OH April 12, GreenJUG, Greenville, SC April 17-18, JavaOne Russia, Moscow Russia April 18–20, Devoxx France, Paris, France April 17-20, GIDS, Bangalore April 21, Java Summit, Chennai April 26, Mix-IT, Lyon, France, May 3-4, JavaOne India, Hyderabad, India May 5, Bangalore, Pune, ?? - JUG outreach May 7, OTN Developer Day, Mumbai May 8, OTN Developer Day, Delhi Feature InterviewDonald Smith, MBA, MSc, is Director of Product Management for Oracle. He brings worldwide enterprise software experience, ranging from small "dot-com" through Fortune 500 companies. Donald speaks regularly about Java, open source, community development, business models, business integration and software development politics at conferences and events worldwide including Java One, Oracle World, Sun Tech Days, Evans Developer Relations Conference, OOPSLA, JAOO, Server Side Symposium, Colorado Software Summit and others. Prior to returning to Oracle, Donald was Director of Ecosystem Development for the Eclipse Foundation, an independent not-for-profit foundation supporting the Eclipse open source community. Mail Bag What’s Cool OpenJDK 7 port to Haiku JEP 154: Remove Serialization Goto for the Java Programming Language


  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance.

    Probably every one of us has used an enumerated type at one time or another in a C# program. The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types "inherit" from) that can give you even more power when dealing with them.

    IsDefined() - check if a given value exists in the enum

    Are you reading a value for an enum from a data source, but are unsure whether it is actually a valid value or not? Casting won't tell you this, and Parse() isn't guaranteed to balk either if you give it an int or a combination of flags. So what can we do?

    Let's assume we have a small enum like this for result codes we want to return back from our business logic layer:

        public enum ResultCode
        {
            Success,
            Warning,
            Error
        }

    In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two.

    So what happens if we have code like this, where perhaps we're getting the result code from another data source (could be a database, could be a web service, etc.)?

        public ResultCode PerformAction()
        {
            // set up and call some method that returns an int.
            int result = ResultCodeFromDataSource();

            // this will succeed even if result is < 0 or > 2.
            return (ResultCode) result;
        }

    So what happens if result is -1 or 4? Well, the cast does not fail, so what we end up with is an instance of ResultCode whose value is outside the bounds of the enum constants we defined. This means that if you had a block of code like:

        switch (result)
        {
            case ResultCode.Success:
                // do success stuff
                break;

            case ResultCode.Warning:
                // do warning stuff
                break;

            case ResultCode.Error:
                // do error stuff
                break;
        }

    you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way).

    So what can you do? Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined:

        public ResultCode PerformAction()
        {
            int result = ResultCodeFromDataSource();

            if (!Enum.IsDefined(typeof(ResultCode), result))
            {
                throw new InvalidOperationException("Enum out of range.");
            }

            return (ResultCode) result;
        }

    In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values to get past these methods that may not be defined.

    If you don't like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead, which lets you call IsDefined() off any instance of the enum:

        public static class EnumExtensions
        {
            // helper method that tells you if an enum value is defined for its enumeration
            public static bool IsDefined(this Enum value)
            {
                return Enum.IsDefined(value.GetType(), value);
            }
        }

    HasFlag() - an easier way to see if a bit (or bits) are set

    Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.
    As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit-flag enum values may be obvious to an advanced developer but cryptic to a novice developer.

    For example, let's say you have an enum for a messaging platform that contains bit flags:

        // usually, we pluralize flags enum type names
        [Flags]
        public enum MessagingOptions
        {
            None = 0,
            Buffered = 0x01,
            Persistent = 0x02,
            Durable = 0x04,
            Broadcast = 0x08
        }

    We can combine these bit flags using the bitwise OR operator (the '|' pipe character):

        // combine bit flags using |
        var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast);

    Now, if we wanted to check the flags, we'd have to test them using the bitwise AND operator (the '&' character):

        if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered)
        {
            // do code to set up buffering...
            // ...
        }

    While the '|' for combining flags is easy enough to read for advanced developers, the '&' test tends to be easy for novice developers to get wrong. First of all, you have to AND the flag combination with the value, and then you should typically test against the flag combination itself (and not just for a non-zero value)! This is because the flag combination you are testing with may combine multiple bits, in which case, if only one bit is set, the result will be non-zero but will not necessarily have all the desired bits!

    Thank goodness in .NET 4.0 they gave us the HasFlag() method. This method can be called on an enum instance to test whether a flag is set, and best of all, you can avoid writing the bitwise logic yourself. Not to mention it will be more readable to a novice developer as well:

        if (options.HasFlag(MessagingOptions.Buffered))
        {
            // do code to set up buffering...
            // ...
        }

    It is much more concise and unambiguous, thus increasing your maintainability and readability.

    It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don't allow you to specialize on Enum, which makes it a bit more difficult. It can be done, but you have to do some conversions to numeric and then back to the enum, which makes it less of a payoff than having the HasFlag() method. But if you want to create it for symmetry, it would look something like this:

        public static T SetFlag<T>(this Enum value, T flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }

            // yes this is ugly, but unfortunately we need to use an intermediate boxing cast
            return (T)Enum.ToObject(typeof(T), Convert.ToUInt64(value) | Convert.ToUInt64(flags));
        }

    Note that since enum types are value types, we need to assign the result to something (much like string.Trim()). Also, you could chain several SetFlag() operations together, or create one that takes a variable argument list if desired.

    Parse() and ToString() - transitioning from string to enum and back

    Sometimes you may want to be able to parse an enum from a string or convert it to a string, and Enum has methods built in to let you do this. Now, many may already know this, but may not appreciate how much power is in these two methods.
    For example, if you want to parse a string as an enum, it's easy and works just like you'd expect from the numeric types:

        string optionsString = "Persistent";

        // can use Enum.Parse, which throws if it finds something it doesn't like...
        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result == MessagingOptions.Persistent)
        {
            Console.WriteLine("It worked!");
        }

    Note that Enum.Parse() will throw if it finds a value it doesn't like. But the values it likes are fairly flexible! You can pass in a single value, or a comma-separated list of values for flags, and it will parse them all and set all the bits:

        // for string values, can have one, or comma separated.
        string optionsString = "Persistent, Buffered";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked!");
        }

    Or you can parse a string containing a number that represents a single value or combination of values to set:

        // 3 is the combination of Buffered (0x01) and Persistent (0x02)
        var optionsString = "3";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked again!");
        }

    And if you really aren't sure the parse will work, and don't want to handle an exception, you can use TryParse() instead:

        string optionsString = "Persistent, Buffered";
        MessagingOptions result;

        // TryParse returns true if successful, and takes an out parameter for the result
        if (Enum.TryParse(optionsString, out result))
        {
            if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
            {
                Console.WriteLine("It worked!");
            }
        }

    So we covered parsing a string to an enum; what about reversing that and converting an enum to a string? The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?

        MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent;

        // general format, which is the default
        Console.WriteLine("Default    : " + value);
        Console.WriteLine("G (default): " + value.ToString("G"));

        // Flags format, even if the type does not have the Flags attribute
        Console.WriteLine("F (flags)  : " + value.ToString("F"));

        // integer format, value as number
        Console.WriteLine("D (num)    : " + value.ToString("D"));

        // hex format, value as hex
        Console.WriteLine("X (hex)    : " + value.ToString("X"));

    Which displays:

        Default    : Buffered, Persistent
        G (default): Buffered, Persistent
        F (flags)  : Buffered, Persistent
        D (num)    : 3
        X (hex)    : 00000003

    Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the F option treats the enum as if it were flags even if the [Flags] attribute is not present. Let's take a non-flags enum like the ResultCode used earlier:

        // yes, we can do this even if it is not a [Flags] enum.
        ResultCode value = ResultCode.Warning | ResultCode.Error;

    And if we run that through the same formats again, we get:

        Default    : 3
        G (default): 3
        F (flags)  : Warning, Error
        D (num)    : 3
        X (hex)    : 00000003

    Notice that since we had multiple values combined, but it was not a [Flags]-marked enum, the G and default formats gave us a number instead of a value name. This is because the value was not a valid single-value constant of the enum. However, using the F flags format string, it broke the value out into its component flags even though it wasn't marked [Flags].

    So, if you want an enum to display appropriately for whether or not it has the [Flags] attribute, use G, which is the default. If you always want it to attempt to break down the flags, use F. For numeric output, obviously D or X is the best choice, depending on whether you want decimal or hex.

    Summary

    Hopefully you learned a couple of new tricks for using the Enum class today! I'll add more Little Wonders as I think of them, and thanks for all the invaluable input!

    Technorati Tags: C#, .NET, Little Wonders, Enum, BlackRabbitCoder


  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software. But make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data? Something that regenerates targets not only on mod times but on certain other properties (e.g. completeness). (Or alternatively, some paper that describes such a tool.)

    As an illustration, I'd like to automate the following process:

    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not there already (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there already, e.g. from a previous attempt; custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on the existence of the file and file "freshness")
    - finally compute something (e.g. a word count for something identifiable and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:

    - there are different stages
    - each stage is usually simple to compute or implement in isolation
    - each stage may be simple, but the data volume may be large
    - each stage may produce a few errors
    - each stage may have different signals for when (re)processing is needed

    Requirements:

    - builds should be interruptible and idempotent (== robust)
    - when interrupted, already-processed objects should be reused to speed up the next run
    - data paths should be easy to adjust (simple syntax, nothing new to learn; an internal DSL would be OK)
    - some form of dependency graph that describes the process would be nice for later visualizations
    - should leverage existing programs, if possible

    I've done some research on make alternatives like rake, and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are an OK glue for a variety of other programs written in other languages), so I wonder: is worse better here?
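
    (As a sketch of the core decision such a tool would need to make; this is not an existing tool, just an illustration in code. A target is rebuilt when it is missing, older than any of its inputs, or fails a caller-supplied completeness predicate; that predicate is exactly the part make's timestamp rule cannot express.)

        using System;
        using System.IO;
        using System.Linq;

        static class DataBuild
        {
            // Returns true when the target must be regenerated.
            public static bool NeedsRebuild(string target, string[] inputs, Func<string, bool> isComplete)
            {
                if (!File.Exists(target))
                    return true;                                          // never built

                DateTime built = File.GetLastWriteTimeUtc(target);
                if (inputs.Any(input => File.GetLastWriteTimeUtc(input) > built))
                    return true;                                          // stale: make's classic rule

                return !isComplete(target);                               // the extra, data-specific signal
            }
        }

        // Example: rebuild the word-count CSV if the dump is newer or the CSV is empty.
        // DataBuild.NeedsRebuild("out/wordcounts.csv", new[] { "data/dump.tar.gz" },
        //                        path => new FileInfo(path).Length > 0);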

