Search Results

Search found 9035 results on 362 pages for 'common misunderstandings'.


  • In C/C++ mode in Emacs, change face of code in #if 0...#endif block to comment face

    - by pogopop77
    I'm trying to add functionality found in some other code editors to my Emacs configuration, whereby C/C++ code within #if 0...#endif blocks is automatically set to the comment face/font. Based on my testing, cpp-highlight-mode does something like what I want, but requires user action. It seems like tying into the font-lock functionality is the correct option to make the behavior automatic. I have successfully followed examples in the GNU documentation to change the face of single-line regular expressions. For example: (add-hook 'c-mode-common-hook (lambda () (font-lock-add-keywords nil '(("\\<\\(FIXME\\|TODO\\|HACK\\|fixme\\|todo\\|hack\\)" 1 font-lock-warning-face t))))) works fine to highlight debug related keywords anywhere in a file. However, I am having problems matching #if 0...#endif as a multiline regular expression. I found some useful information in this post (How to compose region like ""), that suggested that Emacs must be told specifically to allow for multiline matches. But this code: (add-hook 'c-mode-common-hook (lambda () '(progn (setq font-lock-multiline t) (font-lock-add-keywords nil '(("#if 0\\(.\\|\n\\)*?#endif" 1 font-lock-comment-face t)))))) still does not work for me. Perhaps my regular expression is wrong (though it appears to work using M-x re-builder), I've messed up my syntax, or I'm following the wrong approach entirely. I'm using Aquamacs 2.1 (which is based on GNU Emacs 23.2.50.1) on OS X 10.6.5, if that makes a difference. Any assistance would be appreciated!

    Read the article

  • Map Literals to Object Properties/Values

    - by vijay
    For example, my input XML looks like this: <root> <myelemt> <input type="variable"> <variable>STARTDATE</variable> <variable>CUSTOMERNAME</variable> </input> </myelemt> </root> It is deserialized and loaded into the object MyXmlElemtObj. In my code I have written this: if(MyXmlElemtObj.input.variable.ToUpper() == "STARTDATE") ProcessObjectB(ObjectA.OrderDate); if(MyXmlElemtObj.input.variable.ToUpper() == "CUSTOMERNAME") ProcessObjectB(ObjectC.UserName); Here I am mapping those input literals to some object's value. The one thing that scares me is seeing hard-coded literals all over my code. Instead I would like to write something like ProcessObjectB(Common.GetMappedvalue(MyXmlElemtObj.input.variable)); Is there a way to isolate this mapping into a common class, where I predefine which literal is mapped to which value? The problem is that the values belong to objects created at run time. How do I achieve this? I think I have given all the necessary details; if anything is missing, please mention it. Thanks much.
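    A hedged sketch of the kind of common mapping class being asked about (the LiteralMap name, its methods, and the delegate-based lookup are illustrative assumptions, not an existing API):

        using System;
        using System.Collections.Generic;

        // Hedged sketch: map each literal to a delegate so the value can come
        // from objects that only exist at run time.
        public static class LiteralMap
        {
            private static readonly Dictionary<string, Func<object>> map =
                new Dictionary<string, Func<object>>(StringComparer.OrdinalIgnoreCase);

            // Register once, e.g. at startup, against whatever runtime objects exist.
            public static void Register(string literal, Func<object> valueProvider)
            {
                map[literal] = valueProvider;
            }

            // Look up a literal and resolve its current value, or null if unmapped.
            public static object GetMappedValue(string literal)
            {
                Func<object> provider;
                return map.TryGetValue(literal, out provider) ? provider() : null;
            }
        }

    With something along these lines registered once at startup (for example LiteralMap.Register("STARTDATE", () => ObjectA.OrderDate);), the dispatch above collapses to ProcessObjectB(LiteralMap.GetMappedValue(MyXmlElemtObj.input.variable)); while the run-time objects stay behind the delegates.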

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class. I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. e.g.: enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE }; enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE }; class Base { //... virtual AVal getAVal() const = 0; virtual BVal getBVal() const = 0; //... }; class One : public Base { //... AVal getAVal() const { return A_VAL_ONE; } BVal getBVal() const { return B_VAL_ONE; } //... }; class Two : public Base { //... AVal getAVal() const { return A_VAL_TWO; } BVal getBVal() const { return B_VAL_TWO; } //... }; etc. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.: struct Vals { AVal a_val; BVal b_val; }; storing a Vals* in each instance, and rewriting Base as follows? class Base { //... public: AVal getAVal() const { return _vals->a_val; } BVal getBVal() const { return _vals->b_val; } //... private: Vals* _vals; }; Is the extra dereference essentially the same as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.

    Read the article

  • Reference - What does this error mean in PHP?

    - by hakre
    On Stack Overflow you can see a lot of questions popping up about errors. Some users do not even know that error messages exist; others are asking about code that gives an error message but do not understand the message. If the error message is common, many questions about the same kind of error appear, but it is hard to find existing Q&A about the topic. Please add "your favorite" error message, one per answer, with a short description of what it means (even if it is only highlighting terms to their manual page) and a listing of existing Q&A that are of value. This will create a list. The question is a community wiki, so you are not answering for reputation but to create a reference list for new users. It's based on error messages. Compare with the existing Reference - What does this symbol mean in PHP? question, which works pretty well. What are common errors in PHP, and what are their solutions? Index of errors (just starting, but there are already some): Warning: Cannot modify header information - headers already sent; Warning: mysql_fetch_array() expects parameter 1 to be resource, boolean given in ... on line; Parse error: syntax error, unexpected T_XXX in YYY on line ZZZ; Fatal error: Call to a member function ... on a non-object

    Read the article

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action. It is annotated @ParentPackage('default'); that package is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". The class also declares @Result so that it responds with the JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configurations needed for the convention-plugin: <constant name="struts.action.extension" value=","/> When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in the Index's @Result section get returned. However, if we request /my_context/, we receive the following error: HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context]. We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows: <constant name="struts.convention.package.locators.basePackage" value="com.example"/> <constant name="struts.convention.package.locators" value="action"/> Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large number of directories in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having it redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance. EDIT: JIRA issue registered, problem solved (from Struts 2.3.12 up).

    Read the article

  • How do I copy files into an existing JAR file with Ant?

    - by Blue
    I have a project that needs to access resources within its own JAR file. When I create the JAR file for the project, I would like to copy a directory into that JAR file (I guess the ZIP equivalent would be "adding" the directory to the existing ZIP file). I only want the copy to happen after the JAR has been created (and I obviously don't want the copy to happen if I clean and delete the JAR file). Currently the build file looks like this: <?xml version="1.0" encoding="UTF-8"?> <project name="foobar" basedir=".." default="jar"> <!-- project-specific properties --> <property name="project.path" value="my/project/dir/foobar" /> <patternset id="project.include"> <include name="${project.path}/**" /> </patternset> <patternset id="project.jar.include"> <include name="${project.path}/**" /> </patternset> <import file="common-tasks.xml" /> <property name="jar.file" location="${test.dir}/foobar.jar" /> <property name="manifest.file" location="misc/foobar.manifest" /> </project> Some of the build tasks are called from another file (common-tasks.xml), which I can't display here. Thanks.

    Read the article

  • Source code dependency manager for C++

    - by 7vies
    There are already some questions about dependency managers here, but it seems to me that they are mostly about build systems, while I am looking for something targeted purely at making dependency tracking and resolution simpler (and I'm not necessarily interested in learning a new build system). So, typically we have a project and some common code with another project. This common code is organized as a library, so when I want to get the latest code version for a project, I should also go get all the libraries from the source control. To do this, I need a list of dependencies. Then, to build the project I can reuse this list too. I've looked at Maven and Ivy, but I'm not sure if they would be appropriate for C++, as they look quite heavily java-targeted (even though there might be plugins for C++, I haven't found people recommending them). I see it as a GUI tool producing some standardized dependency list which can then be parsed by different scripts etc. It would be nice if it could integrate with source control (tag, get a tagged version with dependencies etc), but that's optional. Would you have any suggestions? Maybe I'm just missing something, and usually it's done some other way with no need for such a tool? Thanks.

    Read the article

  • UTF-8 support in java application

    - by jacekn
    I'm having trouble with UTF-8. common.jsp: <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> typical.jsp: <%@ include file="common.jsp" %> Page head: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> Form: <form id="screenObject" accept-charset="UTF-8" action="/SiteAdmin/articleHeaderEdit?articleId=15" method="post"> I enter non-Latin-1 characters into a text field and click Save. The validator complains about another field and stops the submission. This never gets to the database, so the database's ability to handle UTF-8 is not in the picture. The page redisplays with the appropriate error, but the text that had been entered is all messed up: all non-Latin-1 characters are converted to gibberish. I'm using Spring 3 MVC, in case that matters... Attempts: Adding this to my view resolver didn't help: <property name="contentType" value="text/html;charset=UTF-8" />

    Read the article

  • Why would 1,000 subforms in a db be a bad idea?

    - by KlaymenDK
    Warm-up I'm trying to come up with a good way to implement customized document forms. It's for a tool to request access to applications; each application will want to ask its own specific questions. The thing is, we have one kind of (common) user who needs to fill in and submit documents based on templates, and another kind of (super) user who needs to be able to define what each template needs to contain. One implementation option would be to use a form (with the basic mandatory stuff), and have that form dynamically include a subform appropriate to the specific task at hand. The gist of the matter is that we could (=will!) quite easily end up having many hundreds of different subforms! (NB. These subforms will be maintained in an automated manner, but that is another topic that may be considered outside the scope of this Question.) Question It's common knowledge that having a lot of views in a Notes database is a Bad Thing. But has anyone tried pushing the number of forms or subforms, and gained any experience regarding performance?

    Read the article

  • Deserialization of a DataSet... deal with column name changes? how to migrate data from one column to another?

    - by Brian Kennedy
    So, we wanted to slightly generalize a couple columns in our typed dataset... basically dropped a foreign key constraint and then wanted to change a couple column names to better reflect their new state. All that is easy. The problem is that our users may have serialized out the old version of the DataSet as XML. We want to be able to read those old XML files and deserialize them into the revised DataSet. It seems that would be a fairly common desire... but I haven't yet figured out the right thing to search the internet for. One possible solution would seem to be some way to give a DataColumn an alias or alternate name such that when it reads the old column name, it knows that data can be read into the column with the new column name. I can find no support for any such thing. Another approach would seem to be an "after deserialization" method of some sort... so, I would let it read in the old column values into a normal DataColumn with that name, and then in the "after deserialization" method I would just move the data from the obsolete column into the new column, and then delete the old columns. That would seem to generalize to many other situations... and having such events or hooks is pretty common in ADO.NET. But I have looked for such a hook and haven't yet found it. If no "after deserialization" hook, it would seem I ought to be able to override ReadXml or ReadXmlSerializable methods to call the base and then do my "after" stuff to fix up old data into new. But it does not appear that is possible. Soooo, I have to think backward compatibility with old serialized DataSets and simple data migration would be a well-solved problem... so, trying to reinvent that wheel seems silly. But so far, I haven't seemed to find any documentation on doing those things. Suggestions? What is best practice for this?
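    A hedged sketch of the manual "read then migrate" approach hinted at above: load the old XML with ReadXml into a table that temporarily carries the obsolete column, then copy the values across. The table and column names ("Defect", "OldColumn", "NewColumn") are placeholders, not the poster's actual schema:

        using System.Data;

        public static class DataSetMigration
        {
            // Hedged sketch: read an old-format XML file into the current DataSet
            // layout, then move values from the obsolete column into its successor.
            public static void LoadWithMigration(DataSet ds, string path)
            {
                // Temporarily add the old column so ReadXml has somewhere to put the data.
                DataTable table = ds.Tables["Defect"];            // assumed table name
                if (!table.Columns.Contains("OldColumn"))         // assumed old column name
                    table.Columns.Add("OldColumn", typeof(string));

                ds.ReadXml(path);

                // Migrate: copy each old value into the renamed column if it is empty.
                foreach (DataRow row in table.Rows)
                {
                    if (!row.IsNull("OldColumn") && row.IsNull("NewColumn")) // assumed new name
                        row["NewColumn"] = row["OldColumn"];
                }

                table.Columns.Remove("OldColumn");
                ds.AcceptChanges();
            }
        }

    The same idea can wrap whatever load path the application already uses; the point is simply that the migration runs immediately after deserialization rather than relying on a built-in "after deserialization" hook.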

    Read the article

  • Multiple Foreign keys to a single table and single key pointing to more than one table

    - by user1216775
    I need some suggestions from the database design experts here. I have around six foreign keys in a single table (defect) which all point to the primary key in the user table. It looks like: defect (.....,assigned_to,created_by,updated_by,closed_by...) If I want to get information about the defect, I have to make six joins. Is there a better way to do it? The second issue: I have a states table which can store one of a user-defined set of values. I have a defect table and a task table, and I want both of these tables to share the common state table (New, In Progress, etc.). So I created: task (.....,state_id,type_id,.....) defect(.....,state_id,type_id,...) state(state_id,state_name,...) importance(imp_id,imp_name,...) There are many such common attributes along with state, like importance (normal, urgent, etc.) and priority, and for all of them I want to use the same table. I am keeping a flag in each of the tables to differentiate task and defect. What is the best solution in such a case? If somebody is using this application in the health domain, they may want to assign different types, states, and importances for their defects or tasks. Moreover, when a user selects any project I want to display all the types, states, etc. under a configuration parameters section.

    Read the article

  • Best way to store this data?

    - by Malfist
    I have just been assigned to renovate an old website, and I get to move it from an archaic system to Drupal. The only problem is that it's a real-estate system and a lot of data is stored. Currently all the information is stored in a single table: an id represents the house, and everything else is key/value pairs. There are a possible 243 keys per estate, and there are 23,840 estates in the system. As you can imagine, the system is slow and difficult to query. I don't think a table with 243 columns would be a very good idea, and it would probably be worse than the current situation. I've done some investigating and here's what I've found out: Missing data does not indicate a 0 value; data is merged from two unique sources/formats, and some guessing is involved. I have no control over the source of the data. There are 4 keys common to all estates; all their values look like something that is commonly searched for and could be indexed. There are 10 keys in the [90-100)% range; 8 of these are information like who's selling it and its address, and the other two seem to belong with the range below. There are 80 keys in the [80-90)% range; this range mostly lists room types and how many the house has (e.g. bedrooms_possible, bathrooms, family_room_3rd, etc.), and it also includes some minor information like school districts and one or two more pieces of data on the address. The 179 keys in the [0-80)% range include all sorts of miscellaneous information about the estate. My best idea was a hybrid approach: create a table that stores important, common information and keep a smaller key/value table. How would you store this information?

    Read the article

  • Conceptual data modeling: Is RDF the right tool? Other solutions?

    - by paprika
    I'm planning a system that combines various data sources and lets users do simple queries on these. A part of the system needs to act as an abstraction layer that knows all connected data sources: the user shouldn't [need to] know about the underlying data "providers". A data provider could be anything: a relational DBMS, a bug tracking system, ..., a weather station. They are hooked up to the query system through a common API that defines how to "offer" data. The type of queries a certain data provider understands is given by its "offer" (e.g. I know these entities, I can give you aggregates of type X for relationship Y, ...). My concern right now is the unification of the data: the various data providers need to agree on a common vocabulary (e.g. the name of the entity "customer" could vary across different systems). Thus, defining a high level representation of the entities and their relationships is required. So far I have the following requirements: I need to be able to define objects and their properties/attributes. Further, arbitrary relations between these objects need to be represented: a verb that defines the nature of the relation (e.g. "knows"), the multiplicity (e.g. 1:n) and the direction/navigability of the relation. It occurs to me that RDF is a viable option, but is it "the right tool" for this job? What other solutions/frameworks do exist for semantic data modeling that have a machine readable representation and why are they better suited for this task? I'm grateful for every opinion and pointer to helpful resources.

    Read the article

  • Ruby On Rails -

    - by Adam S
    I am trying to create a collection_select that is dependent on another collection_select, following Railscasts episode #88. The database includes a schedule of all cruise ships' arrival dates. See the attached schema diagram (the arrow side of the lines indicates has_many, the non-pointy side indicates belongs_to). In the "booking/new" view I have a collection_select for choosing a cruiseline, then another collection_select appears for selecting a ship, then another for selecting the date that ship is in. If I ONLY put in a collection_select for shipschedule it works fine (because of the direct association with the bookings model). However, if I try to add a collection_select for cruiselines, it breaks (I'm assuming because there isn't a direct association). The collection_select for cruiselines returns an undefined method error for "cruiseline_id". If I simply use "id" the collection_select works, but of course it isn't fully functional due to the incorrect naming of the form field. I have tried "has_many :shipschedules, :through => :cruiseships" in the cruiseline model and "has_many :bookings, :through => :shipschedules" in the cruiseships model. As you can see in the diagram, I need to access the cruiseships and cruiselines models through the bookings model. I have the models set up so a cruiseline has_many cruiseships, a cruiseship has_many shipschedules, and a shipschedule has_many bookings. But the bookings model can't directly access the cruiseline model. How do I accomplish this? THANKS!!! http://www.adamstockland.com/common/images/RoRf/PastedGraphic-2.png http://www.adamstockland.com/common/images/RoRf/PastedGraphic-1.png

    Read the article

  • Async run for javascript by using listeners

    - by CharlieShi
    I have two functions, Function3 and Function4. Function3 sends a request to the server side to get JSON data using ajax, which takes about 3 seconds to complete. Function4 is a common function which should wait for Function3's result and then act. My code is below: function ajaxRequest(container) { $.ajax({ url: "Home/GetResult", type: "post", success: function (data) { container.append(data.message); } }); } var eventable = { on: function (event, cb) { $(this).on(event, cb); }, trigger: function (event) { $(this).trigger(event); } } var Function3 = { run: function () { var self = this; setTimeout(function () { ajaxRequest($(".container1")); self.trigger('done'); }, 500); } } var Function4 = { run: function () { var self = this; setTimeout(function () { $(".container1").append("Function4 complete"); self.trigger('done'); },500); } } $.extend(Function3, eventable); $.extend(Function4, eventable); Function3.on('done', function (event) { Function4.run(); }); Function4.on('done', function () { $(".container1").append("All done"); }); Function3.run(); But now the problem is that when I run the code, it always shows the result as: "Function4 complete" appears first, then "All done" follows, and 3 seconds later "Function3 complete" appears. That's not what I expected; I expect "Function3 complete" to come first, "Function4 complete" second, and "All done" last. Can anyone help me with this? Thanks in advance. EDIT: I have included all the functions above now. Also, you can check the JS in this JSFiddle: http://jsfiddle.net/sporto/FYBjc/light/ I have replaced the function in the JSFiddle from a simple array push to an ajax request.

    Read the article

  • Quick guide to Oracle IRM 11g: Configuring SSL

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index So far in this guide we have an IRM Server up and running, however I skipped over SSL configuration in the previous article because I wanted to focus in more detail now. You can, if you wish, not bother with setting up SSL, but considering this is a security technology it is worthwhile doing. Contents Setting up a one way, self signed SSL certificate in WebLogic Setting up an official SSL certificate in Apache 2.x Configuring Apache to proxy traffic to the IRM server There are two common scenarios in which an Oracle IRM server is configured. For a development or evaluation system, people usually communicate directly to the WebLogic Server running the IRM service. However in a production environment and for some proof of concept evaluations that require a setup reflecting a production system, the traffic to the IRM server travels via a web server proxy, commonly Apache. In this guide we are building an Oracle Enterprise Linux based IRM service and this article will go over the configuration of SSL in WebLogic and also in Apache. Like in the past articles, we are going to use two host names in the configuration below,irm.company.com will refer to the public Apache server irm.company.internal will refer to the internal WebLogic IRM server Setting up a one way, self signed SSL certificate in WebLogic First lets look at creating just a simple self signed SSL certificate to be used in WebLogic. This is a quick and easy way to get SSL working in your environment, however the downside is that no browsers are going to trust this certificate you create and you'll need to manually install the certificate onto any machine's communicating with the server. This is fine for development or when you have only a few users evaluating the system, but for any significant use it's usually better to have a fully trusted certificate in use and I explain that in the next section. But for now lets go through creating, installing and testing a self signed certificate. We use a library in Java to create the certificates, open a console and running the following commands. Note you should choose your own secure passwords whenever you see password below. [oracle@irm /] source /oracle/middleware/wlserver_10.3/server/bin/setWLSEnv.sh [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irm /] java utils.CertGen -selfsigned -certfile MyOwnSelfCA.cer -keyfile MyOwnSelfKey.key -keyfilepass password -cn "irm.oracle.demo" [oracle@irm /] java utils.ImportPrivateKey -keystore MyOwnIdentityStore.jks -storepass password -keypass password -alias trustself -certfile MyOwnSelfCA.cer.pem -keyfile MyOwnSelfKey.key.pem -keyfilepass password [oracle@irm /] keytool -import -trustcacerts -alias trustself -keystore TrustMyOwnSelf.jks -file MyOwnSelfCA.cer.der -keyalg RSA We now have two Java Key Stores, MyOwnIdentityStore.jks and TrustMyOwnSelf.jks. These contain keys and certificates which we will use in WebLogic Server. Now we need to tell the IRM server to use these stores when setting up SSL connections for incoming requests. Make sure the Admin server is running and login into the WebLogic Console at http://irm.company.intranet:7001/console and do the following; In the menu on the left, select the + next to Environment to expose the submenu, then click on Servers. You will see two servers in the list, AdminServer(admin) and IRM_server1. 
If the IRM server is running, shut it down either by hitting CONTROL + C in the console window it was started from, or by switching to the Control tab, selecting IRM_server1, and then selecting the Shutdown menu and Force Shutdown Now. In the Configuration tab select IRM_server1 and switch to the Keystores tab. By default WebLogic Server uses its own demo identity and trust. We are now going to switch to the self signed ones we've just created. So select the Change button, switch to Custom Identity and Custom Trust, and hit save. Now we have to complete the resulting fields; the settings I've used in my evaluation server are below. Identity - Custom Identity Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/MyOwnIdentityStore.jks Custom Identity Keystore Type: JKS Custom Identity Keystore Passphrase: password Confirm Custom Identity Keystore Passphrase: password Trust - Custom Trust Keystore: /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/TrustMyOwnSelf.jks Custom Trust Keystore Type: JKS Custom Trust Keystore Passphrase: password Confirm Custom Trust Keystore Passphrase: password Now click on the SSL tab for IRM_server1 and enter the alias and passphrase; in my demo the details are: Identity - Private Key Alias: trustself Private Key Passphrase: password Confirm Private Key Passphrase: password And hit save. Now let's test a connection to the IRM server over HTTPS using SSL. Go back to a console window and start the IRM server; a quick reminder of how to do this: [oracle@irm /] cd /oracle/middleware/user_projects/domains/irm_domain/bin [oracle@irm /] ./startManagedWeblogic IRM_server1 Once it is running, open a browser and head to the SSL port of the server. By default the IRM server will be listening on the URL https://irm.company.intranet:16101/irm_rights. Note in the example image on the right the port is 7002 because it's a system that has the IRM services installed on the Admin server, which isn't typical (or advisable). Your system is going to have a separate managed server which will be listening on port 16101. Once you open this address you will notice that your browser complains that the server certificate is untrusted. The images on the right show how Firefox displays this error. You are going to be prompted every time you create a new SSL session with the server, both from the browser and, more annoyingly, from the IRM Desktop. If you plan on always using a self signed certificate, it is worth adding it to the Windows certificate store so that when you are accessing sealed content you do not keep being told this certificate is not trusted. Follow these instructions (which are for Internet Explorer 8; they may vary for your version of IE). Start Internet Explorer and open the URL to your IRM server over SSL, e.g. https://irm.company.intranet:16101/irm_rights. IE will complain about the certificate; click on Continue to this website (not recommended). From the IE Tools menu select Internet Options, and from the resulting dialog select Security, then click on Trusted Sites and then the Sites button. Add to the list of trusted sites a URL which matches the server you are accessing, e.g. https://irm.company.intranet/ and select OK. Now refresh the page you were accessing and next to the URL you should see a red cross and the words Certificate Error. Click on this button and select View Certificates. You will now see a dialog with the details of the self signed certificate and the Install Certificate... 
button should be enabled. Click on this to start the wizard. Click next and you'll be asked where you should install the certificate. Change the option to Place all certificates in the following store. Select browse and choose the Trusted Root Certification Authorities location and hit OK. You'll then be prompted to install the certificate and answer yes. You also need to import the root signed certificate into the same location, so once again select the red Certificate Error option and this time when viewing the certificate, switch to the Certification Path tab and you should see a CertGenCAB certificate. Select this and then click on View Certificate and go through the same process as above to import the certificate into the store. Finally close all instances of the IE browser and re-access the IRM server URL again, this time you should not receive any errors. Setting up an official SSL certificate in Apache 2.x At this point we now have an IRM server that you can communicate with over SSL. However this certificate isn't trusted by any browser because it's path of trust doesn't end in a recognized certificate authority (CA). Also you are communicating directly to the WebLogic Server over a non standard SSL port, 16101. In a production environment it is common to have another device handle the initial public internet traffic and then proxy this to the WebLogic server. The diagram below shows a very simplified view of this type of deployment. What i'm going to walk through next is configuring Apache to proxy traffic to a WebLogic server and also to use a real SSL certificate from an official CA. First step is to configure Apache to handle incoming requests over SSL. In this guide I am configuring the IRM service in Oracle Enterprise Linux 5 update 3 and Apache 2.2.3 which came with OpenSSL and mod_ssl components. Before I purchase an SSL certificate, I need to generate a certificate request from the server. Oracle.com uses Verisign and for my own personal needs I use cheaper certificates from GoDaddy. The following instructions are specific to Apache, but there are many references out there for other web servers. For Apache I have OpenSSL and the commands are; [oracle@irm /] cd /usr/bin [oracle@irm bin] openssl genrsa -des3 -out irm-apache-server.key 2048 Generating RSA private key, 2048 bit long modulus ............................+++ .........+++ e is 65537 (0x10001) Enter pass phrase for irm-apache-server.key: Verifying - Enter pass phrase for irm-apache-server.key: [oracle@irm bin] openssl req -new -key irm-apache-server.key -out irm-apache-server.csr Enter pass phrase for irm-apache-server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [GB]:US State or Province Name (full name) [Berkshire]:CA Locality Name (eg, city) [Newbury]:San Francisco Organization Name (eg, company) [My Company Ltd]:Oracle Organizational Unit Name (eg, section) []:Security Common Name (eg, your name or your server's hostname) []:irm.company.com Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:testing An optional company name []: You must make sure to remember the pass phrase you used in the initial key generation, you will need this when later configuring Apache. In the /usr/bin directory there are now two new files. The irm-apache-server.csr contains our certificate request and is what you cut and paste, or upload, to your certificate authority when you purchase and validate your SSL certificate. In response you will typically get two files. Your server certificate and another certificate file that will likely contain a set of certificates from your CA which validate your certificate's trust. Next we need to configure Apache to use these files. Typically there is an ssl.conf file which is where all the SSL configuration is done. On my Oracle Enterprise Linux server this file is located in /etc/httpd/conf.d/ssl.conf and i've added the following lines. <VirtualHost irm.company.com> # Setup SSL for irm.company.com ServerName irm.company.com SSLEngine On SSLCertificateFile /oracle/secure/irm.company.com.crt SSLCertificateKeyFile /oracle/secure/irm.company.com.key SSLCertificateChainFile /oracle/secure/gd_bundle.crt </VirtualHost> Restarting Apache (apachectl restart) and I can now attempt to connect to the Apache server in a web browser, https://irm.company.com/. If all is configured correctly I should now see an Apache test page delivered to me over HTTPS. Configuring Apache to proxy traffic to the IRM server Final piece in setting up SSL is to have Apache proxy requests for the IRM server but do so securely. So the requests to Apache will be over HTTPS using a legitimate certificate, but we can also configure Apache to proxy these requests internally across to the IRM server using SSL with the self signed certificate we generated at the start of this article. To do this proxying we use the WebLogic Web Server plugin for Apache which you can download here from Oracle. Download the zip file and extract onto the server. The file extraction reveals a set of zip files, each one specific to a supported web server. In my instance I am using Apache 2.2 32bit on an Oracle Enterprise Linux, 64 bit server. If you are not sure what version your Apache server is, run the command /usr/sbin/httpd -V and you'll see version and it its 32 or 64 bit. Mine is a 32bit server so I need to extract the file WLSPlugin1.1-Apache2.2-linux32-x86.zip. The from the resulting lib folder copy the file mod_wl.so into /usr/lib/httpd/modules/. First we want to test that the plug in will work for regular HTTP traffic. Edit the httpd.conf for Apache and add the following section at the bottom. 
LoadModule weblogic_module modules/mod_wl.so <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16100    WLLogFile /tmp/wl-proxy.log </IfModule> <Location /irm_rights>    SetHandler weblogic-handler </Location> <Location /irm_desktop>    SetHandler weblogic-handler </Location> <Location /irm_sealing>    SetHandler weblogic-handler </Location> <Location /irm_services>    SetHandler weblogic-handler </Location> Now restart Apache again (apachectl restart) and open a browser to http://irm.company.com/irm_rights. Apache will proxy the HTTP traffic from port 80 of your Apache server to the IRM service listening on port 16100 of the WebLogic managed server. Note above I have included all four of the Locations you might wish to proxy. http://irm.company.internal/irm_rights is the URL to the management website, /irm_desktop is the URL used by the IRM Desktop to communicate, /irm_sealing is for web services based document sealing, and /irm_services is for IRM server web services. The last two are typically only used when you have the IRM server integrated with another application, and it is unlikely you'd be accessing these resources from the public facing Apache server. However, just in case, I've mentioned them above. Now let's enable SSL communication from Apache to WebLogic. In the ZIP file we extracted were some more modules we need to copy into the Apache folder. Looking back in the lib folder that we extracted, there are some more files. Copy the following into the /usr/lib/httpd/modules/ folder: libwlssl.so libnnz11.so libclntsh.so.11.1 Now the documentation states that you should only need to do this, but I found that I also needed to create an environment variable called LD_LIBRARY_PATH and point it to the folder /usr/lib/httpd/modules/. If I didn't do this, starting Apache with the WebLogic module configured for SSL would throw the error: [crit] (20014)Internal error: WL SSL Init failed for server: (null) on 0 So I had to edit the file /etc/profile and add the following lines at the bottom. You may already have the LD_LIBRARY_PATH variable defined; if so, simply add this path to it. LD_LIBRARY_PATH=/usr/lib/httpd/modules/ export LD_LIBRARY_PATH Now the WebLogic plug-in uses an Oracle Wallet to store the required certificates. You'll need to copy the self signed certificate from the IRM server over to the Apache server. Copy MyOwnSelfCA.cer.der into the same folder where you are storing your public certificates; in my example this is /oracle/secure. It's worth mentioning these files should ONLY be readable by root (the user Apache runs as). Now let's create an Oracle Wallet and import the self signed certificate from the IRM server. The file orapki was included in the bin folder of the Apache 1.1 plugin zip you extracted. orapki wallet create -wallet /oracle/secure/my-wallet -auto_login_only orapki wallet add -wallet /oracle/secure/my-wallet -trusted_cert -cert MyOwnSelfCA.cer.der -auto_login_only Finally change the httpd.conf to reflect that we want the WebLogic Apache plug-in to use HTTPS/SSL and not just plain HTTP. <IfModule mod_weblogic.c>    WebLogicHost irm.company.internal    WebLogicPort 16101    SecureProxy ON    WLSSLWallet /oracle/secure/my-wallet    WLLogFile /tmp/wl-proxy.log </IfModule> Then restart Apache once more and you can go back to the browser to test the communication. Opening the URL https://irm.company.com/irm_rights will proxy your request to the WebLogic server at https://irm.company.internal:16101/irm_rights. 
At this point you have a fully functional Oracle IRM service; the next step is to create a sealed document and test the entire system.

    Read the article

  • Getting Started Building Windows 8 Store Apps with XAML/C#

    - by dwahlin
    Technology is fun isn’t it? As soon as you think you’ve figured out where things are heading a new technology comes onto the scene, changes things up, and offers new opportunities. One of the new technologies I’ve been spending quite a bit of time with lately is Windows 8 store applications. I posted my thoughts about Windows 8 during the BUILD conference in 2011 and still feel excited about the opportunity there. Time will tell how well it ends up being accepted by consumers but I’m hopeful that it’ll take off. I currently have two Windows 8 store application concepts I’m working on with one being built in XAML/C# and another in HTML/JavaScript. I really like that Microsoft supports both options since it caters to a variety of developers and makes it easy to get started regardless if you’re a desktop developer or Web developer. Here’s a quick look at how the technologies are organized in Windows 8: In this post I’ll focus on the basics of Windows 8 store XAML/C# apps by looking at features, files, and code provided by Visual Studio projects. To get started building these types of apps you’ll definitely need to have some knowledge of XAML and C#. Let’s get started by looking at the Windows 8 store project types available in Visual Studio 2012.   Windows 8 Store XAML/C# Project Types When you open Visual Studio 2012 you’ll see a new entry under C# named Windows Store. It includes 6 different project types as shown next.   The Blank App project provides initial starter code and a single page whereas the Grid App and Split App templates provide quite a bit more code as well as multiple pages for your application. The other projects available can be be used to create a class library project that runs in Windows 8 store apps, a WinRT component such as a custom control, and a unit test library project respectively. If you’re building an application that displays data in groups using the “tile” concept then the Grid App or Split App project templates are a good place to start. An example of the initial screens generated by each project is shown next: Grid App Split View App   When a user clicks a tile in a Grid App they can view details about the tile data. With a Split View app groups/categories are shown and when the user clicks on a group they can see a list of all the different items and then drill-down into them:   For the remainder of this post I’ll focus on functionality provided by the Blank App project since it provides a simple way to get started learning the fundamentals of building Windows 8 store apps.   Blank App Project Walkthrough The Blank App project is a great place to start since it’s simple and lets you focus on the basics. In this post I’ll focus on what it provides you out of the box and cover additional details in future posts. Once you have the basics down you can move to the other project types if you need the functionality they provide. The Blank App project template does exactly what it says – you get an empty project with a few starter files added to help get you going. This is a good option if you’ll be building an app that doesn’t fit into the grid layout view that you see a lot of Windows 8 store apps following (such as on the Windows 8 start screen). I ended up starting with the Blank App project template for the app I’m currently working on since I’m not displaying data/image tiles (something the Grid App project does well) or drilling down into lists of data (functionality that the Split App project provides). 
The Blank App project provides images for the tiles and splash screen (you’ll definitely want to change these), a StandardStyles.xaml resource dictionary that includes a lot of helpful styles such as buttons for the AppBar (a special type of menu in Windows 8 store apps), an App.xaml file, and the app’s main page which is named MainPage.xaml. It also adds a Package.appxmanifest that is used to define functionality that your app requires, app information used in the store, plus more. The App.xaml, App.xaml.cs and StandardStyles.xaml Files The App.xaml file handles loading a resource dictionary named StandardStyles.xaml which has several key styles used throughout the application: <Application x:Class="BlankApp.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="using:BlankApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <!-- Styles that define common aspects of the platform look and feel Required by Visual Studio project and item templates --> <ResourceDictionary Source="Common/StandardStyles.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application>   StandardStyles.xaml has style definitions for different text styles and AppBar buttons. If you scroll down toward the middle of the file you’ll see that many AppBar button styles are included such as one for an edit icon. Button styles like this can be used to quickly and easily add icons/buttons into your application without having to be an expert in design. <Style x:Key="EditAppBarButtonStyle" TargetType="ButtonBase" BasedOn="{StaticResource AppBarButtonStyle}"> <Setter Property="AutomationProperties.AutomationId" Value="EditAppBarButton"/> <Setter Property="AutomationProperties.Name" Value="Edit"/> <Setter Property="Content" Value="&#xE104;"/> </Style> Switching over to App.xaml.cs, it includes some code to help get you started. An OnLaunched() method is added to handle creating a Frame that child pages such as MainPage.xaml can be loaded into. The Frame has the same overall purpose as the one found in WPF and Silverlight applications - it’s used to navigate between pages in an application. /// <summary> /// Invoked when the application is launched normally by the end user. Other entry points /// will be used when the application is launched to open a specific file, to display /// search results, and so forth. 
/// </summary> /// <param name="args">Details about the launch request and process.</param> protected override void OnLaunched(LaunchActivatedEventArgs args) { Frame rootFrame = Window.Current.Content as Frame; // Do not repeat app initialization when the Window already has content, // just ensure that the window is active if (rootFrame == null) { // Create a Frame to act as the navigation context and navigate to the first page rootFrame = new Frame(); if (args.PreviousExecutionState == ApplicationExecutionState.Terminated) { //TODO: Load state from previously suspended application } // Place the frame in the current Window Window.Current.Content = rootFrame; } if (rootFrame.Content == null) { // When the navigation stack isn't restored navigate to the first page, // configuring the new page by passing required information as a navigation // parameter if (!rootFrame.Navigate(typeof(MainPage), args.Arguments)) { throw new Exception("Failed to create initial page"); } } // Ensure the current window is active Window.Current.Activate(); }   Notice that in addition to creating a Frame the code also checks to see if the app was previously terminated so that you can load any state/data that the user may need when the app is launched again. If you’re new to the lifecycle of Windows 8 store apps the following image shows how an app can be running, suspended, and terminated.   If the user switches from an app they’re running the app will be suspended in memory. The app may stay suspended or may be terminated depending on how much memory the OS thinks it needs so it’s important to save state in case the application is ultimately terminated and has to be started fresh. Although I won’t cover saving application state here, additional information can be found at http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465099.aspx. Another method in App.xaml.cs named OnSuspending() is also included in App.xaml.cs that can be used to store state as the user switches to another application:   /// <summary> /// Invoked when application execution is being suspended. Application state is saved /// without knowing whether the application will be terminated or resumed with the contents /// of memory still intact. /// </summary> /// <param name="sender">The source of the suspend request.</param> /// <param name="e">Details about the suspend request.</param> private void OnSuspending(object sender, SuspendingEventArgs e) { var deferral = e.SuspendingOperation.GetDeferral(); //TODO: Save application state and stop any background activity deferral.Complete(); } The MainPage.xaml and MainPage.xaml.cs Files The Blank App project adds a file named MainPage.xaml that acts as the initial screen for the application. It doesn’t include anything aside from an empty <Grid> XAML element in it. The code-behind class named MainPage.xaml.cs includes a constructor as well as a method named OnNavigatedTo() that is called once the page is displayed in the frame.   /// <summary> /// An empty page that can be used on its own or navigated to within a Frame. /// </summary> public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); } /// <summary> /// Invoked when this page is about to be displayed in a Frame. /// </summary> /// <param name="e">Event data that describes how this page was reached. 
The Parameter /// property is typically used to configure the page.</param> protected override void OnNavigatedTo(NavigationEventArgs e) { } }   If you’re experienced with XAML you can switch to Design mode and start dragging and dropping XAML controls from the ToolBox in Visual Studio. If you prefer to type XAML you can do that as well in the XAML editor or while in split mode. Many of the controls available in WPF and Silverlight are included such as Canvas, Grid, StackPanel, and Border for layout. Standard input controls are also included such as TextBox, CheckBox, PasswordBox, RadioButton, ComboBox, ListBox, and more. MediaElement is available for rendering video or playing audio files. Some of the “common” XAML controls included out of the box are shown next:   Although XAML/C# Windows 8 store apps don’t include all of the functionality available in Silverlight 5, the core functionality required to build store apps is there with additional functionality available in open source projects such as Callisto (started by Microsoft’s Tim Heuer), Q42.WinRT, and others. Standard XAML data binding can be used to bind C# objects to controls, converters can be used to manipulate data during the data binding process, and custom styles and templates can be applied to controls to modify them. Although Visual Studio 2012 doesn’t support visually creating styles or templates, Expression Blend 5 handles that very well. To get started building the initial screen of a Windows 8 app you can start adding controls as mentioned earlier. Simply place them inside of the <Grid> element that’s included. You can arrange controls in a stacked manner using the StackPanel control, add a border around controls using the Border control, arrange controls in columns and rows using the Grid control, or absolutely position controls using the Canvas control. One of the controls that may be new to you is the AppBar. It can be used to add menu/toolbar functionality into a store app and keep the app clean and focused. You can place an AppBar at the top or bottom of the screen. A user on a touch device can swipe up to display the bottom AppBar or right-click when using a mouse. An example of defining an AppBar that contains an Edit button is shown next. The EditAppBarButtonStyle is available in the StandardStyles.xaml file mentioned earlier. <Page.BottomAppBar> <AppBar x:Name="ApplicationAppBar" Padding="10,0,10,0" AutomationProperties.Name="Bottom App Bar"> <Grid> <StackPanel x:Name="RightPanel" Orientation="Horizontal" Grid.Column="1" HorizontalAlignment="Right"> <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}" Tag="Edit" /> </StackPanel> </Grid> </AppBar> </Page.BottomAppBar> Like standard XAML controls, the <Button> control in the AppBar can be wired to an event handler method in the MainPage.Xaml.cs file or even bound to a ViewModel object using “commanding” if your app follows the Model-View-ViewModel (MVVM) pattern (check out the MVVM Light package available through NuGet if you’re using MVVM with Windows 8 store apps). The AppBar can be used to navigate to different screens, show and hide controls, display dialogs, show settings screens, and more.   The Package.appxmanifest File The Package.appxmanifest file contains configuration details about your Windows 8 store app. By double-clicking it in Visual Studio you can define the splash screen image, small and wide logo images used for tiles on the start screen, orientation information, and more. 
You can also define what capabilities the app has, such as whether it uses the Internet, supports geolocation functionality, requires a microphone or webcam, etc. App declarations such as background processes, file picker functionality, and sharing can also be defined. Finally, information about how the app is packaged for deployment to the store can be defined here as well. Summary If you already have some experience working with XAML technologies you'll find that getting started building Windows 8 applications is pretty straightforward. Many of the controls available in Silverlight and WPF are available, making it easy to get started without having to relearn a lot of new technologies. In the next post in this series I'll discuss additional features that can be used in your Windows 8 store apps.
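As a small illustration of the AppBar wiring described earlier, here is a hedged sketch of a click handler; the Click="Edit_Click" attribute on the button, the handler name, and the placeholder dialog are assumptions added for illustration, not code from the project templates:

    using Windows.UI.Popups;
    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Controls;

    public sealed partial class MainPage : Page
    {
        // Hedged sketch: handler for the Edit AppBar button shown earlier.
        // Assumes the button's XAML adds Click="Edit_Click"; the dialog is
        // just a stand-in for whatever edit action the app performs.
        private async void Edit_Click(object sender, RoutedEventArgs e)
        {
            var dialog = new MessageDialog("Edit was tapped.");
            await dialog.ShowAsync();
        }
    }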

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of new great functionality that unifies many of the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX, WCF REST specifically - and combines them into a whole more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But  RPC style calls that are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns' RPC calls tend simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences especially if you're coming from ASP.NET AJAX service or WCF Rest when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that. 
Thankfully Web API also supports Action based routing which allows you create RPC style endpoints fairly easily:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAblums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like above lets you specify an end point method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for single parameter like this:[HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server which the ModelBinder then maps the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's  ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:$.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and it's straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that WebAPI automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value and the Formatter and ModelBinder both automatically map the date propertly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model. 
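To tie the route configuration and the jQuery calls above together, here is a minimal sketch of the controller side for the single-parameter case. The Album class is assumed to contain at least the two properties used in the examples; anything beyond that is illustrative:

using System;
using System.Web.Http;

public class Album
{
    public string AlbumName { get; set; }
    public DateTime Entered { get; set; }
}

public class AlbumApiController : ApiController
{
    // Reached via the "albums/{action}/{title}" action route shown above,
    // e.g. POST albums/PostAlbum with either form-encoded or JSON content.
    [HttpPost]
    public string PostAlbum(Album album)
    {
        return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
    }
}

Both the form-encoded POST (model binding) and the JSON POST (media type formatter) end up populating the same Album parameter.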
But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:[HttpPost] public string PostAlbum(Album album, string userToken) Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straight forward either with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with WebAPI. There a few workarounds that you can use to make this work: Use both POST *and* QueryString Parameters in Conjunction If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types. Use a single Object that wraps the two Parameters If you go by service based architecture guidelines every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have a xxxRequest and a xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:public PostAlbumResponse PostAlbum(PostAlbumRequest request) { var album = request.Album; var userToken = request.UserToken; return new PostAlbumResponse() { IsSuccess = true, Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered,userToken) }; } with these support types:public class PostAlbumRequest { public Album Album { get; set; } public User User { get; set; } public string UserToken { get; set; } } public class PostAlbumResponse { public string Result { get; set; } public bool IsSuccess { get; set; } public string ErrorMessage { get; set; } }   To call this method you now have to assemble these objects on the client and send it up as JSON:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result.Result); } }); I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, that mimics the structure of PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: Overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actual can use. Use JObject to parse multiple Property Values out of an Object If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls. 
Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 } WCF REST/ASP.NET AJAX would then parse this top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for it's JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then using the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:[HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jquery code:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result); } }); Summary ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things like passing multiple strongly typed object parameters will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API. 
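If you end up using the JObject approach in more than one controller, a small extension method can hide the repeated property lookups. This is just a convenience sketch layered on top of JSON.NET's existing JObject API, not something Web API provides out of the box:

using Newtonsoft.Json.Linq;

public static class JObjectExtensions
{
    // Extracts a named top-level property from the posted JObject and
    // deserializes it into the requested type, or returns the type's
    // default value if the property is missing.
    public static T PropertyAs<T>(this JObject data, string name)
    {
        JToken token;
        return data.TryGetValue(name, out token) ? token.ToObject<T>() : default(T);
    }
}

With that in place the controller body shrinks to calls like jsonData.PropertyAs<Album>("Album") and jsonData.PropertyAs<string>("UserToken").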
At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Software Architecture: Quality Attributes

Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. In practice, quality must be defined within the context of a situation, and each engineer must extract quality attributes from a project’s requirements. Because quality is defined by project requirements, the meaning of quality changes from project to project. Software architecture factors that indicate relevance and effectiveness The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system with regard to relevance and effectiveness, the following questions should be asked. Architectural relevance and effectiveness questions: Does the architectural concept meet the needs of the system for which it was designed? Out of the competing architectures for a system, which one is the most suitable? Looking at the first question, a system that answers yes must meet all of its quality goals: it consistently meets or exceeds the performance goals for the system, and it satisfies all the other required system attributes defined in the system's requirements. The suitability of a system is based on several factors; in order for a project to be suitable, the necessary resources must be available to complete the task. Standard Project Resources: Money Trained Staff Time Life cycle factors that affect the system and design The development life cycle used on a project can drastically affect how a system’s architecture is created, as well as influence its design. When the software development life cycle (SDLC) is followed as a waterfall, each phase must be completed before the next can begin. This waterfall approach does not allow changes to a system’s architecture after the architecture phase is completed, which can lead to major system issues when the architecture is suboptimal because of missed quality attributes - for example, when a project has poor requirements or makes misguided architectural decisions. Once the architectural phase is complete, the concepts established in it move on to the design phase, which is bound to use those concepts and guidelines regardless of any quality attributes that were missed. If issues with the selected architectural concepts arise during the design phase, they cannot be corrected during the current project. This directly affects the design of the system, because the qualities required for the project were not considered when the architectural concepts were approved; once this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts regardless of their missing quality properties. The decisions made in the design phase then proceed down to the implementation phase, where the actual system is coded against the approved architectural concepts regardless of their architectural quality. 
Conversely, projects using a more iterative or agile methodology have more flexibility to correct architectural decisions based on missing quality attributes. Because each phase of the SDLC is executed more than once, any issues identified in a system's architecture can be corrected in the next architectural pass. The corresponding changes are then carried into the following design phase, so that when the project is completed the optimal architectural and design decisions are applied to the solution. Architecture factors that indicate functional suitability Systems with functional shortcomings do not provide the proper functionality based on the project’s driving quality attributes. In plain terms, the system does not live up to what the stakeholders require of it, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any phase. Another way to ensure functional suitability is to verify that all requirements are fully articulated, so that there is no chance of misconceptions or misinterpretations by any stakeholder. This helps prevent issues with interpreting the system requirements during the initial architectural concept phase, the design phase, and the implementation phase. Consider the applicability of other architectural models When considering an architectural model for a project, it is also important to consider alternative architectural models, to ensure that the model selected will meet the system's required functionality and quality attributes. I was recently discussing a project I was working on when a coworker suggested a different architectural approach that I had never considered. The new model allows for the same functionality offered by the existing model, but results in a higher quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model. Factors used to identify high-risk components A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example can be seen in a web application that uses a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by the web server and also allows the server to connect to a remote database server so that data can be imported into the system. This component supports both the data assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If the TCP/IP component were to fail, the web application would fail on both attributes, accessibility and data assurance, in that the web site would not be accessible and data could not be updated as needed. Summary As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. 
One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system based on its quality attributes.

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you need to. Unfortunately, one can easily forget it while implementing the last mile. The Last Mile In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful for speeding up database connection creation and for splitting the load. Another very common practice is to use a specific, often called technical, user to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are put to good use within your application, once the flow of execution enters the database everything is gone. Each and every user just has one role and is the same database user. Issues? What Issues? As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations or, more importantly, current and upcoming regulations and laws require that your application be able to apply different security measures on a per-user-role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or to tell which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he/she is not expected to do by the design of your application. Counter Measures In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help with a connection pool user, because this one is still not the real user. Oracle Proxy Authentication Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools. CREATE USER app_user IDENTIFIED BY abcd1234; GRANT CREATE SESSION TO app_user; But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play. 
You define database roles that define the grants for your identified user groups. Once you have these groups you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO scott; ALTER USER scott GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict the hr's rights the are needed for the application only. If hr connects to the database directly all assigned role and permissions apply. Testing the Setup To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234 Java Persistence API The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mixin some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what can not be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() { 
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List getEmployeeList() { return employeeList; } public void setEmployeeList(List employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA could be used in standalone applications and Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is sample Facade Session Bean for a Java EE container. 
package demo.proxy.ejb; import java.util.HashMap; import java.util.List; import javax.ejb.Local; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.Query; import javax.interceptor.AroundInvoke; import javax.interceptor.InvocationContext; import oracle.jdbc.driver.OracleConnection; import org.eclipse.persistence.config.EntityManagerProperties; import org.eclipse.persistence.internal.jpa.EntityManagerImpl; @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade") @Remote @Local public class DataFacadeBean implements DataFacade, DataFacadeLocal { @PersistenceContext(unitName = "TestEJB") private EntityManager em; private String username; public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) { // setSessionUser(); Query query = em.createQuery(jpqlStmt); if (firstResult 0) { query = query.setFirstResult(firstResult); } if (maxResults 0) { query = query.setMaxResults(maxResults); } return query.getResultList(); } public Employee persistEmployee(Employee employee) { // setSessionUser(); em.persist(employee); return employee; } public Employee mergeEmployee(Employee employee) { // setSessionUser(); return em.merge(employee); } public void removeEmployee(Employee employee) { // setSessionUser(); employee = em.find(Employee.class, employee.getEmployeeId()); em.remove(employee); } /** select o from Employee o */ public List getEmployeeFindAll() { Query q = em.createNamedQuery("Employee.findAll"); return q.getResultList(); } Putting Both Together To use Proxy Authentication with JPA and within a Java EE container you have to take care of the additional requirements: Use an OCI JDBC driver Provide the user name that connects through the proxy user Use an OCI JDBC driver To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL. hr jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))) oracle.jdbc.OracleDriver user app_user 62C32F70E98297522AD97E15439FAC0E SQL SELECT 1 FROM DUAL jdbc/hrDS Application Additionally you need to make sure that the version of the shared libraries of the OCI driver match the version of the JDBC driver in your Java EE container or Java application and are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instance Client software works perfectly. Provide the user name that connects through the proxy user This part needs some modification of your application software and session facade. Session Facade Changes In the Session Facade we must ensure that every call that goes through the EntityManager must be prepared correctly and uniquely assigned to this session. The second is really important, as the EntityManager works with a connection pool and can not guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client bfore doing anything else. public void setUsername(String name) { username = name; } Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.) 
private void setSessionUser() { setSessionUser(username); } private void setSessionUser(String user) { if (user != null && !user.isEmpty()) { EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate()); emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME); emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user); emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always"); } } The final step is use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether the Facade methods will be called or not. If so, we set the user for proxy authentication and the normal method flow continues. @AroundInvoke public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception { if (invocationCtx.getTarget() instanceof DataFacadeBean) { setSessionUser(); } return invocationCtx.proceed(); } Benefits Using Oracle Proxy Authentification has a number of additional benefits appart from implementing the role model of your application: Fine grained access control for temporary users of the account, without compromising the original password. Enabling database auditing and logging. Better identification of performance bottlenecks. References Effective Oracle Database 10g Security by Design, David Knox TopLink Developer's Guide, Chapter 98
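Rounding out the Testing the Setup section above, it can be useful to confirm from inside a session who the real user and the proxy user actually are. The query below relies only on the standard USERENV context attributes:

SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS session_user,
       SYS_CONTEXT('USERENV', 'PROXY_USER')   AS proxy_user
  FROM dual;

For a connection made as app_user[hr] this should report HR as the session user and APP_USER as the proxy user, which makes it much easier to attribute activity to real users when troubleshooting or auditing.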

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build. 2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame. 3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build until you see improvement. If someone appears on the top 10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few experts what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics) - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!
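Picking up Jakob's tip above about keeping the Code Analysis rule set inside the solution, the wiring in a project file is small. The property names below are the standard MSBuild/Code Analysis ones; the relative path and file name are placeholders:

<PropertyGroup>
  <!-- Run Code Analysis on every build of this configuration -->
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <!-- Relative path, so it keeps working across branches -->
  <CodeAnalysisRuleSet>..\Build\TeamRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>

Combined with a gated check-in or nightly build that treats rule violations as errors, this gives you the "zero CA issues before check-in" policy several of the contributors describe.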

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided by with the Oracle database such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit, one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, why do we see JBO-25014? : Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correct) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting –Djbo.debugoutput=console.  At runtime this options turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm. 
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record see among others the following log entries: [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[423] Where binding param 1: 1206  As can be seen on line 422, in fact a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see: [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[445] Update binding param 1: N[446] Where binding param 2: 1206[447] BookingsView1 notify COMMIT ... [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [449] EntityCache close prepared statement ....and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs: [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[513] Where binding param 1: 1205[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[517] Update binding param 1: Y[518] Where binding param 2: 1205[519] BookingsView1 notify COMMIT ... [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [521] EntityCache close prepared statement Again even though we're seeing the midtier delay the LOCK statement until commit time, it is in fact occurring on line 412, and released as part of the commit issued on line 419.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be, unless there's the unlikely cause the LOCK statement is never really hitting the database, or the even less likely cause the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. 
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
Default values

A second reason this issue can occur is another database feature: default column values. When creating a database table the schema designer can define default values for specific columns. For example a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N. Default values are only applied by the database when a new record is inserted and no value is supplied for the specific column; the database in this case will populate the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

Instead-of trigger views

I must admit I haven't double-checked this scenario, but it seems plausible: the Oracle database's INSTEAD OF trigger view (sometimes referred to simply as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query, but also a set of programmatic PLSQL triggers in which the developer can define their own logic for inserts, updates and deletes.

While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record through the instead-of view, the record and its data that go in are not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of a single developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However we must be careful that these legacy constructs aren't relied upon and assumed to be in place by other legacy systems. Dropping the database triggers or default values that the existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms applications. Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, not ideal.
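Before moving on to the fix, here is a minimal sketch of the default-value cause described above. The table is hypothetical; the behaviour is standard Oracle DDL:

CREATE TABLE orders (
  order_no    NUMBER       PRIMARY KEY,
  status      VARCHAR2(1)  DEFAULT 'N',
  created_on  DATE         DEFAULT SYSDATE
);

-- A new row inserted without STATUS or CREATED_ON picks up the defaults
INSERT INTO orders (order_no) VALUES (1);
-- The database now holds STATUS = 'N' and a CREATED_ON timestamp, values the
-- mid-tier never saw for its cached copy of the freshly inserted row, so the
-- next update of that cached row can trip the lost-update check.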
Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, though that's no surprise, as Oracle as the author of ADF also built the database and is fully aware of the Oracle database's feature set: at the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties. Simply selecting these instructs ADF BC, after inserting or updating a record in the database, to expect the database to modify the selected attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

[Post edit - as per the comment from Oracle's Steven Davelaar below, who correctly points out the above solution will not work for instead-of trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view]

Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the corresponding column in the database isn't itself inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.
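Coming back to the Refresh After Insert and Refresh After Update option: under the covers it relies on Oracle's DML RETURNING clause, which, as the post edit above notes, is exactly why it cannot work against instead-of trigger views. Conceptually the refreshing update looks something like the following sketch (an illustration reusing the hypothetical ORDERS table from earlier, not the literal statement ADF generates):

UPDATE orders
   SET status = :1
 WHERE order_no = :2
RETURNING status, created_on INTO :3, :4;
-- The values the database actually stored (after any defaults or triggers
-- have fired) come straight back, so the mid-tier cache stays in sync.

The columns listed in the RETURNING clause correspond to whichever attributes have the refresh properties checked.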

    Read the article

  • CodePlex Daily Summary for Saturday, April 14, 2012

    CodePlex Daily Summary for Saturday, April 14, 2012Popular ReleasesJson.NET: Json.NET 4.5 Release 3: Change - DefaultContractResolver.IgnoreSerializableAttribute is now true by default Fix - Fixed MaxDepth on JsonReader recursively throwing an error Fix - Fixed SerializationBinder.BindToName not being called with full assembly namesVisual Studio Team Foundation Server Branching and Merging Guide: v2 - For Visual Studio 11: Welcome to the BETA of the Branching and Merging Guide preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation has been reviewed by the quality and recording te...Media Companion: MC 3.435b Release: This release should be the last beta for 3.4xx. A handful of problems have been sorted out since last weeks release. If there are no major problems this time, it will upgraded to 3.500 Stable at the end of the week! General The .NET Framework has been modified to use the Client profile, as provided by normal Windows updates; no longer is there a requirement to download and install the Full profile! mc_com.exe has been worked on to mimic proper Media Companion output (a big thanks to vbat99...THE NVL Maker: The NVL Maker Ver 3.12: SIM??????,TRA??????,ZIP????。 ????????????????,??????~(??????????????????) ??????? simpatch1440x900 trapatch1440x900 ?????1400x900??1440x900,?????????????Data.xp3。 ???? ?????3.12?EXE????????????????, ??????????????,??Tool/krkrconf.exe,??Editor.exe, ???????????????「??????」。 ?????Editor.exe??????。 ???? ???? http://etale.us/gameupload/THE_NVL_Maker_ver3.12_sim.zip ???? http://www.mediafire.com/?je51683g22bz8vo ??Infinite Creation?? http://bbs.etale.us/forum.php ?????? ???? 3.12 ??? ???、????...SnmpMessenger: 0.1.1.1: Project Description SnmpMessenger, a messenger. Using the SNMP protocol to exchange messages. It's developed in C#. SnmpMessenger For .Net 4.0, Mono 2.8. Support SNMP V1, V2, V3. Features Send get, set and other requests and get the response. Send and receive traps. Handle requests and return the response. Note This library is compliant with the Common Language Specification(CLS). The latest version is 0.1.1.1. It is only a messenger, does not involve VACM. Any problems, Please mailto: wa...Python Tools for Visual Studio: 1.1.1: We’re pleased to announce the release of Python Tools for Visual Studio 1.1.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython and IronPython • Python editor with advanced member and signature intellisense • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging • Profiling with multiple view...Supporting Guidance and Whitepapers: v1 - Team Foundation Service Whitepapers: Welcome to the BETA release of the Team Foundation Service Whitepapers preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. 
Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review All critical bugs have been resolved Known Issue...Microsoft .NET Gadgeteer: .NET Gadgeteer Core 2.42.550 (BETA): Microsoft .NET Gadgeteer Core RELEASE NOTES Version 2.42.550 11 April 2012 BETA VERSION WARNING: This is a beta version! Please note: - API changes may be made before the next version (2.42.600) - The designer will not show modules/mainboards for NETMF 4.2 until you get upgraded libraries from the module/mainboard vendors - Install NETMF 4.2 (see link below) to use the new features of this release That warning aside, this version should continue to sup...LINQ to Twitter: LINQ to Twitter Beta v2.0.24: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, and Client Profile. 100% Twitter API coverage. Also available via NuGet.Kendo UI ASP.NET Sample Applications: Sample Applications (2012-04-11): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.SCCM Client Actions Tool: SCCM Client Actions Tool v1.12: SCCM Client Actions Tool v1.12 is the latest version. It comes with following changes since last version: Improved WMI date conversion to be aware of timezone differences and DST. Fixed new version check. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively availab...Dual Browsing: Dual Browser: Please note the following: I setup the address bar temporarily to only accepts http:// .com addresses. Just type in the name of the website excluding: http://, www., and .com; (Ex: for www.youtube.com just type: youtube then click OK). The page splitter can be grabbed by holding down your left mouse button and move left or right. By right clicking on the page background, you can choose to refresh, go back a page and so on. Demo video: http://youtu.be/L7NTFVM3JUYCslaGenFork: Rules sample v.1.1.0: On projects for CSLA v.4.2.2, added 5 new Business Rules: - DependencyFrom - RequiredWhenCanWrite - RequiredWhenIsNotNew - RequiredWhenNew - StopIfNotFieldExists Added new projects for CSLA v.4.3.10 with 6 new Business Rules: - DependencyFrom - FieldExists - RequiredWhenCanWrite - RequiredWhenIsNotNew - RequiredWhenNew - StopIfNotFieldExists Following CSLA convention, SL stands for Silverligth 5 and SL4 stands for Silverlight 4. NOTE - Although the projects for CSLA v.4.1.0 still exist, thi...Multiwfn: Multiwfn 2.3.3: Multiwfn 2.3.3Liberty: v3.2.0.1 Release 9th April 2012: Change Log-Fixed -Reach Fixed a bug where the object editor did not work on non-English operating systemsCommonData - Common Functions for ASP.NET projects: CommonData 0.3L: Common Data has been updated to the latest NUnit (2.6.0) The demo project has been updated with an example on how to correctly compare a floating point value.ASP.Net MVC Dynamic JS/CSS Script Compression Framework: Initial Stable: Initial Stable Version Contains Source for Compression Library and example for usage in web application.Path Copy Copy: 10.1: This release addresses the following work items: 11357 11358 11359 This release is a recommended upgrade, especially for users who didn't install the 10.0.1 version.ExtAspNet: ExtAspNet v3.1.3: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? 
AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-08 v3.1.3 -??Language="zh_TW"?JS???BUG(??)。 +?D...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.5: New Controls ChatBubble ChatBubbleTextBox OpacityToggleButton New Stuff TimeSpan languages added: RU, SK, CS Expose the physics math from TimeSpanPicker Image Stretch now on buttons Bug Fixes Layout fix so RoundToggleButton and RoundButton are exactly the same Fix for ColorPicker when set via code behind ToastPrompt bug fix with OnNavigatedTo Toast now adjusts its layout if the SIP is up Fixed some issues with Expression Blend supportNew ProjectsArkadia Operating System: This operating system is based on Cosmos Project and C# Programming Langage.Copy Microsoft Online User Attributes from one domain user to another: CopyMSOLAttributes copies MSOL-specific user attributes from source user to target user (legacyExchangeDN, mail, msExchMailboxGuid, proxyAddresses, targetAddress). This assists in migrating to Office 365 in a multi-domain, multi-forest environment.DevChat: DevChat is a small secure chat for dev groups or teams inside of an organisation that wish to control their information.Dynamic UI Framework: Dynamic UI FrameworkEjemploAndroid: Prueba Trabajar con eclipse y codeplexElfos vs Orcos: videojuego xna elfos vs orcosFacebook Suite Rules Orchard module: Part of the Facebook Suite Orchard module that provides various rules for the Rules engine to interact with Facebook.FakeMail: FakeMail makes testing of email enabled applications easier for DEVs and QAs. You no longer need to have multiple "real" email addresses to test and validate registration and notification features. Written in C# this ASP.Net MVC application uses RavenDB Embedded as a document store and hosts a custom SMTP server. Configure FakeMail and update your existing email-enabled application's SMTP server settings are you are ready to go. FakeMail mail server will accept email sent to ANY addre...FIM Object Visualizer: The FIM Object Visualizer is a tool to display and document configurable FIM objects such as Synchronization Rules, Workflows and Management Policy Rules. FXIB: ThisGoogle Places Autocomplete API for WP7: The Google Places Autocomplete API for WP7 is a project for WP7 developers to use when implementing an autocomplete box in their application. This project provides an easy non-blocking way to get fast results from the Google Places Autocomplete API. hook send/recv function with CreateRemoteThread: this sample is hooking send/recv function with CreateRemoteThread api.Infinity Music: Entertainment center, music player, youtube player, Internet radio, Facebook, Twitter ... All in one application ...! Centro de entretenimiento, reproductor de música, reproductor videos you tube, radio por internet, Facebook, Twitter... Todo en una misma aplicación...! Centre de divertissement, lecteur de musique, lecteur YouTube, la radio sur Internet, Facebook, Twitter ... 
Tout dans une seule application ...!Library Guard: Library Guard helps you maintain your media library(primarily audio) by correcting tags, maintaining location of your files, etc.MVVM Source Control Monitor: An exercise in MVVM with Wpf to create a useful and unobtrusive source control notification tool that lives in the system tray, and can also be viewed in a window. This is meant to provide a 'real world' application to provide examples of MVVM implementation without understanding any other frameworks that can blur the lines about what MVVM really is (it's a pattern, folks). The application will use as little 3rd party code as possible (Wpf Toolkit, some other goodies) that are all unrel...Orchard Calendar: Module provides calendar capabilities in Orchard. This is accomplished by a new calendar content part and content type along with new calendar layout for Projector module. PaidRanks: PaidRanks makes it easier for minecraft admins to mangae user rights and rewards donators. You'll no longer have to manually change nick names again. It's developed in Java 7.Perritos Project: Practice of the subject projectPython Pygame Sprites Example: This project example uses Python and Pygame to create a game environment. This code requires major refactoring and there is no warranty. Reversi.NET: Reversi is a board game involving abstract strategy and played by two players on a board with 8 rows and 8 columns and a set of distinct pieces for each side.SharePoint Social Tag Counters: The SharePoint Social Tag Counter project takes the social features from SharePoint 2010 to another level. It allows you to immediately show your SharePoint visitors how popular the content is with the help of Social "I like it" and "Tag" counters.Shop systems1: dddddddddddSource Block - Data Access Components: Source Block - Data Access Components Contains two components. 1. DBHandler 2. DBSchemaHandler DBHandler : Pure ADO.Net based Data access layer DBSchemaHandler : Pure ADO.Net based Database schema handlerSource Block - Domain Driven Development Framework: A framework which promotes domain pattern based development. This promotes patterns such as : Repository, Unit of work, Dependency injection and Inversion of control. Built on MS E.F 4.2 using code first approach. Includes features such as code generationSuperSocket Proxy Server: A .NET proxy server based on SuperSocketTestC-Q: This project is on working with KDB database and fetching data from open market data providers. This will do rigorous analysis. Don't try to find any code till Oct 2012.tofinish: tofinishUse National Geographic Photo of the Day as Wallpaper: I used to watch National Geographic Photo of the day everyday. http://photography.nationalgeographic.com/photography/photo-of-the-day So I thought it might be helpful for people like me if a software synchronise the wallpaper with national geographic photo of the day. It also archives photo of the day along with publication date and photo title.VAMP: Projet industriel MBDS 2012?????: ???????Model???????: ????:??Model??????? ???:??? ??:?????????python?SDK: ??python??????SDK。????

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:

• It is cost effective
• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
• It solves the right problem

With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be for reasons that I believe will become obvious below - have gone, to date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:

• Specific details of security vulnerabilities
• Including exploit information or technical details of the vulnerability
• Whether or not there is any mitigation available (as in a patch)

PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
By Way of Explanation…

Unrelated to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • CodePlex Daily Summary for Wednesday, May 23, 2012

    CodePlex Daily Summary for Wednesday, May 23, 2012Popular ReleasesChristoc's DotNetNuke Module Development Template: 00.00.08 for DNN6: BEFORE USE YOU need to install the MSBuild Community Tasks available from http://msbuildtasks.tigris.org For best results you should configure your development environment as described in this blog post Then read this latest blog post about customizing and using these custom templates. Installation is simple To use this template place the ZIP (not extracted) file in your My Documents\Visual Studio 2010\Templates\ProjectTemplates\Visual C#\Web OR for VB My Documents\Visual Studio 2010\Te...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.53: fix issue #18106, where member operators on numeric literals caused the member part to be duplicated when not minifying numeric literals ADD NEW FEATURE: ability to create source map files! The first mapfile format to be supported is the Script# format. Use the new -map filename switch to create map files when building your sources.Microsoft SQL Server Product Samples: Database: AdventureWorks 2008 Analysis Services Project: AdventureWorks 2008 Analysis Services sample database project files. Project files for Analysis Services 2008 sample cubes (standard and enterprise). Lessons 1 to 10 tutorial files.HigLabo: HigLabo_20120522: Bug fix of EncodeToMailHeaderLine method when TransferEncoding is QuotedPrintable and Encoding is not ASCII.BlackJumboDog: Ver5.6.3: 2012.05.22 Ver5.6.3  (1) HTTP????????、ftp://??????????????????????LogicCircuit: LogicCircuit 2.12.5.22: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionThis release is fixing start up issue.Internals Viewer (updated) for SQL Server 2008 R2.: Internals Viewer for SSMS 2008 R2: Updated code to work with SSMS 2008 R2. Changed dependancies, removing old assemblies no longer present and replacing them with updated versions.Orchard Project: Orchard 1.4.2: This is a service release to address 1.4 and 1.4.1 bugs. Please read our release notes for Orchard 1.4.2: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesVirtu: Virtu 0.9.2: Source Requirements.NET Framework 4 Visual Studio 2010 with SP1 or Visual Studio 2010 Express with SP1 Silverlight 5 Tools for Visual Studio 2010 with SP1 Windows Phone 7 Developer Tools (which includes XNA Game Studio 4) Binaries RequirementsSilverlight 5 .NET Framework 4 XNA Framework 4SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.0): New fetures:View other users predictions Hide/Show background image (web part property) Installing SharePoint Euro 2012 PredictorSharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365) Download the solution havivi.euro2012.wsp from the download page: Downloads Upload this solution to your Site Collection via the solutions area. 
Click on Activate to make the web parts in the solution available for use in the Site C...Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.0.0): New UI Metro style New features Save and load settings to/from file Export only OptionSet attributes Use of Gembox Spreadsheet to generate Excel (makes application lighter : 1,5MB instead of 7MB)Audio Pitch & Shift: Audio Pitch And Shift 4.2.0: Backward / Forward buttons Improved features for encoding, streaming, menu Bug fixesSilverlight socket component: Smark.NetDisk: Smark.NetDisk?????Silverlight ?.net???????????,???????????????????????。Smark.NetDisk??????????,????.net???????????????????????tcp??;???????Silverlight??????????????????????callisto: callisto 2.0.28: Update log: - Extended Scribble protocol. - Updated HTML5 client code - now supports the latest versions of Google Chrome.ExtAspNet: ExtAspNet v3.1.6: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://bbs.extasp.net/ ??:http://demo.extasp.net/ ??:http://doc.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-05-20 v3.1.6 -??RowD...Dynamics XRM Tools: Dynamics XRM Tools BETA 1.0: The Dynamics XRM Tools 1.0 BETA is now available Seperate downloads are available for On Premise and Online as certain features are only available On Premise. This is a BETA build and may not resemble the final release. Many enhancements are in development and will be made available soon. Please provide feedback so that we may learn and discover how to make these tools better.PHPExcel: PHPExcel 1.7.7: See Change Log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd party PDF libraries (tcPDF, mPDF, and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded seperately from the appropriate sites.GhostBuster: GhostBuster Setup (91520): Added WMI based RestorePoint support Removed test code from program.cs Improved counting. Changed color of ghosted but unfiltered devices. Changed HwEntries into an ObservableCollection. Added Properties Form. Added Properties MenuItem to Context Menu. Added Hide Unfiltered Devices to Context Menu. If you like this tool, leave me a note, rate this project or write a review or Donate to Ghostbuster. Donate to GhostbusterEXCEL??、??、????????:DataPie(??MSSQL 2008、ORACLE、ACCESS 2007): DataPie_V3.2: V3.2, 2012?5?19? ????ORACLE??????。AvalonDock: AvalonDock 2.0.0795: Welcome to the Beta release of AvalonDock 2.0 After 4 months of hard work I'm ready to upload the beta version of AvalonDock 2.0. This new version boosts a lot of new features and now is stable enough to be deployed in production scenarios. For this reason I encourage everyone is using AD 1.3 or earlier to upgrade soon to this new version. The final version is scheduled for the end of June. What is included in Beta: 1) Stability! 
thanks to all users contribution I’ve corrected a lot of issues...New ProjectsAits HRM: Human Resource ManagermentBeeWix Toolkit: The toolkit is a set of classes we are using here at Beewix for our Social Network project. Church Management: Simple Church Management Software By Lalit Kumar & Ivan Lewis.Client Center for ConfigurationManager: The tool is designed for IT Professionals to troubleshoot SCCM/CM12 Client related Issues. The Client Center for Configuration Manager provides a quick and easy overview of client settings, including running services and Agent settings in a good easy to use, user interface.Common Instance Factory: Provides an abstraction over dependency injection and IoC containers using the abstract factory design pattern. It was created as an alternative to the Common Service Locator, but it does not use the service location anti-pattern and it provides support for releasing instances. Adapters are available for various dependency injection containers, such as Ninject and SimpleInjector, with more to come shortly. There are also WCF extensions available for decoupling services from DI containers.ConfigMgrRegistrationRequest: ConfigMgrRegistrationRequest allows you to simulate a client using System Center 2012 Configuration Manager Client SDK Basically, this project allows you to create Fake CM12 Clients, ideal when you need test load, reports, etc... when you run the tool (as a local Administrator), it will - Open a csv file and send request to register a new client to sccm - Send a update client id to sccm - Request policy - Send a ddr message - Send hinv more info on my blog at http://wmug.co.uk/w...COSC 320 TWIN System v2.0: Migration of previous version due to one month server limitationCultiv Photometadata for Umbraco: The PhotoMetaData package will extract meta data from images that you upload in your Umbraco media section. Dependency Injection Service Provider (DISP): Dependency Injection Service Provider (DISP) is a wrapper or an interface that aim to allow .NET developers use one of the inversion of control (IoC) containers out there such as StructureMap or Ninject from a high level of abstraction, using the same interface and classes without having to worry about the concrete implementation of each of the IoC containers. DISP provides an interface that will create and instance of an IoC container of your election, and provide generic interface to co...DirectPOS.NET: DirectPOS is a set of classes that bypasses OPOS and other libraries to talk to POS devices directly currently focusing on support for Bixolon, Samsung, and Epson thermal printers and some other devices.Dodongo's Quest: Roguelike in C# and XNAEpicCms: epiccms is only a name. The underlying system consists of two components: an api data service that replicates database tables, views, and procedures directly by url routes; a web interface that is very lightweight and purposed only to directly edit database table data. It is the intention that users will have full control over their data, how it is distributed, who has access, and when or how long anything(anyone) has access to all single pieces of data. It is also the intention that users wi...Find Duplicate file: This application is developed in WPF. you can find duplicate files from the file impression not from file size of from file name. Although this process is very time consuming. You can search and delete the similar file from the application itself. you can also open file location and delete manually from windows explorer. 
HoleFilling: We present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which conta...ILCC: A C compiler made in C# that generates .NET CIL code, XML, YAML and .NET PInvokeKendo UI framework for Orchard: This is a common location for kendo UI framework and related script libraries to use with Orchard CMS.MASAS SharePoint: This project is all about creating a SharePoint 2010-based capability to use the Multi-Agency Situational Awareness System (of systems) - MASAS. MASAS is focused on the exchange of structured information. This module is focused on using that information in your SharePoint 2010-based applications. Think of it this way - MASAS is an information bucket. MASAS allows you to share structured information by putting information into the bucket and by pulling information out of the bucket. ...MCP History: History Module for CB websiteMediaWikiSPMigrator: This project is aimed to make it easier to migrate from Mediawiki wiki's into SharePoint. The big differentiator is that it will allow you to map templates into a specific content type in SharePoint. NConf - Advanced Configuration Manager: NConf is an advanced configuration system for .Net projects. It's written out of the need for more advanced configuration than what .Net provides. The key features it supports is multiple configuration sources, simple to use syntax, the ability to reload/update configuration at runtime, and the easy ability to implement custom configuration sources.NMEA Interpreter: NMEA Interpreter is a class library created for one of my projects. It's main function is to parse NMEA sentences into usable easy to read and display data.Occupado: Time management for the School enviroment.OmahaMTG Site: Omaha MTG SiteProject Bloodlust Fury: Group project for CIS 375Radar IoC container: Very simple, fast and easy to use IoC container. Don't have any dependencies on external libraries - just pure .NET 4 Client Profile. Minimal size (~18kB in release build) makes it suitable to use in any project type.Service Request System: Service Request System (SRS) is a lightweight application for submitting and managing Service Requests of any type. The application is written in ASP.net (C#), and can utilize any type of database.Sign In As A Different User: Running your browser (IE) in a corporate environment will give you single sign on to web applications running in your intranet. But in some cases you need to access an URL with different credentials (admin purpose, etc.). Applications like SharePoint will provide you a solution right out of the box, but if this is not available the SignInAsADifferentUser project may help you. We as Glück & Kanja Consulting AG deployed such configurations in relation to Microsoft Lync components. Searching the...SimIn - Simulate Input in Your .NET Applications: SimIn is a light-weight .NET library which allows You to send user input to another applications. You can use SimIn in two modes: simulating user input (SendMessages), or control Your mouse. 
It supports DirectX input simulation, which means you can send mouse or keyboard messages to any DirectX game you want, and the game will register it correctly. SocialVeris: Este projeto consiste em uma rede social da instituição de ensino Veris IBTA. É um projeto de conclusão do curso de Análise e Desenvolvimento de Sistemas.Sprocket: Web application to management tasks in small company. TheList: A web an mobile application oriented to users that need a support in the creation of the supermarket list.Turntable Enhanced: TT Enhanced is a Turntable.fm Chrome extension used for giving the Turntable.fm website a bit of a face-lift. It adds cosmetic changes to the website, room moderation, drop down lists for easier usability, and much more.Web Part Collector: WebPart collector will help you identify any webpart or all webparts that are located inside your site collection

    Read the article
