Search Results

Search found 1214 results on 49 pages for 'komp smith'.


  • Copying one form's values to another form using jQuery

    - by rsturim
    I have a "shipping" form that I want to offer users the ability to copy their input values over to their "billing" form by simply checking a checkbox. I've coded up a solution that works -- but, I'm sort of new to jQuery and wanted some criticism on how I went about achieving this. Is this well done -- any refactorings you'd recommend? Any advice would be much appreciated! The Script <script type="text/javascript"> $(function() { $("#copy").click(function() { if($(this).is(":checked")){ var $allShippingInputs = $(":input:not(input[type=submit])", "form#shipping"); $allShippingInputs.each(function() { var billingInput = "#" + this.name.replace("ship", "bill"); $(billingInput).val($(this).val()); }) //console.log("checked"); } else { $(':input','#billing') .not(':button, :submit, :reset, :hidden') .val('') .removeAttr('checked') .removeAttr('selected'); //console.log("not checked") } }); }); </script> The Form <div> <form action="" method="get" name="shipping" id="shipping"> <fieldset> <legend>Shipping</legend> <ul> <li> <label for="ship_first_name">First Name:</label> <input type="text" name="ship_first_name" id="ship_first_name" value="John" size="" /> </li> <li> <label for="ship_last_name">Last Name:</label> <input type="text" name="ship_last_name" id="ship_last_name" value="Smith" size="" /> </li> <li> <label for="ship_state">State:</label> <select name="ship_state" id="ship_state"> <option value="RI">Rhode Island</option> <option value="VT" selected="selected">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="ship_zip_code">Zip Code</label> <input type="text" name="ship_zip_code" id="ship_zip_code" value="05401" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div> <div> <form action="" method="get" name="billing" id="billing"> <fieldset> <legend>Billing</legend> <ul> <li> <input type="checkbox" name="copy" id="copy" /> <label for="copy">Same of my shipping</label> </li> <li> <label for="bill_first_name">First Name:</label> <input type="text" name="bill_first_name" id="bill_first_name" value="" size="" /> </li> <li> <label for="bill_last_name">Last Name:</label> <input type="text" name="bill_last_name" id="bill_last_name" value="" size="" /> </li> <li> <label for="bill_state">State:</label> <select name="bill_state" id="bill_state"> <option>-- Choose State --</option> <option value="RI">Rhode Island</option> <option value="VT">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="bill_zip_code">Zip Code</label> <input type="text" name="bill_zip_code" id="bill_zip_code" value="" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div>
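    One way to tighten the checkbox handler above (a sketch only, reusing the #copy checkbox, the form IDs and the ship_/bill_ name prefixes from the markup in the question, rather than tested code) is to walk the shipping inputs once and either copy or clear the matching billing field depending on the checkbox state:

    <script type="text/javascript">
    $(function () {
        $("#copy").change(function () {
            var checked = this.checked;
            // Walk every shipping field except the submit button.
            $("#shipping :input").not(":submit").each(function () {
                // ship_first_name -> #bill_first_name, ship_state -> #bill_state, etc.
                var $billing = $("#" + this.name.replace("ship", "bill"));
                // Copy the value when checked, clear it when unchecked
                // (for the select this simply deselects the current option).
                $billing.val(checked ? $(this).val() : "");
            });
        });
    });
    </script>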

    Read the article

  • How to match ColdFusion encryption with Java 1.4.2?

    - by JohnTheBarber
    * sweet - thanks to Edward Smith for the CF Technote that indicated the key from ColdFusion was Base64 encoded. See generateKey() for the 'fix' My task is to use Java 1.4.2 to match the results a given ColdFusion code sample for encryption. Known/given values: A 24-byte key A 16-byte salt (IVorSalt) Encoding is Hex Encryption algorithm is AES/CBC/PKCS5Padding A sample clear-text value The encrypted value of the sample clear-text after going through the ColdFusion code Assumptions: Number of iterations not specified in the ColdFusion code so I assume only one iteration 24-byte key so I assume 192-bit encryption Given/working ColdFusion encryption code sample: <cfset ThisSalt = "16byte-salt-here"> <cfset ThisAlgorithm = "AES/CBC/PKCS5Padding"> <cfset ThisKey = "a-24byte-key-string-here"> <cfset thisAdjustedNow = now()> <cfset ThisDateTimeVar = DateFormat( thisAdjustedNow , "yyyymmdd" )> <cfset ThisDateTimeVar = ThisDateTimeVar & TimeFormat( thisAdjustedNow , "HHmmss" )> <cfset ThisTAID = ThisDateTimeVar & "|" & someOtherData> <cfset ThisTAIDEnc = Encrypt( ThisTAID , ThisKey , ThisAlgorithm , "Hex" , ThisSalt)> My Java 1.4.2 encryption/decryption code swag: package so.example; import java.security.*; import javax.crypto.Cipher; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; import org.apache.commons.codec.binary.*; public class SO_AES192 { private static final String _AES = "AES"; private static final String _AES_CBC_PKCS5Padding = "AES/CBC/PKCS5Padding"; private static final String KEY_VALUE = "a-24byte-key-string-here"; private static final String SALT_VALUE = "16byte-salt-here"; private static final int ITERATIONS = 1; private static IvParameterSpec ivParameterSpec; public static String encryptHex(String value) throws Exception { Key key = generateKey(); Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding); ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes()); c.init(Cipher.ENCRYPT_MODE, key, ivParameterSpec); String valueToEncrypt = null; String eValue = value; for (int i = 0; i < ITERATIONS; i++) { // valueToEncrypt = SALT_VALUE + eValue; // pre-pend salt - Length > sample length valueToEncrypt = eValue; // don't pre-pend salt Length = sample length byte[] encValue = c.doFinal(valueToEncrypt.getBytes()); eValue = Hex.encodeHexString(encValue); } return eValue; } public static String decryptHex(String value) throws Exception { Key key = generateKey(); Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding); ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes()); c.init(Cipher.DECRYPT_MODE, key, ivParameterSpec); String dValue = null; char[] valueToDecrypt = value.toCharArray(); for (int i = 0; i < ITERATIONS; i++) { byte[] decordedValue = Hex.decodeHex(valueToDecrypt); byte[] decValue = c.doFinal(decordedValue); // dValue = new String(decValue).substring(SALT_VALUE.length()); // when salt is pre-pended dValue = new String(decValue); // when salt is not pre-pended valueToDecrypt = dValue.toCharArray(); } return dValue; } private static Key generateKey() throws Exception { // Key key = new SecretKeySpec(KEY_VALUE.getBytes(), _AES); // this was wrong Key key = new SecretKeySpec(new BASE64Decoder().decodeBuffer(keyValueString), _AES); // had to un-Base64 the 'known' 24-byte key. return key; } } I cannot create a matching encrypted value nor decrypt a given encrypted value. My guess is it's something to do with how I'm handling the initial vector/salt. 
I'm not very crypto-savvy but I'm thinking I should be able to take the sample clear-text and produce the same encrypted value in Java as ColdFusion produced. I am able to encrypt/decrypt my own data with my Java code (so I'm consistent) but I cannot match nor decrypt the ColdFusion sample encrypted value. I have access to a local webservice that can test the encrypted output. The given ColdFusion output sample passes/decrypts fine (of course). If I try to decrypt the same sample with my Java code (using the actual key and salt) I get a "Given final block not properly padded" error. I get the same net result when I pass my attempt at encryption (using the actual key and salt) to the test webservice. Any Ideas?
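For what it's worth, the 'fix' noted at the top of the question comes down to Base64-decoding the key before building the SecretKeySpec. The quoted line does that with sun.misc's BASE64Decoder and an undeclared keyValueString variable; a minimal sketch of the same idea using the Commons Codec Base64 class the code already imports (an assumption about the intended fix, not the poster's exact code) would be:

    import java.security.Key;
    import javax.crypto.spec.SecretKeySpec;
    import org.apache.commons.codec.binary.Base64;

    public class KeyHelper {
        private static final String _AES = "AES";
        // The 24-byte key handed over by ColdFusion is Base64 text, not raw key bytes.
        private static final String KEY_VALUE = "a-24byte-key-string-here";

        static Key generateKey() {
            // Decode the Base64 string to recover the actual AES key bytes (Java 1.4 friendly, no generics).
            byte[] rawKey = Base64.decodeBase64(KEY_VALUE.getBytes());
            return new SecretKeySpec(rawKey, _AES);
        }
    }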

    Read the article

  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the whys and hows and some background, continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I could do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing, that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases it’s just plain overkill and only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind WebSurge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters let you provide strings that, if contained in a URL, will cause those requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL, instead of just the domain relative path, is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand or under program control by writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers, so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally, I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way to do REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also override cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie: this replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count currently is for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: this is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second, showing the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or hitting a specific requests per second count. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price of $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    This article is a continuation of my previous entry where I explained how OIF/IdP leverages OAM to authenticate users at runtime: OIF/IdP internally forwards the user to OAM and indicates which Authentication Scheme should be used to challenge the user if needed OAM determine if the user should be challenged (user already authenticated, session timed out or not, session authentication level equal or higher than the level of the authentication scheme specified by OIF/IdP…) After identifying the user, OAM internally forwards the user back to OIF/IdP OIF/IdP can resume its operation In this article, I will discuss how OIF/IdP can be configured to map Federation Authentication Methods to OAM Authentication Schemes: When processing an Authn Request, where the SP requests a specific Federation Authentication Method with which the user should be challenged When sending an Assertion, where OIF/IdP sets the Federation Authentication Method in the Assertion Enjoy the reading! Overview The various Federation protocols support mechanisms allowing the partners to exchange information on: How the user should be challenged, when the SP/RP makes a request How the user was challenged, when the IdP/OP issues an SSO response When a remote SP partner redirects the user to OIF/IdP for Federation SSO, the message might contain data requesting how the user should be challenged by the IdP: this is treated as the Requested Federation Authentication Method. OIF/IdP will need to map that Requested Federation Authentication Method to a local Authentication Scheme, and then invoke OAM for user authentication/challenge with the mapped Authentication Scheme. OAM would authenticate the user if necessary with the scheme specified by OIF/IdP. Similarly, when an IdP issues an SSO response, most of the time it will need to include an identifier representing how the user was challenged: this is treated as the Federation Authentication Method. 
When OIF/IdP issues an Assertion, it will evaluate the Authentication Scheme with which OAM identified the user: If the Authentication Scheme can be mapped to a Federation Authentication Method, then OIF/IdP will use the result of that mapping in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled If the Authentication Scheme cannot be mapped, then OIF/IdP will set the Federation Authentication Method as the Authentication Scheme name in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled Mappings In OIF/IdP, the mapping between Federation Authentication Methods and Authentication Schemes has the following rules: One Federation Authentication Method can be mapped to several Authentication Schemes In a Federation Authentication Method <-> Authentication Schemes mapping, a single Authentication Scheme is marked as the default scheme that will be used to authenticate a user, if the SP/RP partner requests the user to be authenticated via a specific Federation Authentication Method An Authentication Scheme can be mapped to a single Federation Authentication Method Let’s examine the following example and the various use cases, based on the SAML 2.0 protocol: Mappings defined as: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapped to LDAPScheme, marked as the default scheme used for authentication BasicScheme urn:oasis:names:tc:SAML:2.0:ac:classes:X509 mapped to X509Scheme, marked as the default scheme used for authentication Use cases: SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:X509 as the RequestedAuthnContext: OIF/IdP will authenticate the user with X509Scheme since it is the default scheme mapped for that method. SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the RequestedAuthnContext: OIF/IdP will authenticate the user with LDAPScheme since it is the default scheme mapped for that method, not the BasicScheme SP did not request any specific methods, and the user was authenticated with BasicScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and the user was authenticated with LDAPScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and the user was authenticated with BasicSessionlessScheme: OIF/IdP will issue an Assertion with BasicSessionlessScheme as the FederationAuthenticationMethod, since that scheme could not be mapped to any Federation Authentication Method (in this case, the administrator would need to correct that and create a mapping) Configuration Mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). 
As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. Authentication Schemes As discussed in the previous article, during Federation SSO, OIF/IdP will internally forward the user to OAM for authentication/verification and specify which Authentication Scheme to use. OAM will determine if a user needs to be challenged: If the user is not authenticated yet If the user is authenticated but the session timed out If the user is authenticated, but the authentication scheme level of the original authentication is lower than the level of the authentication scheme requested by OIF/IdP So even though an SP requests a specific Federation Authentication Method to be used to challenge the user, if that method is mapped to an Authentication Scheme and that at runtime OAM deems that the user does not need to be challenged with that scheme (because the user is already authenticated, session did not time out, and the session authn level is equal or higher than the one for the specified Authentication Scheme), the flow won’t result in a challenge operation. Protocols SAML 2.0 The SAML 2.0 specifications define the following Federation Authentication Methods for SAML 2.0 flows: urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocol urn:oasis:names:tc:SAML:2.0:ac:classes:Telephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:PersonalTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:Smartcard urn:oasis:names:tc:SAML:2.0:ac:classes:Password urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocolPassword urn:oasis:names:tc:SAML:2.0:ac:classes:X509 urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient urn:oasis:names:tc:SAML:2.0:ac:classes:PGP urn:oasis:names:tc:SAML:2.0:ac:classes:SPKI urn:oasis:names:tc:SAML:2.0:ac:classes:XMLDSig urn:oasis:names:tc:SAML:2.0:ac:classes:SoftwarePKI urn:oasis:names:tc:SAML:2.0:ac:classes:Kerberos urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport urn:oasis:names:tc:SAML:2.0:ac:classes:SecureRemotePassword urn:oasis:names:tc:SAML:2.0:ac:classes:NomadTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:AuthenticatedTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken Out of the box, OIF/IdP has the following mappings for the SAML 2.0 protocol: Only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml20-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 2.0 An example of an AuthnRequest message sent by an SP to an IdP with the SP requesting a 
specific Federation Authentication Method to be used to challenge the user would be: <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://idp.com/oamfed/idp/samlv20" ID="id-8bWn-A9o4aoMl3Nhx1DuPOOjawc-" IssueInstant="2014-03-21T20:51:11Z" Version="2.0">  <saml:Issuer ...>https://acme.com/sp</saml:Issuer>  <samlp:NameIDPolicy AllowCreate="false" Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"/>  <samlp:RequestedAuthnContext Comparison="minimum">    <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">      urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport </saml:AuthnContextClassRef>  </samlp:RequestedAuthnContext></samlp:AuthnRequest> An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                    urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> An administrator would be able to specify a mapping between a SAML 2.0 Federation Authentication Method and one or more OAM Authentication Schemes SAML 1.1 The SAML 1.1 specifications define the following Federation Authentication Methods for SAML 1.1 flows: urn:oasis:names:tc:SAML:1.0:am:unspecified urn:oasis:names:tc:SAML:1.0:am:HardwareToken urn:oasis:names:tc:SAML:1.0:am:password urn:oasis:names:tc:SAML:1.0:am:X509-PKI urn:ietf:rfc:2246 urn:oasis:names:tc:SAML:1.0:am:PGP urn:oasis:names:tc:SAML:1.0:am:SPKI urn:ietf:rfc:3075 urn:oasis:names:tc:SAML:1.0:am:XKMS urn:ietf:rfc:1510 urn:ietf:rfc:2945 Out of the box, OIF/IdP has the following mappings for the SAML 1.1 protocol: Only urn:oasis:names:tc:SAML:1.0:am:password is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml11-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 1.1 An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        
<saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameID ...>[email protected]</saml:NameID>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Note: SAML 1.1 does not define an AuthnRequest message. An administrator would be able to specify a mapping between a SAML 1.1 Federation Authentication Method and one or more OAM Authentication Schemes OpenID 2.0 The OpenID 2.0 PAPE specifications define the following Federation Authentication Methods for OpenID 2.0 flows: http://schemas.openid.net/pape/policies/2007/06/phishing-resistant http://schemas.openid.net/pape/policies/2007/06/multi-factor http://schemas.openid.net/pape/policies/2007/06/multi-factor-physical Out of the box, OIF/IdP does not define any mappings for the OpenID 2.0 Federation Authentication Methods. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. An example of an OpenID 2.0 Request message sent by an SP/RP to an IdP/OP would be: https://idp.com/openid?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.realm=https%3A%2F%2Facme.com%2Fopenid&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_request&openid.ax.type.attr0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.if_available=attr0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0 An example of an Open ID 2.0 SSO Response issued by an IdP/OP would be: 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will provide examples on how to configure OIF/IdP for the various protocols, to map OAM Authentication Schemes to Federation Authentication Methods.Cheers,Damien Carru

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMove() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=1, Title="King Kong", Director="Jackson"}, new Movie {Id=1, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8", }).then(function(movies){ callback(movies); }); } </script> </body> </html>     Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of a HTTP POST request. The following HTML page invokes the PostMovie() method. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (the Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP Status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
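As an aside, if you want the delete operation to mirror the 404 behavior of the GetMovie() action when a record does not exist, a rough sketch might look like the following (the _movieRepository helper is hypothetical and not part of the original sample):

public HttpResponseMessage DeleteMovie(int id)
{
    // _movieRepository is a hypothetical data-access helper used only for illustration
    var movie = _movieRepository.GetById(id);
    if (movie == null)
    {
        // Nothing to delete - report that the resource does not exist
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }

    _movieRepository.Delete(movie);

    // 204 No Content signals a successful delete with no response body
    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

Returning 404 for a missing record keeps the DELETE action consistent with the GET action shown earlier.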
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
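If you would rather return every validation message instead of only the first one per property, a small variation on the loop above (same JsonArray approach, just a sketch) could look like this:

if (!ModelState.IsValid)
{
    var errors = new JsonArray();
    foreach (var prop in ModelState.Values)
    {
        // Copy every error message for the property, not just the first one
        foreach (var error in prop.Errors)
        {
            errors.Add(error.ErrorMessage);
        }
    }
    return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest);
}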
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic. There is nothing specific about this code to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq ‘Lucas’ Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,’S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write series of articles using Windows Communication Framework (WCF) to develop client and server applications and this is the first part of that series. What is WCF As Juwal puts in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, provides runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy and it’s implementation provides a set of industry standards and off the shelf plumbing including service hosting, instance management, reliability, transaction management, security etc such that it greatly increases productivity Scenario: Lets consider a typical bank customer trying to create an account, deposit amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services and each of them working with database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, data access layer, unit tests to verify end to end communication etc, nothing big stuff here and we dig into other features of the WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces i.e. services and clients. Services as the name implies provide functionality to execute various pieces of business logic on the server, and clients providing interaction to the end user. Services can be built with Web Services or with WCF. Service built on WCF have the advantage of binding independent, i.e. can run against TCP and HTTP protocol without any significant changes to the code. Solution Services Profile: For creating a new bank customer, getting details about existing customer ProfileContract ProfileService Checking Account: To get checking account balance, deposit or withdraw amount CheckingAccountContract CheckingAccountService Savings Account: To get savings account balance, deposit or withdraw amount SavingsAccountContract SavingsAccountService ServiceHost: To host services, i.e. running the services at particular address, binding and contract where client can connect to Client: Helps end user to use services like creating account and amount transfer between the accounts BankDAL: Data access layer to work with database     BankDAL It’s no brainer not to use an ORM as many matured products are available currently in market including Linq2Sql, Entity Framework (EF), LLblGenPro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5 with code first approach. There are two approaches when working with data, data driven and code driven. In data driven we start by designing tables and their constrains in database and generate entities in code while in code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by the EF to create tables and table constrains. In previous versions the entity classes had  to derive from EF specific base classes. In EF 4 it  is not required to derive from any EF classes, the entities are not only persistence ignorant but also enable full test driven development using mock frameworks.  Application consists of 3 entities, Customer entity which contains Customer details; CheckingAccount and SavingsAccount to hold the respective account balance. 
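Note that the CheckingAccount and SavingsAccount classes are not listed anywhere in this post; going by how the services use them later (a CustomerId and a Balance), a minimal sketch of the two POCOs might look like this (an assumption, not the author's exact code):

namespace BankDAL.Model
{
    // Sketch only - the original post does not show these classes
    public class CheckingAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }

    public class SavingsAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}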
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
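For example, here is a sketch of business logic composing a filter on top of CustomerRepository.Query(); the "NJ" filter and the method name below are purely illustrative:

using System;
using System.Linq;
using BankDAL.Model;
using BankDAL.Repositories;

public static class CustomerQueries
{
    public static void PrintNewJerseyCustomers()
    {
        using (var bankDbContext = new BankDbContext())
        {
            var repository = new CustomerRepository(bankDbContext);

            // No database call happens here - we are only composing an expression tree
            var newJerseyCustomers = repository.Query()
                .Where(c => c.Address == "NJ")
                .OrderBy(c => c.FullName);

            // The query is translated and executed when the results are enumerated
            foreach (var customer in newJerseyCustomers)
            {
                Console.WriteLine(customer.FullName);
            }
        }
    }
}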
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
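(Before moving on to the services: the generic repository mentioned above is deferred to a later article, but a rough sketch of the idea - the exact shape here is an assumption - might look like this.)

using System;
using System.Data.Entity;
using System.Linq;
using BankDAL.Model;

namespace BankDAL.Repositories
{
    // Sketch of a generic repository; the post defers the real implementation
    // to a subsequent article, so treat this shape as an assumption.
    public class Repository<TEntity> where TEntity : class
    {
        private readonly IDbSet<TEntity> _set;

        public Repository(BankDbContext bankDbContext)
        {
            if (bankDbContext == null) throw new ArgumentNullException("bankDbContext");
            _set = bankDbContext.Set<TEntity>();
        }

        public IQueryable<TEntity> Query()
        {
            return _set;
        }

        public void Add(TEntity entity)
        {
            _set.Add(entity);
        }
    }
}

With something along these lines, CustomerRepository, CheckingAccountRepository and SavingsAccountRepository could share the Query and Add plumbing instead of repeating it.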
ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository. 
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
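The post wires everything up in configuration (shown next); purely for comparison, the same Profile endpoint could also be added in code - a sketch only, not part of the original sample:

using System;
using System.ServiceModel;
using ProfileContract;

class CodeBasedHost
{
    static void Main()
    {
        // The same ABC - address, binding, contract - expressed in code instead of config
        var host = new ServiceHost(typeof(ProfileService.Profile));
        host.AddServiceEndpoint(
            typeof(IProfile),                    // Contract
            new NetTcpBinding(),                 // Binding
            "net.tcp://localhost:1000/Profile"); // Address
        host.Open();
        Console.WriteLine("Profile service listening...");
        Console.ReadLine();
        host.Close();
    }
}

Configuration remains the better default here because, as noted above, it allows bindings and addresses to be changed after deployment without recompiling.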
We are using netTcpBinding and have defined a base address for each of the contracts: <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> We have to open the services by creating a service host, which will handle the incoming requests from clients. using System; namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); } private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); } private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open(); if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } BankClient The client has no knowledge of the service business logic other than the functionality exposed through the contract, the endpoints, and a proxy to work against. The endpoint data and service proxy can be generated by right-clicking the project reference, choosing 'Add Service Reference', and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client are both built on the .NET Framework. One of the pros of the manual approach is that you don't have to work with messy generated code files. 
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); } [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss"); var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id; ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId,80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); try { TransferFundsFromSavingsToCheckingAccount(customerId,30); } catch (Exception e) { Debug.WriteLine(e.Message); } Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } } We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with creating a Customer and with depositing, withdrawing and transferring money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking; the $30 is deposited into checking, but the withdrawal from savings raises an overdraft exception. Because the two requests are not run inside a transaction, the customer ends up with more money ($130) than the $100 he started with. In subsequent posts we will look into transaction handling; a rough sketch of what that might look like follows below. Make sure the ServiceHost project is set as the startup project and start the solution, then run the test cases from the NUnit client or TestDriven.Net/ReSharper, whichever is your favorite tool. Also make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
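As a rough illustration of that transactional version, the transfer helper could be wrapped in a System.Transactions.TransactionScope. This is a sketch only, not the implementation from the follow-up articles, and it assumes the operation contracts and the netTcpBinding configuration have been updated to flow transactions (the prerequisites are listed as comments):

private void TransferFundsFromSavingsToCheckingAccountAtomic(int customerId, decimal amount)
{
    // Sketch only. Assumed prerequisites, not shown in this article:
    //   - a reference to System.Transactions and 'using System.Transactions;'
    //   - [TransactionFlow(TransactionFlowOption.Allowed)] on the operation contracts
    //   - [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)] on the service operations
    //   - transactionFlow="true" on the netTcpBinding in config
    using (var scope = new TransactionScope())
    {
        // Both calls enlist in the same flowed transaction; if the withdrawal throws
        // (for example the overdraft exception), the deposit is rolled back as well.
        ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
        ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);

        scope.Complete();
    }
}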

    Read the article

  • Nested property binding

    - by EtherealMonkey
Recently, I have been trying to wrap my mind around the BindingList<T> and INotifyPropertyChanged. More specifically - how do I make a collection of objects (having objects as properties) which will allow me to subscribe to events throughout the tree? To that end, I have examined the code offered as examples by others. One such project that I downloaded was Nested Property Binding - CodeProject by "seesharper". Now, the article explains the implementation, but there was a question by "Someone@AnotherWorld" about "INotifyPropertyChanged in nested objects". His question was: Hi, nice stuff! But after a couple of time using your solution I realize the ObjectBindingSource ignores the PropertyChanged event of nested objects. E.g. I've got a class 'Foo' with two properties named 'Name' and 'Bar'. 'Name' is a string an 'Bar' reference an instance of class 'Bar', which has a 'Name' property of type string too and both classes implements INotifyPropertyChanged. With your binding source reading and writing with both properties ('Name' and 'Bar_Name') works fine but the PropertyChanged event works only for the 'Name' property, because the binding source listen only for events of 'Foo'. One workaround is to retrigger the PropertyChanged event in the appropriate class (here 'Foo'). What's very unclean! The other approach would be to extend ObjectBindingSource so that all owner of nested property which implements INotifyPropertyChanged get used for receive changes, but how? Thanks! I had asked about BindingList<T> yesterday and received a good answer from Aaronaught. In my question, I had a similar point as "Someone@AnotherWorld": if Keywords were to implement INotifyPropertyChanged, would changes be accessible to the BindingList through the ScannedImage object? To which Aaronaught's response was: No, they will not. BindingList only looks at the specific object in the list, it has no ability to scan all dependencies and monitor everything in the graph (nor would that always be a good idea, if it were possible). I understand Aaronaught's comment regarding this behavior not necessarily being a good idea. Additionally, his suggestion to have my bound object "relay" events on behalf of its member objects works fine and is perfectly acceptable. For me, "re-triggering" the PropertyChanged event does not seem as unclean as "Someone@AnotherWorld" laments. I do understand why he protests - in the interest of loosely coupled objects. However, I believe that coupling between objects that are part of a composition is logical and not as undesirable as it may be in other scenarios. (I am a newb, so I could be waaayyy off base.)
Anyway, in the interest of exploring an answer to the question by "Someone@AnotherWorld", I altered the MainForm.cs file of the example project from Nested Property Binding - CodeProject by "seesharper" to the following: using System; using System.Collections.Generic; using System.ComponentModel; using System.Core.ComponentModel; using System.Windows.Forms; namespace ObjectBindingSourceDemo { public partial class MainForm : Form { private readonly List<Customer> _customers = new List<Customer>(); private readonly List<Product> _products = new List<Product>(); private List<Order> orders; public MainForm() { InitializeComponent(); dataGridView1.AutoGenerateColumns = false; dataGridView2.AutoGenerateColumns = false; CreateData(); } private void CreateData() { _customers.Add( new Customer(1, "Jane Wilson", new Address("98104", "6657 Sand Pointe Lane", "Seattle", "USA"))); _customers.Add( new Customer(1, "Bill Smith", new Address("94109", "5725 Glaze Drive", "San Francisco", "USA"))); _customers.Add( new Customer(1, "Samantha Brown", null)); _products.Add(new Product(1, "Keyboard", 49.99)); _products.Add(new Product(2, "Mouse", 10.99)); _products.Add(new Product(3, "PC", 599.99)); _products.Add(new Product(4, "Monitor", 299.99)); _products.Add(new Product(5, "LapTop", 799.99)); _products.Add(new Product(6, "Harddisc", 89.99)); customerBindingSource.DataSource = _customers; productBindingSource.DataSource = _products; orders = new List<Order>(); orders.Add(new Order(1, DateTime.Now, _customers[0])); orders.Add(new Order(2, DateTime.Now, _customers[1])); orders.Add(new Order(3, DateTime.Now, _customers[2])); #region Added by me OrderLine orderLine1 = new OrderLine(_products[0], 1); OrderLine orderLine2 = new OrderLine(_products[1], 3); orderLine1.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged); orderLine2.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged); orders[0].OrderLines.Add(orderLine1); orders[0].OrderLines.Add(orderLine2); #endregion // Removed by me in lieu of region above. //orders[0].OrderLines.Add(new OrderLine(_products[0], 1)); //orders[0].OrderLines.Add(new OrderLine(_products[1], 3)); ordersBindingSource.DataSource = orders; } #region Added by me // Have to wait until the form is Shown to wire up the events // for orderDetailsBindingSource. Otherwise, they are triggered // during MainForm().InitializeComponent(). private void MainForm_Shown(object sender, EventArgs e) { orderDetailsBindingSource.AddingNew += new AddingNewEventHandler(orderDetailsBindSrc_AddingNew); orderDetailsBindingSource.CurrentItemChanged += new EventHandler(orderDetailsBindSrc_CurrentItemChanged); orderDetailsBindingSource.ListChanged += new ListChangedEventHandler(orderDetailsBindSrc_ListChanged); } private void orderDetailsBindSrc_AddingNew( object sender, AddingNewEventArgs e) { } private void orderDetailsBindSrc_CurrentItemChanged( object sender, EventArgs e) { } private void orderDetailsBindSrc_ListChanged( object sender, ListChangedEventArgs e) { ObjectBindingSource bindingSource = (ObjectBindingSource)sender; if (!(bindingSource.Current == null)) { // Unsure if GetType().ToString() is required b/c ToString() // *seems* // to return the same value. if (bindingSource.Current.GetType().ToString() == "ObjectBindingSourceDemo.OrderLine") { if (e.ListChangedType == ListChangedType.ItemAdded) { // I wish that I knew of a way to determine // if the 'PropertyChanged' delegate assignment is null. // I don't like the current test, but it seems to work. 
if (orders[ ordersBindingSource.Position].OrderLines[ e.NewIndex].Product == null) { orders[ ordersBindingSource.Position].OrderLines[ e.NewIndex].PropertyChanged += new PropertyChangedEventHandler( OrderLineChanged); } } if (e.ListChangedType == ListChangedType.ItemDeleted) { // Will throw exception when leaving // an OrderLine row with unitialized properties. // // I presume this is because the item // has already been 'disposed' of at this point. // *but* // Will it be actually be released from memory // if the delegate assignment for PropertyChanged // was never removed??? if (orders[ ordersBindingSource.Position].OrderLines[ e.NewIndex].Product != null) { orders[ ordersBindingSource.Position].OrderLines[ e.NewIndex].PropertyChanged -= new PropertyChangedEventHandler( OrderLineChanged); } } } } } private void OrderLineChanged(object sender, PropertyChangedEventArgs e) { MessageBox.Show(e.PropertyName, "Property Changed:"); } #endregion } } In the method private void orderDetailsBindSrc_ListChanged(object sender, ListChangedEventArgs e) I am able to hook up the PropertyChangedEventHandler to the OrderLine object as it is being created. However, I cannot seem to find a way to unhook the PropertyChangedEventHandler from the OrderLine object before it is being removed from the orders[i].OrderLines list. So, my questions are: Am I simply trying to do something that is very, very wrong here? Will the OrderLines object that I add the delegate assignments to ever be released from memory if the assignment is not removed? Is there a "sane" method of achieving this scenario? Also, note that this question is not specifically related to my prior. I have actually solved the issue which had prompted me to inquire before. However, I have reached a point with this particular topic of discovery where my curiosity has exceeded my patience - hopefully someone here can shed some light on this?
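For reference, here is a minimal sketch of the "relay" approach discussed above, using the Foo/Bar names from the quoted comment. The names and structure are illustrative assumptions, not code from the linked CodeProject article: the parent hooks its child's PropertyChanged when the child is assigned, unhooks the previous child, and re-raises the event under the nested property name.

using System.ComponentModel;

public class Bar : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

public class Foo : INotifyPropertyChanged
{
    private string _name;
    private Bar _bar;

    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    public Bar Bar
    {
        get { return _bar; }
        set
        {
            // Unhook the previous child so it no longer raises events into this instance.
            if (_bar != null) _bar.PropertyChanged -= BarPropertyChanged;
            _bar = value;
            if (_bar != null) _bar.PropertyChanged += BarPropertyChanged;
            OnPropertyChanged("Bar");
        }
    }

    private void BarPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        // Re-raise the child's change under the nested name the binding source watches.
        OnPropertyChanged("Bar_" + e.PropertyName);
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

On the memory question: an event subscription stores the delegate on the publisher (here the OrderLine), and that delegate references the subscriber (the form), not the other way around. So a removed OrderLine that nothing else references can still be collected even if the handler was never unhooked; the risk with leftover handlers is stray events, not keeping the removed item alive.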

    Read the article

  • Please help me understand why my XSL Transform is not transforming

    - by Damovisa
    I'm trying to transform one XML format to another using XSL. Try as I might, I can't seem to get a result. I've hacked away at this for a while now and I've had no success. I'm not even getting any exceptions. I'm going to post the entire code and hopefully someone can help me work out what I've done wrong. I'm aware there are likely to be problems in the xsl I have in terms of selects and matches, but I'm not fussed about that at the moment. The output I'm getting is the input XML without any XML tags. The transformation is simply not occurring. Here's my XML Document: <?xml version="1.0"?> <Transactions> <Account> <PersonalAccount> <AccountNumber>066645621</AccountNumber> <AccountName>A Smith</AccountName> <CurrentBalance>-200125.96</CurrentBalance> <AvailableBalance>0</AvailableBalance> <AccountType>LOAN</AccountType> </PersonalAccount> </Account> <StartDate>2010-03-01T00:00:00</StartDate> <EndDate>2010-03-23T00:00:00</EndDate> <Items> <Transaction> <ErrorNumber>-1</ErrorNumber> <Amount>12000</Amount> <Reference>Transaction 1</Reference> <CreatedDate>0001-01-01T00:00:00</CreatedDate> <EffectiveDate>2010-03-15T00:00:00</EffectiveDate> <IsCredit>true</IsCredit> <Balance>-324000</Balance> </Transaction> <Transaction> <ErrorNumber>-1</ErrorNumber> <Amount>11000</Amount> <Reference>Transaction 2</Reference> <CreatedDate>0001-01-01T00:00:00</CreatedDate> <EffectiveDate>2010-03-14T00:00:00</EffectiveDate> <IsCredit>true</IsCredit> <Balance>-324000</Balance> </Transaction> </Items> </Transactions> Here's my XSLT: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="xml" /> <xsl:param name="currentdate"></xsl:param> <xsl:template match="Transactions"> <xsl:element name="OFX"> <xsl:element name="SIGNONMSGSRSV1"> <xsl:element name="SONRS"> <xsl:element name="STATUS"> <xsl:element name="CODE">0</xsl:element> <xsl:element name="SEVERITY">INFO</xsl:element> </xsl:element> <xsl:element name="DTSERVER"><xsl:value-of select="$currentdate" /></xsl:element> <xsl:element name="LANGUAGE">ENG</xsl:element> </xsl:element> </xsl:element> <xsl:element name="BANKMSGSRSV1"> <xsl:element name="STMTTRNRS"> <xsl:element name="TRNUID">1</xsl:element> <xsl:element name="STATUS"> <xsl:element name="CODE">0</xsl:element> <xsl:element name="SEVERITY">INFO</xsl:element> </xsl:element> <xsl:element name="STMTRS"> <xsl:element name="CURDEF">AUD</xsl:element> <xsl:element name="BANKACCTFROM"> <xsl:element name="BANKID">RAMS</xsl:element> <xsl:element name="ACCTID"><xsl:value-of select="Account/PersonalAccount/AccountNumber" /></xsl:element> <xsl:element name="ACCTTYPE"><xsl:value-of select="Account/PersonalAccount/AccountType" /></xsl:element> </xsl:element> <xsl:element name="BANKTRANLIST"> <xsl:element name="DTSTART"><xsl:value-of select="StartDate" /></xsl:element> <xsl:element name="DTEND"><xsl:value-of select="EndDate" /></xsl:element> <xsl:for-each select="Items/Transaction"> <xsl:element name="STMTTRN"> <xsl:element name="TRNTYPE"><xsl:choose><xsl:when test="IsCredit">CREDIT</xsl:when><xsl:otherwise>DEBIT</xsl:otherwise></xsl:choose></xsl:element> <xsl:element name="DTPOSTED"><xsl:value-of select="EffectiveDate" /></xsl:element> <xsl:element name="DTUSER"><xsl:value-of select="CreatedDate" /></xsl:element> <xsl:element name="TRNAMT"><xsl:value-of select="Amount" /></xsl:element> <xsl:element name="FITID" /> <xsl:element name="NAME"><xsl:value-of select="Reference" /></xsl:element> <xsl:element name="MEMO"><xsl:value-of select="Reference" /></xsl:element> 
</xsl:element> </xsl:for-each> </xsl:element> <xsl:element name="LEDGERBAL"> <xsl:element name="BALAMT"><xsl:value-of select="Account/PersonalAccount/CurrentBalance" /></xsl:element> <xsl:element name="DTASOF"><xsl:value-of select="EndDate" /></xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:template> </xsl:stylesheet> Here's my method to transform my XML: public string TransformToXml(XmlElement xmlElement, Dictionary<string, object> parameters) { string strReturn = ""; // Load the XSLT Document XslCompiledTransform xslt = new XslCompiledTransform(); xslt.Load(xsltFileName); // arguments XsltArgumentList args = new XsltArgumentList(); if (parameters != null && parameters.Count > 0) { foreach (string key in parameters.Keys) { args.AddParam(key, "", parameters[key]); } } //Create a memory stream to write to Stream objStream = new MemoryStream(); // Apply the transform xslt.Transform(xmlElement, args, objStream); objStream.Seek(0, SeekOrigin.Begin); // Read the contents of the stream StreamReader objSR = new StreamReader(objStream); strReturn = objSR.ReadToEnd(); return strReturn; } The contents of strReturn is an XML tag (<?xml version="1.0" encoding="utf-8"?>) followed by a raw dump of the contents of the original XML document, stripped of XML tags. What am I doing wrong here?
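One observation, not from the original post: output consisting of the input's text content with all element tags stripped is exactly what XSLT's built-in template rules produce when no user-defined template matches the node being transformed, so it is worth ruling out a mismatch between the stylesheet and the node actually passed to Transform. A quick way to isolate this is to run the same stylesheet straight against the file on disk; if that produces the expected output, the problem lies in how the XmlElement is fed to the transform rather than in the stylesheet itself. The file names below are placeholders.

using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;

class TransformCheck
{
    static void Main()
    {
        // Load the stylesheet shown above (placeholder path).
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("transactions.xslt");

        // Supply the same parameter the real code passes in.
        XsltArgumentList args = new XsltArgumentList();
        args.AddParam("currentdate", "", "20100323000000");

        // Transform the source document from disk and write the result to a file.
        using (XmlWriter writer = XmlWriter.Create("output.xml", xslt.OutputSettings))
        {
            xslt.Transform(new XPathDocument("transactions.xml"), args, writer);
        }
    }
}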

    Read the article

  • How can I add headers to DualList control wpf

    - by devnet247
    Hi all I am trying to write a Dual List usercontrol in wpf. I am new to wpf and I am finding it quite difficult. This is something I have put together in a couple of hours.It's not that good but a start. I would be extremely grateful if somebody with wpf experience could improve it. The aim is to simplify the usage as much as possible I am kind of stuck. I would like the user of the DualList Control to be able to set up headers how do you do that. Do I need to expose some dependency properties in my control? At the moment when loading the user has to pass a ObservableCollection is there a better way? Could you have a look and possibly make any suggestions with some code? Thanks a lot!!!!! xaml <Grid ShowGridLines="False"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="25px"></ColumnDefinition> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="*"></RowDefinition> </Grid.RowDefinitions> <StackPanel Orientation="Vertical" Grid.Column="0" Grid.Row="0"> <Label Name="lblLeftTitle" Content="Available"></Label> <ListView Name="lvwLeft"> </ListView> </StackPanel> <WrapPanel Grid.Column="1" Grid.Row="0"> <Button Name="btnMoveRight" Content=">" Width="25" Margin="0,35,0,0" Click="btnMoveRight_Click" /> <Button Name="btnMoveAllRight" Content=">>" Width="25" Margin="0,05,0,0" Click="btnMoveAllRight_Click" /> <Button Name="btnMoveLeft" Content="&lt;" Width="25" Margin="0,25,0,0" Click="btnMoveLeft_Click" /> <Button Name="btnMoveAllLeft" Content="&lt;&lt;" Width="25" Margin="0,05,0,0" Click="btnMoveAllLeft_Click" /> </WrapPanel> <StackPanel Orientation="Vertical" Grid.Column="2" Grid.Row="0"> <Label Name="lblRightTitle" Content="Selected"></Label> <ListView Name="lvwRight"> </ListView> </StackPanel> </Grid> Client public partial class DualListTest { public ObservableCollection<ListViewItem> LeftList { get; set; } public ObservableCollection<ListViewItem> RightList { get; set; } public DualListTest() { InitializeComponent(); LoadCustomers(); LoadDualList(); } private void LoadDualList() { dualList1.Load(LeftList, RightList); } private void LoadCustomers() { //Pretend we are getting a list of Customers from a repository. //Some go in the left List(Good Customers) some go in the Right List(Bad Customers). 
LeftList = new ObservableCollection<ListViewItem>(); RightList = new ObservableCollection<ListViewItem>(); var customers = GetCustomers(); foreach (var customer in customers) { if (customer.Status == CustomerStatus.Good) { LeftList.Add(new ListViewItem { Content = customer }); } else { RightList.Add(new ListViewItem{Content=customer }); } } } private static IEnumerable<Customer> GetCustomers() { return new List<Customer> { new Customer {Name = "Jo Blogg", Status = CustomerStatus.Good}, new Customer {Name = "Rob Smith", Status = CustomerStatus.Good}, new Customer {Name = "Michel Platini", Status = CustomerStatus.Good}, new Customer {Name = "Roberto Baggio", Status = CustomerStatus.Good}, new Customer {Name = "Gio Surname", Status = CustomerStatus.Bad}, new Customer {Name = "Diego Maradona", Status = CustomerStatus.Bad} }; } } UserControl public partial class DualList:UserControl { public ObservableCollection<ListViewItem> LeftListCollection { get; set; } public ObservableCollection<ListViewItem> RightListCollection { get; set; } public DualList() { InitializeComponent(); } public void Load(ObservableCollection<ListViewItem> leftListCollection, ObservableCollection<ListViewItem> rightListCollection) { LeftListCollection = leftListCollection; RightListCollection = rightListCollection; lvwLeft.ItemsSource = leftListCollection; lvwRight.ItemsSource = rightListCollection; EnableButtons(); } public static DependencyProperty LeftTitleProperty = DependencyProperty.Register("LeftTitle", typeof(string), typeof(Label)); public static DependencyProperty RightTitleProperty = DependencyProperty.Register("RightTitle", typeof(string), typeof(Label)); public static DependencyProperty LeftListProperty = DependencyProperty.Register("LeftList", typeof(ListView), typeof(DualList)); public static DependencyProperty RightListProperty = DependencyProperty.Register("RightList", typeof(ListView), typeof(DualList)); public string LeftTitle { get { return (string)lblLeftTitle.Content; } set { lblLeftTitle.Content = value; } } public string RightTitle { get { return (string)lblRightTitle.Content; } set { lblRightTitle.Content = value; } } public ListView LeftList { get { return lvwLeft; } set { lvwLeft = value; } } public ListView RightList { get { return lvwRight; } set { lvwRight = value; } } private void EnableButtons() { if (lvwLeft.Items.Count > 0) { btnMoveRight.IsEnabled = true; btnMoveAllRight.IsEnabled = true; } else { btnMoveRight.IsEnabled = false; btnMoveAllRight.IsEnabled = false; } if (lvwRight.Items.Count > 0) { btnMoveLeft.IsEnabled = true; btnMoveAllLeft.IsEnabled = true; } else { btnMoveLeft.IsEnabled = false; btnMoveAllLeft.IsEnabled = false; } if (lvwLeft.Items.Count != 0 || lvwRight.Items.Count != 0) return; btnMoveLeft.IsEnabled = false; btnMoveAllLeft.IsEnabled = false; btnMoveRight.IsEnabled = false; btnMoveAllRight.IsEnabled = false; } private void MoveRight() { while (lvwLeft.SelectedItems.Count > 0) { var selectedItem = (ListViewItem)lvwLeft.SelectedItem; LeftListCollection.Remove(selectedItem); RightListCollection.Add(selectedItem); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveAllRight() { while (lvwLeft.Items.Count > 0) { var item = (ListViewItem)lvwLeft.Items[lvwLeft.Items.Count - 1]; LeftListCollection.Remove(item); RightListCollection.Add(item); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveAllLeft() { while (lvwRight.Items.Count > 
0) { var item = (ListViewItem)lvwRight.Items[lvwRight.Items.Count - 1]; RightListCollection.Remove(item); LeftListCollection.Add(item); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveLeft() { while (lvwRight.SelectedItems.Count > 0) { var selectedCustomer = (ListViewItem)lvwRight.SelectedItem; LeftListCollection.Add(selectedCustomer); RightListCollection.Remove(selectedCustomer); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void btnMoveLeft_Click(object sender, RoutedEventArgs e) { MoveLeft(); } private void btnMoveAllLeft_Click(object sender, RoutedEventArgs e) { MoveAllLeft(); } private void btnMoveRight_Click(object sender, RoutedEventArgs e) { MoveRight(); } private void btnMoveAllRight_Click(object sender, RoutedEventArgs e) { MoveAllRight(); } }
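For what it's worth, one way to approach the header question is to register the title dependency properties with the control itself as the owner type and back them with GetValue/SetValue, instead of registering them against Label and writing straight to the labels as the code above does. The sketch below shows only that shape, replacing the existing LeftTitle/RightTitle members; the property names come from the code above, while the default values and the XAML binding are assumptions.

using System.Windows;
using System.Windows.Controls;

public partial class DualList : UserControl
{
    // Registered with DualList as the owner so the titles can be set and bound from XAML.
    public static readonly DependencyProperty LeftTitleProperty =
        DependencyProperty.Register("LeftTitle", typeof(string), typeof(DualList),
            new PropertyMetadata("Available"));

    public static readonly DependencyProperty RightTitleProperty =
        DependencyProperty.Register("RightTitle", typeof(string), typeof(DualList),
            new PropertyMetadata("Selected"));

    public string LeftTitle
    {
        get { return (string)GetValue(LeftTitleProperty); }
        set { SetValue(LeftTitleProperty, value); }
    }

    public string RightTitle
    {
        get { return (string)GetValue(RightTitleProperty); }
        set { SetValue(RightTitleProperty, value); }
    }
}

// In the control's XAML, name the root element and bind the labels to these properties:
//   <UserControl x:Class="..." x:Name="root" ...>
//     <Label Content="{Binding LeftTitle, ElementName=root}" />
//     <Label Content="{Binding RightTitle, ElementName=root}" />
// A consumer can then set the headers declaratively:
//   <controls:DualList LeftTitle="Good Customers" RightTitle="Bad Customers" />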

    Read the article

  • Array help Index out of range exeption was unhandled

    - by Michael Quiles
    I am trying to populate combo boxes from a text file using comma as a delimiter everything was working fine, but now when I debug I get the "Index out of range exeption was unhandled" warning. I guess I need a fresh pair of eyes to see where I went wrong, I commented on the line that gets the error //Fname = fields[1]; using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Drawing.Printing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; namespace Sullivan_Payroll { public partial class xEmpForm : Form { bool complete = false; public xEmpForm() { InitializeComponent(); } private void xEmpForm_Resize(object sender, EventArgs e) { this.xCenterPanel.Left = Convert.ToInt16((this.Width - this.xCenterPanel.Width) / 2); this.xCenterPanel.Top = Convert.ToInt16((this.Height - this.xCenterPanel.Height) / 2); Refresh(); } private void exitToolStripMenuItem_Click(object sender, EventArgs e) { //Exits the application this.Close(); } private void xEmpForm_FormClosing(object sender, FormClosingEventArgs e) //use this on xtrip calculator { DialogResult Response; if (complete == true) { Application.Exit(); } else { Response = MessageBox.Show("Are you sure you want to Exit?", "Exit", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2); if (Response == DialogResult.No) { complete = false; e.Cancel = true; } else { complete = true; Application.Exit(); } } } private void xEmpForm_Load(object sender, EventArgs e) { //file sources string fileDept = "source\\Department.txt"; string fileSex = "source\\Sex.txt"; string fileStatus = "source\\Status.txt"; if (File.Exists(fileDept)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileDept)) { string dept = ""; while ((dept = sr.ReadLine()) != null) { this.xDeptComboBox.Items.Add(dept); } } } else { MessageBox.Show("The Department file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } if (File.Exists(fileSex)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileSex)) { string sex = ""; while ((sex = sr.ReadLine()) != null) { this.xSexComboBox.Items.Add(sex); } } } else { MessageBox.Show("The Sex file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } if (File.Exists(fileStatus)) { using (System.IO.StreamReader sr = System.IO.File.OpenText(fileStatus)) { string status = ""; while ((status = sr.ReadLine()) != null) { this.xStatusComboBox.Items.Add(status); } } } else { MessageBox.Show("The Status file can not be found.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } } private void xFileSaveMenuItem_Click(object sender, EventArgs e) { { const string fileNew = "source\\New Staff.txt"; string recordIn; FileStream outFile = new FileStream(fileNew, FileMode.Create, FileAccess.Write); StreamWriter writer = new StreamWriter(outFile); for (int count = 0; count <= this.xEmployeeListBox.Items.Count - 1; count++) { this.xEmployeeListBox.SelectedIndex = count; recordIn = this.xEmployeeListBox.SelectedItem.ToString(); writer.WriteLine(recordIn); } writer.Close(); outFile.Close(); this.xDeptComboBox.SelectedIndex = -1; this.xStatusComboBox.SelectedIndex = -1; this.xSexComboBox.SelectedIndex = -1; MessageBox.Show("your file is saved"); } } private void xViewFacultyMenuItem_Click(object sender, EventArgs e) { const string fileStaff = "source\\Staff.txt"; const char DELIM = ','; string Lname, Fname, Depart, Stat, Sex, Salary, cDept, cStat, cSex; double 
Gtotal; string recordIn; string[] fields; cDept = this.xDeptComboBox.SelectedItem.ToString(); cStat = this.xStatusComboBox.SelectedItem.ToString(); cSex = this.xSexComboBox.SelectedItem.ToString(); FileStream inFile = new FileStream(fileStaff, FileMode.Open, FileAccess.Read); StreamReader reader = new StreamReader(inFile); recordIn = reader.ReadLine(); while (recordIn != null) { fields = recordIn.Split(DELIM); Lname = fields[0]; Fname = fields[1]; // this is where the error appears Depart = fields[2]; Stat = fields[3]; Sex = fields[4]; Salary = fields[5]; Fname = fields[1].TrimStart(null); Depart = fields[2].TrimStart(null); Stat = fields[3].TrimStart(null); Sex = fields[4].TrimStart(null); Salary = fields[5].TrimStart(null); Gtotal = double.Parse(Salary); if (Depart == cDept && cStat == Stat && cSex == Sex) { this.xEmployeeListBox.Items.Add(recordIn); } recordIn = reader.ReadLine(); } reader.Close(); inFile.Close(); if (this.xEmployeeListBox.Items.Count >= 1) { this.xFileSaveMenuItem.Enabled = true; this.xFilePrintMenuItem.Enabled = true; this.xEditClearMenuItem.Enabled = true; } else { this.xFileSaveMenuItem.Enabled = false; this.xFilePrintMenuItem.Enabled = false; this.xEditClearMenuItem.Enabled = false; MessageBox.Show("Records not found"); } } private void xEditClearMenuItem_Click(object sender, EventArgs e) { this.xEmployeeListBox.Items.Clear(); this.xDeptComboBox.SelectedIndex = -1; this.xStatusComboBox.SelectedIndex = -1; this.xSexComboBox.SelectedIndex = -1; this.xFileSaveMenuItem.Enabled = false; this.xFilePrintMenuItem.Enabled = false; this.xEditClearMenuItem.Enabled = false; } } } Source file -- Anderson, Kristen, Accounting, Assistant, Female, 43155 Ball, Robin, Accounting, Instructor, Female, 42723 Chin, Roger, Accounting, Full, Male,59281 Coats, William, Accounting, Assistant, Male, 45371 Doepke, Cheryl, Accounting, Full, Female, 52105 Downs, Clifton, Accounting, Associate, Male, 46887 Garafano, Karen, Finance, Associate, Female, 49000 Hill, Trevor, Management, Instructor, Male, 38590 Jackson, Carole, Accounting, Instructor, Female, 38781 Jacobson, Andrew, Management, Full, Male, 56281 Lewis, Karl, Management, Associate, Male, 48387 Mack, Kevin, Management, Assistant, Male, 45000 McKaye, Susan, Management, Instructor, Female, 43979 Nelsen, Beth, Finance, Full, Female, 52339 Nelson, Dale, Accounting, Full, Male, 54578 Palermo, Sheryl, Accounting, Associate, Female, 45617 Rais, Mary, Finance, Instructor, Female, 27000 Scheib, Earl, Management, Instructor, Male, 37389 Smith, Tom, Finance, Full, Male, 57167 Smythe, Janice, Management, Associate, Female, 46887 True, David, Accounting, Full, Male, 53181 Young, Jeff, Management, Assistant, Male, 43513
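A side note, not part of the original question: an IndexOutOfRangeException right after String.Split almost always means a line did not contain the expected number of fields - for example a blank line or a partial record at the end of the file - so fields[1] does not exist. A defensive version of the read loop might look like the standalone sketch below (names are assumptions), skipping any line that does not split into six fields before indexing into the array.

using System;
using System.IO;

static class StaffFileReader
{
    private const char DELIM = ',';

    // Reads the staff file, ignoring blank or malformed lines instead of indexing blindly.
    public static void ReadStaff(string path)
    {
        using (StreamReader reader = new StreamReader(path))
        {
            string recordIn;
            while ((recordIn = reader.ReadLine()) != null)
            {
                string[] fields = recordIn.Split(DELIM);

                // A trailing blank line or incomplete record yields fewer than 6 fields
                // and would throw IndexOutOfRangeException at fields[1] in the original code.
                if (fields.Length < 6)
                    continue;

                string lname = fields[0].Trim();
                string fname = fields[1].Trim();
                string depart = fields[2].Trim();
                string stat = fields[3].Trim();
                string sex = fields[4].Trim();
                double salary = double.Parse(fields[5].Trim());

                Console.WriteLine("{0}, {1} - {2}, {3}, {4}, {5}", lname, fname, depart, stat, sex, salary);
            }
        }
    }
}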

    Read the article

  • In Flex, how to drag a component into a column of DataGrid (not the whole DataGrid)?

    - by Yousui
    Hi guys, I have a custom component: <?xml version="1.0" encoding="utf-8"?> <s:Group xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx"> <fx:Declarations> </fx:Declarations> <fx:Script> <![CDATA[ [Bindable] public var label:String = "don't know"; [Bindable] public var imageName:String = "x.gif"; ]]> </fx:Script> <s:HGroup paddingLeft="8" paddingTop="8" paddingRight="8" paddingBottom="8"> <mx:Image id="img" source="assets/{imageName}" /> <s:Label text="{label}"/> </s:HGroup> </s:Group> and a custom render, which will be used in my DataGrid: <?xml version="1.0" encoding="utf-8"?> <s:MXDataGridItemRenderer xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" focusEnabled="true" xmlns:components="components.*"> <s:VGroup> <components:Person label="{dataGridListData.label}"> </components:Person> </s:VGroup> </s:MXDataGridItemRenderer> This is my application: <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600" xmlns:services="services.*"> <s:layout> <s:VerticalLayout/> </s:layout> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.controls.Image; import mx.rpc.events.ResultEvent; import mx.utils.ArrayUtil; ]]> </fx:Script> <fx:Declarations> <fx:XMLList id="employees"> <employee> <name>Christina Coenraets</name> <phone>555-219-2270</phone> <email>[email protected]</email> <active>true</active> <image>assets/001.png</image> </employee> <employee> <name>Joanne Wall</name> <phone>555-219-2012</phone> <email>[email protected]</email> <active>true</active> <image>assets/002.png</image> </employee> <employee> <name>Maurice Smith</name> <phone>555-219-2012</phone> <email>[email protected]</email> <active>false</active> <image>assets/003.png</image> </employee> <employee> <name>Mary Jones</name> <phone>555-219-2000</phone> <email>[email protected]</email> <active>true</active> <image>assets/004.png</image> </employee> </fx:XMLList> </fx:Declarations> <s:HGroup> <mx:DataGrid dataProvider="{employees}" width="100%" dropEnabled="true"> <mx:columns> <mx:DataGridColumn headerText="Employee Name" dataField="name"/> <mx:DataGridColumn headerText="Email" dataField="email"/> <mx:DataGridColumn headerText="Image" dataField="image" itemRenderer="renderers.render1"/> </mx:columns> </mx:DataGrid> <s:List dragEnabled="true" dragMoveEnabled="false"> <s:dataProvider> <s:ArrayCollection> <fx:String>aaa</fx:String> <fx:String>bbb</fx:String> <fx:String>ccc</fx:String> <fx:String>ddd</fx:String> </s:ArrayCollection> </s:dataProvider> </s:List> </s:HGroup> </s:Application> Now what I want to do is let the user drag an one or more item from the left List component and drop at the third column of the DataGrid, then using the dragged data to create another <components:Person /> object. So in the final result, maybe the first line contains just one <components:Person /> object at the third column, the second line contains two <components:Person /> object at the third column and so on. Can this be implemented in Flex? How? Great thanks.

    Read the article

  • I would like to filter XSL output based on a Radio button selection

    - by Phil Speth
    Here is my example I am trying to filter by year based on user selection: I assume some js or jQuery code would be needed: XML file: <?xml version="1.0" encoding="ISO-8859-1"?> <catalog> <cd> <title>Empire Burlesque3</title> <artist>Bob Dylan</artist> <country>USA</country> <company>Columbia</company> <price>10.90</price> <year>1985</year> </cd> <cd> <title>Hide your heart</title> <artist>Bonnie Tyler</artist> <country>UK</country> <company>CBS Records</company> <price>9.90</price> <year>1988</year> </cd> <cd> <title>Greatest Hits</title> <artist>Dolly Parton</artist> <country>USA</country> <company>RCA</company> <price>9.90</price> <year>1982</year> </cd> <cd> <title>Still got the blues</title> <artist>Gary Moore</artist> <country>UK</country> <company>Virgin records</company> <price>10.20</price> <year>1990</year> </cd> <cd> <title>Eros</title> <artist>Eros Ramazzotti</artist> <country>EU</country> <company>BMG</company> <price>9.90</price> <year>1997</year> </cd> <cd> <title>One night only</title> <artist>Bee Gees</artist> <country>UK</country> <company>Polydor</company> <price>10.90</price> <year>1998</year> </cd> <cd> <title>Sylvias Mother</title> <artist>Dr.Hook</artist> <country>UK</country> <company>CBS</company> <price>8.10</price> <year>1973</year> </cd> <cd> <title>Maggie May</title> <artist>Rod Stewart</artist> <country>UK</country> <company>Pickwick</company> <price>8.50</price> <year>1990</year> </cd> <cd> <title>Romanza</title> <artist>Andrea Bocelli</artist> <country>EU</country> <company>Polydor</company> <price>10.80</price> <year>1996</year> </cd> <cd> <title>When a man loves a woman</title> <artist>Percy Sledge</artist> <country>USA</country> <company>Atlantic</company> <price>8.70</price> <year>1987</year> </cd> <cd> <title>Black angel</title> <artist>Savage Rose</artist> <country>EU</country> <company>Mega</company> <price>10.90</price> <year>1995</year> </cd> <cd> <title>1999 Grammy Nominees</title> <artist>Many</artist> <country>USA</country> <company>Grammy</company> <price>10.20</price> <year>1999</year> </cd> <cd> <title>For the good times</title> <artist>Kenny Rogers</artist> <country>UK</country> <company>Mucik Master</company> <price>8.70</price> <year>1995</year> </cd> <cd> <title>Big Willie style</title> <artist>Will Smith</artist> <country>USA</country> <company>Columbia</company> <price>9.90</price> <year>1997</year> </cd> <cd> <title>Tupelo Honey</title> <artist>Van Morrison</artist> <country>UK</country> <company>Polydor</company> <price>8.20</price> <year>1971</year> </cd> <cd> <title>Soulsville</title> <artist>Jorn Hoel</artist> <country>Norway</country> <company>WEA</company> <price>7.90</price> <year>1996</year> </cd> <cd> <title>The very best of</title> <artist>Cat Stevens</artist> <country>UK</country> <company>Island</company> <price>8.90</price> <year>1990</year> </cd> <cd> <title>Stop</title> <artist>Sam Brown</artist> <country>UK</country> <company>A and M</company> <price>8.90</price> <year>1988</year> </cd> <cd> <title>Bridge of Spies</title> <artist>T`Pau</artist> <country>UK</country> <company>Siren</company> <price>7.90</price> <year>1987</year> </cd> <cd> <title>Private Dancer</title> <artist>Tina Turner</artist> <country>UK</country> <company>Capitol</company> <price>8.90</price> <year>1983</year> </cd> <cd> <title>Midt om natten</title> <artist>Kim Larsen</artist> <country>EU</country> <company>Medley</company> <price>7.80</price> <year>1983</year> </cd> <cd> <title>Pavarotti Gala Concert</title> <artist>Luciano Pavarotti</artist> 
<country>UK</country> <company>DECCA</company> <price>9.90</price> <year>1991</year> </cd> <cd> <title>The dock of the bay</title> <artist>Otis Redding</artist> <country>USA</country> <company>Atlantic</company> <price>7.90</price> <year>1987</year> </cd> <cd> <title>Picture book</title> <artist>Simply Red</artist> <country>EU</country> <company>Elektra</company> <price>7.20</price> <year>1985</year> </cd> <cd> <title>Red</title> <artist>The Communards</artist> <country>UK</country> <company>London</company> <price>7.80</price> <year>1987</year> </cd> <cd> <title>Unchain my heart</title> <artist>Joe Cocker</artist> <country>USA</country> <company>EMI</company> <price>8.20</price> <year>1987</year> </cd> </catalog> XSL File: <?xml version="1.0" encoding="ISO-8859-1"?> <!-- Edited by XMLSpy® --> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <body> <input type="radio" name="Cost" value="1980" checked="checked" /> 1980 <input type="radio" name="Cost" value="1990" /> 1990 <h2>My CD Collection</h2> <table border="1"> <tr bgcolor="#9acd32"> <th>Title</th> <th>Artist</th> </tr> <xsl:for-each select="catalog/cd"> <xsl:if test="year>1990"> <tr> <td><xsl:value-of select="title"/></td> <td><xsl:value-of select="artist"/></td> <td><xsl:value-of select="year"/></td> </tr> </xsl:if> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet>

    Read the article
