Search Results

Search found 2679 results on 108 pages for 'peter smith'.


  • w3schools XSD example won't work with dom4j. How do I use dom4j to validate XML using XSDs?

    - by HappyEngineer
    I am trying to use dom4j to validate the XML at http://www.w3schools.com/Schema/schema_example.asp using the XSD from that same page. It fails with the following error:

        org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'shiporder'.

    I'm using the following code:

        SAXReader reader = new SAXReader();
        reader.setValidation(true);
        reader.setFeature("http://apache.org/xml/features/validation/schema", true);
        reader.setErrorHandler(new XmlErrorHandler());
        reader.read(in);

    where in is an InputStream and XmlErrorHandler is a simple class that just logs all errors. I'm using the following XML file:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <shiporder orderid="889923"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:noNamespaceSchemaLocation="test1.xsd">
          <orderperson>John Smith</orderperson>
          <shipto>
            <name>Ola Nordmann</name>
            <address>Langgt 23</address>
            <city>4000 Stavanger</city>
            <country>Norway</country>
          </shipto>
          <item>
            <title>Empire Burlesque</title>
            <note>Special Edition</note>
            <quantity>1</quantity>
            <price>10.90</price>
          </item>
          <item>
            <title>Hide your heart</title>
            <quantity>1</quantity>
            <price>9.90</price>
          </item>
        </shiporder>

    and the corresponding XSD:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:simpleType name="stringtype">
            <xs:restriction base="xs:string"/>
          </xs:simpleType>
          <xs:simpleType name="inttype">
            <xs:restriction base="xs:positiveInteger"/>
          </xs:simpleType>
          <xs:simpleType name="dectype">
            <xs:restriction base="xs:decimal"/>
          </xs:simpleType>
          <xs:simpleType name="orderidtype">
            <xs:restriction base="xs:string">
              <xs:pattern value="[0-9]{6}"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:complexType name="shiptotype">
            <xs:sequence>
              <xs:element name="name" type="stringtype"/>
              <xs:element name="address" type="stringtype"/>
              <xs:element name="city" type="stringtype"/>
              <xs:element name="country" type="stringtype"/>
            </xs:sequence>
          </xs:complexType>
          <xs:complexType name="itemtype">
            <xs:sequence>
              <xs:element name="title" type="stringtype"/>
              <xs:element name="note" type="stringtype" minOccurs="0"/>
              <xs:element name="quantity" type="inttype"/>
              <xs:element name="price" type="dectype"/>
            </xs:sequence>
          </xs:complexType>
          <xs:complexType name="shipordertype">
            <xs:sequence>
              <xs:element name="orderperson" type="stringtype"/>
              <xs:element name="shipto" type="shiptotype"/>
              <xs:element name="item" maxOccurs="unbounded" type="itemtype"/>
            </xs:sequence>
            <xs:attribute name="orderid" type="orderidtype" use="required"/>
          </xs:complexType>
          <xs:element name="shiporder" type="shipordertype"/>
        </xs:schema>

    The XSD and XML file are in the same directory. What is the problem?
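    A likely culprit, offered as a hedged guess rather than a confirmed answer: a raw InputStream carries no base URI, so the relative hint xsi:noNamespaceSchemaLocation="test1.xsd" in the instance document cannot be resolved and the declaration of shiporder is never found. A minimal sketch of a workaround, assuming Xerces is the underlying parser (its feature/property URIs are standard) and reusing the XmlErrorHandler class from the question; the file names are illustrative:

        import java.io.File;
        import org.dom4j.Document;
        import org.dom4j.io.SAXReader;

        public class ShiporderValidation {
            public static void main(String[] args) throws Exception {
                SAXReader reader = new SAXReader();
                reader.setValidation(true);
                // Enable XML Schema validation (not just DTD validation).
                reader.setFeature("http://apache.org/xml/features/validation/schema", true);
                // Point the parser at the schema explicitly instead of relying on
                // the instance document's relative xsi:noNamespaceSchemaLocation hint.
                reader.setProperty(
                    "http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",
                    new File("test1.xsd").toURI().toString());
                reader.setErrorHandler(new XmlErrorHandler());
                // Reading from a File (or a URL) also supplies a base URI,
                // which an InputStream does not.
                Document doc = reader.read(new File("shiporder.xml"));
                System.out.println("Parsed root element: " + doc.getRootElement().getName());
            }
        }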


  • CodePlex Daily Summary for Wednesday, May 26, 2010

    New Projects

    - 3D File Manager: 3D File Manager is an application that aims to show how a file manager could look in 3D. It is developed in C# and the XNA framework.
    - Acies: Acies is a dungeon crawler game done with C# and XNA.
    - ActiveWinery: The open source winery and vineyard application.
    - CC.Yacht: CC.Yacht is a client/server yacht dice game written in C# .NET. It utilizes a net.tcp WCF duplex service for client/server communication.
    - Community Forums NNTP bridge: Community project for accessing the MS Web-Forums via an open source NNTP newsserver (bridge).
    - Dojo Timer: WPF timer for Coding Dojo meetings.
    - GameFX - The Game Development Framework: The Game Development Framework (GameFX) is simply a set of libraries to be used as the foundation for any simple 2D tile-based game. It can be used...
    - Greg Roberts MVC Extensions: Asp.Net MVC Extensions including JSONP ActionResult. Targeted for MVC 2 and .NET 4.0.
    - IIS Deploy: Project to develop a tool that automates the deployment of Web sites and WCF services in single-server and clustered environments.
    - MarkLogic Sample Authoring App for Word: The MarkLogic Authoring Sample App for Word lets authors enrich Word documents using Content Controls, associate and manage metadata with those Con...
    - Mono.Addins: Mono.Addins is a framework for creating extensible applications, and for creating add-ins which extend applications.
    - MPCLI: MPCLI is a library that brings the power of the GNU MP big numbers library to those who use CLS-compliant languages such as C#, F#, and Visual Basi...
    - NTFS parser classes: This is a C++ library to help parse an NTFS volume, as well as file records and attributes. It will help greatly when handling the NTFS filesystem...
    - Oddworld Level Gen: A 2D platform game, with Oddworld: Abe's Oddysee assets. The game introduces a dynamic system to generate the next level according to the previous l...
    - Page Action Web Part for SharePoint 2007: This Web Part for SharePoint 2007 allows you to perform actions (such as causing an "Access is denied", redirect to another web page, view content ...
    - Piggy Bank: Piggy Bank is a web-based financial application targeted towards kids.
    - Productivity Hub Solutions: The Productivity Hub 2010 is a customizable, on-premise training solution for technology products. Developed by RedTech for Microsoft, the Producti...
    - PyQt port of TortoiseHg: PyQt port of TortoiseHg (aka TortoiseHg 2.0)
    - Releaser™: This is my private project. Currently, I'm not going to support it publicly.
    - SLManagers: SLManagers is used to dynamically load components and manage different parts of a program.
    - Smith Async .NET Memcached Client: Async .NET Memcached Client is a fully asynchronous implementation of a memcached client. The advantage of a fully asynchronous client is that you...
    - Tauck Public API: Tauck's public API allows travel agencies and other partners to use Tauck's product information in their websites and other systems.
    - Virtualizing WrapPanel: Virtualizing WrapPanel improves performance when binding a ListBox/ListView to a large amount of data. It is written in C#.

    New Releases

    - 3D File Manager: 3D File manager: 3D File manager
    - Aragon Online Client: Aragon Online Client: The executable version of the Aragon Online Client can be installed from the Aragon Online page: http://aragon-online.net/aoclient/publish.php
    - ASP.NET MVC CMS (Using CommonLibrary.NET): CommonLibrary CMS Alpha 2: CommonLibrary CMS. A simple yet powerful CMS system in ASP.NET MVC 2 using C# 3.5. ActiveRecord based support for Blogs, Widgets, Pages, Parts, Ev...
    - BFBC2 PRoCon: PRoCon 0.5.1.8: It's not even funny anymore =\
    - Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.3: Second release of a Static LLBLGen Pro Data Context Driver for LINQPad. For LLBLGen Pro versions 2.6 and 3.0 beta. Fixed 'connection string not ini...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V01: This is the first release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge.
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V02: This is the second release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP Bridge. ...
    - DDDSample.Net: 0.9: Release 0.9 contains two major improvements: the vanilla version (both Synch and Asynch) has been updated so its model more closely resembles the Java orig...
    - DEWD: DEWD for Umbraco: Alpha release of the package. Usable for simple SQL editing, but lacking some core features such as validation, user-friendly error handling, confi...
    - Dojo Timer: Dojo Timer v1: First version of Dojo Timer.
    - eXpress Persistent Objects (XPO) Toolkit: Samples: Video Channel: Channel.zip sample shows how to build a video site using XPO and WCF Data Services. DevExpress Channel: DevExpress Channel Browse ...
    - F# Project Extender: V0.9.2.1 (VS2008, VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. Fixed bugs: project extender 0.9.2.0 can't be loaded in VS2008 without SDK
    - Feedback Form: Feedback Application: Installer of the project
    - Feedback Form: Feedback Form: .sln for Feedback Form Application
    - GameFX - The Game Development Framework: Version 1.0 (Beta): Project is a Visual Studio 2008 solution. GameFX source code and sample program. The sample program allows you to create maps of any size, and drop ...
    - MarkLogic Sample Authoring App for Word: MarkLogic Sample Authoring App for Word 1.0-1: Initial release of the MarkLogic Sample Authoring App for Word. See the home page for an overview of functionality. Within the release you'll ...
    - MarkLogic Toolkit for Word: MarkLogic Toolkit for Word 1.2-1: Release built in support of the MarkLogic Sample Authoring App for Word. Updates include: update to XQuery API to expose functions for working w...
    - Microsoft SQL Server Community & Samples: SQL Server 2008R2 RTM: Microsoft SQL Server 2008R2 (RTM). This release contains sample code for Microsoft SQL Server 2008R2. For many of these samples you will also need...
    - Microsoft SQL Server Product Samples: Analysis Services: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Data Programming: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 RTM: Sample databases for Microsoft SQL Server 2008R2 (RTM). This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. ...
    - Microsoft SQL Server Product Samples: End to End: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Engine: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Integration Services: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Replication: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Reporting Services: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Scripts: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: Service Broker: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - Microsoft SQL Server Product Samples: XML: SQL Server 2008R2 RTM: This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples y...
    - NLog - Advanced .NET Logging: Nightly Build 2010.05.25.001: Changes since the last build: 2010-05-24 23:08:47 Jarek Kowalski: Fixed base constructor invocation to ensure consistency. Added tests for common wra...
    - NTFS parser classes: NTFS parser lib 0.55: 0.55
    - openrs: Revision 3: Things that have been added since last release: Vector expanding, Dynamic vectors, Vector put method chaining, Basic ISAAC implementation, Wor...
    - Page Action Web Part for SharePoint 2007: Page Action Web Part v1.0.0.0: First release of the Page Action Web Part v1.0.0.0.
    - Productivity Hub Solutions: Silverlight Bookshelf: The Silverlight Bookshelf component of the 2010 Productivity Hub provides 4 accordion-style vertical tabs displaying Featured Video, Featured Conte...
    - Productivity Hub Solutions: Silverlight Product Carousel: The Product Carousel Silverlight component provides a rich navigation experience to the home page of the 2010 Productivity Hub - presenting the pro...
    - Rawr: Rawr 2.3.18: >Rawr3 Public Beta has been released! Click here for details.< Fix for bug in parsing characters with certain abnormal characters in their data. ...
    - Runtime Intelligence Data Visualizer: RI Data Visualizer Release 1: This release of the RI Data Visualizer contains both a WPF client that displays application usage data and a Silverlight client that displays featu...
    - sGSHOPedit: sGSHOPedit v1.1a: Fixed: bug in parsing description from "itemextdesc.txt". Fixed: surface change event. Fixed: range for numeric values. Added: search feature
    - SLManagers: SlManagers: Implements simple dynamic component download using MEF.
    - Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.1.0 program: Sudoku project was to practice C# by making a desktop application using some algorithm. Idea: The basic idea of the algorithm is from http://www.ac...
    - Sudoku (Multiplayer in RnD): Sudoku (Multiplayer in RnD) 1.0.1.0 source: user-interface, multi-threading, formatting. Sudoku project was to practice C# by making a desktop application using some algorithm. Idea: The...
    - Tauck Public API: XML Package 1.0: Current release of XML data
    - Teach.Net: Teach.Net 1.0 Alpha: First alpha version. It should work, but there's gonna be bugs. Also, no IntelliSense documentation (or any other sort of documentation) yet. I'm w...
    - VCC: Latest build, v2.1.30525.0: Automatic drop of latest build
    - Vista Media Center TCP/IP Controller: Win7 64 and 32 bit Alpha - button command fix: button command fix: button-play, button-pause, button-skip back, button-skip fwd. Confirmed works on x64. Has not been tested on x32
    - XsltDb - DotNetNuke Module Universal Building Block: 01.01.21: ASP.NET controls TreeView and TextEditor usage. Live demo site. Attention: This release requires DNN 5.2 or higher as it is using Telerik classes. in...

    Most Popular Projects

    Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects

    AStar.net; patterns & practices – Enterprise Library; patterns & practices: Windows Azure Security Guidance; Rawr; SqlServerExtensions; Mono.Addins; BlogEngine.NET; GMap.NET - Great Maps for Windows Forms & Presentation; CodeReview; Caliburn: An Application Framework for WPF and Silverlight


  • SQL Server – Undelete a Table and Restore a Single Table from Backup

    - by Mladen Prajdic
    This post is part of the monthly community event called T-SQL Tuesday, started by Adam Machanic (blog|twitter) and hosted by someone else each month. This month the host is Sankar Reddy (blog|twitter) and the topic is Misconceptions in SQL Server. You can follow posts for this theme on Twitter by looking at the #TSQL2sDay hashtag.

    Let me start by saying: this code is a crazy hack that is never to be used unless you really, really have to. Really! And I don't think there's a time when you would really have to use it for real. Because it's a hack, there are a number of things that can go wrong, so play with it knowing that. I've managed to totally corrupt one database. :) Oh… and for those saying "yeah, yeah… you have a single table in a filegroup and you're restoring that", I say "nay nay" to you. As we all know, SQL Server can't do single-table restores from backup. This is kind of an obvious thing due to relational integrity (RI) concerns: since we have to maintain RI, we have to restore all the tables represented in an RI graph. For this exercise I say BAH! to those concerns.

    Note that this method "works" only for simple tables that don't have LOB or off-row data. The code can be expanded to include those, but I've tried to keep things "simple". Note also that for this to work our table needs to be relatively static data-wise; this doesn't work for OLTP tables. Products are a perfect example of static data: they don't change much between backups, pretty much everything depends on them, and their table is one of those tables that are relatively easy to accidentally delete everything from. This only works if the database is in Full or Bulk-Logged recovery mode, for tables whose contents have been deleted or truncated but NOT for a table that was dropped.

    Everything we'll talk about has to be done before the data pages are reused for other purposes. After deletion or truncation the pages are marked as reusable, so you have to act fast. The best approach is probably to put the database into single-user mode ASAP while you're performing this procedure and return it to multi-user mode after you're done.

    How do we do it? We will be using an undocumented but well-known DBCC command, DBCC PAGE, an undocumented function, sys.fn_dblog, and a little-known RESTORE DATABASE … PAGE option. All tests will be on a copy of the Production.Product table in the AdventureWorks database, called Production.Product1, because the original table has FK constraints that prevent us from truncating it for testing.

        -- create a duplicate table. This doesn't preserve indexes!
        SELECT *
        INTO AdventureWorks.Production.Product1
        FROM AdventureWorks.Production.Product

    After we run this code, take a full backup to perform further testing.

    First let's see what the difference between DELETE and TRUNCATE is when it comes to logging. With DELETE, every row deletion is logged in the transaction log. With TRUNCATE, only whole data page deallocations are logged in the transaction log. Getting deleted data pages is simple: all we have to look for is the row delete entry in the sys.fn_dblog output. But getting data pages that were truncated from the transaction log presents a bit of an interesting problem. I will not go into the depths of IAM (Index Allocation Map) and PFS (Page Free Space) pages, but suffice to say that every IAM page has intervals that tell us which data pages are allocated for a table and which aren't.
    If we deep-dive into the sys.fn_dblog output we can see that once you truncate a table, all the pages in all the intervals are deallocated, and this is shown in the PFS page transaction log entry as a deallocation of pages. For every 8 pages in the same extent there is one PFS page row in the transaction log. This row holds information about all 8 pages in CSV format, which means we can get to this data with some parsing. A great help for parsing this stuff is Peter Debetta's handy function dbo.HexStrToVarBin, which converts a hexadecimal string into a varbinary value that can be easily converted to an integer, thus giving us a readable page number. The shortened (columns removed) sys.fn_dblog output for a PFS page with CSV data for 1 extent (8 data pages) looks like this:

        -- [Page ID] is displayed in hex format.
        -- To convert it to a readable int we'll use the dbo.HexStrToVarBin function found at
        -- http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx
        -- This function must be installed in the master database
        SELECT Context, AllocUnitName, [Page ID], Description
        FROM sys.fn_dblog(NULL, NULL)
        WHERE [Current LSN] = '00000031:00000a46:007d'

    The pages at the end marked with 0x00--> are pages that are allocated in the extent but are not part of the table. We can inspect the raw content of each data page with a DBCC PAGE command:

        -- we need this trace flag to redirect output to the query window.
        DBCC TRACEON (3604);
        -- WITH TABLERESULTS gives us data in table format instead of message format.
        -- We use format option 3 because it's the easiest to read and manipulate further on.
        DBCC PAGE (AdventureWorks, 1, 613, 3) WITH TABLERESULTS

    Since the DBCC PAGE output can be quite extensive I won't put it here. You can see an example of it in the link at the beginning of this section.

    Getting deleted data back

    When we run a DELETE statement, every row to be deleted is marked as a ghost record. A background process periodically cleans up those rows. A huge misconception is that the data is actually removed. It's not: only the pointers to the rows are removed, while the data itself is still on the data page. We just can't access it by normal means. To get those pointers back we need to restore every deleted page using the RESTORE PAGE option mentioned above. This restore must be done from a full backup, followed by any differential and log backups that you may have. This is necessary to bring the pages up to the same point in time as the rest of the data. However, the restore doesn't magically connect the restored page back to the original table. It simply replaces the current page with the one from the backup. After the restore we use DBCC PAGE to read data directly from all the data pages and insert that data into a temporary table. To finish the RESTORE PAGE procedure we finally have to take a tail-log backup (a simple backup of the transaction log) and restore it back. We can then insert the data from the temporary table into our original table by hand.

    Getting truncated data back

    When we run a TRUNCATE, the truncated data pages aren't touched at all. Even the pointers to rows stay unchanged. Because of this, getting data back from a truncated table is simple: we just have to find out which pages belonged to our table and use DBCC PAGE to read the data off them. No restore is necessary. It turns out that the problem we had with finding the data pages is alleviated by not having to do the RESTORE PAGE procedure.

    Stop stalling… show me The Code!
    This is the code for getting deleted and truncated data back. It's commented in all the right places, so don't be afraid to take a closer look. Make sure you have a full backup before trying this out. Also, I suggest that the last step of backing up and restoring the tail log be performed by hand.

        USE master
        GO
        IF OBJECT_ID('dbo.HexStrToVarBin') IS NULL
            RAISERROR ('No dbo.HexStrToVarBin installed. Go to http://sqlblog.com/blogs/peter_debetta/archive/2007/03/09/t-sql-convert-hex-string-to-varbinary.aspx and install it in master database', 18, 1)

        SET NOCOUNT ON
        BEGIN TRY
            DECLARE @dbName VARCHAR(1000),
                    @schemaName VARCHAR(1000),
                    @tableName VARCHAR(1000),
                    @fullBackupName VARCHAR(1000),
                    @undeletedTableName VARCHAR(1000),
                    @sql VARCHAR(MAX),
                    @tableWasTruncated BIT;

            /* THE FIRST SELECT SETS OUR INPUT PARAMETERS.
               In this case we're trying to recover the Production.Product1 table in the AdventureWorks database.
               My full backup of the AdventureWorks database is at e:\AW.bak */
            SELECT @dbName = 'AdventureWorks',
                   @schemaName = 'Production',
                   @tableName = 'Product1',
                   @fullBackupName = 'e:\AW.bak',
                   @undeletedTableName = '##' + @tableName + '_Undeleted',
                   @tableWasTruncated = 0,
                   -- copy the structure from the original table to a temp table that we'll fill with restored data
                   @sql = 'IF OBJECT_ID(''tempdb..' + @undeletedTableName + ''') IS NOT NULL DROP TABLE ' + @undeletedTableName +
                          ' SELECT *' +
                          ' INTO ' + @undeletedTableName +
                          ' FROM [' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']' +
                          ' WHERE 1 = 0'
            EXEC (@sql)

            IF OBJECT_ID('tempdb..#PagesToRestore') IS NOT NULL
                DROP TABLE #PagesToRestore

            /* FIND DATA PAGES WE NEED TO RESTORE */
            CREATE TABLE #PagesToRestore
            (
                [ID] INT IDENTITY(1,1),
                [FileID] INT,
                [PageID] INT,
                [SQLtoExec] VARCHAR(1000) -- DBCC PAGE statement to run later
            )

            RAISERROR ('Looking for deleted pages...', 10, 1)
            -- use T-LOG direct read to get deleted data pages
            INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
            EXEC('USE [' + @dbName + '];
                  SELECT FileID, PageID,
                         ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' AS SQLToExec
                  FROM (SELECT DISTINCT
                               LEFT([Page ID], 4) AS FileID,
                               CONVERT(VARCHAR(100), ' + 'CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID
                        FROM sys.fn_dblog(NULL, NULL)
                        WHERE AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'' ' +
                        'AND Context IN (''LCX_MARK_AS_GHOST'', ''LCX_HEAP'')
                         AND Operation IN (''LOP_DELETE_ROWS''))t');

            SELECT * FROM #PagesToRestore

            -- if the upper EXEC returns 0 rows it means the table was truncated, so find truncated pages
            IF (SELECT COUNT(*) FROM #PagesToRestore) = 0
            BEGIN
                RAISERROR ('No deleted pages found. Looking for truncated pages...', 10, 1)
                -- use T-LOG read to get truncated data pages
                INSERT INTO #PagesToRestore([FileID], [PageID], [SQLtoExec])
                -- dark magic happens here:
                -- because truncation simply deallocates pages we have to find out which pages were deallocated.
                -- we can find this out by looking at the PFS page row's Description column.
                -- for every deallocated extent the Description has a CSV of the 8 pages in that extent,
                -- then it's just a matter of parsing it.
                -- we also remove the pages in the extent that weren't allocated to the table itself,
                -- marked with '0x00-->00'
                EXEC ('USE [' + @dbName + '];
                       DECLARE @truncatedPages TABLE(DeallocatedPages VARCHAR(8000), IsMultipleDeallocs BIT);
                       INSERT INTO @truncatedPages
                       SELECT REPLACE(REPLACE(Description, ''Deallocated '', ''Y''), ''0x00-->00 '', ''N'') + '';'' AS DeallocatedPages,
                              CHARINDEX('';'', Description) AS IsMultipleDeallocs
                       FROM (SELECT DISTINCT
                                    LEFT([Page ID], 4) AS FileID,
                                    CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING([Page ID], 6, 20)))) AS PageID,
                                    Description
                             FROM sys.fn_dblog(NULL, NULL)
                             WHERE Context IN (''LCX_PFS'')
                               AND Description LIKE ''Deallocated%''
                               AND AllocUnitName LIKE ''%' + @schemaName + '.' + @tableName + '%'') t;
                       SELECT FileID, PageID,
                              ''DBCC TRACEON (3604); DBCC PAGE ([' + @dbName + '], '' + FileID + '', '' + PageID + '', 3) WITH TABLERESULTS'' AS SQLToExec
                       FROM (SELECT LEFT(PageAndFile, 1) AS WasPageAllocatedToTable,
                                    SUBSTRING(PageAndFile, 2, CHARINDEX('':'', PageAndFile) - 2) AS FileID,
                                    CONVERT(VARCHAR(100), CONVERT(INT, master.dbo.HexStrToVarBin(SUBSTRING(PageAndFile, CHARINDEX('':'', PageAndFile) + 1, LEN(PageAndFile))))) AS PageID
                             FROM (SELECT SUBSTRING(DeallocatedPages, delimPosStart, delimPosEnd - delimPosStart) AS PageAndFile, IsMultipleDeallocs
                                   FROM (SELECT *,
                                                CHARINDEX('';'', DeallocatedPages)*(N-1) + 1 AS delimPosStart,
                                                CHARINDEX('';'', DeallocatedPages)*N AS delimPosEnd
                                         FROM @truncatedPages t1
                                         CROSS APPLY (SELECT TOP (CASE WHEN t1.IsMultipleDeallocs = 1 THEN 8 ELSE 1 END)
                                                             ROW_NUMBER() OVER(ORDER BY number) AS N
                                                      FROM master..spt_values) t2
                                        )t)t)t
                       WHERE WasPageAllocatedToTable = ''Y''')
                SELECT @tableWasTruncated = 1
            END

            DECLARE @lastID INT, @pagesCount INT
            SELECT @lastID = 1, @pagesCount = COUNT(*) FROM #PagesToRestore
            SELECT @sql = 'Number of pages to restore: ' + CONVERT(VARCHAR(10), @pagesCount)
            IF @pagesCount = 0
                RAISERROR ('No data pages to restore.', 18, 1)
            ELSE
                RAISERROR (@sql, 10, 1)

            -- If the table was truncated we'll read the data directly from the data pages without restoring from backup
            IF @tableWasTruncated = 0
            BEGIN
                -- RESTORE DATA PAGES FROM FULL BACKUP IN BATCHES OF 200
                WHILE @lastID <= @pagesCount
                BEGIN
                    -- create CSV string of pages to restore
                    SELECT @sql = STUFF((SELECT ',' + CONVERT(VARCHAR(100), FileID) + ':' + CONVERT(VARCHAR(100), PageID)
                                         FROM #PagesToRestore
                                         WHERE ID BETWEEN @lastID AND @lastID + 200
                                         ORDER BY ID
                                         FOR XML PATH('')), 1, 1, '')
                    SELECT @sql = 'RESTORE DATABASE [' + @dbName + '] PAGE = ''' + @sql + ''' FROM DISK = ''' + @fullBackupName + ''''
                    RAISERROR ('Starting RESTORE command:', 10, 1) WITH NOWAIT;
                    RAISERROR (@sql, 10, 1) WITH NOWAIT;
                    EXEC(@sql);
                    RAISERROR ('Restore DONE', 10, 1) WITH NOWAIT;
                    SELECT @lastID = @lastID + 200
                END

                /* If you have any differential or transaction log backups you should restore them here
                   to bring the previously restored data pages up to date */
            END

            DECLARE @dbccSinglePage TABLE
            (
                [ParentObject] NVARCHAR(500),
                [Object] NVARCHAR(500),
                [Field] NVARCHAR(500),
                [VALUE] NVARCHAR(MAX)
            )
            DECLARE @cols NVARCHAR(MAX),
                    @paramDefinition NVARCHAR(500),
                    @SQLtoExec VARCHAR(1000),
                    @FileID VARCHAR(100),
                    @PageID VARCHAR(100),
                    @i INT = 1

            -- Get the deleted table's columns from the INFORMATION_SCHEMA view.
            -- Need sp_executesql because the database name can't be passed in as a variable
            SELECT @cols = 'SELECT @cols = STUFF((SELECT '', ['' + COLUMN_NAME + '']''
                            FROM ' + @dbName + '.INFORMATION_SCHEMA.COLUMNS
                            WHERE TABLE_NAME = ''' + @tableName + ''' AND TABLE_SCHEMA = ''' + @schemaName + '''
                            ORDER BY ORDINAL_POSITION
                            FOR XML PATH('''')), 1, 2, '''')',
                   @paramDefinition = N'@cols nvarchar(max) OUTPUT'
            EXECUTE sp_executesql @cols, @paramDefinition, @cols = @cols OUTPUT

            -- Loop through all the restored data pages,
            -- read data from them and insert them into a temp table,
            -- which you can then insert into the original deleted table
            DECLARE dbccPageCursor CURSOR GLOBAL FORWARD_ONLY FOR
                SELECT [FileID], [PageID], [SQLtoExec]
                FROM #PagesToRestore
                ORDER BY [FileID], [PageID]

            OPEN dbccPageCursor;
            FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
            WHILE @@FETCH_STATUS = 0
            BEGIN
                RAISERROR ('---------------------------------------------', 10, 1) WITH NOWAIT;
                SELECT @sql = 'Loop iteration: ' + CONVERT(VARCHAR(10), @i);
                RAISERROR (@sql, 10, 1) WITH NOWAIT;
                SELECT @sql = 'Running: ' + @SQLtoExec
                RAISERROR (@sql, 10, 1) WITH NOWAIT;

                -- if something goes wrong with the DBCC execution or data gathering, skip it but print the error
                BEGIN TRY
                    INSERT INTO @dbccSinglePage
                    EXEC (@SQLtoExec)

                    -- make the data insert magic happen here
                    IF (SELECT CONVERT(BIGINT, [VALUE])
                        FROM @dbccSinglePage
                        WHERE [Field] LIKE '%Metadata: ObjectId%') = OBJECT_ID('[' + @dbName + '].[' + @schemaName + '].[' + @tableName + ']')
                    BEGIN
                        DELETE @dbccSinglePage
                        WHERE NOT ([ParentObject] LIKE 'Slot % Offset %' AND [Object] LIKE 'Slot % Column %')

                        SELECT @sql = 'USE tempdb; ' +
                                      'IF (OBJECTPROPERTY(object_id(''' + @undeletedTableName + '''), ''TableHasIdentity'') = 1) ' +
                                      'SET IDENTITY_INSERT ' + @undeletedTableName + ' ON; ' +
                                      'INSERT INTO ' + @undeletedTableName + '(' + @cols + ') ' +
                               STUFF((SELECT ' UNION ALL SELECT ' +
                                      STUFF((SELECT ', ' + CASE WHEN VALUE = '[NULL]' THEN 'NULL' ELSE '''' + [VALUE] + '''' END
                                             FROM (
                                                   -- the unicorn helps here to correctly set the ordinal numbers of the columns in a data page:
                                                   -- it turns STRING order into INT order (1,10,11,2,21 into 1,2,...10,11,...21)
                                                   SELECT [ParentObject], [Object], Field, VALUE,
                                                          RIGHT('00000' + O1, 6) AS ParentObjectOrder,
                                                          RIGHT('00000' + REVERSE(LEFT(O2, CHARINDEX(' ', O2)-1)), 6) AS ObjectOrder
                                                   FROM (SELECT [ParentObject], [Object], Field, VALUE,
                                                                REPLACE(LEFT([ParentObject], CHARINDEX('Offset', [ParentObject])-1), 'Slot ', '') AS O1,
                                                                REVERSE(LEFT([Object], CHARINDEX('Offset ', [Object])-2)) AS O2
                                                         FROM @dbccSinglePage
                                                         WHERE t.ParentObject = ParentObject
                                                        )t)t
                                             ORDER BY ParentObjectOrder, ObjectOrder
                                             FOR XML PATH('')), 1, 2, '')
                                      FROM @dbccSinglePage t
                                      GROUP BY ParentObject
                                      FOR XML PATH('')
                                     ), 1, 11, '') + ';'
                        RAISERROR (@sql, 10, 1) WITH NOWAIT;
                        EXEC (@sql)
                    END
                END TRY
                BEGIN CATCH
                    SELECT @sql = 'ERROR!!!' + CHAR(10) + CHAR(13) +
                                  'ErrorNumber: ' + ERROR_NUMBER() + '; ErrorMessage' + ERROR_MESSAGE() + CHAR(10) + CHAR(13) +
                                  'FileID: ' + @FileID + '; PageID: ' + @PageID
                    RAISERROR (@sql, 10, 1) WITH NOWAIT;
                END CATCH

                DELETE @dbccSinglePage
                SELECT @sql = 'Pages left to process: ' + CONVERT(VARCHAR(10), @pagesCount - @i) +
                              CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13) + CHAR(10) + CHAR(13),
                       @i = @i + 1
                RAISERROR (@sql, 10, 1) WITH NOWAIT;
                FETCH NEXT FROM dbccPageCursor INTO @FileID, @PageID, @SQLtoExec;
            END
            CLOSE dbccPageCursor;
            DEALLOCATE dbccPageCursor;

            EXEC ('SELECT ''' + @undeletedTableName + ''' AS TableName; SELECT * FROM ' + @undeletedTableName)
        END TRY
        BEGIN CATCH
            SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
            IF CURSOR_STATUS('global', 'dbccPageCursor') >= 0
            BEGIN
                CLOSE dbccPageCursor;
                DEALLOCATE dbccPageCursor;
            END
        END CATCH

        -- if the table was deleted we need to finish the restore page sequence
        IF @tableWasTruncated = 0
        BEGIN
            -- take a tail-log backup and then restore it to complete the page restore process
            DECLARE @currentDate VARCHAR(30)
            SELECT @currentDate = CONVERT(VARCHAR(30), GETDATE(), 112)
            RAISERROR ('Starting Log Tail backup to c:\Temp ...', 10, 1) WITH NOWAIT;
            PRINT ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            EXEC ('BACKUP LOG [' + @dbName + '] TO DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            RAISERROR ('Log Tail backup done.', 10, 1) WITH NOWAIT;
            RAISERROR ('Starting Log Tail restore from c:\Temp ...', 10, 1) WITH NOWAIT;
            PRINT ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            EXEC ('RESTORE LOG [' + @dbName + '] FROM DISK = ''c:\Temp\' + @dbName + '_TailLogBackup_' + @currentDate + '.trn''')
            RAISERROR ('Log Tail restore done.', 10, 1) WITH NOWAIT;
        END

        -- The last step is manual: insert the data from our temporary table into the original deleted table.

    The misconception here is that you can do a single-table restore properly in SQL Server. You can't. But with a little experimentation you can get pretty close to it. One way to possibly remove the dependency on a backup for retrieving deleted pages is to quickly run a script similar to the one above that gets the data directly from the data pages while the rows are still marked as ghost records. It could be done if we could beat the ghost record cleanup task.
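    For readers unfamiliar with page-level restore, here is a minimal standalone sketch of the sequence the script above automates; the page IDs and file paths are illustrative, not taken from a real run:

        -- Restore specific pages (file:page pairs) from the last full backup.
        RESTORE DATABASE AdventureWorks
            PAGE = '1:613,1:614'
            FROM DISK = 'e:\AW.bak';

        -- Apply any differential and transaction log backups here to roll the pages forward.

        -- Finish with a tail-log backup and restore, exactly as the script does.
        BACKUP LOG AdventureWorks TO DISK = 'c:\Temp\AW_TailLogBackup.trn';
        RESTORE LOG AdventureWorks FROM DISK = 'c:\Temp\AW_TailLogBackup.trn';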


  • Big smart ViewModels, dumb Views, and any model, the best MVVM approach?

    - by Edward Tanguay
    The following code is a refactoring of my previous MVVM approach (Fat Models, skinny ViewModels and dumb Views, the best MVVM approach?) in which I moved the logic and the INotifyPropertyChanged implementation from the model back up into the ViewModel. This makes more sense since, as was pointed out, you often have to use models that you either can't change or don't want to change, so your MVVM approach should be able to work with any model class as it happens to exist. This example still allows you to view the live data from your model in design mode in Visual Studio and Expression Blend, which I think is significant, since you could have a mock data store that the designer connects to, containing e.g. the smallest and largest strings that the UI can possibly encounter, so that the design can be adjusted to those extremes.

    Questions: I'm a bit surprised that I even have to put a timer in my ViewModel, since it seems like that is the job of INotifyPropertyChanged; it seems redundant, but it was the only way I could get the XAML UI to constantly (once per second) reflect the state of my model. So it would be interesting to hear from anyone who has taken this approach whether you encountered any disadvantages down the road, e.g. with threading or performance.

    The following code will work if you just copy the XAML and code-behind into a new WPF project.

    XAML:

        <Window x:Class="TestMvvm73892.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:TestMvvm73892"
            Title="Window1" Height="300" Width="300">
            <Window.Resources>
                <ObjectDataProvider x:Key="DataSourceCustomer"
                                    ObjectType="{x:Type local:CustomerViewModel}"
                                    MethodName="GetCustomerViewModel"/>
            </Window.Resources>
            <DockPanel DataContext="{StaticResource DataSourceCustomer}">
                <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
                    <TextBlock Text="{Binding Path=FirstName}"/>
                    <TextBlock Text=" "/>
                    <TextBlock Text="{Binding Path=LastName}"/>
                </StackPanel>
                <StackPanel DockPanel.Dock="Top" Orientation="Horizontal">
                    <TextBlock Text="{Binding Path=TimeOfMostRecentActivity}"/>
                </StackPanel>
            </DockPanel>
        </Window>

    Code-behind:

        using System;
        using System.Windows;
        using System.ComponentModel;
        using System.Threading;

        namespace TestMvvm73892
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();
                }
            }

            //view model
            public class CustomerViewModel : INotifyPropertyChanged
            {
                private string _firstName;
                private string _lastName;
                private DateTime _timeOfMostRecentActivity;
                private Timer _timer;

                public string FirstName
                {
                    get { return _firstName; }
                    set { _firstName = value; this.RaisePropertyChanged("FirstName"); }
                }

                public string LastName
                {
                    get { return _lastName; }
                    set { _lastName = value; this.RaisePropertyChanged("LastName"); }
                }

                public DateTime TimeOfMostRecentActivity
                {
                    get { return _timeOfMostRecentActivity; }
                    set { _timeOfMostRecentActivity = value; this.RaisePropertyChanged("TimeOfMostRecentActivity"); }
                }

                public CustomerViewModel()
                {
                    _timer = new Timer(CheckForChangesInModel, null, 0, 1000);
                }

                private void CheckForChangesInModel(object state)
                {
                    Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
                    MapFieldsFromModeltoViewModel(currentCustomer, this);
                }

                public static CustomerViewModel GetCustomerViewModel()
                {
                    CustomerViewModel customerViewModel = new CustomerViewModel();
                    Customer currentCustomer = CustomerViewModel.GetCurrentCustomer();
                    MapFieldsFromModeltoViewModel(currentCustomer, customerViewModel);
                    return customerViewModel;
                }

                public static void MapFieldsFromModeltoViewModel(Customer model, CustomerViewModel viewModel)
                {
                    viewModel.FirstName = model.FirstName;
                    viewModel.LastName = model.LastName;
                    viewModel.TimeOfMostRecentActivity = model.TimeOfMostRecentActivity;
                }

                public static Customer GetCurrentCustomer()
                {
                    return Customer.GetCurrentCustomer();
                }

                //INotifyPropertyChanged implementation
                public event PropertyChangedEventHandler PropertyChanged;
                private void RaisePropertyChanged(string property)
                {
                    if (PropertyChanged != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(property));
                    }
                }
            }

            //model
            public class Customer
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
                public DateTime TimeOfMostRecentActivity { get; set; }

                public static Customer GetCurrentCustomer()
                {
                    return new Customer
                    {
                        FirstName = "Jim",
                        LastName = "Smith",
                        TimeOfMostRecentActivity = DateTime.Now
                    };
                }
            }
        }
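    As a point of comparison, here is a minimal sketch of the event-based alternative the question is circling around: if the model can be changed (or wrapped) so that it raises its own change notifications, the ViewModel can simply forward them and the once-per-second timer disappears. The class names ObservableCustomer and ForwardingCustomerViewModel are hypothetical, not part of the original code:

        using System.ComponentModel;

        // A model that announces its own changes.
        public class ObservableCustomer : INotifyPropertyChanged
        {
            private string _firstName;
            public string FirstName
            {
                get { return _firstName; }
                set { _firstName = value; Raise("FirstName"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void Raise(string property)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(property));
            }
        }

        // A ViewModel that forwards model notifications instead of polling.
        public class ForwardingCustomerViewModel : INotifyPropertyChanged
        {
            private readonly ObservableCustomer _model;

            public ForwardingCustomerViewModel(ObservableCustomer model)
            {
                _model = model;
                // No timer: each model change is pushed straight to the view.
                _model.PropertyChanged += (s, e) => Raise(e.PropertyName);
            }

            public string FirstName { get { return _model.FirstName; } }

            public event PropertyChangedEventHandler PropertyChanged;
            private void Raise(string property)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(property));
            }
        }

    The trade-off is exactly the one the post names: this only works when the model can be touched, whereas the timer approach works against any model class as it happens to exist.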


  • CodePlex Daily Summary for Tuesday, January 04, 2011

    Popular Releases

    - Mini Memory Dump Diagnosis using Asp.Net: MiniDump HealthMonitor: Enables developers to generate mini memory dumps in case of unhandled exceptions or for specific exception scenarios without using any external tools, using only Asp.Net 2.0 and above. Many times in production and QA servers the issues require post-mortem or low-level system debugging efforts. This memory dump generator will help in those scenarios.
    - EnhSim: EnhSim 2.2.9 BETA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. Added in the Gobl...
    - xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 beta, build #1533. Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3; Added Assert.Equal(double expected, double actual, int precision)...
    - Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project. New feature - Added dynamic support to LINQ to JSON. New feature - Added dynamic support to the serializer. New feature - Added INotifyCollectionChanged to JContainer in the .NET 4 build. New feature - Added ReadAsDateTimeOffset to JsonReader. New feature - Added ReadAsDecimal to JsonReader. New feature - Added covariance to the IJEnumerable type parameter. New feature - Added XmlSerializer-style Specified property support. New feature - Added ...
    - StyleCop for ReSharper: StyleCop for ReSharper 5.1.14977.000: Prerequisites: Visual Studio 2008 / Visual Studio 2010; ReSharper 5.1.1753.4; StyleCop 4.4.1.2 Preview. This release adds no new features; it has bug fixes around performance and unhandled errors reported on YouTrack.
    - BloodSim: BloodSim - 1.3.1.0: Restructured the simulation log back end to something less stupid to drastically reduce simulation time and memory usage. Removed a debug log entry that was left over from testing of 1.3.0.0. Fixed a rounding and calculation error with Haste rating. Added option for Rune of Swordshattering
    - DbDocument: DbDoc Initial Version: DbDoc initial version
    - ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes; Atomic CMS installation guide
    - Kind Of Magic MSBuild Task: Beta 4: Update to keep up with the latest bug fixes. To those who don't like the Magic/NoMagic attributes, you may change these names in the KindOfMagic.targets file. Change this line: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)"/> to something like this: <MagicTask Assembly="@(IntermediateAssembly)" References="@(ReferencePath)" MagicAttribute="MyMagicAttribute" NoMagicAttribute="MyNoMagicAttribute"/>
    - N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major changes: Support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen); All-round improvements and bugfixes; File manager improvements (multiple file upload, resize images to fit); New image gallery; Infinite scroll paging on news; Content templates. First time with N2? Try the demo site. Download one of the template packs (above) and open the proj...
    - Wii Backup Fusion: Wii Backup Fusion 1.0: Norwegian translation; French translation; German translation; WBFS dump for analysis; Scalable full HQ cover; Support for log file; Load game images improved; Support for image splitting; Diff for images after transfer; Support for scrubbing modes; Search functionality for log; Recurse depth for Files/Load; Show progress while downloading game cover; Supports more databases for cover download; Game cover loading routines improved
    - AutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default. Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to the Data directory.** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. Updated Codeplex image. * An error report will be shown to the user which can help the developers find out what caused the error; this should improve support. ** We are working on ...
    - TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvements
    - BlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info. 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...
    - Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, today we are releasing the final version of Visifire, v3.6.6, with the following new features: TextDecorations property is implemented in Title for Chart; TitleTextDecorations property is implemented in Axis; MinPointHeight property is now applicable for Column and Bar charts. This release also includes a few bug fixes: ToolTipText property of DataSeries was not getting applied from Style; Chart threw an exception if the IndicatorEnabled property was set to true and Too...
    - StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets. Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivity and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop-compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: The sample applications use Microsoft's IoC container MEF. However, the WPF Application Framework (WAF) doesn't force you to use the same IoC container in your application. You can use ...
    - DocX: DocX v1.0.0.11: Building the Examples project: To build the Examples project, download DocX.dll and add it as a reference to the project. Overview: This version of DocX contains many bug fixes; it is a serious step towards a stable release. Added: 1) Unit testing project, 2) Examples project, 3) too many bug fixes to list here; see the source code change list history.
    - Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database); Keyboard input works now (using Console.ReadLine); Console colors work (using Console.ForegroundColor and .BackgroundColor)
    - Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability: many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading: PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!

    New Projects

    - 3D SharePoint Tag Cloud in Silverlight: This is a Silverlight tag cloud intended for use in SharePoint, but it can be configured for use separately as well. Based on a blog post from Peter Gerritsen (http://blog.petergerritsen.nl/2009/02/14/creating-a-3d-tagcloud-in-silverlight-part-1/).
    - AffinityUI for Unity 3D: AffinityUI makes writing GUI code for Unity 3D easier with a component-based API that allows you to write GUIs in a declarative fashion using a fluent interface, update events and powerful two-way databinding. A visual editor and scene GUI support are planned for the future.
    - calendar synchronization: This will enable tight integration between ivvy core and the Infragistics scheduler. Will utilize OData and native isolated storage instead of SQLite as planned earlier.
    - Card Games Library: The aim of this project is to offer a framework for creating card game logic and rules. This project is not focused on particular games but offers a flexible and extensible base for creating card games like poker, solitaire, rummy, etc.
    - DbDocument: The DbDoc tool seeks to be a helper tool side by side with the MS SQL Server Management Studio tool: you can design your DB tables in a visual way through diagrams and then use "DbDoc" to generate a design document in MS Word table format
    - Delux: Delux project for education
    - DnEcuDiag - A .NET Ecu Diagnostic Application: DnEcuDiag is an application framework for developing electronic control unit diagnostic functionality without the need to implement code.
    - Dynamics CRM 4.0 Recycle Bin: This add-on for Microsoft Dynamics CRM 4.0 enables Recycle Bin features (Restore, Permanent Delete) for CRM users.
    - EPPLib.it: The synchronous system for registering and maintaining Registro.it domain names uses the EPP (Extensible Provisioning Protocol). EPPLib lets you interface your own projects with the Registro.it synchronous system.
    - IoC4Fun: Very lightweight IoC or DI container; only requires .NET Framework 3.5; written in C#. For small apps or educational purposes. No intrusive code added, unlike other containers. Just register dependencies and let the container magically resolve the injected dependencies.
    - ISocial: WP7 application that centralizes the social networks
    - JohnnyIssa: begASP.NET Version 4.
    - LiveCPE: LiveCPE is an academic project based on C# and ASP.Net. It aims to be a social network website.
    - MS CRM - Skype Connector: The Skype add-on for Microsoft Dynamics CRM allows dialing CRM Contacts, Leads and so on via Skype. It's developed in ASP.NET and C#.
    - Ryan's Projects: a group of random C# projects
    - SeleniuMspec: SeleniuMspec is a template for Selenium IDE that generates Selenium tests for MSpec (a .NET BDD framework).
    - SharePoint FAQ Web Part: The SharePoint FAQ Web Part makes it easier for SharePoint users to offer an FAQ feature on their SharePoint sites. You'll no longer have to maintain HTML and anchors in the Content Editor Web Part, as the FAQs are now automatically generated from a SharePoint list.
    - Silverlight Planet: Community projects of Silverlight Planet, www.silverlightplanet.net.br
    - WP7 Expense Report: Expense Report for Windows Phone 7
    - xll - the easy way to create Excel add-ins: The xll C++ library makes it easy to create xll add-ins for Excel. It also supports the big grid, wide-character strings, and thread-safe functions.


  • Why won't my DIV height grow? Is it because I am using floats?

    - by user1684636
    Please help! I am trying to create some div sections, and some of these div sections contain other divs that are floated, all of which I have cleared. But the sections are not growing to accommodate the content inside of them. The following is my HTML:

        <div class="data">
          <div class="name">Data</div>
          <div class="first section">
            <div class="title">First Section</div>
            <div class="left settings">
              <div class="row">
                <div class="field">First Name</div>
                <div class="value">John</div>
              </div>
              <div class="row">
                <div class="field">Last Name</div>
                <div class="value">Smith</div>
              </div>
            </div>
            <div class="right settings">
              <div class="row">
                <div class="field">ID</div>
                <div class="value">321</div>
              </div>
              <div class="row">
                <div class="field">Group</div>
                <div class="value">Eng</div>
              </div>
            </div>
          </div>
          <div class="second section">
            <div class="title">Second Section</div>
          </div>
          <div class="third section">
            <div class="title">Third Section</div>
          </div>
        </div>

    The following is my CSS:

        div.data {
          position: relative;
          top: 1em;
          width: 100em;
        }
        div.data div.name {
          color: #0066FF;
          font-weight: bold;
        }
        div.data div.section div.title {
          color: green;
          font-weight: bold;
        }
        /** first section **/
        div.data div.first div.settings {
          position: relative;
          width: 799px;
          border-top: 1px solid;
          float: left;
        }
        div.data div.first div.left {
          border-left: 1px solid;
        }
        div.data div.first div.settings div.row {
          border-bottom: 1px solid;
          clear: both;
          height: 2em;
        }
        div.data div.first div.settings div.row div {
          float: left;
          height: 22px;
          padding: 5px;
          border-right: 1px solid;
        }
        div.data div.first div.settings div.row div.field {
          width: 192px;
        }
        div.data div.first div.settings div.row div.value {
          width: 585px;
        }
        /** second section **/
        div.data div.second {
          clear: both;
        }

    For the first section, I have a title and a table that is made up of a left and a right part. Both of these are floated left and cleared. The rows of the left and right tables have a field and a value, which are also floated left and right and cleared. At this point, I expect "first section" to grow as the content inside of it grows. But instead the height is stuck at 22px, the same height as the section title! What is going on?

    I tried overflow: auto for div.section and that worked. But when I tried to set this style for div.settings in "first section":

        div.data div.first div.settings {
          position: relative;
          top: 1em;
        }

    I ended up with scroll bars instead of the height growing to fit the new changes, and everything was out of whack. I am really at my wit's end trying to figure this out. If anyone can give me any suggestions on what I am doing wrong, it would be a great help. Thanks.
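    An aside that may explain the behavior, offered as a hedged sketch rather than a verified fix: floated children are removed from normal flow, so a parent whose only in-flow child is the title collapses to that title's height. Making the section contain its floats, reusing the question's own selectors, could look like this:

        /* Option 1: establish a new block formatting context so the
           section wraps its floated children (may clip or scroll if
           content overflows). */
        div.data div.section {
          overflow: hidden;
        }

        /* Option 2: a classic clearfix, which avoids the scroll bars
           that overflow can introduce. */
        div.data div.section:after {
          content: "";
          display: block;
          clear: both;
        }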


  • CodePlex Daily Summary for Tuesday, March 20, 2012

    CodePlex Daily Summary for Tuesday, March 20, 2012Popular ReleasesNearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (1.2.1420.191): BUG FIXED : When loading scripts from disk, the import of the web resource didn't do anything When scripts were saved to disk, it wasn't possible to edit them with an editorSQL Monitor - managing sql server performance: SQLMon 4.2 alpha 12: 1. improved process visualizer, now shows how many dead locks, and what are the locked objects 2. fixed some other problems.Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. 
This is needed for the alternate credentials feature when running the HTA...WebSocket4Net: WebSocket4Net 0.5: Changes in this release: fixed the wss default port bug; improved JsonWebSocket; supported setting the client access policy protocol for Silverlight; fixed a handshake issue in Silverlight; fixed a bug where the "Host" field in the handshake didn't contain the port if the port is not the default; supported passing in an Origin parameter for handshaking; supported reacting to pings from the server side; fixed a bug in data sending; fixed the bug of sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking; improved JSON subprotocol; supported sending ping from server to client; fixed a bug about sending a closing handshake with no message; refactored the code to improve protocol compatibility; fixed a bug about sub protocol configuration loading in Mono; improved BasicSubProtocol; added JsonWebSocketSessionDaun Management Studio: Daun Management Studio 0.1 (Alpha Version): These are the alpha application packages for Daun Management Studio to manage MongoDB Server. Please visit our official website http://www.daun-project.comSurvey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features, such as: Technical changes: - Use of jQuery, ASTreeview, Tabs, Tooltips and a new menu provider Features & Bugfixes: Survey list and search function Folder structure for surveys New menu structure Library list New library fields User list and search functions Layout options for a survey with CSS, page header and footer New IP filter security feature Enhanced token management New question fields such as ID, Alias...RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.28: changes NEW: Added support for "PixHub.eu" linksSmartNet: V1.0.0.0: DY SmartNet V1.0 [Chinese name garbled in extraction]callisto: callisto 2.0.21: Added an option to disable local host detection.Javascript .NET: Javascript .NET v0.6: Upgraded to the latest stable branch of v8 (/tags/3.9.18), and switched to using their scons build system. We no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 is now supported (v8 Isolates), although different contexts may not share objects or call each other. 64-bit .Net 4.0 DLL now included. (Download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)MyRouter (Virtual WiFi Router): MyRouter 1.0.6: This release should be more stable; there were a few bug fixes, including the x64 issue, as well as an error popping up when MyRouter started, which was caused by a NULL value.Google Books Downloader for Windows: Google Books Downloader-2.0.0.0.: Google Books DownloaderFinestra Virtual Desktops: 2.5.4501: This is a very minor update release. Please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loopsAcDown - Anime&Comic Downloader: AcDown v3.9.2: [The Chinese release notes were garbled in extraction; the recoverable fragments mention downloading from Acfun, Bilibili, YouTube, SF and other sites, a companion AcPlay player, an implementation in C# on .NET Framework 2.0, and support for 32- and 64-bit Windows XP/Vista/7/8.]ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Release Candidate: Your feedback is welcome - and this is your last chance to get your fixes in for this version! Includes installers for both the Feature Server extension and the Desktop extension, enhanced functionality for the Desktop tools, and an enhanced built-in Javascript Editor for the Feature Server component. This release candidate includes fixes to beta 4 that accommodate domain users for setting up the Server Component, and fixes for reporting/uploading references tracked in the revision table. See Code In-P...C.B.R. : Comic Book Reader: CBR 0.6: 20 issue trackers are closed and a lot of bugs too. Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements) Backstage - new input/output format choice control for the conversion Backstage - Add display, behaviour and register file type options in the extended options dialog Explorer list view has been transformed to a custom control. New group header, columns order and size are saved Single insta...New Projects{3S} SQL Smart Security "Protect your T-SQL know-how!": {3S} SQL Smart Security is an add-in which can be installed in Microsoft SQL Server Management Studio (SSMS). It enables software companies to create secured content for database objects. The add-in brings a much higher level of protection in comparison with SQL Server's built-in WITH ENCRYPTION feature.BETA - Content Slider for SharePoint 2010 / Office 365: SharePoint Banner / SharePoint 2010 Sliding Banner / Content Slider tools in Office 365 / Sliding Content in SharePoint is a general tool which can be used for sliding banners or any other sliding content to be placed on any Office 365 / SharePoint 2010 / SharePoint Foundation site.BF3 Development Server: The main aim of this project is to deliver a test server to all developers working on RCon (Remote-Administration-Console) tools for Battlefield 3. Currently the only way to test such work is to hire a real game server. BizTalk Server 2010 TCP/IP Adapter: This project is a migration of the existing BizTalk Server 2009 TCP/IP adapter to BizTalk Server 2010. I have made a few configuration changes which make this adapter and its installation compatible with BizTalk Server 2010. I have not modified the adapter source code.BryhtCommon: Makes WP7 development easier; it contains some useful methods.Bug.Net Defect Tracking Components: Bug.Net server-side controls and components to add defect (bug) tracking to your current ASP.NET website.Customize Survey With Client Object Model: Customize OOB Survey/Vote/Poll in SharePoint with the Client Object Model. Visit http://swatipoint.blogspot.in/2011/12/sharepoint-client-object-modellist.html for more detailsDropboxToDo: A simple todo, synchronizing by Dropbox or, in future, by SkydriveFoggy: Foggy is a WPF dashboard for FogBugz information.Fort Myers High School Website: A website for Fort Myers High School in Fort Myers, Florida. This website will allow both students and parents to better interact with the school. Developed in ASP.NET (C#).Gonte.DataAccess: Data access for .NET. It's developed in C#.Gonte.ObjectModel: Metadata about objects. It's developed in C#.Gonte.SqlGenerator: SQL generator. It's developed in C#.kLib: This project space is for data structures and classes which should always be available.
Any developer should use these in their projects. Liuyi.Phone.CharmScreen: Liuyi.Phone.CharmScreen Liuyi windows phone appLoU: Lord of Ultima helper suite.Managing Supplies: This WP7 project is able to manage your own suppliesNAV Fixed Assets 2012: Changes related to 'Dossier Fiscal' in Microsoft Dynamics NAV 2009 - New Model 30 - Changes to model 31 and model 32NCAA Tourney DotNetNuke Module: The NCAA Tourney is a DotNetNuke 3.X - 6.X module that allows you to add an NCAA tournament to your portal. You can allow users to record their picks for the tournament and then manage the outcome of the tournament, calculating the winner of your tourney based on a customizable point system. The module has been designed to be very user friendly and efficient for the end user as well as the administrator of the tournament.nothing here anymore: nothing here anymoreOrchard Dream Store Project: A simple website using Orchard CMS. For a school project.PowerShell Management Library for TEM: A project to provide PowerShell functionality for managing your Tivoli Endpoint Manager (built upon BigFix technology). You can locally or remotely manage endpoints and relays via this simple and easy-to-use PowerShell module.PrismWebBuilder: Web Builder ProjectProjeto Northwind: Northwind - FPUQLCF: QLCFShipwire API: Shipwire makes it easier for consumers of Shipwire's international shipment fulfilment service to integrate their XML API quickly and easily. Current features are: Inventory Service Rate Service (Shipping Costs) Future features are: Order Entry Service Order Tracking ServiceSmith XNA tools: Smith's XNA tools is a set of useful tools I made to improve some basic XNA features and make game design easier.ST Recover: ST Recover can read Atari ST floppy disks on a PC under Windows, including special formats such as 800 or 900 KB and damaged or desynchronized disks, and produces standard .ST disk image files. The image files can then be read in ST emulators such as WinSTon or Steem.SyncSMS: Windows desktop client for the Android app SyncSMS. This code is not affiliated with SyncSMS in any way.tempzz: tempzzTruxtor: Truxtor modular concept for electronic gadgetsTT SA TEST1: TT SA Test 1VisualQuantCode: Neuroquants is a library in C# for quants. It's developed in C#.Weeps: Generates bass and drum lines as MIDI to serve as a backing track for the user's guitar playing.xxtest: xxtest[Two further project entries, in Russian, for a Digital Design 2012 training course; their descriptions were garbled in extraction and one was a verbatim duplicate of the other.]


  • Optimizing a 3D World JavaScript Animation

    - by johnny
    Hi! I've recently come up with the idea to create a tag-cloud-like animation shaped like the earth. I've extracted the coastline coordinates from ngdc.noaa.gov and wrote a little script that displays them in my browser. As you can imagine, the whole coastline consists of about 48919 points, which my script would render individually (each coordinate being represented by one span). Obviously no browser is capable of rendering this fluently - but it would be nice if I could render as many as, let's say, 200 spans (twice as many as now) on my old P4 2.8 GHz (as a representative benchmark). Are there any JavaScript optimizations I could use in order to speed up the display of those spans? One 'coordinate':

    <div id="world_pixels">
      <span id="wp_0" style="position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;" onmouseover="magnify_world_pixel('wp_0');" onmouseout="shrink_world_pixel('wp_0');" onClick="set_askcue_bar('', 'new york')">new york</span>
    </div>

    The script:

    $(document).ready(function(){
        world_pixels = $("#world_pixels span");
        world_pixels.spin();
        setInterval("world_pixels.spin()",1500);
    });

    z = new Array();

    $.fn.spin = function () {
        for (i=0; i<this.length; i++) {
            /* actual screen coordinates: x/y/z --> left/font-size/top
                          300/13/0
               300/6/300     |  /
                             | /
               0/13/300  ----|----  600/13/300
                            /|
                           / |
                300/20/300   300/13/600
            */
            /* scale font size */
            var resize_x = 1;
            /* scale width */
            var resize_y = 2.5;
            /* scale height */
            var resize_z = 2.5;
            var from_left = 300;
            var from_top = 20;
            /* actual math coordinates:
                      1
               -1     |  /
                      | /
               1  ----|----  -1
                     /|
                    / |
                   1  -1
            */
            //var get_element = document.getElementById();
            //var font_size = parseInt(this.style.fontSize);
            var font_size = parseInt($(this[i]).css("font-size"));
            var left = parseInt($(this[i]).css("left"));
            if (coast_line_array[i][1]) {
            } else {
                var top = parseInt($(this[i]).css("top"));
                z[i] = from_top + (top - (300 * resize_z)) / (300 * resize_z); //global because it's used in other functions later on
                var top_new = from_top + Math.round(Math.cos(coast_line_array[i][2]/90*Math.PI) * (300 * resize_z) + (300 * resize_z));
                $(this[i]).css("top", top_new);
                coast_line_array[i][3] = 1;
            }
            var x = resize_x * (font_size - 13) / 7;
            var y = from_left + (left - (300 * resize_y)) / (300 * resize_y);
            if (y >= 0) {
                this[i].phi = Math.acos(x/(Math.sqrt(x^2 + y^2)));
            } else {
                this[i].phi = 2*Math.PI - Math.acos(x/(Math.sqrt(x^2 + y^2))); i
            }
            this[i].theta = Math.acos(z[i]/Math.sqrt(x^2 + y^2 + z[i]^2));
            var font_size_new = resize_x * Math.round(Math.sin(coast_line_array[i][4]/90*Math.PI) * Math.cos(coast_line_array[i][0]/180*Math.PI) * 7 + 13);
            var left_new = from_left + Math.round(Math.sin(coast_line_array[i][5]/90*Math.PI) * Math.sin(coast_line_array[i][0]/180*Math.PI) * (300 * resize_y) + (300 * resize_y));
            //coast_line_array[i][6] = coast_line_array[i][7]+1;
            if ((coast_line_array[i][0] + 1) > 180) {
                coast_line_array[i][0] = -180;
            } else {
                coast_line_array[i][0] = coast_line_array[i][0] + 0.25;
            }
            $(this[i]).css("font-size", font_size_new);
            $(this[i]).css("left", left_new);
        }
    }

    resize_x = 1;

    function magnify_world_pixel(element) {
        $("#"+element).animate({ fontSize: resize_x*30+"px" }, { duration: 1000 });
    }

    function shrink_world_pixel(element) {
        $("#"+element).animate({ fontSize: resize_x*6+"px" }, { duration: 1000 });
    }

    I'd appreciate any suggestions to optimize my script; maybe there is even a totally different approach on how to go about this.
    The whole .js file which stores the array for all the coordinates is available on my page; the file is about 2.9 MB, so you might consider pulling the .zip for local testing: metaroulette.com/files/31218.zip metaroulette.com/files/31218.js

    P.S. the PHP I use to create the spans:

    <?php
    //$arbitrary_characters = array('a','b','c','ddsfsdfsdf','e','f','g','h','isdfsdffd','j','k','l','mfdgcvbcvbs','n','o','p','q','r','s','t','uasdfsdf','v','w','x','y','z','0','1','2','3','4','5','6','7','8','9',);
    $arbitrary_characters = array('cat','table','cool','deloitte','askcue','what','more','less','adjective','nice','clinton','mars','jupiter','testversion','beta','hilarious','lolcatz','funny','obama','president','nice','what','misplaced','category','people','religion','global','skyscraper','new york','dubai','helsinki','volcano','iceland','peter','telephone','internet', 'dialer', 'cord', 'movie', 'party', 'chris', 'guitar', 'bentley', 'ford', 'ferrari', 'etc', 'de facto');
    for ($i=0; $i<96; $i++) {
        $arb_digits = rand (0,45);
        $arbitrary_character = $arbitrary_characters[$arb_digits];
        //$arbitrary_character = ".";
        echo "<span id=\"wp_$i\" style=\"position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;\" onmouseover=\"magnify_world_pixel('wp_$i');\" onmouseout=\"shrink_world_pixel('wp_$i');\" onClick=\"set_askcue_bar('', '$arbitrary_character')\">$arbitrary_character</span>\n";
    }
    ?>
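    One direction worth exploring (a sketch, not from the original thread: it reuses the question's coast_line_array and markup, but the constants and helper names here are illustrative): cache the raw DOM spans once, keep the angles in plain arrays instead of reading them back out of CSS every frame, precompute the scale factors, and write each style property directly so jQuery's css() getter/setter overhead drops out of the inner loop.

    var spans = document.getElementById('world_pixels').getElementsByTagName('span');
    var n = spans.length;
    var lon = new Array(n);   // longitude per point, in degrees
    var lat = new Array(n);   // latitude per point, in degrees

    for (var i = 0; i < n; i++) {
        lon[i] = coast_line_array[i][0];   // data array from the question
        lat[i] = coast_line_array[i][2];
    }

    var RAD = Math.PI / 180;
    var R_Y = 300 * 2.5;   // the question's resize_y scale
    var R_Z = 300 * 2.5;   // the question's resize_z scale

    function renderFrame() {
        for (var i = 0; i < n; i++) {
            lon[i] = (lon[i] + 0.25 > 180) ? -180 : lon[i] + 0.25;
            var lonR = lon[i] * RAD;
            var latR = lat[i] * RAD;
            var s = spans[i].style;
            // one direct write per property; no css() reads, no per-span jQuery objects
            s.left = (300 + R_Y + Math.round(Math.sin(latR) * Math.sin(lonR) * R_Y)) + 'px';
            s.top = (20 + R_Z + Math.round(Math.cos(latR) * R_Z)) + 'px';
            s.fontSize = Math.round(13 + 7 * Math.sin(latR) * Math.cos(lonR)) + 'px';
        }
        if (window.requestAnimationFrame) {
            requestAnimationFrame(renderFrame);
        } else {
            setTimeout(renderFrame, 40);   // fallback for older browsers
        }
    }
    renderFrame();

    Beyond that, spans whose computed position lands on the back hemisphere could simply be hidden (s.display = 'none'), which roughly halves the number of styled elements per frame.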


  • MVVM - implementing 'IsDirty' functionality in a ViewModel in order to save data

    - by Brendan
    Hi, being new to WPF & MVVM I'm struggling with some basic functionality. Let me first explain what I am after, and then attach some example code... I have a screen showing a list of users, and I display the details of the selected user on the right-hand side with editable textboxes. I then have a Save button which is data-bound, but I would only like this button to be enabled when data has actually changed; i.e., I need to check for "dirty data". I have a fully MVVM example in which I have a Model called User:

    namespace Test.Model
    {
        class User
        {
            public string UserName { get; set; }
            public string Surname { get; set; }
            public string Firstname { get; set; }
        }
    }

    Then, the ViewModel looks like this:

    using System.Collections.ObjectModel;
    using System.Collections.Specialized;
    using System.Windows.Input;
    using Test.Model;

    namespace Test.ViewModel
    {
        class UserViewModel : ViewModelBase
        {
            //Private variables
            private ObservableCollection<User> _users;
            RelayCommand _userSave;

            //Properties
            public ObservableCollection<User> User
            {
                get
                {
                    if (_users == null)
                    {
                        _users = new ObservableCollection<User>();
                        //I assume I need this handler, but I am struggling to implement it successfully
                        //_users.CollectionChanged += HandleChange;

                        //Populate with users
                        _users.Add(new User {UserName = "Bob", Firstname="Bob", Surname="Smith"});
                        _users.Add(new User {UserName = "Smob", Firstname="John", Surname="Davy"});
                    }
                    return _users;
                }
            }

            //Not sure what to do with this?!?!
            //private void HandleChange(object sender, NotifyCollectionChangedEventArgs e)
            //{
            //    if (e.Action == NotifyCollectionChangedAction.Remove)
            //    {
            //        foreach (TestViewModel item in e.NewItems)
            //        {
            //            //Removed items
            //        }
            //    }
            //    else if (e.Action == NotifyCollectionChangedAction.Add)
            //    {
            //        foreach (TestViewModel item in e.NewItems)
            //        {
            //            //Added items
            //        }
            //    }
            //}

            //Commands
            public ICommand UserSave
            {
                get
                {
                    if (_userSave == null)
                    {
                        _userSave = new RelayCommand(param => this.UserSaveExecute(), param => this.UserSaveCanExecute);
                    }
                    return _userSave;
                }
            }

            void UserSaveExecute()
            {
                //Here I will call my DataAccess to actually save the data
            }

            bool UserSaveCanExecute
            {
                get
                {
                    //This is where I would like to know whether the currently selected item has been edited and is thus "dirty"
                    return false;
                }
            }

            //constructor
            public UserViewModel()
            {
            }
        }
    }

    The "RelayCommand" is just a simple wrapper class, as is the "ViewModelBase".
    (I'll attach the latter though, just for clarity.)

    using System;
    using System.ComponentModel;

    namespace Test.ViewModel
    {
        public abstract class ViewModelBase : INotifyPropertyChanged, IDisposable
        {
            protected ViewModelBase()
            {
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected virtual void OnPropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = this.PropertyChanged;
                if (handler != null)
                {
                    var e = new PropertyChangedEventArgs(propertyName);
                    handler(this, e);
                }
            }

            public void Dispose()
            {
                this.OnDispose();
            }

            protected virtual void OnDispose()
            {
            }
        }
    }

    Finally - the XAML:

    <Window x:Class="Test.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:vm="clr-namespace:Test.ViewModel"
            Title="MainWindow" Height="350" Width="525">
        <Window.DataContext>
            <vm:UserViewModel/>
        </Window.DataContext>
        <Grid>
            <ListBox Height="238" HorizontalAlignment="Left" Margin="12,12,0,0" Name="listBox1" VerticalAlignment="Top" Width="197" ItemsSource="{Binding Path=User}" IsSynchronizedWithCurrentItem="True">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <StackPanel>
                            <TextBlock Text="{Binding Path=Firstname}"/>
                            <TextBlock Text="{Binding Path=Surname}"/>
                        </StackPanel>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
            <Label Content="Username" Height="28" HorizontalAlignment="Left" Margin="232,16,0,0" Name="label1" VerticalAlignment="Top" />
            <TextBox Height="23" HorizontalAlignment="Left" Margin="323,21,0,0" Name="textBox1" VerticalAlignment="Top" Width="120" Text="{Binding Path=User/UserName}" />
            <Label Content="Surname" Height="28" HorizontalAlignment="Left" Margin="232,50,0,0" Name="label2" VerticalAlignment="Top" />
            <TextBox Height="23" HorizontalAlignment="Left" Margin="323,52,0,0" Name="textBox2" VerticalAlignment="Top" Width="120" Text="{Binding Path=User/Surname}" />
            <Label Content="Firstname" Height="28" HorizontalAlignment="Left" Margin="232,84,0,0" Name="label3" VerticalAlignment="Top" />
            <TextBox Height="23" HorizontalAlignment="Left" Margin="323,86,0,0" Name="textBox3" VerticalAlignment="Top" Width="120" Text="{Binding Path=User/Firstname}" />
            <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="368,159,0,0" Name="button1" VerticalAlignment="Top" Width="75" Command="{Binding Path=UserSave}" />
        </Grid>
    </Window>

    So basically, when I edit a surname, the Save button should be enabled; and if I undo my edit - well, then it should be disabled again, as nothing has changed. I have seen this in many examples, but have not yet found out how to do it. Any help would be much appreciated! Brendan
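    One common way to get what the question asks for (a sketch under assumptions, not the only pattern): let the model raise change notifications and carry an IsDirty flag, then have UserSaveCanExecute read that flag. The class below reuses the names from the question; the MarkDirty/MarkClean helpers are made up for illustration.

    using System.ComponentModel;

    namespace Test.Model
    {
        // Sketch: User raises PropertyChanged and tracks whether it has unsaved edits.
        class User : INotifyPropertyChanged
        {
            private string _userName;
            private string _surname;
            private string _firstname;

            public bool IsDirty { get; private set; }

            public string UserName
            {
                get { return _userName; }
                set { if (_userName != value) { _userName = value; MarkDirty("UserName"); } }
            }

            public string Surname
            {
                get { return _surname; }
                set { if (_surname != value) { _surname = value; MarkDirty("Surname"); } }
            }

            public string Firstname
            {
                get { return _firstname; }
                set { if (_firstname != value) { _firstname = value; MarkDirty("Firstname"); } }
            }

            // Call after a successful save to reset the flag.
            public void MarkClean() { IsDirty = false; OnPropertyChanged("IsDirty"); }

            private void MarkDirty(string propertyName)
            {
                IsDirty = true;
                OnPropertyChanged(propertyName);
                OnPropertyChanged("IsDirty");
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }
    }

    UserSaveCanExecute would then check the selected User's IsDirty, and UserSaveExecute would call MarkClean() after a successful save. Note this flags any edit as dirty; if undoing an edit back to the original value should make the item clean again, you would have to compare against a snapshot of the original values (IEditableObject is one route).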


  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss Handle with Care Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely.
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”


  • Copying one form's values to another form using jQuery

    - by rsturim
    I have a "shipping" form, and I want to offer users the ability to copy their input values over to their "billing" form by simply checking a checkbox. I've coded up a solution that works -- but I'm sort of new to jQuery and wanted some criticism on how I went about achieving this. Is this well done -- are there any refactorings you'd recommend? Any advice would be much appreciated!

    The Script

    <script type="text/javascript">
    $(function() {
        $("#copy").click(function() {
            if ($(this).is(":checked")) {
                var $allShippingInputs = $(":input:not(input[type=submit])", "form#shipping");
                $allShippingInputs.each(function() {
                    var billingInput = "#" + this.name.replace("ship", "bill");
                    $(billingInput).val($(this).val());
                })
                //console.log("checked");
            } else {
                $(':input', '#billing')
                    .not(':button, :submit, :reset, :hidden')
                    .val('')
                    .removeAttr('checked')
                    .removeAttr('selected');
                //console.log("not checked")
            }
        });
    });
    </script>

    The Form

    <div>
      <form action="" method="get" name="shipping" id="shipping">
        <fieldset>
          <legend>Shipping</legend>
          <ul>
            <li>
              <label for="ship_first_name">First Name:</label>
              <input type="text" name="ship_first_name" id="ship_first_name" value="John" size="" />
            </li>
            <li>
              <label for="ship_last_name">Last Name:</label>
              <input type="text" name="ship_last_name" id="ship_last_name" value="Smith" size="" />
            </li>
            <li>
              <label for="ship_state">State:</label>
              <select name="ship_state" id="ship_state">
                <option value="RI">Rhode Island</option>
                <option value="VT" selected="selected">Vermont</option>
                <option value="CT">Connecticut</option>
              </select>
            </li>
            <li>
              <label for="ship_zip_code">Zip Code</label>
              <input type="text" name="ship_zip_code" id="ship_zip_code" value="05401" size="8" />
            </li>
            <li>
              <input type="submit" name="" />
            </li>
          </ul>
        </fieldset>
      </form>
    </div>
    <div>
      <form action="" method="get" name="billing" id="billing">
        <fieldset>
          <legend>Billing</legend>
          <ul>
            <li>
              <input type="checkbox" name="copy" id="copy" />
              <label for="copy">Same as my shipping</label>
            </li>
            <li>
              <label for="bill_first_name">First Name:</label>
              <input type="text" name="bill_first_name" id="bill_first_name" value="" size="" />
            </li>
            <li>
              <label for="bill_last_name">Last Name:</label>
              <input type="text" name="bill_last_name" id="bill_last_name" value="" size="" />
            </li>
            <li>
              <label for="bill_state">State:</label>
              <select name="bill_state" id="bill_state">
                <option>-- Choose State --</option>
                <option value="RI">Rhode Island</option>
                <option value="VT">Vermont</option>
                <option value="CT">Connecticut</option>
              </select>
            </li>
            <li>
              <label for="bill_zip_code">Zip Code</label>
              <input type="text" name="bill_zip_code" id="bill_zip_code" value="" size="8" />
            </li>
            <li>
              <input type="submit" name="" />
            </li>
          </ul>
        </fieldset>
      </form>
    </div>
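    Since the poster explicitly asked for refactoring ideas, here is one tightened version of the copy logic (a sketch; it relies only on the form ids and the ship_/bill_ naming convention from the markup above, and .on() assumes jQuery 1.7+ - with earlier jQuery versions, .change() behaves the same way here):

    $(function () {
        var $shippingInputs = $('#shipping').find(':input').not(':submit');

        $('#copy').on('change', function () {
            var checked = this.checked;
            $shippingInputs.each(function () {
                // stricter than replace('ship', 'bill'): only touch the prefix
                var $billing = $('#' + this.name.replace(/^ship_/, 'bill_'));
                $billing.val(checked ? $(this).val() : '');
            });
        });
    });

    Two small design points: keying off change instead of click keeps keyboard toggling working, and clearing by iterating the same mapped pairs (rather than wiping the whole billing form) leaves unrelated billing fields, such as the checkbox itself, alone.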


  • How to match ColdFusion encryption with Java 1.4.2?

    - by JohnTheBarber
    * Sweet - thanks to Edward Smith for the CF TechNote that indicated the key from ColdFusion was Base64-encoded. See generateKey() for the 'fix'.

    My task is to use Java 1.4.2 to match the results of a given ColdFusion code sample for encryption. Known/given values:

    A 24-byte key
    A 16-byte salt (IVorSalt)
    Encoding is Hex
    Encryption algorithm is AES/CBC/PKCS5Padding
    A sample clear-text value
    The encrypted value of the sample clear-text after going through the ColdFusion code

    Assumptions:

    The number of iterations is not specified in the ColdFusion code, so I assume only one iteration
    24-byte key, so I assume 192-bit encryption

    Given/working ColdFusion encryption code sample:

    <cfset ThisSalt = "16byte-salt-here">
    <cfset ThisAlgorithm = "AES/CBC/PKCS5Padding">
    <cfset ThisKey = "a-24byte-key-string-here">
    <cfset thisAdjustedNow = now()>
    <cfset ThisDateTimeVar = DateFormat( thisAdjustedNow , "yyyymmdd" )>
    <cfset ThisDateTimeVar = ThisDateTimeVar & TimeFormat( thisAdjustedNow , "HHmmss" )>
    <cfset ThisTAID = ThisDateTimeVar & "|" & someOtherData>
    <cfset ThisTAIDEnc = Encrypt( ThisTAID , ThisKey , ThisAlgorithm , "Hex" , ThisSalt)>

    My Java 1.4.2 encryption/decryption code swag:

    package so.example;

    import java.security.*;
    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import org.apache.commons.codec.binary.*;
    import sun.misc.BASE64Decoder; // needed for the Base64 'fix' below (ships with Java 1.4.2)

    public class SO_AES192 {
        private static final String _AES = "AES";
        private static final String _AES_CBC_PKCS5Padding = "AES/CBC/PKCS5Padding";
        private static final String KEY_VALUE = "a-24byte-key-string-here";
        private static final String SALT_VALUE = "16byte-salt-here";
        private static final int ITERATIONS = 1;
        private static IvParameterSpec ivParameterSpec;

        public static String encryptHex(String value) throws Exception {
            Key key = generateKey();
            Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding);
            ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes());
            c.init(Cipher.ENCRYPT_MODE, key, ivParameterSpec);
            String valueToEncrypt = null;
            String eValue = value;
            for (int i = 0; i < ITERATIONS; i++) {
                // valueToEncrypt = SALT_VALUE + eValue; // pre-pend salt - Length > sample length
                valueToEncrypt = eValue; // don't pre-pend salt - Length = sample length
                byte[] encValue = c.doFinal(valueToEncrypt.getBytes());
                eValue = Hex.encodeHexString(encValue);
            }
            return eValue;
        }

        public static String decryptHex(String value) throws Exception {
            Key key = generateKey();
            Cipher c = Cipher.getInstance(_AES_CBC_PKCS5Padding);
            ivParameterSpec = new IvParameterSpec(SALT_VALUE.getBytes());
            c.init(Cipher.DECRYPT_MODE, key, ivParameterSpec);
            String dValue = null;
            char[] valueToDecrypt = value.toCharArray();
            for (int i = 0; i < ITERATIONS; i++) {
                byte[] decordedValue = Hex.decodeHex(valueToDecrypt);
                byte[] decValue = c.doFinal(decordedValue);
                // dValue = new String(decValue).substring(SALT_VALUE.length()); // when salt is pre-pended
                dValue = new String(decValue); // when salt is not pre-pended
                valueToDecrypt = dValue.toCharArray();
            }
            return dValue;
        }

        private static Key generateKey() throws Exception {
            // Key key = new SecretKeySpec(KEY_VALUE.getBytes(), _AES); // this was wrong
            // had to un-Base64 the 'known' 24-byte key:
            Key key = new SecretKeySpec(new BASE64Decoder().decodeBuffer(KEY_VALUE), _AES);
            return key;
        }
    }

    I cannot create a matching encrypted value nor decrypt a given encrypted value. My guess is that it's something to do with how I'm handling the initial vector/salt.
I'm not very crypto-savvy but I'm thinking I should be able to take the sample clear-text and produce the same encrypted value in Java as ColdFusion produced. I am able to encrypt/decrypt my own data with my Java code (so I'm consistent) but I cannot match nor decrypt the ColdFusion sample encrypted value. I have access to a local webservice that can test the encrypted output. The given ColdFusion output sample passes/decrypts fine (of course). If I try to decrypt the same sample with my Java code (using the actual key and salt) I get a "Given final block not properly padded" error. I get the same net result when I pass my attempt at encryption (using the actual key and salt) to the test webservice. Any Ideas?
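    Given the note at the top of the question - the ColdFusion key turned out to be Base64-encoded - a quick sanity check is to confirm that the decoded key really is 24 bytes, since AES-192 requires exactly that. A sketch using the commons-codec Base64 class the post already pulls in (the key string is the question's placeholder, not a real key):

    import org.apache.commons.codec.binary.Base64;

    public class KeyCheck {
        public static void main(String[] args) {
            String keyFromColdFusion = "a-24byte-key-string-here"; // placeholder value
            byte[] raw = Base64.decodeBase64(keyFromColdFusion.getBytes());
            // AES-192 needs exactly 24 key bytes; anything else means the key
            // string is being interpreted incorrectly.
            System.out.println("decoded key length: " + raw.length);
        }
    }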


  • Enabling DNS for IPv6 infrastructure

    After the successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually, we would use host names in order to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolving. This is the third article in a series on IPv6 configuration:

    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

    What's your name and your IPv6 address?

    $ sudo service bind9 status
     * bind9 is running

    If the service is not recognised, you have to install it first on your system. This is done very easily and quickly like so:

    $ sudo apt-get install bind9

    Once again, there is no specialised package for IPv6; the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file.

    $ sudo nano /etc/bind/named.conf.options

    acl iosnet {
            127.0.0.1;
            192.168.1.0/24;
            ::1/128;
            2001:db8:bad:a55::/64;
    };

    listen-on { iosnet; };
    listen-on-v6 { any; };
    allow-query { iosnet; };
    allow-transfer { iosnet; };

    The most important directive is listen-on-v6. This enables named to bind to the IPv6 addresses specified on your system. The easiest option is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man-pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and to which IP and IPv6 addresses it is bound, like so:

    $ sudo service bind9 restart
    $ sudo netstat -lnptu | grep "named\W*$"
    tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
    tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
    tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
    udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
    udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
    udp6       0      0 :::53                 :::*                                1734/named

    Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

    $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
    Using domain server:
    Name: 2001:db8:bad:a55::2
    Address: 2001:db8:bad:a55::2#53
    Aliases:

    www.6bone.net is an alias for 6bone.net.
    6bone.net has IPv6 address 2001:5c0:1000:10::2

    Alright, our newly configured BIND named is fully operational. Eventually, you might be more familiar with the dig command. Here is the same kind of IPv6 host name resolution, but it will provide more details about that particular host as well as the domain in general.

    $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

    More details on the Berkeley Internet Name Domain (bind) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.

    Setting up your own DNS zone

    Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolving. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local.

    $ sudo nano /etc/bind/named.conf.local

    //
    // Do any local configuration here
    //
    zone "ios.mu" {
            type master;
            file "/etc/bind/zones/db.ios.mu";
    };

    Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses.

    $ sudo nano /etc/bind/zones/db.ios.mu

    $ORIGIN .
    $TTL 259200     ; 3 days
    ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                    2014031101 ; serial
                                    28800      ; refresh (8 hours)
                                    7200       ; retry (2 hours)
                                    604800     ; expire (1 week)
                                    86400      ; minimum (1 day)
                                    )
                            NS      server.ios.mu.
    $ORIGIN ios.mu.
    server                  A       192.168.1.2
    server                  AAAA    2001:db8:bad:a55::2
    client1                 A       192.168.1.3
    client1                 AAAA    2001:db8:bad:a55::3
    client2                 A       192.168.1.4
    client2                 AAAA    2001:db8:bad:a55::4

    With a couple of machines in place, it's time to reload that new configuration. Note: Each time you change your zone databases, you have to modify the serial information, too. Named loads the plain text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, named will not use the new records from the text file but the indexed ones, unless you flush the index and force a reload of the zone. This can be done easily by either restarting the named:

    $ sudo service bind9 restart

    or by reloading the configuration file using the name server control utility - rndc:

    $ sudo rndc reconfig

    Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name trying to get its IPv6 address, like so:

    $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
    Using domain server:
    Name: 2001:db8:bad:a55::2
    Address: 2001:db8:bad:a55::2#53
    Aliases:

    server.ios.mu has IPv6 address 2001:db8:bad:a55::2

    Looks good. Alternatively, you could have just ping'd the system as well, using the ping6 command instead of the regular ping:

    $ ping6 server
    PING server(2001:db8:bad:a55::2) 56 data bytes
    64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
    64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
    ^C
    --- ios1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1001ms
    rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

    That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
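    Forward resolution now works; a natural companion, not covered above, is reverse resolution, so that IPv6 addresses map back to host names. A minimal sketch reusing the article's ios.mu zone and the 2001:db8:bad:a55::/64 prefix (the ip6.arpa labels are the prefix nibbles written in reverse order - derived by hand here, so double-check them against your own prefix). First, the zone definition in named.conf.local:

    zone "5.5.a.0.d.a.b.0.8.b.d.0.1.0.0.2.ip6.arpa" {
            type master;
            file "/etc/bind/zones/db.2001.db8.bad.a55";
    };

    And the matching zone database with PTR records, one nibble per label, least significant first:

    $TTL 259200     ; 3 days
    @                       IN SOA  ios.mu. hostmaster.ios.mu. (
                                    2014031101 ; serial - bump on every change
                                    28800      ; refresh
                                    7200       ; retry
                                    604800     ; expire
                                    86400      ; minimum
                                    )
                            NS      server.ios.mu.
    2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0    PTR    server.ios.mu.
    3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0    PTR    client1.ios.mu.
    4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0    PTR    client2.ios.mu.

    After bumping the serial and running sudo rndc reconfig, a lookup such as $ host 2001:db8:bad:a55::2 should answer with server.ios.mu.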


  • Oracle Database 12c: Information Lifecycle Management (ILM) Storage Enhancements

    - by Liu Maclean
    Oracle Database 12c introduces several Information Lifecycle Management (ILM) storage enhancements, among them Automatic Data Optimization (ADO) with Heat Map tracking, Automatic Data Placement (ADP), and the new online datafile move. [Most of the original Chinese commentary in this article was garbled during extraction; the SQL examples and English passages below are preserved.]

    In 12.1.0.1, Automatic Data Optimization and Heat Map come with the following restrictions:

    Heat Map and ADO are not supported in a multitenant container database (CDB).
    Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns.
    Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column.
    Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level.
    ADO does not perform checks for storage space in a target tablespace when using storage tiering.
    ADO is not supported on tables with object types or materialized views.
    ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler.
    If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later.
    Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode.
    ADO has restrictions related to moving tables and table partitions.

    ADO policies are defined at the row or segment level with CREATE TABLE or ALTER TABLE, for example:

    CREATE TABLE sales_ado
    (PROD_ID NUMBER NOT NULL,
     CUST_ID NUMBER NOT NULL,
     TIME_ID DATE NOT NULL,
     CHANNEL_ID NUMBER NOT NULL,
     PROMO_ID NUMBER NOT NULL,
     QUANTITY_SOLD NUMBER(10,2) NOT NULL,
     AMOUNT_SOLD NUMBER(10,2) NOT NULL
    )
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
      2  FROM USER_ILMPOLICIES;

    POLICY_NAME          POLICY_TYPE                ENABLED
    -------------------- -------------------------- --------------
    P41                  DATA MOVEMENT              YES

    ALTER TABLE sales MODIFY PARTITION sales_1995
    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO ACCESS;

    SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled
    FROM USER_ILMPOLICIES;

    POLICY_NAME              POLICY_TYPE   ENABLE
    ------------------------ ------------- ------
    P1                       DATA MOVEMENT YES
    P2                       DATA MOVEMENT YES

    /* You can disable an ADO policy with the following */
    ALTER TABLE sales_ado ILM DISABLE POLICY P1;
    /* You can delete an ADO policy with the following */
    ALTER TABLE sales_ado ILM DELETE POLICY P1;
    /* You can disable all ADO policies with the following */
    ALTER TABLE sales_ado ILM DISABLE_ALL;
    /* You can delete all ADO policies with the following */
    ALTER TABLE sales_ado ILM DELETE_ALL;
    /* You can disable an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2;
    /* You can delete an ADO policy in a partition with the following */
    ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2;

    ADO relies on activity tracking, which works at two levels: SEGMENT-LEVEL tracking records access at the segment level, and ROW-LEVEL tracking records modification times at the row level. Examples:

    1. Enable segment-level activity tracking:
    ALTER TABLE interval_sales ILM ENABLE ACTIVITY TRACKING SEGMENT ACCESS;

    2. Track creation and write times:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME, WRITE TIME);

    3. Track read times:
    ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (READ TIME);

    In 12.1.0.1 the HEAT_MAP parameter can be set at the system or session level. To enable Heat Map tracking:

    ALTER SYSTEM SET HEAT_MAP = ON;

    When Heat Map is enabled, segment access in the database is tracked (objects in the SYSTEM and SYSAUX tablespaces are excluded). To disable it again:

    ALTER SYSTEM SET HEAT_MAP = OFF;

    The HEAT_MAP parameter also controls Automatic Data Optimization; Heat Map data is what ADO policies evaluate. Real-time tracking data is exposed through V$HEAT_MAP_SEGMENT:

    SQL> select * from V$heat_map_segment;
    no rows selected

    SQL> alter session set heat_map=on;
    Session altered.

    SQL> select * from scott.emp;
    [the 14 demo rows of SCOTT.EMP, SMITH through MILLER, omitted here]

    SQL> select * from v$heat_map_segment;

    OBJECT_NAME          SUBOBJECT_NAME             OBJ#   DATAOBJ# TRACK_TIM SEG SEG FUL LOO     CON_ID
    -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ----------
    EMP                                            92997      92997 23-JUL-13 NO  NO  YES NO           0

    V$HEAT_MAP_SEGMENT is based on the underlying X$HEATMAPSEGMENT and displays real-time segment access information. Its columns:

    OBJECT_NAME    VARCHAR2(128)  Name of the object
    SUBOBJECT_NAME VARCHAR2(128)  Name of the subobject
    OBJ#           NUMBER         Object number
    DATAOBJ#       NUMBER         Data object number
    TRACK_TIME     DATE           Timestamp of current activity tracking
    SEGMENT_WRITE  VARCHAR2(3)    Indicates whether the segment has write access (YES or NO)
    SEGMENT_READ   VARCHAR2(3)    Indicates whether the segment has read access (YES or NO)
    FULL_SCAN      VARCHAR2(3)    Indicates whether the segment has full table scan (YES or NO)
    LOOKUP_SCAN    VARCHAR2(3)    Indicates whether the segment has lookup scan (YES or NO)
    CON_ID         NUMBER         The ID of the container to which the data pertains: 0 for rows that pertain to the entire CDB or to a non-CDB, 1 for rows that pertain only to the root, n for the applicable container ID. (The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored.)

    Historical Heat Map data is persisted through DBA_HEAT_MAP_SEGMENT, backed by the HEAT_MAP_STAT$ base table.

    A worked ADO example (use case 1), using the SCOTT schema (http://www.askmaclean.com/archives/scott-schema-script.html):

    SQL> alter system set heat_map=on;
    SQL> grant all on dbms_lock to scott;
    SQL> grant dba to scott;
    @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf
    @tktgilm_demo_env_setup
    SQL> connect scott/tiger
    SQL> select count(*) from scott.employee;

      COUNT(*)
    ----------
          3072

    SQL> set serveroutput on
    SQL> exec print_compression_stats('SCOTT','EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed : 3072
    Adv/basic compressed : 0
    Others : 0

    All 3072 rows are uncompressed to begin with. Now add a row-level compression policy:

    alter table employee ilm add policy
      row store compress advanced row
      after 3 days of no modification
    /

    SQL> set serveroutput on
    SQL> execute list_ilm_policies;
    --------------------------------------------------
    Policies defined for SCOTT
    --------------------------------------------------
    Object Name------ : EMPLOYEE
    Object Type------ : TABLE
    Policy Name------ : P1
    Action Type------ : COMPRESSION
    Scope------------ : ROW
    Compression level : ADVANCED
    Condition type--- : LAST MODIFICATION TIME
    Condition days--- : 3
    Enabled---------- : YES
    --------------------------------------------------

    SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6);
    Object check time reset ...

    The helper procedure set_back_chktime moves the policy's check time back six days, so the three-day "no modification" condition appears to be met without actually waiting.

    alter system flush buffer_cache;
    alter system flush shared_pool;

    SQL> execute set_window('MONDAY_WINDOW','OPEN');
    SQL> exec dbms_lock.sleep(60);
    SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE');
    Compression Stats
    ------------------
    Uncmpressed : 338
    Adv/basic compressed : 2734
    Others : 0

    Once the maintenance window opens, the ADO background job compresses most of the rows (2734 of 3072). The execution can be inspected:

    SQL> execute list_ilm_policy_executions;
    Policy Name------ : P22
    Job Name--------- : ILMJOB48
    Object Name------ : EMPLOYEE
    Exec-state------- : SELECTED FOR EXECUTION
    Job state-------- : COMPLETED SUCCESSFULLY

    ILMJOB48 is the scheduler job that executed the policy (J00x slaves in 12.1.0.1). The MMON slave processes (M00x) evaluate the policies roughly every 15 minutes; their activity shows up in v$active_session_history:

    select sample_time, program, module, action
      from v$active_session_history
     where action = 'KDILM background EXEcution'
     order by sample_time;
    [about a dozen rows of ORACLE.EXE (M00x) MMON_SLAVE samples; the timestamps were garbled in extraction]

    select distinct action from v$active_session_history where action like 'KDILM%';
    KDILM background CLeaNup
    KDILM background EXEcution

    SQL> execute set_window('MONDAY_WINDOW','CLOSE');
    SQL> drop table employee purge;

    Cleanup:
    spool ilm_usecase_1_cleanup.lst
    @ilm_demo_cleanup
    spool off
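    All of the examples above use compression actions; the other ADO action type covered by the "Storage Enhancements" title is storage tiering, which moves a segment to a different (typically cheaper) tablespace. A minimal sketch - the tablespace name low_cost_ts is made up for illustration, and in 12.1 tiering is triggered by space pressure in the source tablespace (thresholds adjustable via DBMS_ILM_ADMIN) rather than by an AFTER clause:

    -- Assumes table SALES_ADO from the examples above and a pre-created
    -- tablespace LOW_COST_TS on cheaper storage.
    ALTER TABLE sales_ado ILM ADD POLICY
      TIER TO low_cost_ts;

    -- Optionally adjust the percent-used / percent-free thresholds that
    -- decide when tiering kicks in:
    EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_USED, 85);
    EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_FREE, 25);

    -- The policy shows up alongside the compression policies:
    SELECT SUBSTR(policy_name,1,24) AS policy_name, policy_type, enabled
      FROM user_ilmpolicies;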

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately you can use Fiddler itself and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for a customer to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters are strings that, if contained in a URL, cause matching requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
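For illustration, a captured two-URL session file could look something like the sketch below. The URLs and header values are invented, and the exact separator line WebSurge writes may differ from the one shown:

GET http://localhost/store/products HTTP/1.1
Accept: application/json
User-Agent: WebSurge

------------------------------------------------------------

POST http://localhost/store/cart HTTP/1.1
Content-Type: application/json

{ "productId": 12, "quantity": 1 }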
The headers are standard HTTP headers, except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control by writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each trace is just plain standard HTTP headers, with each individual URL isolated by a separator line. Because the format matches what Fiddler produces for exports, it’s easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers, so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It’s actually an easy way to run REST demos in presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie: this replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads would otherwise run the same requests simultaneously, which can skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed, and what the requests per second count currently is for all requests. 
Note that for tests that run over a thousand requests a second, it’s a good idea to turn off the console display. While the console display is nice to see that something is happening, and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1,000 requests a second there’s not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine, and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second, showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, allowing you to check for failures or verify that a specific requests per second count is reached. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
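For reference, the kind of quick and dirty single-URL check being compared to here is the classic Apache Bench invocation – a sketch with made-up request count, concurrency and URL:

ab -n 1000 -c 10 http://localhost/store/products

(-n is the total number of requests to send, -c the number of concurrent clients.)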
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Fed Authentication Methods in OIF / IdP

    - by Damien Carru
This article is a continuation of my previous entry where I explained how OIF/IdP leverages OAM to authenticate users at runtime: OIF/IdP internally forwards the user to OAM and indicates which Authentication Scheme should be used to challenge the user if needed OAM determines if the user should be challenged (user already authenticated, session timed out or not, session authentication level equal or higher than the level of the authentication scheme specified by OIF/IdP…) After identifying the user, OAM internally forwards the user back to OIF/IdP OIF/IdP can resume its operation In this article, I will discuss how OIF/IdP can be configured to map Federation Authentication Methods to OAM Authentication Schemes: When processing an Authn Request, where the SP requests a specific Federation Authentication Method with which the user should be challenged When sending an Assertion, where OIF/IdP sets the Federation Authentication Method in the Assertion Enjoy the reading! Overview The various Federation protocols support mechanisms allowing the partners to exchange information on: How the user should be challenged, when the SP/RP makes a request How the user was challenged, when the IdP/OP issues an SSO response When a remote SP partner redirects the user to OIF/IdP for Federation SSO, the message might contain data requesting how the user should be challenged by the IdP: this is treated as the Requested Federation Authentication Method. OIF/IdP will need to map that Requested Federation Authentication Method to a local Authentication Scheme, and then invoke OAM for user authentication/challenge with the mapped Authentication Scheme. OAM would authenticate the user if necessary with the scheme specified by OIF/IdP. Similarly, when an IdP issues an SSO response, most of the time it will need to include an identifier representing how the user was challenged: this is treated as the Federation Authentication Method. 
When OIF/IdP issues an Assertion, it will evaluate the Authentication Scheme with which OAM identified the user: If the Authentication Scheme can be mapped to a Federation Authentication Method, then OIF/IdP will use the result of that mapping in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled If the Authentication Scheme cannot be mapped, then OIF/IdP will set the Federation Authentication Method as the Authentication Scheme name in the outgoing SSO response: AuthenticationStatement in the SAML Assertion OpenID Response, if PAPE is enabled Mappings In OIF/IdP, the mapping between Federation Authentication Methods and Authentication Schemes has the following rules: One Federation Authentication Method can be mapped to several Authentication Schemes In a Federation Authentication Method <-> Authentication Schemes mapping, a single Authentication Scheme is marked as the default scheme that will be used to authenticate a user, if the SP/RP partner requests the user to be authenticated via a specific Federation Authentication Method An Authentication Scheme can be mapped to a single Federation Authentication Method Let’s examine the following example and the various use cases, based on the SAML 2.0 protocol: Mappings defined as: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapped to LDAPScheme, marked as the default scheme used for authentication BasicScheme urn:oasis:names:tc:SAML:2.0:ac:classes:X509 mapped to X509Scheme, marked as the default scheme used for authentication Use cases: SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:X509 as the RequestedAuthnContext: OIF/IdP will authenticate the user with X509Scheme since it is the default scheme mapped for that method. SP sends an AuthnRequest specifying urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the RequestedAuthnContext: OIF/IdP will authenticate the user with LDAPScheme since it is the default scheme mapped for that method, not the BasicScheme SP did not request any specific methods, and user was authenticated with BasicScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with LDAPScheme: OIF/IdP will issue an Assertion with urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport as the FederationAuthenticationMethod SP did not request any specific methods, and user was authenticated with BasicSessionlessScheme: OIF/IdP will issue an Assertion with BasicSessionlessScheme as the FederationAuthenticationMethod, since that scheme could not be mapped to any Federation Authentication Method (in this case, the administrator would need to correct that and create a mapping) Configuration Mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). 
As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. Authentication Schemes As discussed in the previous article, during Federation SSO, OIF/IdP will internally forward the user to OAM for authentication/verification and specify which Authentication Scheme to use. OAM will determine if a user needs to be challenged: If the user is not authenticated yet If the user is authenticated but the session timed out If the user is authenticated, but the authentication scheme level of the original authentication is lower than the level of the authentication scheme requested by OIF/IdP So even though an SP requests a specific Federation Authentication Method to be used to challenge the user, if that method is mapped to an Authentication Scheme and that at runtime OAM deems that the user does not need to be challenged with that scheme (because the user is already authenticated, session did not time out, and the session authn level is equal or higher than the one for the specified Authentication Scheme), the flow won’t result in a challenge operation. Protocols SAML 2.0 The SAML 2.0 specifications define the following Federation Authentication Methods for SAML 2.0 flows: urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocol urn:oasis:names:tc:SAML:2.0:ac:classes:Telephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:PersonalTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession urn:oasis:names:tc:SAML:2.0:ac:classes:MobileOneFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:Smartcard urn:oasis:names:tc:SAML:2.0:ac:classes:Password urn:oasis:names:tc:SAML:2.0:ac:classes:InternetProtocolPassword urn:oasis:names:tc:SAML:2.0:ac:classes:X509 urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient urn:oasis:names:tc:SAML:2.0:ac:classes:PGP urn:oasis:names:tc:SAML:2.0:ac:classes:SPKI urn:oasis:names:tc:SAML:2.0:ac:classes:XMLDSig urn:oasis:names:tc:SAML:2.0:ac:classes:SoftwarePKI urn:oasis:names:tc:SAML:2.0:ac:classes:Kerberos urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport urn:oasis:names:tc:SAML:2.0:ac:classes:SecureRemotePassword urn:oasis:names:tc:SAML:2.0:ac:classes:NomadTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:AuthenticatedTelephony urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorUnregistered urn:oasis:names:tc:SAML:2.0:ac:classes:MobileTwoFactorContract urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken Out of the box, OIF/IdP has the following mappings for the SAML 2.0 protocol: Only urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml20-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 2.0 An example of an AuthnRequest message sent by an SP to an IdP with the SP requesting a 
specific Federation Authentication Method to be used to challenge the user would be: <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://idp.com/oamfed/idp/samlv20" ID="id-8bWn-A9o4aoMl3Nhx1DuPOOjawc-" IssueInstant="2014-03-21T20:51:11Z" Version="2.0">  <saml:Issuer ...>https://acme.com/sp</saml:Issuer>  <samlp:NameIDPolicy AllowCreate="false" Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"/>  <samlp:RequestedAuthnContext Comparison="minimum">    <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">      urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport </saml:AuthnContextClassRef>  </samlp:RequestedAuthnContext></samlp:AuthnRequest> An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                    urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> An administrator would be able to specify a mapping between a SAML 2.0 Federation Authentication Method and one or more OAM Authentication Schemes SAML 1.1 The SAML 1.1 specifications define the following Federation Authentication Methods for SAML 1.1 flows: urn:oasis:names:tc:SAML:1.0:am:unspecified urn:oasis:names:tc:SAML:1.0:am:HardwareToken urn:oasis:names:tc:SAML:1.0:am:password urn:oasis:names:tc:SAML:1.0:am:X509-PKI urn:ietf:rfc:2246 urn:oasis:names:tc:SAML:1.0:am:PGP urn:oasis:names:tc:SAML:1.0:am:SPKI urn:ietf:rfc:3075 urn:oasis:names:tc:SAML:1.0:am:XKMS urn:ietf:rfc:1510 urn:ietf:rfc:2945 Out of the box, OIF/IdP has the following mappings for the SAML 1.1 protocol: Only urn:oasis:names:tc:SAML:1.0:am:password is defined This Federation Authentication Method is mapped to: LDAPScheme, marked as the default scheme used for authentication FAAuthScheme BasicScheme BasicFAScheme This mapping is defined in the saml11-sp-partner-profile SP Partner Profile which is the default OOTB SP Partner Profile for SAML 1.1 An example of an Assertion issued by an IdP would be: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        
<saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameID ...>[email protected]</saml:NameID>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Note: SAML 1.1 does not define an AuthnRequest message. An administrator would be able to specify a mapping between a SAML 1.1 Federation Authentication Method and one or more OAM Authentication Schemes OpenID 2.0 The OpenID 2.0 PAPE specifications define the following Federation Authentication Methods for OpenID 2.0 flows: http://schemas.openid.net/pape/policies/2007/06/phishing-resistant http://schemas.openid.net/pape/policies/2007/06/multi-factor http://schemas.openid.net/pape/policies/2007/06/multi-factor-physical Out of the box, OIF/IdP does not define any mappings for the OpenID 2.0 Federation Authentication Methods. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. An example of an OpenID 2.0 Request message sent by an SP/RP to an IdP/OP would be: https://idp.com/openid?openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.realm=https%3A%2F%2Facme.com%2Fopenid&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_request&openid.ax.type.attr0=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.if_available=attr0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0 An example of an Open ID 2.0 SSO Response issued by an IdP/OP would be: 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will provide examples on how to configure OIF/IdP for the various protocols, to map OAM Authentication Schemes to Federation Authentication Methods.Cheers,Damien Carru

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies, each with a distinct Id: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8" }).then(function(movies){ callback(movies); }); } </script> </body> </html>     Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /Movie/PostMovie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (the Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method returns a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP Status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic. There is nothing specific about this code to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title Then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq ‘Lucas’ Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,’S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
I am a huge fan of Ajax. If you want to create a great experience for the users of your website, regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site, then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the Beta release of ASP.NET MVC 4, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You will learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and how to use OData with the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template. A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie.
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE – is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “Get” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method.
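For reference, here is roughly what the raw HTTP exchange for a successful request looks like (a minimal sketch; the port number is an assumption, and the JSON property casing follows the C# property names):

GET /api/movie/1 HTTP/1.1
Host: localhost:1234

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{"Id":1,"Title":"Star Wars","Director":"Lucas"}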
Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8" }).then(function(movies){ callback(movies); }); } </script> </body> </html> Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /api/Movie – just as long as the method is invoked within the context of an HTTP POST request.
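As a rough sketch of what a successful response then looks like on the wire (the host and port are assumptions; the Id value 23 comes from the hard-coded assignment above):

HTTP/1.1 201 Created
Location: http://localhost:1234/api/movie/23
Content-Type: application/json; charset=utf-8

{"Id":23,"Title":"The Hobbit","Director":"Jackson"}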
The following HTML page invokes the PostMovie() method. <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (The Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie in a JavaScript alert. The HTTP Post operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method translates to a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204).
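A sketch of the corresponding exchange, assuming the id is passed in the URL (which the default route supports for a DELETE request):

DELETE /api/movie/1 HTTP/1.1
Host: localhost:1234

HTTP/1.1 204 No Content

Note that the client page below passes the id in the request body instead.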
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XmlHttpRequest responseText property. The error messages are displayed in an alert. (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness.) The validation code in our PostMovie() method is pretty generic; there is nothing in it that is specific to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned automatically from any API controller action method when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter.
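Whichever approach you use, it helps to picture what the client actually receives. With the movieToCreate used in the page above (a ten-character title and an empty director), the manual PostMovie() version would return a 400 response whose body is a JSON array along these lines (the ordering of the messages depends on how ModelState is enumerated):

["Title cannot be more than 5 characters!","Director is required!"]

The global filter returns a JSON object keyed by property name instead, roughly: {"Title":"Title cannot be more than 5 characters!","Director":"Director is required!"}.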
Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. · $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title then the movies returned are limited to the top 2 in order of the movie Title. By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time (for instance, /api/product?$top=10&$skip=20 would return the third page of 10 products). The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq 'Lucas' Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,'S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the Beta release of ASP.NET MVC 4. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series. What is WCF As Juval Lowy puts it in his Programming WCF Services book, WCF provides an SDK for developing and deploying services on Windows and a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security, etc., such that it greatly increases productivity. Scenario: Let's consider a typical bank customer trying to create an account, deposit money, and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, the data access layer, and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, while clients provide the interaction for the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code. Solution Services Profile: For creating a new bank customer, getting details about an existing customer ProfileContract ProfileService Checking Account: To get the checking account balance, deposit or withdraw an amount CheckingAccountContract CheckingAccountService Savings Account: To get the savings account balance, deposit or withdraw an amount SavingsAccountContract SavingsAccountService ServiceHost: To host services, i.e. running the services at a particular address, binding and contract where clients can connect ColClient: Helps the end user to use services like creating an account and transferring amounts between the accounts BankDAL: Data access layer to work with the database BankDAL Using an ORM is a no-brainer, as many mature products are currently available in the market including Linq2Sql, Entity Framework (EF), LLBLGen Pro, etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code-first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test-driven development using mock frameworks. The application consists of 3 entities: a Customer entity which contains customer details, and CheckingAccount and SavingsAccount to hold the respective account balances.
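(The article only lists the Customer entity below; for orientation, here is a minimal sketch of the two account entities it implies, with property names inferred from the repository and service code further down. Treat the exact shape as an assumption.)

namespace BankDAL.Model
{
    public class CheckingAccount
    {
        public int Id { get; set; }          // primary key by EF convention
        public int CustomerId { get; set; }  // used by GetAccount(customerId)
        public decimal Balance { get; set; } // used by the deposit/withdraw logic
    }

    public class SavingsAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}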
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to muscle with xml files) in the CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting entity classes with database-specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with a key of BankDbContext when working with the context.   We are going to define a helper class to work with the Customer entity with methods for querying, adding a new entity, etc.; these are known as repository classes, e.g. CustomerRepository.   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query() method returns customers as IQueryable, i.e. customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows us to execute filtering and joining statements from the business logic using lambda expressions without cluttering the data access layer with tens of methods.
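For instance, a hypothetical query composed on top of Query() from a business-logic layer might look like this (the 18-year cutoff is just an illustration):

var repository = new CustomerRepository(new BankDbContext());
var cutoff = DateTime.Today.AddYears(-18);       // precomputed so EF can translate the comparison
var adults = repository.Query()
    .Where(c => c.DateOfBirth < cutoff)
    .ToList();                                   // the SQL is generated and executed only here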
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other. using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for the Query and Add methods; by implementing the repository pattern (Martin Fowler) with the help of C# generics we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use the repository pattern with EF, which we will implement in subsequent articles along with WCF Unity lifetime managers by Drew. Contracts It is very easy to follow a contract-first approach with WCF: define the interface and apply the ServiceContract and OperationContract attributes. The IProfile contract exposes functionality for creating a customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   The ICheckingAccount contract exposes functionality for working with a checking account, i.e., getting the balance and depositing or withdrawing an amount. The ISavingsAccount contract looks the same as the checking account one; a sketch of it follows the code below.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }
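(ISavingsAccount is never listed in the article; inferring from the client code later in the post, which calls GetSavingsAccountBalance(), DepositAmount(), and WithdrawAmount(), it would look roughly like this.)

using System.ServiceModel;

namespace SavingsAccountContract
{
    [ServiceContract]
    public interface ISavingsAccount
    {
        [OperationContract]
        decimal? GetSavingsAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}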
Services Having covered the data access layer and the contracts, here comes the core of the business logic, i.e. the services. ProfileService implements the IProfile contract for creating a customer and getting customer details using CustomerRepository.
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you will observe that we are calling bankDbContext's SaveChanges() method, and that there is no save method specific to the Customer entity, because EF manages all changes centrally at the context level; all the pending changes so far are submitted in a batch, which is represented as a Unit of Work. Similarly, the Checking service implements the ICheckingAccount contract using CheckingAccountRepository; notice that we are throwing an overdraft exception if the balance falls below zero. WCF has its own way of raising exceptions using fault contracts, which will be explained in subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as the glue binding contracts to their services, exposing the endpoints. The services can be exposed either through code or through a configuration file; the configuration file is preferred, as it allows runtime changes to service behavior even after deployment. We have 3 services, and for each service you need to define a name (the class that implements the service, with its fully qualified namespace) and an endpoint, known as the ABC, i.e. address, binding and contract.
We are using netTcpBinding and have defined a base address for each of the contracts: <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> We have to open the services by creating service hosts, which will handle the incoming requests from clients.   using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } BankClient The client has no knowledge about the service business logic other than the functionality exposed through the contracts, the endpoints, and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project references, choosing 'Add Service Reference', and entering the service endpoint address. Or, if you have access to the source, you can manually reference the contract dlls and update the client's configuration file to point to the service endpoint, provided the server and client both happen to be built on the .NET Framework. One of the pros of the manual approach is that you don't have to work against messy code-generated files.
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings TransferFundsFromCheckingToSavingsAccount(customer.Id, 50); Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); }   [Test] public void FundTransfersWithOverDraftTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");   var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;   ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100); TransferFundsFromSavingsToCheckingAccount(customerId,80); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));   try { TransferFundsFromSavingsToCheckingAccount(customerId,30); } catch (Exception e) { Debug.WriteLine(e.Message); }   Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId)); Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId)); } } }   We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal, and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, resulting in $20 in the savings account. The customer then initiates a $30 transfer from savings to checking, resulting in an overdraft exception on savings after $30 has already been deposited to checking. As we are not running both requests in a transaction, the customer ends up with more money ($130 in total) than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
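For reference, a minimal sketch of such a connection string entry, relying on the EF convention mentioned earlier that the name matches the context class (the server and catalog names here are assumptions):

<connectionStrings>
  <add name="BankDbContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Bank;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>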

    Read the article

  • Memory leak involving jQuery Ajax requests

    - by Eli Courtwright
I have a webpage that's leaking memory in both IE8 and Firefox; the memory usage displayed in the Windows Process Explorer just keeps growing over time. The following page requests the "unplanned.json" url, which is a static file that never changes (though I do set my Cache-control HTTP header to no-cache to make sure that the Ajax request always goes through). When it gets the results, it clears out an HTML table, loops over the json array it got back from the server, and dynamically adds a row to an HTML table for each entry in the array. Then it waits 2 seconds and repeats this process. Here's the entire webpage: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").empty(); for(var i=0; i<rows.length; i++) { $("<tr>" + "<td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td>" + "</tr>").appendTo("#content tbody"); } setTimeout(kickoff, 2000); } $(kickoff); </script> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody></tbody> </table> </body> </html> If it helps, here's an example of the json I'm sending back (it's this exact array with thousands of entries instead of just one): [ { mpe_name: "DBOSS-995", request_time: "09/18/2009 11:51:06", bin: 4, filtered_delta: 1, failed_delta: 1 } ] EDIT: I've accepted Toran's extremely helpful answer, but I feel I should post some additional code, since his removefromdom jQuery plugin has some limitations: It only removes individual elements. So you can't give it a query like `$("#content tbody tr")` and expect it to remove all of the elements you've specified. Any element that you remove with it must have an `id` attribute. So if I want to remove my `tbody`, then I must assign an `id` to my `tbody` tag or else it will give an error. It removes the element itself and all of its descendants, so if you simply want to empty that element then you'll have to re-create it afterwards (or modify the plugin to empty instead of remove). So here's my page above modified to use Toran's plugin. For the sake of simplicity I didn't apply any of the general performance advice offered by Peter. Here's the page, which now no longer leaks memory: <html> <head> <title>Test Page</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script> </head> <body> <script type="text/javascript"> <!-- $.fn.removefromdom = function(s) { if (!this) return; var el = document.getElementById(this.attr("id")); if (!el) return; var bin = document.getElementById("IELeakGarbageBin"); //before deleting el, recursively delete all of its children.
while (el.childNodes.length > 0) { if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el.childNodes[el.childNodes.length - 1]); bin.innerHTML = ""; } el.parentNode.removeChild(el); if (!bin) { bin = document.createElement("DIV"); bin.id = "IELeakGarbageBin"; document.body.appendChild(bin); } bin.appendChild(el); bin.innerHTML = ""; }; var resets = 0; function kickoff() { $.getJSON("unplanned.json", resetTable); } function resetTable(rows) { $("#content tbody").removefromdom(); $("#content").append('<tbody id="id_field_required"></tbody>'); for(var i=0; i<rows.length; i++) { $("#content tbody").append("<tr><td>" + rows[i].mpe_name + "</td>" + "<td>" + rows[i].bin + "</td>" + "<td>" + rows[i].request_time + "</td>" + "<td>" + rows[i].filtered_delta + "</td>" + "<td>" + rows[i].failed_delta + "</td></tr>"); } resets++; $("#message").html("Content set this many times: " + resets); setTimeout(kickoff, 2000); } $(kickoff); // --> </script> <div id="message" style="color:red"></div> <table id="content" border="1" style="width:100% ; text-align:center"> <thead><tr> <th>MPE</th> <th>Bin</th> <th>When</th> <th>Filtered</th> <th>Failed</th> </tr></thead> <tbody id="id_field_required"></tbody> </table> </body> </html> FURTHER EDIT: I'll leave my question unchanged, though it's worth noting that this memory leak has nothing to do with Ajax. In fact, the following code would memory leak just the same and be just as easily solved with Toran's removefromdom jQuery plugin: function resetTable() { $("#content tbody").empty(); for(var i=0; i<1000; i++) { $("#content tbody").append("<tr><td>" + "DBOSS-095" + "</td>" + "<td>" + 4 + "</td>" + "<td>" + "09/18/2009 11:51:06" + "</td>" + "<td>" + 1 + "</td>" + "<td>" + 1 + "</td></tr>"); } setTimeout(resetTable, 2000); } $(resetTable);

    Read the article

  • Nested property binding

    - by EtherealMonkey
Recently, I have been trying to wrap my mind around BindingList<T> and INotifyPropertyChanged. More specifically - How do I make a collection of objects (having objects as properties) which will allow me to subscribe to events throughout the tree? To that end, I have examined the code offered as examples by others. One such project that I downloaded was Nested Property Binding - CodeProject by "seesharper". Now, the article explains the implementation, but there was a question by "Someone@AnotherWorld" about "INotifyPropertyChanged in nested objects". His question was: Hi, nice stuff! But after a couple of time using your solution I realize the ObjectBindingSource ignores the PropertyChanged event of nested objects. E.g. I've got a class 'Foo' with two properties named 'Name' and 'Bar'. 'Name' is a string an 'Bar' reference an instance of class 'Bar', which has a 'Name' property of type string too and both classes implements INotifyPropertyChanged. With your binding source reading and writing with both properties ('Name' and 'Bar_Name') works fine but the PropertyChanged event works only for the 'Name' property, because the binding source listen only for events of 'Foo'. One workaround is to retrigger the PropertyChanged event in the appropriate class (here 'Foo'). What's very unclean! The other approach would be to extend ObjectBindingSource so that all owner of nested property which implements INotifyPropertyChanged get used for receive changes, but how? Thanks! I had asked about BindingList<T> yesterday and received a good answer from Aaronaught. In my question, I had made a similar point to "Someone@AnotherWorld": if Keywords were to implement INotifyPropertyChanged, would changes be accessible to the BindingList through the ScannedImage object? To which Aaronaught's response was: No, they will not. BindingList only looks at the specific object in the list, it has no ability to scan all dependencies and monitor everything in the graph (nor would that always be a good idea, if it were possible). I understand Aaronaught's comment regarding this behavior not necessarily being a good idea. Additionally, his suggestion to have my bound object "relay" events on behalf of its member objects works fine and is perfectly acceptable. For me, "re-triggering" the PropertyChanged event does not seem as unclean as "Someone@AnotherWorld" laments. I do understand why he protests - in the interest of loosely coupled objects. However, I believe that coupling between objects that are part of a composition is logical and not as undesirable as it may be in other scenarios. (I am a newb, so I could be waaayyy off base.)
    Anyway, in the interest of exploring an answer to the question by "Someone@AnotherWorld", I altered the MainForm.cs file of the example project from Nested Property Binding - CodeProject by "seesharper" to the following:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Core.ComponentModel;
    using System.Windows.Forms;

    namespace ObjectBindingSourceDemo
    {
        public partial class MainForm : Form
        {
            private readonly List<Customer> _customers = new List<Customer>();
            private readonly List<Product> _products = new List<Product>();
            private List<Order> orders;

            public MainForm()
            {
                InitializeComponent();
                dataGridView1.AutoGenerateColumns = false;
                dataGridView2.AutoGenerateColumns = false;
                CreateData();
            }

            private void CreateData()
            {
                _customers.Add(new Customer(1, "Jane Wilson",
                    new Address("98104", "6657 Sand Pointe Lane", "Seattle", "USA")));
                _customers.Add(new Customer(1, "Bill Smith",
                    new Address("94109", "5725 Glaze Drive", "San Francisco", "USA")));
                _customers.Add(new Customer(1, "Samantha Brown", null));

                _products.Add(new Product(1, "Keyboard", 49.99));
                _products.Add(new Product(2, "Mouse", 10.99));
                _products.Add(new Product(3, "PC", 599.99));
                _products.Add(new Product(4, "Monitor", 299.99));
                _products.Add(new Product(5, "LapTop", 799.99));
                _products.Add(new Product(6, "Harddisc", 89.99));

                customerBindingSource.DataSource = _customers;
                productBindingSource.DataSource = _products;

                orders = new List<Order>();
                orders.Add(new Order(1, DateTime.Now, _customers[0]));
                orders.Add(new Order(2, DateTime.Now, _customers[1]));
                orders.Add(new Order(3, DateTime.Now, _customers[2]));

                #region Added by me
                OrderLine orderLine1 = new OrderLine(_products[0], 1);
                OrderLine orderLine2 = new OrderLine(_products[1], 3);
                orderLine1.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged);
                orderLine2.PropertyChanged += new PropertyChangedEventHandler(OrderLineChanged);
                orders[0].OrderLines.Add(orderLine1);
                orders[0].OrderLines.Add(orderLine2);
                #endregion

                // Removed by me in lieu of the region above.
                //orders[0].OrderLines.Add(new OrderLine(_products[0], 1));
                //orders[0].OrderLines.Add(new OrderLine(_products[1], 3));

                ordersBindingSource.DataSource = orders;
            }

            #region Added by me

            // Have to wait until the form is Shown to wire up the events
            // for orderDetailsBindingSource. Otherwise, they are triggered
            // during MainForm().InitializeComponent().
            private void MainForm_Shown(object sender, EventArgs e)
            {
                orderDetailsBindingSource.AddingNew +=
                    new AddingNewEventHandler(orderDetailsBindSrc_AddingNew);
                orderDetailsBindingSource.CurrentItemChanged +=
                    new EventHandler(orderDetailsBindSrc_CurrentItemChanged);
                orderDetailsBindingSource.ListChanged +=
                    new ListChangedEventHandler(orderDetailsBindSrc_ListChanged);
            }

            private void orderDetailsBindSrc_AddingNew(object sender, AddingNewEventArgs e)
            {
            }

            private void orderDetailsBindSrc_CurrentItemChanged(object sender, EventArgs e)
            {
            }

            private void orderDetailsBindSrc_ListChanged(object sender, ListChangedEventArgs e)
            {
                ObjectBindingSource bindingSource = (ObjectBindingSource)sender;
                if (!(bindingSource.Current == null))
                {
                    // Unsure if GetType().ToString() is required, because ToString()
                    // *seems* to return the same value.
                    if (bindingSource.Current.GetType().ToString() ==
                        "ObjectBindingSourceDemo.OrderLine")
                    {
                        if (e.ListChangedType == ListChangedType.ItemAdded)
                        {
                            // I wish that I knew of a way to determine
                            // if the 'PropertyChanged' delegate assignment is null.
                            // I don't like the current test, but it seems to work.
                            if (orders[ordersBindingSource.Position]
                                    .OrderLines[e.NewIndex].Product == null)
                            {
                                orders[ordersBindingSource.Position]
                                    .OrderLines[e.NewIndex].PropertyChanged +=
                                    new PropertyChangedEventHandler(OrderLineChanged);
                            }
                        }

                        if (e.ListChangedType == ListChangedType.ItemDeleted)
                        {
                            // Will throw an exception when leaving an OrderLine row
                            // with uninitialized properties.
                            //
                            // I presume this is because the item has already been
                            // 'disposed' of at this point.
                            // *but*
                            // Will it actually be released from memory if the delegate
                            // assignment for PropertyChanged was never removed???
                            if (orders[ordersBindingSource.Position]
                                    .OrderLines[e.NewIndex].Product != null)
                            {
                                orders[ordersBindingSource.Position]
                                    .OrderLines[e.NewIndex].PropertyChanged -=
                                    new PropertyChangedEventHandler(OrderLineChanged);
                            }
                        }
                    }
                }
            }

            private void OrderLineChanged(object sender, PropertyChangedEventArgs e)
            {
                MessageBox.Show(e.PropertyName, "Property Changed:");
            }

            #endregion
        }
    }

    In orderDetailsBindSrc_ListChanged I am able to hook the PropertyChangedEventHandler up to each OrderLine object as it is created. However, I cannot find a way to unhook the handler from an OrderLine before it is removed from the orders[i].OrderLines list. So, my questions are: Am I simply trying to do something that is very, very wrong here? Will the OrderLine objects that I attach delegates to ever be released from memory if the assignments are never removed? Is there a "sane" way of achieving this scenario?

    Also, note that this question is not specifically related to my prior one; I have actually solved the issue that prompted me to ask before. However, I have reached a point with this particular topic where my curiosity has exceeded my patience - hopefully someone here can shed some light on this.
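    One hedged way out of the unhook problem: instead of inferring attach/detach moments from ListChanged (where a deleted item is no longer reachable through the list), centralize the hooking in a BindingList<T> subclass, whose InsertItem/RemoveItem overrides still see the item at removal time. The sketch below is my own, not the article's code; note that BindingList<T> already hooks INotifyPropertyChanged items internally to raise ListChangedType.ItemChanged, so this is only needed for attaching your own handler such as OrderLineChanged:

    using System.ComponentModel;

    public class ObservableItemList<T> : BindingList<T>
        where T : class, INotifyPropertyChanged
    {
        private readonly PropertyChangedEventHandler _handler;

        public ObservableItemList(PropertyChangedEventHandler handler)
        {
            _handler = handler;
        }

        protected override void InsertItem(int index, T item)
        {
            if (item != null) item.PropertyChanged += _handler;
            base.InsertItem(index, item);
        }

        protected override void RemoveItem(int index)
        {
            // The item is still present here, so we can unhook before removal.
            T item = this[index];
            if (item != null) item.PropertyChanged -= _handler;
            base.RemoveItem(index);
        }

        protected override void ClearItems()
        {
            foreach (T item in this)
                if (item != null) item.PropertyChanged -= _handler;
            base.ClearItems();
        }
    }

    If Order.OrderLines were declared as an ObservableItemList<OrderLine> (an assumption; the demo declares its own collection type), all of the wiring code in the form above disappears. On the memory question: an event subscription is a reference from the publisher (the OrderLine) to the subscriber (the form), so a forgotten subscription does not keep the OrderLine alive; the risk runs the other way, with a long-lived OrderLine pinning the form in memory.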

    Read the article

  • Please help me understand why my XSL Transform is not transforming

    - by Damovisa
    I'm trying to transform one XML format to another using XSL. Try as I might, I can't seem to get a result. I've hacked away at this for a while now and I've had no success; I'm not even getting any exceptions. I'm going to post the entire code and hopefully someone can help me work out what I've done wrong. I'm aware there are likely to be problems in the XSL in terms of selects and matches, but I'm not fussed about that at the moment. The output I'm getting is the input XML without any XML tags; the transformation is simply not occurring.

    Here's my XML document:

    <?xml version="1.0"?>
    <Transactions>
      <Account>
        <PersonalAccount>
          <AccountNumber>066645621</AccountNumber>
          <AccountName>A Smith</AccountName>
          <CurrentBalance>-200125.96</CurrentBalance>
          <AvailableBalance>0</AvailableBalance>
          <AccountType>LOAN</AccountType>
        </PersonalAccount>
      </Account>
      <StartDate>2010-03-01T00:00:00</StartDate>
      <EndDate>2010-03-23T00:00:00</EndDate>
      <Items>
        <Transaction>
          <ErrorNumber>-1</ErrorNumber>
          <Amount>12000</Amount>
          <Reference>Transaction 1</Reference>
          <CreatedDate>0001-01-01T00:00:00</CreatedDate>
          <EffectiveDate>2010-03-15T00:00:00</EffectiveDate>
          <IsCredit>true</IsCredit>
          <Balance>-324000</Balance>
        </Transaction>
        <Transaction>
          <ErrorNumber>-1</ErrorNumber>
          <Amount>11000</Amount>
          <Reference>Transaction 2</Reference>
          <CreatedDate>0001-01-01T00:00:00</CreatedDate>
          <EffectiveDate>2010-03-14T00:00:00</EffectiveDate>
          <IsCredit>true</IsCredit>
          <Balance>-324000</Balance>
        </Transaction>
      </Items>
    </Transactions>

    Here's my XSLT:

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
      <xsl:output method="xml" />
      <xsl:param name="currentdate"></xsl:param>
      <xsl:template match="Transactions">
        <xsl:element name="OFX">
          <xsl:element name="SIGNONMSGSRSV1">
            <xsl:element name="SONRS">
              <xsl:element name="STATUS">
                <xsl:element name="CODE">0</xsl:element>
                <xsl:element name="SEVERITY">INFO</xsl:element>
              </xsl:element>
              <xsl:element name="DTSERVER"><xsl:value-of select="$currentdate" /></xsl:element>
              <xsl:element name="LANGUAGE">ENG</xsl:element>
            </xsl:element>
          </xsl:element>
          <xsl:element name="BANKMSGSRSV1">
            <xsl:element name="STMTTRNRS">
              <xsl:element name="TRNUID">1</xsl:element>
              <xsl:element name="STATUS">
                <xsl:element name="CODE">0</xsl:element>
                <xsl:element name="SEVERITY">INFO</xsl:element>
              </xsl:element>
              <xsl:element name="STMTRS">
                <xsl:element name="CURDEF">AUD</xsl:element>
                <xsl:element name="BANKACCTFROM">
                  <xsl:element name="BANKID">RAMS</xsl:element>
                  <xsl:element name="ACCTID"><xsl:value-of select="Account/PersonalAccount/AccountNumber" /></xsl:element>
                  <xsl:element name="ACCTTYPE"><xsl:value-of select="Account/PersonalAccount/AccountType" /></xsl:element>
                </xsl:element>
                <xsl:element name="BANKTRANLIST">
                  <xsl:element name="DTSTART"><xsl:value-of select="StartDate" /></xsl:element>
                  <xsl:element name="DTEND"><xsl:value-of select="EndDate" /></xsl:element>
                  <xsl:for-each select="Items/Transaction">
                    <xsl:element name="STMTTRN">
                      <xsl:element name="TRNTYPE"><xsl:choose><xsl:when test="IsCredit">CREDIT</xsl:when><xsl:otherwise>DEBIT</xsl:otherwise></xsl:choose></xsl:element>
                      <xsl:element name="DTPOSTED"><xsl:value-of select="EffectiveDate" /></xsl:element>
                      <xsl:element name="DTUSER"><xsl:value-of select="CreatedDate" /></xsl:element>
                      <xsl:element name="TRNAMT"><xsl:value-of select="Amount" /></xsl:element>
                      <xsl:element name="FITID" />
                      <xsl:element name="NAME"><xsl:value-of select="Reference" /></xsl:element>
                      <xsl:element name="MEMO"><xsl:value-of select="Reference" /></xsl:element>
                    </xsl:element>
                  </xsl:for-each>
                </xsl:element>
                <xsl:element name="LEDGERBAL">
                  <xsl:element name="BALAMT"><xsl:value-of select="Account/PersonalAccount/CurrentBalance" /></xsl:element>
                  <xsl:element name="DTASOF"><xsl:value-of select="EndDate" /></xsl:element>
                </xsl:element>
              </xsl:element>
            </xsl:element>
          </xsl:element>
        </xsl:element>
      </xsl:template>
    </xsl:stylesheet>

    Here's my method to transform the XML:

    public string TransformToXml(XmlElement xmlElement, Dictionary<string, object> parameters)
    {
        string strReturn = "";

        // Load the XSLT document.
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load(xsltFileName);

        // Arguments.
        XsltArgumentList args = new XsltArgumentList();
        if (parameters != null && parameters.Count > 0)
        {
            foreach (string key in parameters.Keys)
            {
                args.AddParam(key, "", parameters[key]);
            }
        }

        // Create a memory stream to write to.
        Stream objStream = new MemoryStream();

        // Apply the transform.
        xslt.Transform(xmlElement, args, objStream);
        objStream.Seek(0, SeekOrigin.Begin);

        // Read the contents of the stream.
        StreamReader objSR = new StreamReader(objStream);
        strReturn = objSR.ReadToEnd();
        return strReturn;
    }

    The contents of strReturn is an XML declaration (<?xml version="1.0" encoding="utf-8"?>) followed by a raw dump of the contents of the original XML document, stripped of XML tags. What am I doing wrong here?
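    Untagged text output is the classic signature of XSLT's built-in template rules: when no user-defined template matches, the processor recurses through the tree on its own and copies only text nodes to the output, which is exactly "the input XML without any XML tags". In other words, match="Transactions" is never firing. Since the XML shown has no namespace that could break the match, the usual suspects are the stylesheet file actually being loaded (is xsltFileName really pointing at the file shown?) or the node being passed in. A minimal harness to test the stylesheet in isolation from the rest of the pipeline; the file names here are hypothetical placeholders:

    using System;
    using System.Xml;
    using System.Xml.Xsl;

    class XslSmokeTest
    {
        static void Main()
        {
            XslCompiledTransform xslt = new XslCompiledTransform();
            xslt.Load("transactions.xslt"); // substitute the real path

            XsltArgumentList args = new XsltArgumentList();
            args.AddParam("currentdate", "", DateTime.Now.ToString("s"));

            using (XmlReader input = XmlReader.Create("transactions.xml"))
            using (XmlWriter output = XmlWriter.Create(Console.Out, xslt.OutputSettings))
            {
                // Apply the transform and print the result to the console.
                xslt.Transform(input, args, output);
            }
        }
    }

    If this prints the expected OFX document, the stylesheet is fine and the problem lies in how TransformToXml is being invoked; if it prints the stripped text, the stylesheet on disk is not the one shown above.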

    Read the article

  • How can I add headers to a DualList control in WPF?

    - by devnet247
    Hi all. I am trying to write a dual-list UserControl in WPF. I am new to WPF and I am finding it quite difficult. This is something I have put together in a couple of hours; it's not that good, but it's a start. I would be extremely grateful if somebody with WPF experience could improve it. The aim is to simplify the usage as much as possible. I am kind of stuck: I would like the user of the DualList control to be able to set up headers - how do you do that? Do I need to expose some dependency properties in my control? At the moment, when loading, the user has to pass an ObservableCollection - is there a better way? Could you have a look and possibly make some suggestions with code? Thanks a lot!

    XAML:

    <Grid ShowGridLines="False">
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="*"/>
            <ColumnDefinition Width="25px"></ColumnDefinition>
            <ColumnDefinition Width="*"/>
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"></RowDefinition>
        </Grid.RowDefinitions>
        <StackPanel Orientation="Vertical" Grid.Column="0" Grid.Row="0">
            <Label Name="lblLeftTitle" Content="Available"></Label>
            <ListView Name="lvwLeft">
            </ListView>
        </StackPanel>
        <WrapPanel Grid.Column="1" Grid.Row="0">
            <Button Name="btnMoveRight" Content=">" Width="25" Margin="0,35,0,0" Click="btnMoveRight_Click" />
            <Button Name="btnMoveAllRight" Content=">>" Width="25" Margin="0,05,0,0" Click="btnMoveAllRight_Click" />
            <Button Name="btnMoveLeft" Content="&lt;" Width="25" Margin="0,25,0,0" Click="btnMoveLeft_Click" />
            <Button Name="btnMoveAllLeft" Content="&lt;&lt;" Width="25" Margin="0,05,0,0" Click="btnMoveAllLeft_Click" />
        </WrapPanel>
        <StackPanel Orientation="Vertical" Grid.Column="2" Grid.Row="0">
            <Label Name="lblRightTitle" Content="Selected"></Label>
            <ListView Name="lvwRight">
            </ListView>
        </StackPanel>
    </Grid>

    Client:

    public partial class DualListTest
    {
        public ObservableCollection<ListViewItem> LeftList { get; set; }
        public ObservableCollection<ListViewItem> RightList { get; set; }

        public DualListTest()
        {
            InitializeComponent();
            LoadCustomers();
            LoadDualList();
        }

        private void LoadDualList()
        {
            dualList1.Load(LeftList, RightList);
        }

        private void LoadCustomers()
        {
            // Pretend we are getting a list of customers from a repository.
            // Some go in the left list (good customers), some in the right list (bad customers).
            LeftList = new ObservableCollection<ListViewItem>();
            RightList = new ObservableCollection<ListViewItem>();
            var customers = GetCustomers();
            foreach (var customer in customers)
            {
                if (customer.Status == CustomerStatus.Good)
                {
                    LeftList.Add(new ListViewItem { Content = customer });
                }
                else
                {
                    RightList.Add(new ListViewItem { Content = customer });
                }
            }
        }

        private static IEnumerable<Customer> GetCustomers()
        {
            return new List<Customer>
            {
                new Customer {Name = "Jo Blogg", Status = CustomerStatus.Good},
                new Customer {Name = "Rob Smith", Status = CustomerStatus.Good},
                new Customer {Name = "Michel Platini", Status = CustomerStatus.Good},
                new Customer {Name = "Roberto Baggio", Status = CustomerStatus.Good},
                new Customer {Name = "Gio Surname", Status = CustomerStatus.Bad},
                new Customer {Name = "Diego Maradona", Status = CustomerStatus.Bad}
            };
        }
    }

    UserControl:

    public partial class DualList : UserControl
    {
        public ObservableCollection<ListViewItem> LeftListCollection { get; set; }
        public ObservableCollection<ListViewItem> RightListCollection { get; set; }

        public DualList()
        {
            InitializeComponent();
        }

        public void Load(ObservableCollection<ListViewItem> leftListCollection,
                         ObservableCollection<ListViewItem> rightListCollection)
        {
            LeftListCollection = leftListCollection;
            RightListCollection = rightListCollection;
            lvwLeft.ItemsSource = leftListCollection;
            lvwRight.ItemsSource = rightListCollection;
            EnableButtons();
        }

        public static DependencyProperty LeftTitleProperty =
            DependencyProperty.Register("LeftTitle", typeof(string), typeof(Label));
        public static DependencyProperty RightTitleProperty =
            DependencyProperty.Register("RightTitle", typeof(string), typeof(Label));
        public static DependencyProperty LeftListProperty =
            DependencyProperty.Register("LeftList", typeof(ListView), typeof(DualList));
        public static DependencyProperty RightListProperty =
            DependencyProperty.Register("RightList", typeof(ListView), typeof(DualList));

        public string LeftTitle
        {
            get { return (string)lblLeftTitle.Content; }
            set { lblLeftTitle.Content = value; }
        }

        public string RightTitle
        {
            get { return (string)lblRightTitle.Content; }
            set { lblRightTitle.Content = value; }
        }

        public ListView LeftList
        {
            get { return lvwLeft; }
            set { lvwLeft = value; }
        }

        public ListView RightList
        {
            get { return lvwRight; }
            set { lvwRight = value; }
        }

        private void EnableButtons()
        {
            if (lvwLeft.Items.Count > 0)
            {
                btnMoveRight.IsEnabled = true;
                btnMoveAllRight.IsEnabled = true;
            }
            else
            {
                btnMoveRight.IsEnabled = false;
                btnMoveAllRight.IsEnabled = false;
            }

            if (lvwRight.Items.Count > 0)
            {
                btnMoveLeft.IsEnabled = true;
                btnMoveAllLeft.IsEnabled = true;
            }
            else
            {
                btnMoveLeft.IsEnabled = false;
                btnMoveAllLeft.IsEnabled = false;
            }

            if (lvwLeft.Items.Count != 0 || lvwRight.Items.Count != 0) return;
            btnMoveLeft.IsEnabled = false;
            btnMoveAllLeft.IsEnabled = false;
            btnMoveRight.IsEnabled = false;
            btnMoveAllRight.IsEnabled = false;
        }

        private void MoveRight()
        {
            while (lvwLeft.SelectedItems.Count > 0)
            {
                var selectedItem = (ListViewItem)lvwLeft.SelectedItem;
                LeftListCollection.Remove(selectedItem);
                RightListCollection.Add(selectedItem);
            }
            lvwRight.ItemsSource = RightListCollection;
            lvwLeft.ItemsSource = LeftListCollection;
            EnableButtons();
        }

        private void MoveAllRight()
        {
            while (lvwLeft.Items.Count > 0)
            {
                var item = (ListViewItem)lvwLeft.Items[lvwLeft.Items.Count - 1];
                LeftListCollection.Remove(item);
                RightListCollection.Add(item);
            }
            lvwRight.ItemsSource = RightListCollection;
            lvwLeft.ItemsSource = LeftListCollection;
            EnableButtons();
        }

        private void MoveAllLeft()
        {
            while (lvwRight.Items.Count > 0)
            {
                var item = (ListViewItem)lvwRight.Items[lvwRight.Items.Count - 1];
                RightListCollection.Remove(item);
                LeftListCollection.Add(item);
            }
            lvwRight.ItemsSource = RightListCollection;
            lvwLeft.ItemsSource = LeftListCollection;
            EnableButtons();
        }

        private void MoveLeft()
        {
            while (lvwRight.SelectedItems.Count > 0)
            {
                var selectedCustomer = (ListViewItem)lvwRight.SelectedItem;
                LeftListCollection.Add(selectedCustomer);
                RightListCollection.Remove(selectedCustomer);
            }
            lvwRight.ItemsSource = RightListCollection;
            lvwLeft.ItemsSource = LeftListCollection;
            EnableButtons();
        }

        private void btnMoveLeft_Click(object sender, RoutedEventArgs e) { MoveLeft(); }
        private void btnMoveAllLeft_Click(object sender, RoutedEventArgs e) { MoveAllLeft(); }
        private void btnMoveRight_Click(object sender, RoutedEventArgs e) { MoveRight(); }
        private void btnMoveAllRight_Click(object sender, RoutedEventArgs e) { MoveAllRight(); }
    }
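    On the headers question specifically: the DependencyProperty.Register calls above pass the wrong owner type (typeof(Label) for the title properties), and the CLR wrappers read and write the labels directly instead of going through GetValue/SetValue, so XAML attribute syntax and bindings will not work against them. A hedged sketch of the conventional pattern, assuming the control class stays named DualList:

    using System.Windows;
    using System.Windows.Controls;

    public partial class DualList : UserControl
    {
        // The owner type must be the declaring control, not Label,
        // and the field is conventionally static readonly.
        public static readonly DependencyProperty LeftTitleProperty =
            DependencyProperty.Register(
                "LeftTitle", typeof(string), typeof(DualList),
                new PropertyMetadata("Available"));

        // The wrapper must delegate to GetValue/SetValue so the
        // dependency property system (XAML, binding, styling) is used.
        public string LeftTitle
        {
            get { return (string)GetValue(LeftTitleProperty); }
            set { SetValue(LeftTitleProperty, value); }
        }

        // RightTitle follows the same pattern.
    }

    Inside the control's XAML, lblLeftTitle would then bind its Content to LeftTitle (for example with a RelativeSource binding up to the UserControl), and a consumer can simply write <local:DualList LeftTitle="Good customers" /> with no code-behind. A similar pair of ItemsSource-typed dependency properties could replace the Load method, removing the need to pass the ObservableCollections in manually; note also that since ObservableCollection raises its own change notifications, the ItemsSource reassignments in the Move methods above are unnecessary.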

    Read the article

  • Please help me clean up my RoR development environment

    - by PeterWong
    I started RoR development a few months ago, and I am new to the Mac. Time flies, and now I have many different Ruby versions, Rails versions, and gem versions scattered everywhere. Recently I installed rvm and things got even worse - everything is a mess! So now I want to clean everything out and start over with rvm. I want to uninstall all gems, all Rails versions, and all Ruby versions except the system default (the very old one that shipped with the Mac). Any other solutions or suggestions would also be welcome. Please help! Here is some info that I think will be useful:

    which -a ruby
    /opt/local/bin/ruby
    /opt/local/bin/ruby
    /usr/local/bin/ruby
    /usr/bin/ruby
    /usr/local/bin/ruby

    which -a rails
    /usr/local/bin/rails
    /usr/bin/rails
    /usr/local/bin/rails

    which -a compass   # similar for rspec and many other gems
    /usr/local/bin/compass
    /usr/local/bin/compass

    gem list

    *** LOCAL GEMS ***

    abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) addressable (2.2.2) arel (2.0.6, 1.0.1, 1.0.0.rc1) authlogic (2.1.6, 2.1.3) aws-s3 (0.6.2) base32 (0.1.2) block_helpers (0.3.3) bluecloth (2.0.9) bowline (0.9.4) bowline-bundler (0.0.4) bson (1.1.2) builder (2.1.2) bundler (1.0.2, 1.0.0) compass (0.10.6) crack (0.1.7) devise (1.1.3) diff-lcs (1.1.2) differ (0.1.1) dynamic_form (1.1.3) engineyard (1.3.1) engineyard-serverside-adapter (1.3.3) erubis (2.6.6) escape (0.0.4) extlib (0.9.15) facebooker (1.0.75) faker (0.3.1) faraday (0.5.3, 0.5.2) fast_gettext (0.5.10, 0.4.17) fastercsv (1.5.3) fastthread (1.0.7) ffi (0.6.3) formatize (1.0.1) formtastic (1.1.0, 1.0.1) gemcutter (0.5.0) gettext (2.1.0) git (1.2.5) gosu (0.7.25 universal-darwin) haml (3.0.24, 3.0.23, 3.0.22, 3.0.21, 3.0.18) haml-rails (0.3.4) heroku (1.10.13, 1.9.13) highline (1.5.2) hirb (0.3.4, 0.3.3) hpricot (0.8.2) i18n (0.5.0, 0.4.2, 0.4.1, 0.3.7) jeweler (1.4.0) json (1.4.6) json_pure (1.4.3) linkedin (0.1.8) locale (2.0.5) mail (2.2.12, 2.2.11, 2.2.10, 2.2.9, 2.2.7, 2.2.6.1) memcache-client (1.8.5) meta_search (0.9.8, 0.9.7.2, 0.9.7.1, 0.9.6, 0.9.4) mime-types (1.16) mongo (1.1.2) mongoid (2.0.0.beta.20) multi_json (0.0.5) multipart-post (1.0.1) mysql (2.8.1) mysql2 (0.2.6, 0.2.4, 0.2.3) net-ldap (0.1.1) nice-ffi (0.4) nokogiri (1.4.4, 1.4.2) oa-basic (0.1.6) oa-core (0.1.6) oa-enterprise (0.1.6) oa-oauth (0.1.6) oa-openid (0.1.6) oauth (0.4.4, 0.4.3, 0.4.1) oauth-plugin (0.4.0.pre1) oauth2 (0.1.0) omniauth (0.1.6) paperclip (2.3.6, 2.3.4, 2.3.1.1) passenger (2.2.12) polyglot (0.3.1) pyu-ruby-sasl (0.0.3.2) querybuilder (0.9.2, 0.5.9) rack (1.2.1, 1.1.0, 1.0.1) rack-cache (0.5.3) rack-cache-purge (0.0.2, 0.0.1) rack-mount (0.6.13) rack-openid (1.2.0) rack-test (0.5.6, 0.5.4) railroady (0.11.2) rails (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.9, 2.3.5, 2.3.4) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) RedCloth (3.0.4) rest-client (1.6.1) roxml (3.1.5) rscribd (1.2.0) rspec (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-core (2.3.0, 2.2.1, 2.1.0, 2.0.1) rspec-expectations (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-mocks (2.3.0, 2.2.0, 2.1.0, 2.0.1) rspec-rails (2.3.0, 2.2.0, 2.1.0, 2.0.1) ruby-hmac (0.4.0) ruby-mysql (2.9.3) ruby-ole (1.2.10.1) ruby-openid (2.1.8) ruby-openid-apps-discovery (1.2.0) ruby-recaptcha (1.0.2, 1.0.0) ruby-sdl-ffi (0.3) ruby-termios (0.9.6) ruby_parser (2.0.5) rubyforge (2.0.4) rubygame (2.6.4) rubygems-update (1.3.7) rubyless (0.7.0, 0.6.0, 0.3.5) rubyntlm (0.1.1) rubyzip2 (2.0.1) scribd_fu (2.0.6) searchlogic (2.4.27, 2.4.23) sequel (3.16.0, 3.15.0, 3.13.0) sexp_processor (3.0.5) shoulda (2.11.3) sinatra (1.0) slim (0.8.0) slim-rails (0.1.2) spreadsheet (0.6.4.1) sqlite3-ruby (1.3.2, 1.3.1) ssl_requirement (0.1.0) subdomain-fu (1.0.0.beta2, 0.5.4) supermodel (0.1.4) syntax (1.0.0) taps (0.3.13, 0.3.11) templater (1.0.0) temple (0.1.6) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.4, 0.14.3, 0.14.1, 0.14.0) tilt (1.1) treetop (1.4.9, 1.4.8) tzinfo (0.3.23) uuidtools (2.1.1, 2.0.0) validates_timeliness (3.0.0.beta.4, 2.3.1) warden (0.10.7) will_paginate (3.0.pre2, 2.3.15, 2.3.14) xml-simple (1.0.12) ya2yaml (0.30) yajl-ruby (0.7.8, 0.7.7) yamltest (0.7.0) zena (0.16.9, 0.16.8)

    I have run sudo rvm implode and sudo rm -rf ~/.rvm, so there is no rvm now.

    gem env
    RubyGems Environment:
      - RUBYGEMS VERSION: 1.3.7
      - RUBY VERSION: 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0]
      - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
      - RUBY EXECUTABLE: /usr/local/bin/ruby
      - EXECUTABLE DIRECTORY: /usr/local/bin
      - RUBYGEMS PLATFORMS:
        - ruby
        - x86-darwin-10
      - GEM PATHS:
        - /usr/local/lib/ruby/gems/1.8
        - /Users/peter/.gem/ruby/1.8
      - GEM CONFIGURATION:
        - :update_sources => true
        - :verbose => true
        - :benchmark => false
        - :backtrace => false
        - :bulk_threshold => 1000
        - :sources => ["http://rubygems.org/", "http://gems.github.com"]
      - REMOTE SOURCES:
        - http://rubygems.org/
        - http://gems.github.com

    ls -al /usr/local/lib
    total 5704
    drwxr-xr-x  7 root wheel     238 Jun  1  2010 .
    drwxr-xr-x  9 root wheel     306 Dec 15 16:20 ..
    -rw-r--r--  1 root wheel 1717208 Jun  1  2010 libruby-static.a
    -rwxr-xr-x  1 root wheel 1191880 Jun  1  2010 libruby.1.8.7.dylib
    lrwxrwxrwx  1 root wheel      19 Jun  1  2010 libruby.1.8.dylib -> libruby.1.8.7.dylib
    lrwxrwxrwx  1 root wheel      19 Jun  1  2010 libruby.dylib -> libruby.1.8.7.dylib
    drwxr-xr-x  6 root wheel     204 Jun  1  2010 ruby

    Read the article

  • Array help: Index out of range exception was unhandled

    - by Michael Quiles
    I am trying to populate combo boxes from a text file, using a comma as the delimiter. Everything was working fine, but now when I debug I get an unhandled "Index out of range" exception. I guess I need a fresh pair of eyes to see where I went wrong; I commented the line that gets the error (Fname = fields[1];).

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Drawing.Printing;
    using System.Linq;
    using System.Text;
    using System.Windows.Forms;
    using System.IO;

    namespace Sullivan_Payroll
    {
        public partial class xEmpForm : Form
        {
            bool complete = false;

            public xEmpForm()
            {
                InitializeComponent();
            }

            private void xEmpForm_Resize(object sender, EventArgs e)
            {
                this.xCenterPanel.Left = Convert.ToInt16((this.Width - this.xCenterPanel.Width) / 2);
                this.xCenterPanel.Top = Convert.ToInt16((this.Height - this.xCenterPanel.Height) / 2);
                Refresh();
            }

            private void exitToolStripMenuItem_Click(object sender, EventArgs e)
            {
                // Exits the application.
                this.Close();
            }

            private void xEmpForm_FormClosing(object sender, FormClosingEventArgs e) // use this on xtrip calculator
            {
                DialogResult Response;
                if (complete == true)
                {
                    Application.Exit();
                }
                else
                {
                    Response = MessageBox.Show("Are you sure you want to Exit?", "Exit",
                        MessageBoxButtons.YesNo, MessageBoxIcon.Question,
                        MessageBoxDefaultButton.Button2);
                    if (Response == DialogResult.No)
                    {
                        complete = false;
                        e.Cancel = true;
                    }
                    else
                    {
                        complete = true;
                        Application.Exit();
                    }
                }
            }

            private void xEmpForm_Load(object sender, EventArgs e)
            {
                // File sources.
                string fileDept = "source\\Department.txt";
                string fileSex = "source\\Sex.txt";
                string fileStatus = "source\\Status.txt";

                if (File.Exists(fileDept))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileDept))
                    {
                        string dept = "";
                        while ((dept = sr.ReadLine()) != null)
                        {
                            this.xDeptComboBox.Items.Add(dept);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Department file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }

                if (File.Exists(fileSex))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileSex))
                    {
                        string sex = "";
                        while ((sex = sr.ReadLine()) != null)
                        {
                            this.xSexComboBox.Items.Add(sex);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Sex file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }

                if (File.Exists(fileStatus))
                {
                    using (System.IO.StreamReader sr = System.IO.File.OpenText(fileStatus))
                    {
                        string status = "";
                        while ((status = sr.ReadLine()) != null)
                        {
                            this.xStatusComboBox.Items.Add(status);
                        }
                    }
                }
                else
                {
                    MessageBox.Show("The Status file can not be found.", "Error",
                        MessageBoxButtons.OK, MessageBoxIcon.Error);
                }
            }

            private void xFileSaveMenuItem_Click(object sender, EventArgs e)
            {
                const string fileNew = "source\\New Staff.txt";
                string recordIn;
                FileStream outFile = new FileStream(fileNew, FileMode.Create, FileAccess.Write);
                StreamWriter writer = new StreamWriter(outFile);
                for (int count = 0; count <= this.xEmployeeListBox.Items.Count - 1; count++)
                {
                    this.xEmployeeListBox.SelectedIndex = count;
                    recordIn = this.xEmployeeListBox.SelectedItem.ToString();
                    writer.WriteLine(recordIn);
                }
                writer.Close();
                outFile.Close();
                this.xDeptComboBox.SelectedIndex = -1;
                this.xStatusComboBox.SelectedIndex = -1;
                this.xSexComboBox.SelectedIndex = -1;
                MessageBox.Show("your file is saved");
            }

            private void xViewFacultyMenuItem_Click(object sender, EventArgs e)
            {
                const string fileStaff = "source\\Staff.txt";
                const char DELIM = ',';
                string Lname, Fname, Depart, Stat, Sex, Salary, cDept, cStat, cSex;
                double Gtotal;
                string recordIn;
                string[] fields;

                cDept = this.xDeptComboBox.SelectedItem.ToString();
                cStat = this.xStatusComboBox.SelectedItem.ToString();
                cSex = this.xSexComboBox.SelectedItem.ToString();

                FileStream inFile = new FileStream(fileStaff, FileMode.Open, FileAccess.Read);
                StreamReader reader = new StreamReader(inFile);
                recordIn = reader.ReadLine();
                while (recordIn != null)
                {
                    fields = recordIn.Split(DELIM);
                    Lname = fields[0];
                    Fname = fields[1]; // this is where the error appears
                    Depart = fields[2];
                    Stat = fields[3];
                    Sex = fields[4];
                    Salary = fields[5];
                    Fname = fields[1].TrimStart(null);
                    Depart = fields[2].TrimStart(null);
                    Stat = fields[3].TrimStart(null);
                    Sex = fields[4].TrimStart(null);
                    Salary = fields[5].TrimStart(null);
                    Gtotal = double.Parse(Salary);
                    if (Depart == cDept && cStat == Stat && cSex == Sex)
                    {
                        this.xEmployeeListBox.Items.Add(recordIn);
                    }
                    recordIn = reader.ReadLine();
                }
                reader.Close();
                inFile.Close();

                if (this.xEmployeeListBox.Items.Count >= 1)
                {
                    this.xFileSaveMenuItem.Enabled = true;
                    this.xFilePrintMenuItem.Enabled = true;
                    this.xEditClearMenuItem.Enabled = true;
                }
                else
                {
                    this.xFileSaveMenuItem.Enabled = false;
                    this.xFilePrintMenuItem.Enabled = false;
                    this.xEditClearMenuItem.Enabled = false;
                    MessageBox.Show("Records not found");
                }
            }

            private void xEditClearMenuItem_Click(object sender, EventArgs e)
            {
                this.xEmployeeListBox.Items.Clear();
                this.xDeptComboBox.SelectedIndex = -1;
                this.xStatusComboBox.SelectedIndex = -1;
                this.xSexComboBox.SelectedIndex = -1;
                this.xFileSaveMenuItem.Enabled = false;
                this.xFilePrintMenuItem.Enabled = false;
                this.xEditClearMenuItem.Enabled = false;
            }
        }
    }

    Source file:

    Anderson, Kristen, Accounting, Assistant, Female, 43155
    Ball, Robin, Accounting, Instructor, Female, 42723
    Chin, Roger, Accounting, Full, Male,59281
    Coats, William, Accounting, Assistant, Male, 45371
    Doepke, Cheryl, Accounting, Full, Female, 52105
    Downs, Clifton, Accounting, Associate, Male, 46887
    Garafano, Karen, Finance, Associate, Female, 49000
    Hill, Trevor, Management, Instructor, Male, 38590
    Jackson, Carole, Accounting, Instructor, Female, 38781
    Jacobson, Andrew, Management, Full, Male, 56281
    Lewis, Karl, Management, Associate, Male, 48387
    Mack, Kevin, Management, Assistant, Male, 45000
    McKaye, Susan, Management, Instructor, Female, 43979
    Nelsen, Beth, Finance, Full, Female, 52339
    Nelson, Dale, Accounting, Full, Male, 54578
    Palermo, Sheryl, Accounting, Associate, Female, 45617
    Rais, Mary, Finance, Instructor, Female, 27000
    Scheib, Earl, Management, Instructor, Male, 37389
    Smith, Tom, Finance, Full, Male, 57167
    Smythe, Janice, Management, Associate, Female, 46887
    True, David, Accounting, Full, Male, 53181
    Young, Jeff, Management, Assistant, Male, 43513
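    The likeliest culprit is the data rather than the parser: a blank line (for example, a trailing newline at the end of Staff.txt) splits into a single empty field, so fields[1] immediately throws IndexOutOfRangeException; a record with fewer than five commas fails the same way at a higher index. A hedged guard, my own sketch keeping the six-column layout the code assumes:

    using System.Collections.Generic;
    using System.IO;

    static List<string[]> ReadStaffRecords(string path)
    {
        // Read the staff file defensively, skipping blank or short records
        // instead of indexing into fields that may not exist.
        List<string[]> records = new List<string[]>();
        using (StreamReader reader = new StreamReader(path))
        {
            string recordIn;
            while ((recordIn = reader.ReadLine()) != null)
            {
                if (recordIn.Trim().Length == 0)
                    continue; // blank or whitespace-only line, e.g. a trailing newline

                string[] fields = recordIn.Split(',');
                if (fields.Length < 6)
                    continue; // malformed record: fewer columns than expected

                for (int i = 0; i < fields.Length; i++)
                    fields[i] = fields[i].Trim();

                records.Add(fields); // fields[1] is now guaranteed to exist
            }
        }
        return records;
    }

    Setting a breakpoint on the two continue statements (or logging the offending recordIn) should show exactly which line of the file is short.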

    Read the article
