Search Results

Search found 15273 results on 611 pages for 'famous people'.


  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
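    To make the feature list above concrete, here is a minimal sketch of what a Web API controller can look like. This is not code from the post: the Album type, the sample data and the route are made up for illustration, but ApiController, HttpResponseException, Request.CreateResponse and MapHttpRoute are the standard ASP.NET Web API building blocks, and the JSON/XML decision is made by content negotiation against the Accept header rather than by the controller code.

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    // Hypothetical model used only for this sketch
    public class Album
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    public class AlbumsController : ApiController
    {
        // In-memory data just for the sketch
        private static readonly List<Album> Albums = new List<Album>
        {
            new Album { Id = 1, Title = "Dirt" },
            new Album { Id = 2, Title = "Nevermind" }
        };

        // GET api/albums - serialized as JSON or XML based on the Accept header
        public IEnumerable<Album> GetAlbums()
        {
            return Albums;
        }

        // GET api/albums/1 - returns 404 if the id is unknown
        public Album GetAlbum(int id)
        {
            var album = Albums.FirstOrDefault(a => a.Id == id);
            if (album == null)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return album;
        }

        // POST api/albums - returns 201 Created plus the new resource
        public HttpResponseMessage PostAlbum(Album album)
        {
            Albums.Add(album);
            return Request.CreateResponse(HttpStatusCode.Created, album);
        }
    }

    // Registered once at startup, typically in WebApiConfig.Register():
    // config.Routes.MapHttpRoute(
    //     name: "DefaultApi",
    //     routeTemplate: "api/{controller}/{id}",
    //     defaults: new { id = RouteParameter.Optional });

    A client sending Accept: application/xml gets XML back from the very same methods, which is the content negotiation behavior described above.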
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
    Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX for rich client functionality. While it's easy to say that all 'service'/AJAX logic should go into Web API and all HTML-related generation into MVC, that can often result in a lot of code duplication. MVC also supports JSON and XML result data fairly easily, so there's some confusion about where the 'trigger point' is for switching to Web API vs. just implementing the functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb, I think, is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate that logic out into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET), so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily, and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is obviously not a full replacement for MVC either, since there's not the same level of support for feeding HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice, either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single controllers and not have to make any trade-offs, but at the moment that's not the case.
    Some Issues To Think About
    Web API is similar to MVC, but not the same: Although Web API looks a lot like MVC it's not the same, and some common MVC functionality behaves differently in Web API. For example, the way single POST variables are handled is different than in MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data.
    Code duplication: I already touched on this in the Mixed HTML and AJAX Applications section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for the HTML application and for the Web API, which might need something different altogether. More often than not the same logic is used, and there's no easy way to share it. If you implement an MVC ActionFilter and you want that same functionality in your Web API, you'll end up creating the filter twice.
    AJAX data or AJAX HTML: On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: "I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-)" You know what - I can totally relate to that.
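    As a small, hedged illustration of the trade-off discussed above (the Product name lookup is invented for this sketch), the same tiny AJAX endpoint can live either in an MVC controller as a JsonResult or in a Web API controller. The MVC version avoids pulling in another framework for a one-off call; the Web API version gets content negotiation and the HTTP-centric pipeline for free.

    using System.Web.Http;   // ApiController
    using System.Web.Mvc;    // Controller, JsonResult

    // MVC flavor: fine for a couple of small AJAX helpers inside an HTML-centric app.
    public class ProductMvcController : Controller
    {
        public JsonResult ProductName(int id)
        {
            // Hypothetical lookup - replace with real data access.
            return Json(new { id, name = "Product #" + id },
                        JsonRequestBehavior.AllowGet);
        }
    }

    // Web API flavor: same data, but the response format (JSON, XML, ...)
    // is negotiated from the Accept header instead of being hard-coded.
    public class ProductApiController : ApiController
    {
        public object GetProductName(int id)
        {
            return new { id, name = "Product #" + id };
        }
    }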
    On the last Web-based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. The performance hit was often made up for by the lack of client-side rendering of HTML. Server-rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break your HTML logic out into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about, especially for AJAX or SPA style applications.
    Summary
    Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or that everything should be upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!
    Resources: ASP.NET Web API; AspConf Ask the Experts Session (first 5 minutes)
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API
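    Returning to the server-rendered partials approach described in the post above: a minimal sketch of one of those 'rendering islands' is just a regular MVC action that returns a partial view, which the client fetches over AJAX and injects into the DOM. The controller, view name and model below are assumptions for illustration, not code from the post.

    using System.Web.Mvc;

    // Hypothetical view model for the partial
    public class OrderSummaryModel
    {
        public int OrderId { get; set; }
        public decimal Total { get; set; }
    }

    public class WidgetsController : Controller
    {
        // GET /Widgets/OrderSummary?orderId=42
        // Returns an HTML fragment (the "_OrderSummary" partial view) that the
        // client can load via a plain jQuery $.get() and drop straight into the page.
        public ActionResult OrderSummary(int orderId)
        {
            var model = new OrderSummaryModel { OrderId = orderId, Total = 129.95m };
            return PartialView("_OrderSummary", model);
        }
    }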

    Read the article

  • CodePlex Daily Summary for Saturday, March 13, 2010

    CodePlex Daily Summary for Saturday, March 13, 2010New Projects[Experiment] vczh DBUI: EXPERIMENT. Auto UI adaptor for a specified data base[SAMPLE PROJECT] Branch and Patch with GIT: I'll paste more here later...BlogPress: This web application provides a content management system. Coot: Facebook Photos Screensaver.DotNetNuke® JDMenu: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin.DotNetNuke® RadRotator: dnnRadRotator makes it easy to add telerik RadRotator functionality to your site or custom module. Licensing permits anyone to use the components ...DotNetNuke® RadTabStrip: DNNRadTabStrip makes it easy to add telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® RadTreeView: DNNRadTreeView makes it easy to add telerik RadTreeView functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Garden: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 stru...ElmasFC: Elmas Facebook CheckerFront Callback Protocol: Front Callback Protocol make it easier for developers to create a network application in the .Net Language. It provide the flexibility of using TCP...Glass UI: A Windows Forms Control Library built specifically for Aero Glass. This will consist of existing controls, modified to render on glass, as well as ...Guitarist Tools: Some tools for guitarists.HomeUX: HomeUX is home control software featuring a Silverlight-based touch screen user interface.Homework Helper: Homework Helper help students to train simpler homework like "What's the name of the capital of France?" - Question - Answear based. It's made i...IGNOU Mini Project Allocation: A mini project allocation system.Images Compiler: Images Compiler is pretty small wpf application for users who want to resize, rename, disable colors... for too much images in very short time. It...InstitutionManagementSystem: Institution Management System prototypeManagedCv: ManagedCv is the library for C#/VB/ to use the OpenCV librarymanagement joint ownership: Application pour la gestion des actions réalisées au sein d’une copropriété. Application for management actions within a joint ownershipMapWindow6: MapWindow 6.0 is an open source geographic information system (GIS) and library of geospatial software development tools for .NET, written in C#. T...Morris Auto: Morris Auto is a social application build templateMouse control with finger detection: The aim of this project is to design an application for recognition of the movement of a finger through a webcam and then control the mouse. The pr...ORAYLIS BI.Quality: ORAYLIS BI.Quality makes it easier to develop BI Solutions in an agile environment. It adapts Unit Tests to BI Development. Propositional Framework: Different sales propositions are presented to the user using data collected from marketing intelligence, browser history or user behaviour as they ...ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. 
Now you don't have to change startup types or stop them ...shatkotha web portal: source code of web portal shatkothaSurvey and Registration tools: Two Silverlight tools for add Survey and Registration in your websiteTable Storage Backup & Restore for Windows Azure: Allows various backup & restore options for Windows Azure Table Storage accounts: - Backup from one storage account to another - Backup a storag...Topsy Lib: This project provides wrapper methods to call Topsy's web services from C#.twNowplaying: twNowplaying is a small and handy utility that allows you to tweet your currently playing track from Spotify to Twitter. Followed by the #twNowplay...XOM: XOM: Xml Object Mapper Automatically populate your objects with data from XML via an XML map. Never do this again: if(e.GetElementsByTagName("us...YS Utils: The library contains set of useful classes which may solve common tasks for different applications.ZabViewer: CS ProjectNew ReleasesAppFabric Caching Admin Tool: AppFabric Caching Admin Tool (0.8): System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x)ASP.Net Routing Configuration: mal.Web.Routing v0.9.2.0: mal.Web.Routing v0.9.2.0DotNetNuke IM Module of Facebook Like Messenger: DNN IM Module of Facebook Like Messenger: Empower DNN Website with a 1-to-1 Chat Solution named Free Facebook Messenger Style Web Chat Bar of 123 Web Messenger. 1. If you need to use the c...DotNetNuke® Form and List (formerly User Defined Table): 05.01.02: Form and List 05.01.02 (Release Candidate 2)5.1.2 will be the next stabilzation release. Major Highlights fixed Cancel action in a form, it don't ...DotNetNuke® JDMenu: DNN jdMenu 1.0.0: dnnJDMenu makes it easy to use the open source JDMenu component in your DotNetNuke skin. Many thanks to Jonathan Sharp of Outwest Media for creati...DotNetNuke® RadTabStrip: DNNRadTabstrip 1.0.0: DNNRadTabStrip makes it easy to add telerik RadTabStrip functionality to your module or skin. Licensing permits anyone to use the components (inclu...DotNetNuke® RadTreeView: DNNRadTreeView 1.0.0: DNNRadTreeView makes it easy to create skins which use the Telerik RadTreeview functionality. Licensing permits anyone (including designers) to use...DotNetNuke® Skin Garden: Garden Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Mark Allan of DnnGarden. Concise and semantic HTML 5 struc...ElmasFC: Elmas FC: This porgram is checking your facebook account automaticly.Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: Free Download DNN IM Module and Source Code: 123 Web Messenger offers free DotNetNuke Instant Messaging to help you embed a IM Software into DNN Website by integrating 123 Web Messenger with D...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.4 Released: Hi, Today we have released the final version of Visifire v3.0.4 which contains the following major features: * Zooming * Step Line chart ...Home Access Plus+: v3.1.0.0: Version 3.1.0.0 Release Problem with this release, get Version 3.1.1.1 Change Log: Fixed ampersand issue Added Unzip Features Added Help Desk ...Home Access Plus+: v3.1.1.1: Version 3.1.1.1 Release Change Log: Fixed the Help Desk File Changes: ~/App_Data/Tickets.xml ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdbHomework Helper: Homework Helper 1.0: The first release of Homework Helper. It got basic functionality. 
Check the Documentation or press F1 while using the program.Homework Helper: Homework Helper v1.0: This is the latest version of Homework Helper. Just a few updates since the latest one (a couple of minutes ago). Now it saves your settings! Instr...IBCSharp: IBCSharp 1.02: What IBCSharp 1.02.zip unzips to: http://i39.tinypic.com/28hz8m1.png Note: The above solution has MSTest, Typemock Isolator, and Microsoft CHESS c...InfoService: InfoService v1.5 RC 1: InfoService Release Canidate Please note this is a RC. It should be stable, but i can't guarantee that! So use it on your own risk. Please read Pl...IntX: IntX 0.9.3.3: Fixed issue #7100IronRuby: 1.0 RC3: The IronRuby team is pleased to announce version 1.0 RC3! As IronRuby approaches the final 1.0, these RCs will contain crucial bug fixes and enhanc...MAISGestão: LayerDiagram (pdf): This is the final version of the layer diagramMapWindow6: MapWindow 6.0 msi (March 12): After some significant trouble with svn, this site represents a test of using Mercurial for our version control. Hopefully the choice of this new ...MSBuild Mercurial Tasks: 0.2.0 Beta: This release realises the Scenario 2 and provides two MSBuild tasks: HgCommit and HgPush. This task allows to create a new changeset in the current...NLog - Advanced .NET Logging: NLog 2.0 preview 1: This is very experimental first build of NLog from 2.0 branch is available for download. It’s not alpha, beta or even gamma, just a preview of upco...Open NFe: Fonte DANFE v1.9.5: Código-fonte para a versão 1.9.5 do DANFEOpiConsole (XNA): OpiConsole 0.3: OpiConsole 0.3 OpiConsole includes: * PC & Xbox support. * Default commands (cvar, quit etc.) * Ability to add custom commands. *...ORAYLIS BI.Quality: Release 1.0.0: Release 1.0.0Pcap.Net: Pcap.Net 0.5.0 (38141): Pcap.Net - March 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and include...PhysX.Net: PhysX.Net 0.12.0.0 Beta 1: This is beta release of 0.12.0.0 for people to test and provide feedback on. It targets 2.8.3.21 of PhysXPólya: Pólya 2010 03 13 alpha: Pólya is a collection of generic data structures; as generic as C#/.Net allows them to be. In this first release the focus has been on some fundam...Prolog.NET: Prolog.NET 1.0 Beta 1.3: Installer includes: primary Prolog.NET assembly Prolog.NET Workbench Prolog.NET Scheduler sample application PrologTest console applicatio...Propositional Framework: USP: The initial code drop was based on the Autocomplete demo and the neural network approach used by Jeff Heaton (@ http://www.heatonresearch.com/) - e...Protocol Transition with BizTalk: Source: Stable buildRoTwee: RoTwee (7.1.0.0): Now picture of your friend is shown in RoTwee. Place mouse cursor to the tweet !ServStop: 0.5.0.0: Initial Release Contents of ServStop0500.zip ServStop.exe 0.5.0.0 Initial Release. 32,768 bytes. Other files None. Source code available on "...SkeinLibManaged: Release 1.0.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version...sPWadmin: pwAdmin v0.9a: Added: LiveChat Plugin Shows the latest 150 chat messages from "/Your PW Server/logservice/logs/world2.chat" Refresh chat every 5 seconds Allow...sPWadmin: pwAdmin v0.9b: Some minor fixes to style, layout & different browser support Merged the forms in server configuration tab to a single form that can now save mul...Topsy Lib: Initial release: Initial Debug and Release versions.twNowplaying: twNowplaying: Press the Twitter icon to get started, don't forget to submit bugs to the issue tracker. What's new This release has some minor UI fixes.twNowplaying: twNowplaying Alpha 1.0: Press the Twitter icon to get started, don't forget to submit bugs to the issue tracker. Thank you , and enjoy! =)VCC: Latest build, v2.1.30312.0: Automatic drop of latest buildWPF Dialogs: Version 0.1.3: The FolderBrowseDialog / FolderBrowseDialog - Deutsch was improved and extended.XOM: XOM 0.1A: Just a release of the code I have so far. In order to get your objects to work, you need to create an XML file to define your objects. Here is a ...Xpress - ASP.NET MVC 个人博客程序: xpress2.1.1.0312.beta: 最新beta版,注意:此版本和2.1.0不兼容 更改内容: 将主题文件发放在 Views 文件夹下 主题文件支持强类型Model 主题资源文件放在Resouces目录下YS Utils: V 1.0.0.0: This is first release. The YSUtils library contains first set of classes. ZIP files contains documentation.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIpatterns & practices – Enterprise LibraryFarseer Physics EngineSharePoint Team-MailerCaliburn: An Application Framework for WPF and SilverlightCalcium: A modular application toolset leveraging PrismjQuery Library for SharePoint Web Services

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with fixes for bugs which were found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:
    Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    Brian Harry's blog post here describes the problem too.
    This forum thread describes the problem with some solution hints.
    Ravi Shanker's blog post here describes best practices on solving this (TBP).
    Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.
    The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results will include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries that happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder) contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process: (1) find out if you have a database space issue, (2) set up your TFS server to minimize potential database issues, and (3) if you have the "problem", clean up the database and otherwise keep it clean.
    Analyze the data
    Are your database(s) growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
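    As a sketch of the "script that runs the TAC once per team project" mentioned above, a small console program can simply loop over the project names and shell out to the TAC command line (the same tcmpt syntax shown later in this post). The collection URL, settings file path and project list below are assumptions to adjust for your environment.

    using System;
    using System.Diagnostics;
    using System.IO;

    class TacRunner
    {
        static void Main()
        {
            // Assumed values - adjust to your environment.
            var collectionUrl = "http://tfsserver:8080/tfs/DefaultCollection";
            var settingsFile  = @"C:\TAC\nightly-cleanup.xml";
            var teamProjects  = new[] { "Byggtjeneste - Projects", "ProjectB", "ProjectC" };

            foreach (var project in teamProjects)
            {
                var logFile = Path.Combine(Path.GetTempPath(), project + ".tcmpt.log");

                // One TAC invocation per team project, mirroring the command line
                // shown further down in the post.
                var args = string.Format(
                    "attachmentcleanup /collection:{0} /teamproject:\"{1}\" " +
                    "/settingsfile:\"{2}\" /outputfile:\"{3}\" /mode:delete",
                    collectionUrl, project, settingsFile, logFile);

                var psi = new ProcessStartInfo("tcmpt.exe", args)
                {
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                };

                using (var process = Process.Start(psi))
                {
                    Console.WriteLine(process.StandardOutput.ReadToEnd());
                    process.WaitForExit();
                    Console.WriteLine("{0} finished with exit code {1}", project, process.ExitCode);
                }
            }
        }
    }

    Scheduling a program like this with the Windows Task Scheduler to run nightly matches the "delete small chunks of data at regular intervals" guidance quoted above.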
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
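    For the cleanup-once scenario described here, the shrink-then-rebuild sequence can also be scripted instead of run through a maintenance plan. The following is only a sketch under assumptions (database name and table names taken from earlier in the post, integrated security, local server); run it as a one-off after a large TAC cleanup, not on a daily schedule, for the reasons given above.

    using System;
    using System.Data.SqlClient;

    class PostCleanupMaintenance
    {
        static void Main()
        {
            // Assumed connection string - point it at the collection database you cleaned up.
            var connectionString =
                "Server=.;Database=Tfs_DefaultCollection;Integrated Security=true";

            // Shrink first to reclaim the space freed by the TAC, then rebuild the
            // indexes on the attachment tables, since shrinking leaves them fragmented.
            var statements = new[]
            {
                "DBCC SHRINKDATABASE (Tfs_DefaultCollection)",
                "ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD",
                "ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD"
            };

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                foreach (var sql in statements)
                {
                    using (var command = new SqlCommand(sql, connection) { CommandTimeout = 0 })
                    {
                        Console.WriteLine("Running: " + sql);
                        command.ExecuteNonQuery();   // these statements can run for a long time
                    }
                }
            }
        }
    }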

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign-up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required sign-up process, some of the links in the docs are just plain wrong, and some of the account pages you need in order to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, and the completely new authentication scheme is described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign-up process is a pain in the ass, doubtless leaving many people to give up in frustration. In this post I'll try to hit all the points needed to set up and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…
    Signing Up
    The first step is to create a Windows Azure MarketPlace account. Go to https://datamarket.azure.com/ and sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services: click on the Data link on the main page and select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services.
    Pricing
    The page shows the pricing matrix and the free tier, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low-volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site.
    Register your Application
    Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register. Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!), and provide your name. The client secret is auto-created and becomes your 'password'. For the redirect url provide any https url: https://microsoft.com works. Give the application a description of your choice so you can identify it in the list of apps. Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a  bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = https://datamarket.accesscontrol.windows.net/v2/OAuth2-13; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
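    Since the bearer token is only good for about 10 minutes, the caching idea mentioned above (using the ASP.NET Cache object) can be sketched roughly like this. It assumes the GetBingAuthToken() method and TranslationServices class shown in this post; the cache key and the 9 minute expiration are arbitrary choices, kept just under the token lifetime.

    using System;
    using System.Web;
    using System.Web.Caching;

    public class BingTokenCache
    {
        private const string CacheKey = "BingAuthToken";   // arbitrary key name

        // Returns a cached bearer token, or fetches a fresh one via the
        // GetBingAuthToken() method shown above and caches it just short
        // of the 10 minute token lifetime.
        public string GetToken(string clientId, string clientSecret)
        {
            var token = HttpRuntime.Cache[CacheKey] as string;
            if (!string.IsNullOrEmpty(token))
                return token;

            token = new TranslationServices().GetBingAuthToken(clientId, clientSecret);
            if (token != null)
            {
                HttpRuntime.Cache.Insert(CacheKey, token,
                    null,                              // no cache dependency
                    DateTime.UtcNow.AddMinutes(9),     // expire before the token does
                    Cache.NoSlidingExpiration);
            }
            return token;
        }
    }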
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx. SOAP and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx. These links will be your starting points for calling other methods in this API.
    Dual Interface
    I've talked about my database-driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.
    Lost in Translation
    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…
    Related Posts: Translation method source on GitHub; Translating with Google Translate without Google API Keys; Creating a data-driven ASP.NET Resource Provider
    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • Customize the SimpleMembership in ASP.NET MVC 4.0

    - by thangchung
As we know, .NET 4.5 has arrived, and it comes along with a lot of interesting new features. Visual Studio 2012 was also introduced a few days ago, and it brings some very welcome improvements. Performance when loading the code editor is very good at the moment (it opens almost immediately after clicking the solution). I have explored some of the cool features over the last few days: Json.NET integrated in ASP.NET MVC 4.0, improvements to asynchronous actions, the new lightweight theme in Visual Studio, much better support for mobile development, improvements to authentication, and more. Reviewing them, I found that in this version of .NET Microsoft not only developed new features suggested by the community but also focused on improving the performance of existing features and components. Besides that, they have open sourced more projects, such as Entity Framework, Reactive Extensions and the ASP.NET Web Stack. At the moment it feels like Microsoft wants to open source more and more of their projects. Today I am going to dive deep into the new SimpleMembership model. It is really good, because in this security model Microsoft actually focuses on development needs. In the past they introduced providers for security code such as MembershipProvider and RoleProvider. Everyone who has ever used them knows that they were hard to use, and not easy to maintain or unit test. Why? Because every time you inherit from them, you need to override all the methods inside. Some people try to abstract them by introducing more virtual methods and implementing basic behavior, so that subclasses only need to override the methods their business requires, but to me that is only a workaround. The ASP.NET and WebMatrix teams knew about this, so they built the new features on top of existing components in the .NET Framework, and the components that came out of that are SimpleMembership and SimpleRole. They implemented the Façade pattern on top of those and called it WebSecurity. In the web application we can call WebSecurity anywhere we want and it delegates to the wrappers underneath. I have read a lot about it in blog posts, in technical news and on MSDN. Matthew Osborn has an excellent article about it on his blog, and Jon Galloway has one here as well; he analyzed why the old membership providers did not fit ASP.NET MVC well and how to get past that. Those articles were very useful to me: they explain what SimpleMembership is and how to use it in a new ASP.NET MVC web application. The one thing they did not tell me was how to use it with an existing security model (that is, when we already have Users and Roles in a legacy system and want to integrate them), and that is the reason I am writing this post. I spent a couple of hours looking at what is inside SimpleMembership and built an example to clarify my concern, and luckily I got it working. The first thing we need to do is create a new ASP.NET MVC application in Visual Studio 2012, choosing the Internet template. ASP.NET MVC creates all the components needed for basic membership and basic roles. A cool extra is that DotNetOpenAuth comes along with it, which means we can log in using Facebook, Twitter or Windows Live if we want. But it is wired up for LocalDb only, so we need to change it to work with an existing database model on SQL Server.
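For context, here is roughly what using the WebSecurity façade mentioned above looks like from a controller. This is a generic sketch rather than code from the post: the WebMatrix.WebData calls (Login, Logout) are the standard façade methods, but the action signatures, view names and redirects are assumptions.

using System.Web.Mvc;
using WebMatrix.WebData;

public class SampleAccountController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Login(string userName, string password, bool rememberMe = false)
    {
        // WebSecurity delegates to whichever SimpleMembership provider is configured
        if (WebSecurity.Login(userName, password, persistCookie: rememberMe))
            return RedirectToAction("Index", "Home");

        ModelState.AddModelError("", "The user name or password provided is incorrect.");
        return View();
    }

    public ActionResult Logout()
    {
        WebSecurity.Logout();
        return RedirectToAction("Index", "Home");
    }
}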
The next step is to make SimpleMembership understand which database we use and tell it which columns to use for the ID and the UserName. I really like this feature, because SimpleMembership only needs to know about the ID and UserName columns and does not care about the rest. Assuming we have an existing database model with a User table, we point SimpleMembership at it in code. The code for this goes into an InitializeSimpleMembershipAttribute like this:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public sealed class InitializeSimpleMembershipAttribute : ActionFilterAttribute
{
    private static SimpleMembershipInitializer _initializer;
    private static object _initializerLock = new object();
    private static bool _isInitialized;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Ensure ASP.NET Simple Membership is initialized only once per app start
        LazyInitializer.EnsureInitialized(ref _initializer, ref _isInitialized, ref _initializerLock);
    }

    private class SimpleMembershipInitializer
    {
        public SimpleMembershipInitializer()
        {
            try
            {
                WebSecurity.InitializeDatabaseConnection("DefaultDb", "User", "Id", "UserName", autoCreateTables: true);
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException("The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588", ex);
            }
        }
    }
}

We then decorate the AccountController with it as below:

[Authorize]
[InitializeSimpleMembership]
public class AccountController : Controller

In this case we also need to override ValidateUser so that it points at the existing User database table and validates against it.
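The original post shows the existing database model only as a screenshot. For reference, the User entity assumed by the mapping and the custom provider shown below would look roughly like this (a sketch; the exact columns and types of your legacy table may differ):

public class User
{
    public int Id { get; set; }            // registered with SimpleMembership as the user id column
    public string UserName { get; set; }   // registered as the user name column
    public string DisplayName { get; set; }
    public string Password { get; set; }   // stored as a hash, compared in ValidateUser below
    public string Email { get; set; }
}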
We have to add one more class like this:

public class CustomAdminMembershipProvider : SimpleMembershipProvider
{
    // TODO: will do a better way
    private const string SELECT_ALL_USER_SCRIPT = "select * from [dbo].[User] where UserName = '{0}'";

    private readonly IEncrypting _encryptor;
    private readonly SimpleSecurityContext _simpleSecurityContext;

    public CustomAdminMembershipProvider(SimpleSecurityContext simpleSecurityContext)
        : this(new Encryptor(), new SimpleSecurityContext("DefaultDb"))
    {
    }

    public CustomAdminMembershipProvider(IEncrypting encryptor, SimpleSecurityContext simpleSecurityContext)
    {
        _encryptor = encryptor;
        _simpleSecurityContext = simpleSecurityContext;
    }

    public override bool ValidateUser(string username, string password)
    {
        if (string.IsNullOrEmpty(username))
        {
            throw new ArgumentException("Argument cannot be null or empty", "username");
        }

        if (string.IsNullOrEmpty(password))
        {
            throw new ArgumentException("Argument cannot be null or empty", "password");
        }

        var hash = _encryptor.Encode(password);

        using (_simpleSecurityContext)
        {
            var users = _simpleSecurityContext.Users.SqlQuery(
                string.Format(SELECT_ALL_USER_SCRIPT, username));

            // no matching user means the login fails
            if (users == null || !users.Any())
            {
                return false;
            }

            return users.FirstOrDefault().Password == hash;
        }
    }
}

The SimpleSecurityContext looks like this:

public class SimpleSecurityContext : DbContext
{
    public DbSet<User> Users { get; set; }

    public SimpleSecurityContext(string connStringName) :
        base(connStringName)
    {
        this.Configuration.LazyLoadingEnabled = true;
        this.Configuration.ProxyCreationEnabled = false;
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder.Configurations.Add(new UserMapping());
    }
}

And the mapping for User is as below:

public class UserMapping : EntityMappingBase<User>
{
    public UserMapping()
    {
        this.Property(x => x.UserName);
        this.Property(x => x.DisplayName);
        this.Property(x => x.Password);
        this.Property(x => x.Email);
        this.ToTable("User");
    }
}

One important thing: you need to modify the web.config to point to our customized SimpleMembership provider:

<membership defaultProvider="AdminMemberProvider" userIsOnlineTimeWindow="15">
  <providers>
    <clear/>
    <add name="AdminMemberProvider" type="CIK.News.Web.Infras.Security.CustomAdminMembershipProvider, CIK.News.Web.Infras" />
  </providers>
</membership>
<roleManager enabled="false">
  <providers>
    <clear />
    <add name="AdminRoleProvider" type="CIK.News.Web.Infras.Security.AdminRoleProvider, CIK.News.Web.Infras" />
  </providers>
</roleManager>

The good thing here is that we don't need to modify the code in the AccountController at all. We only need to customize SimpleMembership and, if needed, SimpleRole.
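Note that the roleManager section above is disabled (enabled="false") and the AdminRoleProvider it references isn't shown in the original post. If you do enable role management, a skeleton along these lines is the likely shape of that class; this is purely a sketch, since the post doesn't show the role side of the legacy schema:

public class AdminRoleProvider : SimpleRoleProvider
{
    public override string[] GetRolesForUser(string username)
    {
        // Sketch only: query roles from the legacy database in whatever way
        // your schema dictates (e.g. a User-Role join table). Returning an
        // empty array simply means the user has no roles.
        using (var context = new SimpleSecurityContext("DefaultDb"))
        {
            var user = context.Users.FirstOrDefault(u => u.UserName == username);
            if (user == null)
                return new string[0];

            // TODO: replace with a real role lookup for your schema
            return new string[0];
        }
    }
}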
Now build the whole solution and run it. We should see a screen like this. If I click the Twitter login button at the bottom of the page, we are transferred to the Twitter authentication page. You have to wait for a moment, and afterwards it transfers you back to your admin screen. You can find all the source code in my MSDN Code sample. I will be really happy if you feel free to leave some comments below; that will help me improve this code in the future. Thanks for reading.

    Read the article

  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
    The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements. Doing Page Layouts without Tables Back in the 1990’s, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this: <table height="100%"> <tr> <td valign="top" width="300px" bgcolor="red"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </td> <td valign="top" bgcolor="green"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </td> <td valign="top" width="300px" bgcolor="blue"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </td> </tr> </table> When the table above gets rendered out to a browser, you end up with the following three-column layout: The width of the left and right columns is fixed – the width of the middle column expands or contracts depending on the width of the browser. Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout — it was collectively decided by the spirit of the Web — you should use Cascading Style Sheets instead. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (The Web Content Accessibility Guidelines 1.0 is explicit about this) and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post 2005, anyone who used HTML tables for layout were encouraged to hold their heads down in shame. That’s all well and good, but the problem with using CSS for layout is that it can be more difficult to work with CSS than HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here’s a three-column layout with floats: <style type="text/css"> #container { min-width: 800px; } #leftColumn { float: left; width: 300px; height: 100%; background-color:red; } #middleColumn { background-color:green; height: 100%; } #rightColumn { float: right; width: 300px; height: 100%; background-color:blue; } </style> <div id="container"> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> </div> The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right. 
Notice that the rightColumn DIV appears in the page before the middleColumn DIV – this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem). The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome: And the layout mostly works with Internet Explorer 9 except for the fact that for some strange reason the min-width doesn’t work so when you shrink the width of your browser, you can get the following unwanted layout: Notice how the middle column (the green column) bleeds to the left and right. People have solved these issues with more complicated CSS. For example, see: http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS and we know that we could achieve the same layout more easily using HTML tables. Using CSS Grid Layout CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a “Working Draft” state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/ The CSS Grid Layout standard is only supported by Internet Explorer 10 and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript. Here’s how you can create a standard three-column layout using a CSS Grid Layout: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } </style> </head> <body> <div id="container"> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> </div> </body> </html> When the page above is rendered in Internet Explorer 10, you get a standard three-column layout: The page above contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to Grid display mode with the following CSS rule: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } The display property is set to the value “-ms-grid”. This property causes the container DIV to lay out its child elements in a grid. (Notice that you use “-ms-grid” instead of “grid”. The “-ms-“ prefix is used because the CSS Grid Layout standard is still preliminary. 
This implementation only works with IE10 and it might change before the final release.) The grid columns and rows are defined with the “-ms-grid-columns” and “-ms-grid-rows” properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed sized at 300 pixels. The middle column sizes automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules: #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } The “-ms-grid-column” property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows and then you position different elements within these columns and rows. Very straightforward. Creating Multiple Columns and Rows In the previous section, we created a super simple three-column layout. This layout contained only a single row. In this section, let’s create a slightly more complicated layout which contains more than one row: The following page contains a header row, a content row, and a footer row. The content row contains three columns: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color:red; } #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color:blue; } #footer { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 3; background-color: orange; } </style> </head> <body> <div id="container"> <div id="header"> Header, Header, Header </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="footer"> Footer, Footer, Footer </div> </div> </body> </html> In the page above, the grid layout is created with the following rule which creates a grid with three rows and three columns: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } The header is created with the following rule: #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } The header is positioned in column 1 and row 1. Furthermore, notice that the “-ms-grid-column-span” property is used to span the header across three columns. CSS Grid Layout and Fractional Units When you use CSS Grid Layout, you can take advantage of fractional units. 
Fractional units provide you with an easy way of dividing up remaining space in a page. Imagine, for example, that you want to create a three-column page layout. You want the size of the first column to be fixed at 200 pixels and you want to divide the remaining space among the remaining three columns. The width of the second column is equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths: #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts. Notice that the single row is set to a height of 1fr. The single grid row gobbles up the entire vertical space. Here’s the entire HTML page: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } #firstColumn { -ms-grid-column: 1; background-color:red; } #secondColumn { -ms-grid-column: 2; background-color:green; } #thirdColumn { -ms-grid-column: 3; background-color:blue; } #fourthColumn { -ms-grid-column: 4; background-color:orange; } </style> </head> <body> <div id="container"> <div id="firstColumn"> First Column, First Column, First Column </div> <div id="secondColumn"> Second Column, Second Column, Second Column </div> <div id="thirdColumn"> Third Column, Third Column, Third Column </div> <div id="fourthColumn"> Fourth Column, Fourth Column, Fourth Column </div> </div> </body> </html>   Summary There is more in the CSS 3 Grid Layout standard than discussed in this blog post. My goal was to describe the basics. If you want to learn more than you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/ In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.
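To make the fractional units described above concrete, here is a quick worked example (the container width is an assumed number, not something from the article): with -ms-grid-columns: 200px 2fr 1fr 1fr on a container that is 1000 pixels wide, the fixed column takes 200px, leaving 800px to split across 2fr + 1fr + 1fr = 4 fraction units. One fraction unit is therefore 800px / 4 = 200px, and the four columns come out at 200px, 400px, 200px and 200px.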

    Read the article

  • ASP.NET TextBox TextChanged event not firing in custom EditorPart

    - by Ben Collins
    This is a classic sort of question, I suppose, but it seems that most people are interested in having the textbox cause a postback. I'm not. I just want the event to fire when a postback occurs. I have created a webpart with a custom editorpart. The editorpart renders with a textbox and a button. Clicking the button causes a dialog to open. When the dialog is closed, it sets the value of the textbox via javascript and then does __doPostBack using the ClientID of the editorpart. The postback happens, but the TextChanged event never fires, and I'm not sure if it's a problem with the way __doPostBack is invoked, or if it's because of the way I'm setting up the event handler, or something else. Here's what I think is the relevant portion of the code from the editorpart: protected override void CreateChildControls() { _txtListUrl = new TextBox(); _txtListUrl.ID = "targetSPList"; _txtListUrl.Style.Add(HtmlTextWriterStyle.Width, "60%"); _txtListUrl.ToolTip = "Select List"; _txtListUrl.CssClass = "ms-input"; _txtListUrl.Attributes.Add("readOnly", "true"); _txtListUrl.Attributes.Add("onChange", "__doPostBack('" + this.ClientID + "', '');"); _txtListUrl.Text = this.ListString; _btnListPicker = new HtmlInputButton(); _btnListPicker.Style.Add(HtmlTextWriterStyle.Width, "60%"); _btnListPicker.Attributes.Add("Title", "Select List"); _btnListPicker.ID = "browseListsSmtButton"; _btnListPicker.Attributes.Add("onClick", "mso_launchListSmtPicker()"); _btnListPicker.Value = "Select List"; this.AddConfigurationOption("News List", "Choose the list that serves as the data source.", new Control[] { _txtListUrl, _btnListPicker }); if (this.ShowViewSelection) { _txtListUrl.TextChanged += new EventHandler(_txtListUrl_TextChanged); _ddlViews = new DropDownList(); _ddlViews.ID = "_ddlViews"; this.AddConfigurationOption("View", _ddlViews); } } protected override void OnPreRender(EventArgs e) { ScriptLink.Register(this.Page, "PickerTreeDialog.js", true); string lastSelectedListId = string.Empty; if (!this.WebId.Equals(Guid.Empty) && !this.ListId.Equals(Guid.Empty)) { lastSelectedListId = SPHttpUtility.EcmaScriptStringLiteralEncode( string.Format("SPList:{0}?SPWeb:{1}:", this.ListId.ToString(), this.WebId.ToString())); } string script = "\r\n var lastSelectedListSmtPickerId = '" + lastSelectedListId + "';" + "\r\n function mso_launchListSmtPicker(){" + "\r\n if (!document.getElementById) return;" + "\r\n" + "\r\n var listTextBox = document.getElementById('" + SPHttpUtility.EcmaScriptStringLiteralEncode(_txtListUrl.ClientID) + "');" + "\r\n if (listTextBox == null) return;" + "\r\n" + "\r\n var serverUrl = '" + SPHttpUtility.EcmaScriptStringLiteralEncode(SPContext.Current.Web.ServerRelativeUrl) + "';" + "\r\n" + "\r\n var callback = function(results) {" + "\r\n if (results == null || results[1] == null || results[2] == null) return;" + "\r\n" + "\r\n lastSelectedListSmtPickerId = results[0];" + "\r\n var listUrl = '';" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[1].charAt(0) == '/') results[1] = results[1].substring(1);" + "\r\n listUrl = listUrl + results[1];" + "\r\n if (listUrl.substring(listUrl.length-1) != '/') listUrl = listUrl + '/';" + "\r\n if (results[2].charAt(0) == '/') results[2] = results[2].substring(1);" + "\r\n listUrl = listUrl + results[2];" + "\r\n listTextBox.value = listUrl;" + "\r\n __doPostBack('" + this.ClientID + "','');" + "\r\n }" + "\r\n LaunchPickerTreeDialog('CbqPickerSelectListTitle','CbqPickerSelectListText','websLists','', 
serverUrl, lastSelectedListSmtPickerId,'','','/_layouts/images/smt_icon.gif','', callback);" + "\r\n }"; this.Page.ClientScript.RegisterClientScriptBlock(typeof(ListPickerEditorPart), "mso_launchListSmtPicker", script, true); if ((!string.IsNullOrEmpty(_txtListUrl.Text) && _ddlViews.Items.Count == 0) || _listSelectionChanged) { _ddlViews.Items.Clear(); if (!string.IsNullOrEmpty(_txtListUrl.Text)) { using (SPWeb web = SPContext.Current.Site.OpenWeb(this.WebId)) { foreach (SPView view in web.Lists[this.ListId].Views) { _ddlViews.Items.Add(new ListItem(view.Title, view.ID.ToString())); } } _ddlViews.Enabled = _ddlViews.Items.Count > 0; } else { _ddlViews.Enabled = false; } } base.OnPreRender(e); } void _txtListUrl_TextChanged(object sender, EventArgs e) { this.SetPropertiesFromChosenListString(_txtListUrl.Text); _listSelectionChanged = true; } Any ideas? Update: I forgot to mention these methods, which are called above: protected virtual void AddConfigurationOption(string title, Control inputControl) { this.AddConfigurationOption(title, null, inputControl); } protected virtual void AddConfigurationOption(string title, string description, Control inputControl) { this.AddConfigurationOption(title, description, new List<Control>(new Control[] { inputControl })); } protected virtual void AddConfigurationOption(string title, string description, IEnumerable<Control> inputControls) { HtmlGenericControl divSectionHead = new HtmlGenericControl("div"); divSectionHead.Attributes.Add("class", "UserSectionHead"); this.Controls.Add(divSectionHead); HtmlGenericControl labTitle = new HtmlGenericControl("label"); labTitle.InnerHtml = HttpUtility.HtmlEncode(title); divSectionHead.Controls.Add(labTitle); HtmlGenericControl divUserSectionBody = new HtmlGenericControl("div"); divUserSectionBody.Attributes.Add("class", "UserSectionBody"); this.Controls.Add(divUserSectionBody); HtmlGenericControl divUserControlGroup = new HtmlGenericControl("div"); divUserControlGroup.Attributes.Add("class", "UserControlGroup"); divUserSectionBody.Controls.Add(divUserControlGroup); if (!string.IsNullOrEmpty(description)) { HtmlGenericControl spnDescription = new HtmlGenericControl("div"); spnDescription.InnerHtml = HttpUtility.HtmlEncode(description); divUserControlGroup.Controls.Add(spnDescription); } foreach (Control inputControl in inputControls) { divUserControlGroup.Controls.Add(inputControl); } this.Controls.Add(divUserControlGroup); HtmlGenericControl divUserDottedLine = new HtmlGenericControl("div"); divUserDottedLine.Attributes.Add("class", "UserDottedLine"); divUserDottedLine.Style.Add(HtmlTextWriterStyle.Width, "100%"); this.Controls.Add(divUserDottedLine); }
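This isn't an answer from the original thread, but since the question specifically wonders whether __doPostBack is being invoked correctly, here is the conventional wiring for comparison. It is a sketch of a variation, not a confirmed fix for this exact EditorPart: __EVENTTARGET is matched against a control's UniqueID rather than the EditorPart's ClientID, and TextChanged is raised from LoadPostData when the posted value differs from the value round-tripped in view state, so the handler also needs to be attached on every request.

// Hypothetical variation inside CreateChildControls, for illustration only
protected override void CreateChildControls()
{
    _txtListUrl = new TextBox();
    _txtListUrl.ID = "targetSPList";

    // Add the control to the tree before generating the postback reference
    // so that its UniqueID is final.
    this.Controls.Add(_txtListUrl);

    // Target the text box itself; __EVENTTARGET expects a UniqueID.
    _txtListUrl.Attributes.Add("onchange",
        this.Page.ClientScript.GetPostBackEventReference(_txtListUrl, string.Empty));

    // Wire the handler unconditionally so it exists on the postback
    // that carries the changed value.
    _txtListUrl.TextChanged += _txtListUrl_TextChanged;
}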

    Read the article

  • CodePlex Daily Summary for Thursday, March 18, 2010

    CodePlex Daily Summary for Thursday, March 18, 2010New ProjectsBordecal tools for FxCop: Bordecal tools for FxCop provides an extended framework for FxCop rule development. It allows rule developers to avoid using embedded XML resource...DotNetNuke® Skin City: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by allsnnskins. We integrate orange color and black colour in this ...DotNetNuke® Skin Dawn: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by allsnnskins. This design reflects the theme of daylight. U...DotNetNuke® Skin Dream: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by WhNuke. Uses the DNNJDMenu skin object.DotNetNuke® Skin Expression: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Salar Golestanian of SalarO. This is a pure CSS skin with ...DotNetNuke® Skin ModernBiz: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by allsnnskins. This simple and unaffected company skin uses...DotNetNuke® Skin Profound: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by WhNuke Technology. This skin is simple and clean and the ...DotNetNuke® Skin Technology: A DotNetNuke Design Challenge skin package submitted to the "Modern Standards" category by allsnnskins. It's compatible with common browsers such ...DotNetNuke® Skin Unravel: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Salar Golestanian of SalarO. This is a pure CSS skin wit...E! - ECMAScript Runtime Environment: E! (pronounced E-Bang) is a lightweight runtime environment for editing basic ECMAScript scripts with access to .NET Framework class libraries.Easy ArcGIS Library: Easy ArcGIS Library is a set of C# .net classes that wrap the common functionality of ArcObjects, that help ArcGIS developers do a lot of common fu...File Categorizer: The File Categorizer will help people tag the files on their system for easy searching. Instead of keyword searches, you can find files based on v...GMFS Cosmos: This is a file system for Cosmos a OS that was built with C# and we will be implementing this for windows and linuxIFilter Core Implementation (interface and structures): IFilter C# implementation for you to embed when writing Windows Search capabilities into your application.Image Wall Control for Silverlight: A control for Silverlight that emulates the wall of images in the Zune. imenik_za _dev4fun: imenik is a very simple program and easy to use where you can save and organise your contacts.LegoPhysX: LegoPhysX is an atomic based physics enginePersonal Accounting: Personal system for managing financial accounts, which supports multiple accounts in different currencies. It has movement imputation and basic que...Pipes & Filters Engine: The Pipes & Filters Engine allows you to process a sequence of separate operations (filters) asynchronously in a multi-threaded manner. Filters wil...Prerequisites Checker: Check preqrequisites for software. Example: Software S1 is delivered. S1 has prerequisites PR1, PR2... PRN You may load the config file for S...Puzzle Lib: A library for creating grid-and-tile puzzles. Includes two separate UIs for the Tetriminoes puzzle as examples.QuotesPlugin for Windows Live Writer: The QuotesPlugin for Windows Live Writer lists quotes from web sites such as quotes4all.net. 
It's very easy for you to select your favourite ones a...RobiJ2se: Robi j2se Learning!SkinEngine: This is a Skin Framework for C# Winform, It use easy.and Create Skin GreatSQL Azure .NET Connection: This is a demo application that shows how to connect with SQL AzureSupermarket Soft: WPF Application that helps you manage your supermarket shoppings.Tally Marks for Windows Phone 7 Series: Tally Marks is a counting application. It can count almost anything you'd like to count, and it does it with tally marks! Count the number of peo...TwitCast: TwitCast is a simple notifier for Twitter using the [url:http://linqtotwitter.codeplex.com/] LINQ 2 Twitter library.WodnySwiat: Projekt grupowy wodny światWSS Task Manager Activity: A custom task creation activity that can be used in a sequential or state machine workflow. The activity was specifically developed to handle task ...New ReleasesAddress Book: Address Book: Address BookAutoAudit: AutoAudit 1.10c: Veresion 1.10 includes most of the bug fix requests. adds createdby and modifiedby columns to the audited base tables. If the user name is set by...blog for umbraco 4: Blog 4 Umbraco 2.0.26: Fixes: -Regex bug in base -Directory urls and rss link bug -Open reader bug -Rss bugDotNetNuke® Skin City: City Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by allsnnskins. We integrate orange color and black colour in this ...DotNetNuke® Skin Dawn: Dawn Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by allsnnskins. This design reflects the theme of daylight. U...DotNetNuke® Skin Dream: Dream Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by WhNuke. Uses the DNNJDMenu skin object.DotNetNuke® Skin Expression: Expression Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Salar Golestanian of SalarO. This is a pure CSS skin with ...DotNetNuke® Skin ModernBiz: ModernBiz Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by allsnnskins. This simple and unaffected company skin uses...DotNetNuke® Skin Profound: Profound Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by WhNuke Technology. This skin is simple and clean and the ...DotNetNuke® Skin Technology: Technology Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Standards" category by allsnnskins. It's compatible with common browsers such a...DotNetNuke® Skin Unravel: Unravel Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Salar Golestanian of SalarO. This is a pure CSS skin with...E! - ECMAScript Runtime Environment: E! beta 1: This is really meant as a learning project for playing with dynamically compiled code, so you'd be better off getting the source code.Easy ArcGIS Library: EAGL Binaries: Easy ArcGIS Library Last Build (Version 1.1.2.4139)Easy ArcGIS Library: EAGL Binaries And Documentation: EAGL Latest Build With DocumentationEasy ArcGIS Library: EAGL Documentation: EAGL 1.1.2.4139 DocumentationEnterprise Library Extensions: Release 1.1: This is a service release for version 1.0 The installation process now works as intended. 
The assemblies are now visible in the Visual Studio As...Family Tree Analyzer: Version 1.2.1.0: Version 1.2.1.0 Fixed GB radio button not working renamed UK Added fixes for UK regions/shires/counties where country is missing Add country reco...Family Tree Analyzer: Version 1.3.0.0: Version 1.3.0.0 Added IGI Search results viewer Tweaked filenames of IGI search so that results window has more informative displayFile Archive: File Archive: If your computer is only word processing machine or document merge machine, this program is really fit for you. It's so...o useful! This program ar...GameStore League Manager: League Manager 1.0.4: Fixes bug 7434. Changed version number to the standard format of Major.Minor.ReleaseIFilter Core Implementation (interface and structures): Stable release: First release of interface implementation.IFilter Core Implementation (interface and structures): System.Search.Core: Ifilter interface for implementation in your own Search Providers.imenik_za _dev4fun: imenik_aplikacija: imenik aplikacija is an application easy to use where you can save and organise your contacts.KDRE - kernel debugger regular expression extension: KDRE 0.0.2: KDRE - Windbg regexp extension Changes: - amd64 build addedMapWindow6: MapWindow 6.0 msi (March 17): This release introduces some minor tweaks to the source code exposing more buffering functionality. This also fixes a problem with selecting point...MockingBird: MockingBird_2.0_RC: This is the V2.0 RC release. The documentation includes notes about the WCF components. Check this blog post for more details about the release. ...MPF for Projects - Visual Studio 2010: Visual Studio 2010 - Final Release: This contains the source code for the release of MPF for Projects corresponding to Visual Studio 2010. For Beta 2, you will need the Beta 2 release...Physics Helper for Silverlight, WPF, Blend, and Farseer: PhysicsHelper 3.0.0.4 ALPHA: This is an initial release that supports Windows Phone 7 Series Development, along with the Silverlight 3 and WPF support. It requires Visual Studi...Pocket GPW: Pocket GPW 1.2: Modyfikacje wg. change set-a 56678. Poprzednia baza danych (z wersji 1.1) jest zgodna z aktualną. Przed instalacją skopiuj poprzednią bazę danych ...Prerequisites Checker: Prerequisites Checker: Check your software prerequisitesPuzzle Lib: Puzzle Lib examples: Tetriminoes examples using a common Puzzle LIB and common Puzzle Implementation library, demonstrating a basic MVC architecture for game developmentRoTwee: RoTwee (8.0.7.0): Now you can rotate tweets by your hand !SharePoint Icon Integration: SharePoint Icon Integration PDF: This is the first stable release of the SPIconIntegration. To install the PDF Icon integration just start the setup.exe file that you will find in ...SkinEngine: SkinEngine-Src-2010-03-17: this is a release on 2010-03-17Spell Corrector: Spell Corrector 0.2 Binary: Fixed a bug in the word indexing in the database.Spell Corrector: Spell Corrector 0.2 Code: Fixed a bug in the indexing of the words in the database. Now insertion of new words in the database is faster.SQL Azure .NET Connection: LittleBlackBook.NET Release 1.0: This was a demo project for a SQL Azure Presentation at ConfooSQL Server Extended Properties Quick Editor: New release 1.5.5: Whats new: Move preferences to application settings and add a form to edit preferences. 
Support to add, modify and delete operations could be made ...SuperModel - A Dynamic View-Model Generator: 1.0.0.1 - Tyra+: Resolving a couple of bugs; models generated using INotifyPropertyChanged were not being created correctly. Property resolution on proxied types w...Survey - web survey & form engine: Survey 1.2.0: The Survey 1.2.0 release is based on the original sources of the Nsurvey 1.9 application. Compared to the Survey 1.1.0 version many new features ...T.S.T. the T-SQL Test Tool: Version 1.5: Version 1.5 changes: Bug fix. In V1.4 and earlier table comparison failed if the tables compared had columns with spaces in them.TwitCast: TwitCast 1.0.0.0: First release of TwitCast. Be warned that this is just a development release and there are a lot of things that remain to be done.unbinder: Unbound.dll: from change set ef6f2303dd32VCC: Latest build, v2.1.30317.0: Automatic drop of latest buildWatchersNET.TagCloud: WatchersNET.TagCloud 01.02.00: Whats New Show only Tags from Pages the Current User has View Acess (As Option) A Url can be specified for a Custom tag Added Module Package fo...WSS Task Manager Activity: 1.0: Download either the source for Moss Task Manager Activity, Workflow sample if you are interested to see how to use the activity in the workflow or ...XML pretty print for python (xmlpp): version 0.92b: Fixes issues when element name contains :Xpress - ASP.NET MVC 个人博客程序: xpress2.1.1.0317.beta: 最新beta版 更改内容: 模板与系统所需配置文件移动到App_Data中 Service对象注入到Controller中 Controller对象放入IOC容器中 邮件发送BUG修正Most Popular ProjectsMetaSharpRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsLINQ to TwitterRawrOData SDK for PHPDirectQOpen Data App Framework (ODAF)patterns & practices – Enterprise LibraryBlogEngine.NETjQuery Library for SharePoint Web ServicesMapWindow6NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of new great functionality that unifies many of the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX, WCF REST specifically - and combines them into a whole more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But  RPC style calls that are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns' RPC calls tend simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences especially if you're coming from ASP.NET AJAX service or WCF Rest when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that. 
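For comparison with the action routing shown next, a verb-routed controller of the kind described above looks roughly like the following sketch. The controller and method names are illustrative rather than taken from the article, and it reuses the same Album type as the rest of the post; with the default api/{controller}/{id} route, the HTTP verb plus the method name prefix selects the action.

using System.Collections.Generic;
using System.Web.Http;

public class AlbumsController : ApiController
{
    // GET /api/albums
    public IEnumerable<Album> GetAlbums()
    {
        return new[] { new Album { AlbumName = "Dirty Deeds" } };
    }

    // POST /api/albums
    public string PostAlbum(Album album)
    {
        return album.AlbumName;
    }
}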
Thankfully Web API also supports Action based routing which allows you create RPC style endpoints fairly easily:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAblums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like above lets you specify an end point method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for single parameter like this:[HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server which the ModelBinder then maps the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's  ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:$.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and it's straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that WebAPI automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value and the Formatter and ModelBinder both automatically map the date propertly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model. 
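For reference, the Album and User types used throughout these snippets aren't shown in the post; judging from the JSON payloads above they presumably look something like this minimal sketch (property types are inferred, not confirmed):

public class Album
{
    public string AlbumName { get; set; }
    public DateTime Entered { get; set; }
}

public class User
{
    public string Name { get; set; }
}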
But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:[HttpPost] public string PostAlbum(Album album, string userToken) Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work: Use both POST *and* QueryString Parameters in Conjunction If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types. Use a single Object that wraps the two Parameters If you go by service based architecture guidelines, every service method should always accept and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:public PostAlbumResponse PostAlbum(PostAlbumRequest request) { var album = request.Album; var userToken = request.UserToken; return new PostAlbumResponse() { IsSuccess = true, Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken) }; } with these support types:public class PostAlbumRequest { public Album Album { get; set; } public User User { get; set; } public string UserToken { get; set; } } public class PostAlbumResponse { public string Result { get; set; } public bool IsSuccess { get; set; } public string ErrorMessage { get; set; } } To call this method you now have to assemble these objects on the client and send them up as JSON:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result.Result); } }); I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually can use. Use JObject to parse multiple Property Values out of an Object If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 } WCF REST/ASP.NET AJAX would then parse this top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for it's JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then using the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:[HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jquery code:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result); } }); Summary ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things like passing multiple strongly typed object parameters will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API. 
At least there are options available to make it work.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line of business applications.  Getting things set up correctly is critical to being to being able to take full advantage of the RIA services plumbing and when developers struggle with the setup they tend to shy away from the solution as a whole.  I’m a big proponent of RIA services and wanted to take the opportunity to share some of my experiences in setting up these types of projects.  In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow) and the information shared in this post was promised during that presentation. One other thing I want to mention before diving in is the existence of a number of other great posts on this subject.  I’ve learned a lot from many of them and wanted to call out a few of them.  The purpose of my post is to point out some of the gotchas that people get caught up on in the process but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject: http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx Technologies I don’t intend for this post to turn into a full WCF RIA Services tutorial but I did want to point out what technologies we will be using: Visual Studio.NET 2010 Silverlight 4.0 WCF RIA Services for Visual Studio 2010 Entity Framework 4.0 I also wanted to point out that the screenshots came from my personal development box which has a number of additional plug-ins and frameworks loaded so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express.  The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks but the concepts are 100% identical. Setting up anew RIA Services Project This section will provide a step-by-step walkthrough of setting up a new RIA services project using a shared DLL for server side code and a simple Entity Framework model for data access.  All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace.  This would be modified to match your companies standards. First, open Visual Studio and open the new project window via File->New->Project.  In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select “Silverlight Application” as your project type.  Verify your solution name and location are set appropriately.  Note that the project name we specified in the example below ends with .Client.  This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that.  Click Ok to continue. During the creation on a new Silverlight 4 project you will be prompted with the following dialog to create a new web ASP.NET web project to host your Silverlight content.  As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the “Enable WCF RIA Services” option is checked and click OK.  
Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around.  For our example we are going to leave the ASP.NET Web Application Project selected.  If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well.  Also, whichever web project type you select, the name can be modified here as well.  Note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.

At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows:

Now we want to add our WCF RIA Services projects to this same solution.  To do so, right-click on the Solution node in the solution explorer and select Add->New Project.  In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below.  Make sure your project name is set appropriately as well.  For the sample below, we will name the project “ArchitectNow.RIAServices.Server.Entities”.  The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below).  Click OK to continue.

Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution.  The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension.  The full solution (with all 4 projects) is shown in the image below.

The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects.  It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server.  The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework).  This is our server side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end.  Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.

At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the “Class1.cs” class from both the .Entities project and the .Entities.Web project.  (Has anyone ever intentionally named a class “Class1”?)

Next, we need to configure a few references to make RIA Services work.  THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE!  Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library).  Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select “Add Reference” from the resulting context menu.  You will want to make sure these references are added as “Project” references to simplify your future debugging.  To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web.  If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library.

Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web).  We will do this by right-clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item.  In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram.  For now we will call this simply SampleDataModel.edmx and click OK.

It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post).  We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine.  The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store.  If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.

At this point, once you have an EF data model generated as an EDMX into your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION.  I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful.  Obviously, if you have any build errors, these must be addressed at this point.

Next, we will add a RIA Services Domain Service to our .Entities.Web project (our server side code).  We will need to right-click on the .Entities.Web project and select Add->New Item.  In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below).  Next, click “Add”.

After clicking “Add” to include the Domain Service Class in the selected project, you will be presented with the following dialog.  In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service.  If the “Available DataContext/ObjectContext classes” dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX.  I would also recommend verifying that the “Generate associated classes for metadata” option is selected.  Once you have selected the appropriate options, click “OK”.
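To give a rough feel for what ends up in the new files before we look at the resulting solution, here is a minimal, hypothetical sketch of a domain service and its metadata “buddy” class.  The entity (Customer), its LastName property, and the SampleDataModelEntities object context are placeholder names assumed purely for illustration, since the post does not name the two reverse-engineered tables; the actual classes the wizard generates will be built around whatever entities you selected in the dialog above.

```csharp
// Hypothetical sketch only - the names below are placeholders, not from the walkthrough.
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

namespace ArchitectNow.RIAServices.Server.Entities.Web
{
    // SampleService.cs - the domain service exposed to the Silverlight client.
    [EnableClientAccess]
    public class SampleService : LinqToEntitiesDomainService<SampleDataModelEntities>
    {
        // Query method; it surfaces on the generated client-side domain context.
        public IQueryable<Customer> GetCustomers()
        {
            return this.ObjectContext.Customers.OrderBy(c => c.LastName);
        }

        // Insert/update/delete methods appear when editing is enabled in the dialog.
        // The wizard-generated versions also handle attached vs. detached entities;
        // this shows only the simplified idea.
        public void InsertCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AddObject(customer);
        }
    }

    // SampleService.metadata.cs - a "buddy" class used to decorate the generated
    // entity with validation attributes from System.ComponentModel.DataAnnotations.
    [MetadataType(typeof(Customer.CustomerMetadata))]
    public partial class Customer
    {
        internal sealed class CustomerMetadata
        {
            private CustomerMetadata() { }

            [Required]
            [StringLength(50)]
            public string LastName { get; set; }
        }
    }
}
```

When the solution is rebuilt, the RIA Services link emits a corresponding domain context and entity proxies into the .Entities Silverlight class library, which is exactly why the .Client project needed the reference we added earlier.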
Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following:

Note that in the solution you now have a SampleDataModel.edmx, which represents your EF data mapping to your database, and a SampleService.cs, which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front-end.  You will put all your server side data access code and logic into the SampleService.cs class.  The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.

FINAL AND KEY CONFIGURATION STEP!  One key step that causes significant headache to developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project.  While we didn’t point it out at the time, you can see it in the image above.  This connection string will be required for the EF data model to successfully locate its data.  Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.

Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA services calls through the ASP.NET host project (i.e. .Client.Web).  This host project has a reference to the .Entities.Web project which actually contains the code, so all will pass through correctly EXCEPT for the fact that the host project will utilize its own Web.Config for any configuration settings.  For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project.  I know this is a bit tedious, and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front end Silverlight project.  Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config.  Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged.  Fortunately, this is a step that needs to be taken only once per solution.  As you add additional data structures and Domain Services methods to the server, no additional changes will be necessary to the Web.Config.

Next Steps

At this point, we have walked through the basic setup of a simple RIA services solution.  Unfortunately, there is still a lot to know about RIA services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven’t even made a single RIA services call).  I plan on posting a few more introductory posts over the next few weeks to take us to this step.  If you have any questions on the content in this post, feel free to reach out to me via this blog and I’ll gladly point you in (hopefully) the right direction.

Resources

Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA services.  While I plan on posting more on the subject, I didn’t invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.
The books and online resources below will go a long way to making you extremely productive with RIA services in the shortest time possible.  The only thing required of you is the dedication to take advantage of the resources available.

Books

Pro Business Applications with Silverlight 4
http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2

Silverlight 4 in Action
http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1

Pro Silverlight for the Enterprise (Books for Professionals by Professionals)
http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

Web Content

RIA Services
http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
http://silverlight.net/riaservices/
http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx

Silverlight
www.silverlight.net
http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • jQuery, ASP.NET, and Browser History

    - by Stephen Walther
    One objection that people always raise against Ajax applications concerns browser history.  Because an Ajax application updates its content by performing sneaky Ajax postbacks, the browser backwards and forwards buttons don’t work as you would normally expect.  In a normal, non-Ajax application, when you click the browser back button, you return to a previous state of the application.  For example, if you are paging through a set of movie records, you might return to the previous page of records.  In an Ajax application, on the other hand, the browser backwards and forwards buttons do not work as you would expect.  If you navigate to the second page in a list of records and click the backwards button, you won’t return to the previous page.  Most likely, you will end up navigating away from the application entirely (which is very unexpected and irritating).  Bookmarking presents a similar problem.  You cannot bookmark a particular page of records in an Ajax application because the address bar does not reflect the state of the application.

The Ajax Solution

There is a solution to both of these problems: you must take matters into your own hands and take responsibility for saving and restoring your application state yourself.  Furthermore, you must ensure that the address bar gets updated to reflect the state of your application.  In this blog entry, I demonstrate how you can take advantage of a jQuery library named bbq that enables you to control browser history (and make your Ajax application bookmarkable) in a cross-browser compatible way.

The JavaScript Libraries

In this blog entry, I take advantage of the following four JavaScript files:

jQuery-1.4.2.js – The jQuery library.  Available from the Microsoft Ajax CDN at http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js
jquery.pager.js – Used to generate the pager for navigating records.  Available from http://plugins.jquery.com/project/Pager
microtemplates.js – John Resig’s micro-templating library.  Available from http://ejohn.org/blog/javascript-micro-templating/
jquery.ba-bbq.js – The Back Button and Query (BBQ) Library.  Available from http://benalman.com/projects/jquery-bbq-plugin/

All of these libraries, with the exception of the Micro-templating library, are available under the MIT open-source license.

The Ajax Application

Let’s start by building a simple Ajax application that enables you to page through a set of movie database records, 3 records at a time.  We’ll use my favorite database named MoviesDB.  This database contains a Movies table that looks like this:

We’ll create a data model for this database by taking advantage of the ADO.NET Entity Framework.  The data model looks like this:

Finally, we’ll expose the data to the universe with the help of a WCF Data Service named MovieService.svc.  The code for the data service is contained in Listing 1.
Listing 1 – MovieService.svc

using System.Data.Services;
using System.Data.Services.Common;

namespace WebApplication1
{
    public class MovieService : DataService<MoviesDBEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Movies", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

The WCF Data Service in Listing 1 exposes the movies so that you can query the movie database table with URLs that look like this:

http://localhost:2474/MovieService.svc/Movies -- Returns all movies
http://localhost:2474/MovieService.svc/Movies?$top=5 – Returns 5 movies

The HTML page in Listing 2 enables you to page through the set of movies retrieved from the WCF Data Service.

Listing 2 – Original.html

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Movies with History</title>
    <link href="Design/Pager.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1>
    <div id="pager"></div>
    <br style="clear:both" /><br />
    <div id="moviesContainer"></div>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
    <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script>
    <script type="text/javascript">
        var pageSize = 3, pageIndex = 0;

        // Show initial page of movies
        showMovies();

        function showMovies() {
            // Build OData query
            var query = "/MovieService.svc" // base URL
                + "/Movies" // top-level resource
                + "?$skip=" + pageIndex * pageSize // skip records
                + "&$top=" + pageSize // take records
                + " &$inlinecount=allpages"; // include total count of movies

            // Make call to WCF Data Service
            $.ajax({
                dataType: "json",
                url: query,
                success: showMoviesComplete
            });
        }

        function showMoviesComplete(result) {
            // unwrap results
            var movies = result["d"]["results"];
            var movieCount = result["d"]["__count"]

            // Show movies using template
            var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>");
            var html = "";
            for (var i = 0; i < movies.length; i++) {
                html += showMovie(movies[i]);
            }
            $("#moviesContainer").html(html);

            // show pager
            $("#pager").pager({
                pagenumber: (pageIndex + 1),
                pagecount: Math.ceil(movieCount / pageSize),
                buttonClickCallback: selectPage
            });

            // Update page number and page count
            $("#pageNumber").text(pageIndex + 1);
            $("#pageCount").text(movieCount);
        }

        function selectPage(pageNumber) {
            pageIndex = pageNumber - 1;
            showMovies();
        }
    </script>
</body>
</html>

The page in Listing 2 has the following three functions:

showMovies() – Performs an Ajax call against the WCF Data Service to retrieve a page of movies.
showMoviesComplete() – When the Ajax call completes successfully, this function displays the movies by using a template. This function also renders the pager user interface.
selectPage() – When you select a particular page by clicking on a page number in the pager UI, this function updates the current page index and calls the showMovies() function.

Figure 1 illustrates what the page looks like when it is opened in a browser.

Figure 1

If you click the page numbers then the browser history is not updated. Clicking the browser forward and backwards buttons won’t move you back and forth in browser history.
Furthermore, the address displayed in the address bar does not change when you navigate to different pages. You cannot bookmark any page except for the first page.

Adding Browser History

The Back Button and Query (bbq) library enables you to add support for browser history and bookmarking to a jQuery application. The bbq library supports two important methods:

jQuery.bbq.pushState(object) – Adds state to browser history.
jQuery.bbq.getState(key) – Gets state from browser history.

The bbq library also supports one important event:

hashchange – This event is raised when the part of an address after the hash # is changed.

The page in Listing 3 demonstrates how to use the bbq library to add support for browser navigation and bookmarking to an Ajax page.

Listing 3 – Default.html

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Movies with History</title>
    <link href="Design/Pager.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1>
    <div id="pager"></div>
    <br style="clear:both" /><br />
    <div id="moviesContainer"></div>
    <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="App_Scripts/jquery.ba-bbq.js" type="text/javascript"></script>
    <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script>
    <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script>
    <script type="text/javascript">
        var pageSize = 3, pageIndex = 0;

        $(window).bind('hashchange', function (e) {
            pageIndex = e.getState("pageIndex") || 0;
            pageIndex = parseInt(pageIndex);
            showMovies();
        });

        $(window).trigger('hashchange');

        function showMovies() {
            // Build OData query
            var query = "/MovieService.svc" // base URL
                + "/Movies" // top-level resource
                + "?$skip=" + pageIndex * pageSize // skip records
                + "&$top=" + pageSize // take records
                + " &$inlinecount=allpages"; // include total count of movies

            // Make call to WCF Data Service
            $.ajax({
                dataType: "json",
                url: query,
                success: showMoviesComplete
            });
        }

        function showMoviesComplete(result) {
            // unwrap results
            var movies = result["d"]["results"];
            var movieCount = result["d"]["__count"]

            // Show movies using template
            var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>");
            var html = "";
            for (var i = 0; i < movies.length; i++) {
                html += showMovie(movies[i]);
            }
            $("#moviesContainer").html(html);

            // show pager
            $("#pager").pager({
                pagenumber: (pageIndex + 1),
                pagecount: Math.ceil(movieCount / pageSize),
                buttonClickCallback: selectPage
            });

            // Update page number and page count
            $("#pageNumber").text(pageIndex + 1);
            $("#pageCount").text(movieCount);
        }

        function selectPage(pageNumber) {
            pageIndex = pageNumber - 1;
            $.bbq.pushState({ pageIndex: pageIndex });
        }
    </script>
</body>
</html>

Notice the first chunk of JavaScript code in Listing 3:

$(window).bind('hashchange', function (e) {
    pageIndex = e.getState("pageIndex") || 0;
    pageIndex = parseInt(pageIndex);
    showMovies();
});

$(window).trigger('hashchange');

When the hashchange event occurs, the current pageIndex is retrieved by calling the e.getState() method. The value is returned as a string and the value is cast to an integer by calling the JavaScript parseInt() function. Next, the showMovies() method is called to display the page of movies.
The $(window).trigger() method is called to raise the hashchange event so that the initial page of records will be displayed.

When you click a page number, the selectPage() method is invoked. This method adds the current page index to the address by calling the following method:

$.bbq.pushState({ pageIndex: pageIndex });

For example, if you click on page number 2 then page index 1 is saved to the URL, and the browser address is updated to look like:

/Default.htm#pageIndex=1

If you click on page 3 then the browser address is updated to look like:

/Default.htm#pageIndex=2

Because the browser address is updated when you navigate to a new page number, the browser backwards and forwards buttons will work to navigate you backwards and forwards through the page numbers. When you click page 2, and click the backwards button, you will navigate back to page 1. Furthermore, you can bookmark a particular page of records. For example, if you bookmark the URL /Default.htm#pageIndex=1 then you will get the second page of records whenever you open the bookmark.

Summary

You should not avoid building Ajax applications because of worries concerning browser history or bookmarks. By taking advantage of a JavaScript library such as the bbq library, you can make your Ajax applications behave in exactly the same way as a normal web application.

    Read the article

  • CodePlex Daily Summary for Tuesday, January 11, 2011

    CodePlex Daily Summary for Tuesday, January 11, 2011Popular ReleasesArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...VSSpeedster - Parallel Builds for VS: VSSpeedster 1.2 (beta): - Improved Parallel Builds - Cancel running Parallel Build using Ctrl+BreakASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Multiple server ASP.NET Reverse Ajax: This sample project demonstrates how is easy to scale your web applications via PokeInHawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with latest PHP opensource applications and several issue fixes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger. Changes made within this release include following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...EnhSim: EnhSim 2.3.0: 2.3.0This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Changed how flame shoc...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. 
You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...sNPCedit: sNPCedit v0.9d: added elementclient coordinate catcher to catch coordinates select a target (ingame) i.e. your char, npc or monster than click the button and coordinates+direction will be transfered to the selected row in the table corrected labels from Rot to Direction (because it is a vector)Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 beta Released: Hi, Today Visifire is released along with one new feature * Inlines property has been implemented in Title. Now onwards you can customize the text content in Title. Please check out the Visifire documentation for more information. This release contains fix for the following bugs: * Styles for chart elements were not working as expected. * Bar chart was not drawn properly if AxisMinimum property was set to a value above zero base line. * In DateTime axis, AxisLables were no...Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! Conclusion : - UltimateJB203PSXXXDEFAULTKAKAROTO=> Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43 - UltimateJB203PS341_HERMES => Pas de spoof mais version hermes 4b - UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof des firmwares 3.50 et 3.55 au li....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5- ASP.NET MVC 3 and EF Code First: Demo web app ASP.NET MVC 3, Razor and EF Code FirstVidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. 
Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ) Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ) Can run w/ In-Memory Repositories or Sql Server Database See Documentation tab for Ins...New Projects.NET Event Spy: Full information available here: http://martincarolan.blogspot.com/2011/01/secret-project.html Simple development/debugging tool that hooks into and monitors events raised on any .NET object3DTweet: 3Dtweet is an effort to make tweets appear in a aesthetic manner to the users of windows phone.Its developed using VS2010 expresss.Agile .NET with SCRUM and XP: Source code for the book Apress Professional Agile .NET Development with SCRUM and XPBeskid Niski Agroturystyka: Travel Poland, turystyka w beskidzie niskim. Agroturystyka w miejscowosci Losie nad zalewem KlimkowkaCoding better: A better coding labs for .net new feature.FBApp: A simple facebook app I was busy with over the holidays as an experiment to try out the facebook api. Is currently not complete but I wanted to get some criticism on it for my 1st web app. It is developed using WPF and C#. Freemium Helper for WebMatrix: The Freemium Helper for WebMatrix provides an easy way to apply the Freemium model into your WebMatrix site. Using different user groups (or roles), it allows you to easily enable or disable features on your pages depending on the stock-keeping unit the user has paid for.Haversine Distance Calculation: A very small project that implements the Haversine formula, which calculates the great circle distance between two points on the earth's surface. The points are latitude / longitude coordinates in DD. The formula is implemented client side with javascript and server side with C#.Hexa.Core: Hexa.Core is our implementation of the Domain Driven Design Architecture. 
Also providing a set of helper classes for ASP.Net and WCF development.Minecraft NBT reader: A simple Minecraft NBT reader.MobSoft: MobSoft is silverlight based news related application designed to test the new functionalitities in Silverlight 4netduino Helpers: The 'netduino Helpers' is a C# driver set for common hardware components and features convenient wrappers around complex .Net Micro Framework features such as: Analog joysticks, Real-time clock, 8*8 LED matrix, Shift register, runtime assembly & resource loader, bitmaps, etc.NewsGator Social Connectors for Sharepoint 2010: This project contains social connectors for the NewsGator SharePoint platform and supports sending messages to Twitter and LinkedIn just by putting tags in the text #li for sending to linkedin #tw for sending to twitterNon Profit Contact Relationship Management: A non profit contact relationship management software intended to help those in the non profit arena manage donors, sponsors, and prospects.OpenAGE: OpenAGE, short for Open Advance Game Engine, is aimed at developing a new Advanced Game engine strictly for the PC and Xbox360 gaming System using XNA 4.0, and Visual Studio 2010OpenAutoPoster: OpenAutoPoster automates some of the boring everyday tasks of aggregating, linking and posting that haunts content creators.Phefer WoodTurning Sketcher: Draw out your own turnings before you hit the wood. Import images and trace around them, print them out with the length and width measuresments.Simple Script Interpreter- A simple GPLEX/GPPG (Lex/Yacc) Primer: Simple Script is a simple implementation of an interpreter language built with GPlex and Gppg (Lex/Yacc). It's developed in C#.SP2010Tutorials: Code for learning SharePoint 2010The Social Developer: This is a social developer tool for programmers to create and share projects using the .Net framework and other technologies and integrate it into a socialistic approach of sharing the work load and the resources needed to develop high level applications. Traffic-sign Classification: Traffic sign shape classification and localization.unnamedyet: Experimental! para Investigadores de Sistemas. Objetivo! desarrollar una praxis tal que con un conjunto finito y discreto de términos para describir sea posible auto-demostrar y ejecutar cualquier proposición dada.VSSpeedster - Parallel Builds for VS: Improve the performance of your Visual Studio: - Parallel Builds integrated in visual studioWebservice Xslt Transformer WebPart for SharePoint 2010: The Dynamic Webservice Xslt Transformer WebPart makes it much easier for SharePoint Developers and Administrators to call the webservice and transform the results directly to HTML by providing their own custom xslt. The properties can be set on the webpart by using the UI.WilWaNet.HASH: An ASP.NET MVC web site designed for tracking nutrition for the purposes of losing weight. Tracks calories, fat calories, fat grams and saturated fat along with daily weight and exercise. Includes daily Basic Metabolic Rate calculation and graphing functions.WP7 Try it 01: The first try in wp7WPF TryIt 01: Quan ly Nhan khau WPF ApplicationWX Alerter CAP/XML: NWS Alerter using CAP 1.1 alerting protocol. The goal of this project is to consume weather alerts from the NWS site. The user will select the city or SAME code/zone to watch. As alerts trigger notices will display and info will fill the Alert Tab.

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner
Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.

{4} ok show-nets
a) /pci@7c0/pci@0/pci@2/network@0,1
b) /pci@7c0/pci@0/pci@2/network@0
c) /pci@780/pci@0/pci@8/network@0,3
d) /pci@780/pci@0/pci@8/network@0,2
e) /pci@780/pci@0/pci@8/network@0,1
f) /pci@780/pci@0/pci@8/network@0
g) /pci@780/pci@0/pci@1/network@0,1
h) /pci@780/pci@0/pci@1/network@0
q) NO SELECTION
Enter Selection, q to quit: d
/pci@780/pci@0/pci@8/network@0,2 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
     for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2

{4} ok devalias
...
net3                     /pci@7c0/pci@0/pci@2/network@0,1
net2                     /pci@7c0/pci@0/pci@2/network@0
net1                     /pci@780/pci@0/pci@1/network@0,1
net0                     /pci@780/pci@0/pci@1/network@0
net                      /pci@780/pci@0/pci@1/network@0
...
name                     aliases

By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2.

With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack.  So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it’s a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it:

{4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2
{4} ok devalias
net4                     /pci@780/pci@0/pci@8/network@0,2
...

We also may need to have the MAC for this particular interface, so let’s get it, too.  To do this, we go to the device and interrogate its properties.

{4} ok cd /pci@780/pci@0/pci@8/network@0,2
{4} ok .properties
assigned-addresses       82060210 00000000 03000000 00000000 01000000
                         82060218 00000000 00320000 00000000 00008000
                         82060220 00000000 00328000 00000000 00008000
                         82060230 00000000 00600000 00000000 00100000
local-mac-address        00 21 28 20 42 92
phy-type                 mif
...

From this, we can see that the MAC for this interface is 00:21:28:20:42:92.  We will need this later.  This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface.

Network Boot in Solaris 10

Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot.  Since WANBoot in Solaris 10 fetches a specified In order to install the system using Ops Center, it is necessary to create an OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface.

Create an OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network.

For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers.

Why This Works

If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here’s a trace from the console as Ops Center provisions a server:

Sun Fire T200, No Keyboard
Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.
Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.

auto-boot? =            false
{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net                      /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4                     /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
network-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2

See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool!

WANBoot and Solaris 11

Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC.

Declaring the NIC

Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also specify the existing GB_0 network so that Ops Center can associate the right resources together.
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset.

When we declare it, we will give the hostname, IP address, and system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias.

After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.

Installing the OS

Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed.

{0} ok
{0} ok printenv network-boot-arguments
network-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok devalias net
net                      /pci@780/pci@0/pci@1/network@0
{0} ok devalias net4
net4                     /pci@780/pci@0/pci@8/network@0,2
{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2
{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
network-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi
{0} ok
{0} ok boot net - install
Boot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install
/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2
...
SunOS Release 5.11 Version 11.0 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlib
Connecting to 10.140.204.22:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 126752256 (121M) [text/plain]
Saving to: `/tmp/solaris.zlib'

100%[======================================>] 126,752,256 28.6M/s   in 4.4s

2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256]

Conclusion

So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.
Some customers are moving aggressively toward consolidated networks combining storage and network on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces, so it’s important to be able to provision a system using something other than the built-in networks.  It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11, and it fits into the Ops Center deployment process quite nicely.  Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.

    Read the article

  • CodePlex Daily Summary for Saturday, January 01, 2011

    CodePlex Daily Summary for Saturday, January 01, 2011Popular ReleasesBloodSim: BloodSim - 1.3.0.0: - Added tally for number of boss swings and swing avoids - Removed a large number of options that were carried over from Beta and are no longer relevant - Changed stat entry to use Rating format for Dodge, Parry, Haste and Mastery - Rearranged Settings interface - BloodSim will now check for updates on startup and notify the user if a new version is available - Added option to Show/Hide the Simulation Log to increase speed during large simulationsEnhSim: EnhSim 2.2.8 ALPHA: 2.2.8 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Rebuilt Feral Spir...CBM-Command: Version 2.0 Beta 2 - 2010-12-31: This version fixes three major bugs in Version 2.0 Beta 1 Changes(64, 128, Plus/4) Changed the timing code back to version 1.7 because the clock() function in cc65 does not reflect accurate timing. (VIC, PET) The time(NULL) function is not available on these targets and thus had to be removed. Help Hot-Key Fixed - Now when you press either F1 or H you get the help file (if the CBM-Command disk is in the drive you started CBM-Command from). Updated Help File - the new help file by popmi...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...StyleCop Compliant Visual Studio Code Snippets: Visual Studio Code Snippets - January 2011: StyleCop Compliant Visual Studio Code Snippets Visual Studio 2010 provides C# developers with 38 code snippets, enhancing developer productivty and increasing the consistency of the code. Within this project the original code snippets have been refactored to provide StyleCop compliant versions of the original code snippets while also adding many new code snippets. Within the January 2011 release you'll find 82 code snippets to make you more productive and the code you write more consistent!...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.2: Version: 2.0.0.2 (Milestone 2): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. 
You can use ...eCompany: eCompany v0.2.0 Build 63: Version 0.2.0 Build 63: Added Splash screen & about box Added downloading of currencies when eCompany launched for the first time (must close any bug caused by no currency rate existing) Added corp creation when eCompany launched for the first time (for now, you didn't need to edit the company.xml file manually) You just need to decompress file "eCompany v0.2.0.63.zip" into your current eCompany install directory.SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 8: 1. added truncate table/defrag index/check db functions 2. improved alert 3. fixed problem with alert causing config file corrupted(hopefully)Temporary Data Storage Folder: TDS Folder version 0.2 Beta: In this release following bugs are fixed: 'Send to' entry bug fixed Preferences bug fixedSilverlight File Upload and Download with Interlink: HSS Interlink v.2.1.300: Latest Release 2.1.300 - December 29th 2010 Change Log DownloadFileDialog Modified to support an absolute uri for the DownloadUri property, which is required for OOB support UploadFileDialog Modified to support an absolute uri for the UploadUri property, which is required for OOB support For existing users be sure to uninstall the older version prior to installing this version Note: The demo application is NOT included with the installer but can be reviewed here Stable release and ready...RDPAddins .NET: RDPAddins Alpha 2 (0.2.0.0): Second alpha release... Breaking changes (now this project has 0.2 version): now addin should implement RDPAddins.Common.IAddin and should me exported with RDPAddins.Common.AddinMetadataAtrribute !!!most of all old Addin base class method you can fide in IChannel or IUI interfeces see FileTransferAddin Now RDPAddins.Common.dll just provide interfaces for addin, channel, ui, and export metadata Whole implementation is in RDPAddins.exe RDPAddins.Common.dll has some documentation :) why all this...DocX: DocX v1.0.0.11: Building Examples projectTo build the Examples project, download DocX.dll and add it as a reference to the project. OverviewThis version of DocX contains many bug fixes, it is a serious step towards a stable release. Added1) Unit testing project, 2) Examples project, 3) To many bug fixes to list here, see the source code change list history.Cosmos (C# Open Source Managed Operating System): 71406: This is the second release supporting the full line of Visual Studio 2010 editions. Changes since release 71246 include: Debug info is now stored in a single .cpdb file (which is a Firebird database) Keyboard input works now (using Console.ReadLine) Console colors work (using Console.ForegroundColor and .BackgroundColor)AutoLoL: AutoLoL v1.5.0: Added the all new Masteries Browser which replaces the Quick Open combobox AutoLoL will now attemt to create file associations for mastery (*.lolm) files Each Mastery Build can now contain keywords that the Masteries Browser will use for filtering Changed the way AutoLoL detects if another instance is already running Changed the format of the mastery files to allow more information stored in* Dialogs will now focus the Ok or Cancel button which allows the user to press Return to clo...Paint.NET PSD Plugin: 1.6.0: Handling of layer masks has been greatly improved. Improved reliability. Many PSD files that previously loaded in as garbage will now load in correctly. Parallelized loading. PSD files containing layer masks will load in a bit quicker thanks to the removal of the sequential bottleneck. 
Hidden layers are no longer made visible on save. Many thanks to the users who helped expose the layer masks problem: Rob Horowitz, M_Lyons10. Please keep sending in those bug reports and PSD repro files!Facebook C# SDK: 4.1.1: From 4.1.1 Release: Authentication bug fix caused by facebook change (error with redirects in Safari) Authenticator fix, always returning true From 4.1.0 Release Lots of bug fixes Removed Dynamic Runtime Language dependencies from non-dynamic platforms. Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample Changed internal serialization to use Json.net BREAKING CHANGE: Canvas Session is no longer supported. Use Signed...Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!Euro for Windows XP: ChangeRegionalSettings 1..0: *Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suite the implemented features - Create New button functionality is changed - Add Question Handling featuresFlickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when network connection is flakey, so I discovered them all while at my parents' house for Christmas).New Projects7-Up: PowerShell Scripts for Upgrading SharePoint 2007 to 2010: PowerShell scripts to automate the upgrade of SharePoint 2007 to SharePoint 2010 using a content database or hybrid upgrade approach.Apple Wireless Keyboard: Helper that Allows people use the Apple Wireless (or Wired possibly) Keyboard under Windows 7 without loosing the mac functionalityckTest: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttestCrude - .Net Dependency Management: Crude is light dependency management for .net, there was no dependency management solution for .net as Maven or Ivy until now. DeepTime: Event tacking, management and plotting site.Dev/Test Cloud Platform: Dev/Test Cloud Platform is implemented by Beyondsoft and Microsoft. The platform is dedicated to software development and test scenarios. With Dev/Test Cloud you can save your cost, improve work efficiency, and improve product quality.Image Viewer (wpf version): Image viewer helps windows users to review images on their computer. It is written i Csharp and wpf.LNOne: All common code, framework code, utility code in one.LOGL::GLib: LOGL::GLib is an OpenGL game library written in C++ that allows new C++/OpenGL programmers to focus more on the game and less on the details.MAPI.GUI: Graphical program uses function from mcopyapi.codeplex.com and mdeleteapi.codeplex.com console programs. Program support longPath (above 259 chars and less 32000 chars)Movie Collection Manager: Gerenciador que ajuda a manusear coleção de filmes. Possui uma interface super atraente e simples de usar. 
Ferramentas e tecnologias usadas: - Visual C# 2010 Express - SQL Server 2008 R2 Express - Windows Forms - Entity Framework OBS: Projeto concorrente do Desafio .NETMyStudioServer.com: Source code for the DotNetNuke http://MyStudioServer.com websiteNusya Tester: A software to create and use various tests with right answers explanations to help one in self-education. It's developed in C#Project Zylaphon: Project Zylaphon is a C# GPL Open Source Computer Aided Music Composition System. Its design goals include highly interactive composition, music computation, and analysis; system control to be provided by an A.I. goal based agenda and quasi Blackboard for maximum flexibility.Remember The Task: RememberTheTask makes it easier to remember what you have been doing the last hour. It's developed in Visual Basic.Rent Payments Scheduling: Register rent schedules and keep track of them.Simple MVVM Toolkit for Silverlight: Simple MVVM Toolkit makes it easier to develop Silverlight applications using the Model-View-ViewModel design pattern.Sparrow.NET HtmlTemplate: Its a template engine for generating web pages.(?????HTML?????,??????html Tag???????html????。)SQLDiagUI: SQLDiagUI provides a GUI for Microsoft SQlDiag Utility which allows users to create a configuration file and start/stop/schedule SQLDiag against a SQL Server.Test Control: testing frameworkTwicko: Simple twitter client.Unity3D Utilities: Unity3D Utilities provides additional functionality to Unity3D (3.0+) via extensions, utilities, mini-frameworks, etc.VBA Composite Controls Object Model: VBA Composite Controls encapsulate complex event-driven interactions between ActiveX controls and other objects in Microsoft Office. It's uses are limited only by the imagination, from binding controls together for updating, to creating unique user interfaces!World of Warcraft Backit up(wow backit up): a project that helps you backup and restore your settings in wow easilyYamma: Yet Another Money Management Application. Build in .NET 4, SQLCE, LINQ and the Entity Framework.

    Read the article

  • 12.04 upgrade broke grub? (not wubi related)

    - by kaare
    I just updated from 11.10 to 12.04, with no major problems (it took a while to get past a request to restart ssh, mysql and some other services, but I did no fiddling by myself, everything was done by the installer). However, after restarting, grub can't do anything. Picking the new linux installation (first entry), I just get error: no such partition error: no such partition error: no such partition and picking the recovery-version just gives 5 lines instead of 3. I have windows 7 installed on a different drive, and can run it by booting from that drive instead. Picking it from the grub menu gives the same error as above (can't remember how many lines, though). I'll be honest and say that I don't remember if win 7 could be booted from grub before the update, though. In short, nothing on the grub menu works. any solutions? The grub menu changed appearance - before it was on a purple background, small letters, now it's white-on-black, big letters, looking very basic. The original installation was from a usb-drive, and I hadn't heard about wubi until I started googling this problem, so I doubt there's any connection. I really hope there are some grub-savvy people out there :) EDIT: ok. so, I made a bootable usb, and am running from that right now. when I ran the bootinfoscript, it warned me that "gawk" could not be found, using "busybox awk" instead. This may lead to unreliable results. just so you know. The contents of RESULTS.txt are: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Windows is installed in the MBR of /dev/sda. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos3)/boot/grub on this drive. => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sdc. sda1: __________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /DELLBIO.BIN /DELLRMK.BIN /COMMAND.COM sda2: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /bootmgr /ntldr /NTDETECT.COM sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sdb2: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sdb3 and looks at sector 375893584 of the same hard drive for core.img. 
core.img is at this location and looks for (,msdos3)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb4: __________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdc1: __________________________________________ File system: ntfs Boot sector type: SYSLINUX 4.06 4.06-pre1 Boot sector info: Syslinux looks at sector 4649656 of /dev/sdc1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. No errors found in the Boot Parameter Block. Operating System: Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 240,974 240,912 de Dell Utility /dev/sda2 241,664 21,213,183 20,971,520 7 NTFS / exFAT / HPFS /dev/sda3 * 21,213,184 483,151,863 461,938,680 7 NTFS / exFAT / HPFS /dev/sda4 483,151,872 488,394,751 5,242,880 f W95 Extended (LBA) /dev/sda5 483,153,920 488,394,751 5,240,832 dd Dell Media Direct Drive: sdb _______________________________________ Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 63 345,886,749 345,886,687 7 NTFS / exFAT / HPFS /dev/sdb2 345,888,768 361,510,911 15,622,144 82 Linux swap / Solaris /dev/sdb3 * 361,510,912 390,807,786 29,296,875 83 Linux /dev/sdb4 390,809,600 488,394,751 97,585,152 83 Linux Drive: sdc _______________________________________ Disk /dev/sdc: 8015 MB, 8015282176 bytes 255 heads, 63 sectors/track, 974 cylinders, total 15654848 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 * 2,048 15,652,863 15,650,816 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07D8-0411 vfat DellUtility /dev/sda2 E2765BBC765B9061 ntfs RECOVERY /dev/sda3 98DC5E54DC5E2D2E ntfs OS /dev/sda5 7061-9DF5 vfat MEDIADIRECT /dev/sdb1 01CBBB4C3374C3B0 ntfs Data1 /dev/sdb2 1ca45f3f-f888-43d1-8137-02699597189a swap /dev/sdb3 6bc1b599-ad4b-403c-a155-a5bc81211f5e ext4 /dev/sdb4 58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 /dev/sdc1 0C02B64402B63316 ntfs PENDRIVE ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sdb4 /media/58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 (rw,nosuid,nodev,uhelper=udisks) /dev/sdc1 /cdrom fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096) ================================ sda5/boot.ini: ================================ [boot loader] timeout=0 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Embedded" /fastdetect /KERNEL=NTOSBOOT.EXE /maxmem=1024 =========================== sdb3/boot/grub/grub.cfg: 
=========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, with Linux 3.2.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, with Linux 3.0.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.0.0-19-generic ...' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos3)' search --no-floppy --fs-uuid --set=root 98DC5E54DC5E2D2E chainloader +1 } menuentry "Microsoft Windows XP Embedded (on /dev/sda5)" --class windows --class os { insmod part_msdos insmod fat set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root 7061-9DF5 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb3/etc/fstab: ================================ # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb3 during installation UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e / ext4 errors=remount-ro 0 1 # /home was on /dev/sdb4 during installation UUID=58e2b257-8608-4b11-b20b-dc162bb80b62 /home ext4 defaults,user_xattr 0 2 # swap was on /dev/sdb2 during installation UUID=1ca45f3f-f888-43d1-8137-02699597189a none swap sw 0 0 =================== sdb3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.0.0-19-generic 2 = boot/initrd.img-3.2.0-24-generic 2 = boot/vmlinuz-3.0.0-19-generic 2 = boot/vmlinuz-3.2.0-24-generic 1 = vmlinuz 1 = vmlinuz.old 2 =========================== sdc1/boot/grub/grub.cfg: =========================== if loadfont /boot/grub/font.pf2 ; then set gfxmode=auto insmod efi_gop insmod efi_uga insmod gfxterm terminal_output gfxterm fi set menu_color_normal=white/black set menu_color_highlight=black/light-gray menuentry "Try Ubuntu without installing" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash -- initrd /casper/initrd.lz } menuentry "Install Ubuntu" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper only-ubiquity quiet splash -- initrd /casper/initrd.lz } menuentry "Check disc for defects" { set gfxpayload=keep linux /casper/vmlinuz boot=casper integrity-check quiet splash -- initrd /casper/initrd.lz } ========================= sdc1/syslinux/syslinux.cfg: ========================== # D-I config version 2.0 include menu.cfg default vesamenu.c32 prompt 0 timeout 50 # If you would like to use the new menu and be presented with the option to install or run from USB at startup, remove # from the following line. This line was commented out (by request of many) to allow the old menu to be presented and to enable booting straight into the Live Environment! # ui gfxboot bootlogo =================== sdc1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) ?? = ?? boot/grub/grub.cfg 0 ================= sdc1: Location of files loaded by Syslinux: ================== GiB - GB File Fragment(s) ?? = ?? ldlinux.sys 1 ?? = ?? syslinux/chain.c32 1 ?? = ?? syslinux/gfxboot.c32 1 ?? = ?? syslinux/syslinux.cfg 0 ?? = ?? syslinux/vesamenu.c32 1 ============== sdc1: Version of COM32(R) files used by Syslinux: =============== syslinux/chain.c32 : COM32R module (v4.xx) syslinux/gfxboot.c32 : COM32R module (v4.xx) syslinux/vesamenu.c32 : COM32R module (v4.xx) =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in ./bootinfoscript: line 1646: [: 2.73495e+09: integer expression expected
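From the boot info above, the Ubuntu root filesystem is on /dev/sdb3 and GRUB belongs in the MBR of /dev/sdb. A minimal sketch of the usual chroot-and-reinstall recovery from the live session follows; the device names are taken from the output above but should be treated as assumptions and double-checked before running anything:

    sudo mount /dev/sdb3 /mnt                     # Ubuntu root partition
    for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
    sudo chroot /mnt grub-install /dev/sdb        # put GRUB back in the MBR of the Ubuntu disk
    sudo chroot /mnt update-grub                  # regenerate grub.cfg
    for d in /sys /proc /dev/pts /dev; do sudo umount /mnt$d; done
    sudo umount /mnt

After a reboot the normal GRUB menu should be offered again; if not, re-running the boot info script will show what changed.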

    Read the article

  • Cold Start

    - by antony.reynolds
    Well we had snow drifts 3ft deep on Saturday so it must be spring time.  In preparation for Spring we decided to move the lawn tractor.  Of course after sitting in the garage all winter it refused to start.  I then come into the office and need to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky but at least I can script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts.  I created them for Linux but they should translate to Windows without too many problems.  This is left as an exercise to the reader, note you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections. Order to start things I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order. Start Database This is need by EM and the rest of SOA Suite so best to start it before the Admin Server and managed servers. Start Node Manager on all machines This is needed if you want the scripts to work across machines. Start Admin Server Once this is done in theory you can manually stat the managed servers using WebLogic console.  But then you have to wait for console to be available.  Scripting it all is quicker and easier way of starting. Start Managed Servers & Clusters Best to start them one per physical machine at a time to avoid undue load on the machines.  Non-clustered install will have just soa_server1 and bam_serv1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but easy to change them to work with clusters. Starting Database I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#####################" echo "# Starting Database #" echo "#####################" sqlplus / as sysdba <<-EOF startup exit EOF echo "#####################" echo "# Starting Listener #" echo "#####################" lsnrctl start echo "######################" echo "# Starting dbConsole #" echo "######################" emctl start dbconsole read -p "Hit <enter> to continue" Starting SOA Suite My script for starting the SOA Suite (available here) breaks the task down into five sections. Setting the Environment First set up the environment variables.  The variables highlighted in red probably need changing for your environment. #!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME Starting the Node Manager I start node manager with a nohup to stop it exiting when the script terminates and I redirect the standard output and standard error to a file in a logs directory. cd $DOMAIN_HOME echo "#########################" echo "# Starting Node Manager #" echo "#########################" nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 & Starting the Admin Server I had problems starting the Admin Server from Node Manager so I decided to start it using the command line script.  
I again use nohup and redirect output. echo "#########################" echo "# Starting Admin Server #" echo "#########################" nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 & Starting the Managed Servers I then used WLST (WebLogic Scripting Tool) to start the managed servers.  First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file but I wanted to reduce the number of files I was using and so used redirected input (here syntax). $ORACLE_HOME/common/bin/wlst.sh <<-EOF import time sleep=time.sleep print "#####################################" print "# Waiting for Admin Server to Start #" print "#####################################" while True:   try:     connect(adminServerName="AdminServer")     break   except:     sleep(10) I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster then the start command would be modified accordingly to start the SOA cluster. print "#######################" print "# Starting SOA Server #" print "#######################" start(name="soa_server1", block="true") I then start the BAM server in the same way as the SOA server. print "#######################" print "# Starting BAM Server #" print "#######################" start(name="bam_server1", block="true") EOF Finally I let people know the servers are up and wait for input in case I am running in a separate window, in which case the result would be lost without the read command. echo "#####################" echo "# SOA Suite Started #" echo "#####################" read -p "Hit <enter> to continue" Stopping the SOA Suite My script for shutting down the SOA Suite (available here)  is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server.  Again the script would need modifying for a cluster. Stopping the Servers If I cannot connect to the Admin Server I try to connect to the node manager, in case the Admin Server is down but the managed servers are up. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME cd $DOMAIN_HOME $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF try:   print("#############################")   print("# Connecting to AdminServer #")   print("#############################")   connect(username='weblogic',password='welcome1',url='t3://localhost:7001') except:   print "#########################################"   print "#   Unable to connect to Admin Server   #"   print "# Attempting to connect to Node Manager #"   print "#########################################"   nmConnect(domainName=os.getenv("DOMAIN_NAME")) print "#######################" print "# Stopping BAM Server #" print "#######################" shutdown('bam_server1') print "#######################" print "# Stopping SOA Server #" print "#######################" shutdown('soa_server1') print "#########################" print "# Stopping Admin Server #" print "#########################" shutdown('AdminServer') disconnect() nmDisconnect() EOF Stopping the Node Manager I stopped the node manager by searching for the java node manager process using the ps command and then killing that process. echo "#########################" echo "# Stopping Node Manager #" echo "#########################" kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'` echo "#####################" echo "# SOA Suite Stopped #" echo "#####################" read -p "Hit <enter> to continue" Stopping the Database Again my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "######################" echo "# Stopping dbConsole #" echo "######################" emctl stop dbconsole echo "#####################" echo "# Stopping Listener #" echo "#####################" lsnrctl stop echo "#####################" echo "# Stopping Database #" echo "#####################" sqlplus / as sysdba <<-EOF shutdown immediate exit EOF read -p "Hit <enter> to continue" Cleaning Up Cleaning SOA Suite I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them but in my test environments I don’t need them and would prefer to reclaim the disk space. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME echo "##########################" echo "# Cleaning SOA Log Files #" echo "##########################" cd $DOMAIN_HOME rm -Rf logs/* servers/*/logs/* read -p "Hit <enter> to continue" Cleaning Database I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct as the EM log files are stored in a directory that is based on the hostname and the Oracle SID. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#############################" echo "# Cleaning Oracle Log Files #" echo "#############################" rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/* rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/* read -p "Hit <enter> to continue" Summary Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…
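A minimal sketch of a top-level wrapper that chains the scripts above in the order described (database first, then the SOA Suite). The script file names and the 7001 console port are assumptions layered on top of the listings above, not part of them:

    #!/bin/sh
    set -e
    ./start_database.sh          # database, listener and dbConsole
    ./start_soa_suite.sh         # node manager, Admin Server, SOA and BAM servers
    # Crude end-to-end check: wait until the WebLogic console answers.
    until curl -s -o /dev/null http://localhost:7001/console; do
      echo "Waiting for the Admin Server to answer on port 7001..."
      sleep 10
    done
    echo "Database and SOA Suite are up."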

    Read the article

  • CodePlex Daily Summary for Friday, April 02, 2010

    CodePlex Daily Summary for Friday, April 02, 2010New ProjectsAE.Remoting: An alternative means of remoting for .NET to allow for intuitive usage and easy implementation into existing code.animated-smoke-modeling: This is an implementation or a demo of our method to model animated smokes. ASP.NET Google Maps: Extensible and easy to use, this is ASP.NET Google Maps Control. Drag & Drop and is ready to go. You can configure map style, add a PushPin using t...CartPatches able to see: CartPatches able to see youCodemix Cms: Codemix CmsDo the right thing - The Simple TodoManager: A simple Todo Manager which lets you focus on your daily most important tasks/todos. So do the right thing.....at your home, in your office, in you...Fast Console: Fast Console is a simple xml programming language. This may be a really good starting language as there are printing, variables and as soon as poss...Graphing Calculator in Silverlight: This was initially an effort to port a WPF graphing calculator written by Bob Brown (Microsoft) into Silverlight but soon after it became necessary...InformationVSTS: This application allows you to have all informations on VSTS installed. It also lets you know the server of BUILD and project.La Ranisima: La Ranisima is an open source "Space Invaders" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard. This cross-platfo...La villa del seis: La villa del seis is a multiplatform point-and-click graphical adventure. Also, you can play it like a text adventure (interactive fiction) on a te...LParse: LParse is a monadic parser combinator library, similar to Haskell’s Parsec. It allows you create parsers on C# language. All parsers are first-clas...Manage Recents File/Project VS2005/2008: Clear Recents Files and Projects, and Clear Broken Links of Recents Files and Projects for VS2005 and VS2008. Developed in Visual Studio 2008 SP...Mavention: Mavention makes SharePoint work for you.MixMail: MixMailMixScrum: mixScrumMixTemplate: MixTemplate.NepomucenoBR Regex Learning Tool: This is a simple program designed to help people to study regular expressions.Pruebas: Pruebas is an open source game mix of text adventure and RPG written in Microsoft QBasic (under MS-DOS 6.22) that uses keyboard. Runs natively unde...Python Design by Contract: Simple to use invariants, pre- and postconditions which use some of the new metaprogramming features in Python 3.Rubik Cube's 3D Silverlight 3.0 Animated Solution: Rubik Cube's Silverlight 3.0 Animated Solution is a 3D presentation of Rubik Cube in range of up to 7x7x7 size with full functionality and an anima...Seminarka: Seminarka - ko treba znat šta je zna!SENAC 2010 - Projeto Integrador 2 (Material de Apoio): Material utilizado para apoiar os alunos da disciplina de Projeto integrador 2. O tema são sistemas web utilizando ASP.NET, com C# e banco de da...SENAC CG2010: Contém código apresentado em sala de aula para a disciplina de CG, 5ºBSI NoturnoSistema de facturación: Sistema de facturación desarrollado en C# para la clase de programación 3.SmartFront - WPF and Silverlight Toolkit: SmartFront is a framework piece which allow to quickly building Smart Client application in WPF and in Silverlight. This framework uses existing s...Solar 1: This is the ASP.NET MVC engine based on Oxite and used for 32planets.net.TemporalSQL: SQL Patterns - tables, queries, and functions - to design a temporal database. 
TFunkOrderSystem: The Funkalistic Blueprint and Items order management systemTribe.Cache: Tribe.Cache is a simple dictionary cache (persistent dictionary) written in C# which is easy to implement and use.tstProject: Testing ProjectUDC indexes parser: UDC (Universal Decimal Classification) indexes parserWebAssert: A test assertion library to assist in writing automated tests against websites. Allows for assertion of HTML validity, etc. Initially has support f...Words Via Subtitle: Words Via Subtitle makes it easier for English Learners to learn new words that appears in TV shows or movies. You'll no longer have to look up the...x5s - a cross site scripting (XSS) testing tool: x5s aims to be a specialized testing tool which assists penetration testers in finding cross-site scripting hot-spots. By auto-injecting token valu...XNA Shooter Engine: The XNA Shooter Engine is a game engine for XNA designed specifically with first-person-shooter-style games in mind. It's being developed for an as...我的开发集: for my study .net csharpNew ReleasesAppFabric Caching Admin Tool: AppFabric Caching Admin Tool 1.1: System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x) Note: Must run as Administrator !!!ASP.NET Google Maps: ASP.NET Google Maps 0.1b: Project Description Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map style, add...AutoFixture: Version 1.0.9 (RC1): This is Release Candidate 1 of AutoFixture 1.1. This release contains no known bugs. Compared to AutoFixture 1.0, it fixes some bugs that were dis...Camlex.NET: Camlex.NET 2.0: Camlex.NET 2.0 release New features Search by field id Support for native System.Guid type for values Search by lookup id and lookup value D...CloudCache - Distributed Cache Tier with Azure: v1.0.0.1: New Release on April 1st 2010 No this is not April fools a new release has made it's way out. Below are the changes: Removed dependency on Azure S...DigitallyCreated Utilities: DigitallyCreated Utilities v1.0.1: This release is the v1.0.1 version of DigitallyCreated Utilities. This update is highly recommended for all users of v1.0.0 as it fixes a critical ...Fast Console: Fast Console Alpha: Fast Console is an easy to use and learn programming language. Code example is found in the file TestFile.xml When you've written your code just sa...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.6 beta Released: Hi, This release contains following enhancements. * Zooming feature has been enhanced with the new functionality of ZoomRectangle. Now, users...Graphing Calculator in Silverlight: 1.0.1: Graphing Calculator for Silverlight is written entirely in C# and is based on the Silverlight 3 release. I will soon release the full documentation...Home Access Plus+: v3.2.0.1: v3.2.0.1 Release Change Log: Fixed: Issue with & ampersand File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/Scripts/viewmode.jsIcarus Scene Engine: Icarus Professional 2 Alpha 2 v 1.10.329.913: Alpha release 2 of Icarus Professional. This release includes: IcarusX: The ActiveX-based browser control for rendering IPX projects online. Icaru...Line Counter: 1.5.2: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.2 Added General Code Counter ...ManagedCv: ManagedCv v0.0.0.1: Win32Mavention: Mavention Simple Menu: SharePoint 2010 ships with a menu control that allows you to render a site menu using semantic markup. 
Using the Mavention Simple Menu you can do t...MDownloader: MDownloader-0.15.10.57200: Fixed uploading.com links detection; Fixed downloading from uploading.com; Fixed downloading from load.to; Fixed detecting incompatible sources;MixMail: V1: MixMailMixTemplate: v1: releaseMvcPager: MvcPager 1.3 for ASP.NET MVC 1.0: MvcPager 1.3 for ASP.NET MVC 1.0 compiled assembly files and demo projectsMvcPager: MvcPager 1.3 for ASP.NET MVC 2.0: MvcPager 1.3 for ASP.NET MVC 2.0 compiled assembly and demo projectsMvcUnity - ASP.NET MVC Dependency Injection: 2.1 Source Code: Drop 2.1 Source CodeNepomucenoBR Regex Learning Tool: NepomucenoBR Regex Learning Tool v0.1 alpha: This is the first version of this application. If you find any bug, please contact me at http://www.nepomucenobr.com.brNepomucenoBR Regex Learning Tool: NepomucenoBR Regex Learning Tool v0.1 source-code: This is the first version of this application. If you find any bug, please contact me at http://www.nepomucenobr.com.brocculo: occulo 0.2 binaries: Release build binaries instead of debug, should now work for other users. Fixed bit rotation and output filename bugs.occulo: occulo 0.2 source: Second source release. See binary release for changes.Python Design by Contract: v0.1: This is the inital release. I think it is working fine.SharePoint Labs: SPLab5002A-FRA-Level200: SPLab5002A-FRA-Level200 This SharePoint Lab will teach you how to modify CAML schema to have IntelliSense on Feature's GUID. Lab Language : French ...SharePoint Labs: SPLab5003A-FRA-Level100: SPLab5003A-FRA-Level100 This SharePoint Lab will teach you how to manually create a Feature, how to brand a Feature and how to incorporate ressourc...SharePoint Labs: SPLab5004A-FRA-Level100: SPLab5004A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SharePoint Labs: SPLab5005A-FRA-Level100: SPLab5005A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SSIS ReportGeneratorTask: Version 1.53: Some bugfixes to version 1.52 beta Server Report properties can be displayed. Snapshots can be created. Screenshots of the planned version 1.53 ca...TemporalSQL: April 2010: Initial set of prototypes demonstrating temporal patterns, queries, and functions in SQL ServerTortoiseHg: TortoiseHg 1.0.1: TortoiseHg 1.0.1 is a bug fix release. We recommend all users upgrade to this release. http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes#t...Tribe.Cache: Tribe.Cache Alpha: Functional Alpha Release - Do not use in productionTS3QueryLib.Net: TS3QueryLib.Net Version 0.21.15.0: Changelog Added class "ServerListItemBase" which is used in the new method "GetServerListShort" of QueryRunner class. (Change of Beta 21) Added ...UDC indexes parser: Runtime Binary Alpha 1: First alpha versionVisual Studio DSite: Text To Binary (Visual C++ 2008): A simple c program that can convert text to binary. Source code only.x5s - a cross site scripting (XSS) testing tool: x5s 1.0 beta: PLACEHOLDER (coming soon)XNA Shooter Engine: GDK Tools 0.1.0.0: This is a small, very early release of the GDK Tools. The only included tool is Input Map Editor.XPath Visualizer: XPathVisualizer v1.2: Last updated 1 April 2010. This is not a joke! 
includes new features: Ctrl-S shortcut key for Saving the XML file Ctrl-F shortcut for re-form...すとれおじさん(仮): すとれおじさん β 0.01: とりあえず公開のバージョンです。 中途半端な機能がいっぱいあります。Most Popular ProjectsRawrWBFS ManagerASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMicrosoft SQL Server Community & SamplesMost Active ProjectsRawrGraffiti CMSBase Class LibrariesjQuery Library for SharePoint Web ServicesBlogEngine.NETMicrosoft Biology FoundationN2 CMSLINQ to TwitterManaged Extensibility FrameworkFarseer Physics Engine

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
    Over the next few weeks I will be posting blog entries outlining the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple, and while they will not illustrate ALL the possibilities they will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them. Now, to ease a few customers' minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework, but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g or are a customer of Oracle Utilities Application Framework V2.1 or above, most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x. This first posting will not feature SOA Suite at all but concentrate on the capability of the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantage of the maturing landscape of Web Services in the marketplace, to a point where it is now easier to integrate with SOA infrastructure. There are a number of object types that can be exposed as Web Services:
Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects as they used to be the only method of generating Web Services. These are still supported for backward compatibility but are starting to become less popular as they were strict in their structure and were solely attribute based. To generate a Maintenance Object based Web Service definition you need to use the XAI Schema Editor component.
Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services.
Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1, which allowed application services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition.
Service Scripts – As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/or objects to be combined into complex objects or simply expose common routines as callable scripts. These can also be defined as Web Services.
For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above.
Now, let’s get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first log on to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action at the top left of the screen (next to Home). A popup menu will appear with the menus available. If you do not see the Admin menu then you do not have authority to use it. Here is an example: Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum):
Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix it with CM to denote it as a customization (you can easily find it if you prefix it). As I am running this on my personal copy of the product I will use my initials as the prefix and call the sample Web Service “AS-User”.
Description – A short description of the object to tell others what it is used for. For my example, I will use “Anthony Shorten’s User Object”.
Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify “Anthony Shorten’s Test Object for SOA Suite Integration”.
Maintenance Object – As this Business Object is going to be based upon a Maintenance Object I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple and exists across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them).
Parent Business Object – If I was not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using a Parent so I will leave this blank. You either use Parent Business Object or Maintenance Object, not both.
Application Service – Business Objects, like other objects, are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups, not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS, as this is just a demonstration so I do not have to be too sophisticated about security.
Instance Control – This determines whether the object can create instances. You can specify a Business Object purely to hold rules. I am being simple here so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records.
The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this: Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time here: when you create an object the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema Tab, on the dashboard you will see a BO Schema zone with a solitary button. This allows the schema to be generated for you from our metadata.
Click on the Generate button to generate a basic schema from the metadata. You will now see a Schema with the element tags and references to the metadata of the Maintenance object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone in the for more details of the available customizations. Note: The tags are generated from the language pack you have installed. The sample is English so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object. At this point you can save your Business Object by pressing the Save action. At this point you have a basic Business Object based on the USER maintenance object ready for use but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + off the XAI Inbound Service off the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following: Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based. Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object. Schema Name – The name of the object. In this case it is the Business Object AS-User. Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed. Transaction Type – A default transaction type as this is Business Object Service. More about this in later postings. In our case we use the default Read. This means that if we only specify data and not a transaction type then the product will assume you want to issue a read against the object. You need to fill in the following: XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object , in the case of this example, but this can match your sites interfacing standards. By the way you can define multiple XAI Inbound Services/Web Services against the same object if you want. Description and Detail Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration. You can now save the service definition. Note: There are lots of other options on this screen that allow for behavior of your service to be specified. I will leave them blank for now. When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the WSDL standard URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below: Now you have defined the service but it will only be available when the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see an XML of the command sent to the server (the presence of the XML means it is finished). 
If you have an error around the authorization, then check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache as the cache is shared (unless of course you are the only Web Service user on the system – In that case it only affects you). The Web Service is NOW available to be used. To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type in the request XML you want to test in the Main tab. The first tag is the XAI Inbound Service Name and the elements are as per your schema (minus the schema tag itself as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) – Remember to close tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now before you panic, you noticed that it did not ask for credentials. It propagates the online credentials to the service call on this function. You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. This whole process at a minimum can take under a minute. Obviously I only showed the basics but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts now build upon this. Hope you enjoyed the post.
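The request and response screenshots are not reproduced here, so as a rough sketch of the shape of the call: the root tag of the request is the XAI Inbound Service name and the elements follow the generated schema, so a read of SYSUSER would look something like this (the element name user is an assumption; use whatever element names your generated schema actually contains):

    <AS-User>
      <user>SYSUSER</user>
    </AS-User>

The same payload can be posted to the XAI servlet from outside the product, for example with curl; the host, port, context root and credentials below are placeholders, and the real endpoint should be taken from the WSDL URL recorded on the XAI Inbound Service:

    curl --user xaiuser:xaipassword \
         --header "Content-Type: text/xml" \
         --data @as-user-read.xml \
         "http://ouafhost:6500/ouaf/XAIApp/xaiserver"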

    Read the article

  • ASP.NET MVC & Silverlight development - on Ubuntu

    - by queen3
    I recently moved my working environment from Windows 7 to Ubuntu, and I enjoy it every minute of my working day. My work is currently to develop ASP.NET MVC and Silverlight applications. Thus, the important Windows stuff is still being run in VirtualBox (such as IIS, MSSQL, Silverlight 3, and legacy COM stuff). For now, I use Visual Studio under VirtualBox as editor/IDE/debugger. But since I prefer the Ubuntu fonts and UI much more, I'd like to move at least the editor (and preferably the IDE) to native Ubuntu. Things I have already done: I store project files in my home folder and run Visual Studio from the \vboxsvr share. With a few tricks for ASP.NET it works. I use svn on Ubuntu. I test my ASP.NET MVC site using Firefox/Firebug on Ubuntu. What I need on Ubuntu: A SQL client to manage MSSQL. I mostly need querying, but I would miss SQL intellisense support. I enjoy command-line svn a lot, but there are times when it's not enough (e.g. viewing files / checking diffs / selectively committing at the same time), so I wonder if there are any addons - I don't mean a replacement for svn, just addons for rare cases like the above. I wonder if there are editors that can provide some C# intellisense. Yes, I know about MonoDevelop, but will it provide intellisense without compiling (since I'm going to compile remotely in the Windows box)? And, a pretty big topic: what's the best way (editor/IDE) to do "folder-based" development? What I mean: The project is /trunk. Everything is there. I don't want to manually add files to a project or anything like that. The project is the folder and the files down there. The main task is to edit files, of course. I need a quick way to open / search for files in the project. Like in ReSharper, where I can press Ctrl-Shift-T (IIRC) and just type a file name, and a list of matching files in the project folder and below is shown. For example, gedit has a file tree browser, but I can't quickly type XYZ to find all XYZ files there; moreover it doesn't automatically switch focus to/from the editor, so it's more mouse-oriented; I need a 100% keyboard-driven way. I need syntax highlighting for C#/HTML/JS. Most importantly, I need HTML tag autocompletion. I can live without it, but I'll be sorry. I need to run compilation remotely (via ssh I think, invoking a NAnt script which does MSBuild) and grab results such as errors and warnings, and I'd prefer to be able to jump quickly to the error line/file. In short, I need to edit/search/open/svn/compile/run files in some folder. Looks like a case for the command line, but imagine I'm in /trunk and want to open file.cs inside /trunk/foo/bar/boo/far; I wouldn't want to type that whole path even with bash autocompletion. I'd prefer to enter :open file.cs, and maybe then select from a list of file.cs and file1.cs. Well, maybe I'll add more soon. Actually, I don't have exact requirements; for example, I don't even know if I need to ask for an IDE or an editor; or whether it should have svn support integrated or I should use svn from the console; do people work with files from the console (search/commit/delete/etc.) and open the editor from there, or do they work from the editor/IDE and manage files (search/commit/delete) from there? What's better? I have a feeling that vim might have everything I need. I'd like to confirm that before I spend a lot of time learning it. No, I don't want Emacs. Any other IDE? I like Eclipse, but is it good for such stuff? And after all, do you feel like it's a good way to go?
I enjoy Ubuntu, enjoy learning new stuff, and Ubuntu + Windows in VirtualBox actually runs faster than Windows 7 on my machine, but maybe I need to keep development (editor/files management/etc) in Windows/VirtualBox only, leaving other stuff for Ubuntu?
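To be concrete about the remote-compile idea above: what I have in mind is roughly the following, run from Ubuntu (the host name and build file are placeholders, and it assumes an SSH server is reachable on the Windows VM):

    # kick off the NAnt/MSBuild script on the Windows box and keep the output locally
    ssh winbox "nant -buildfile:MyApp.build" | tee build.log

From there I could parse build.log for errors and warnings and jump to the offending file/line in whatever editor I end up with.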

    Read the article

  • Using VLOOKUP in Excel

    - by Mark Virtue
    VLOOKUP is one of Excel’s most useful functions, and it’s also one of the least understood.  In this article, we demystify VLOOKUP by way of a real-life example.  We’ll create a usable Invoice Template for a fictitious company. So what is VLOOKUP?  Well, of course it’s an Excel function.  This article will assume that the reader already has a passing understanding of Excel functions, and can use basic functions such as SUM, AVERAGE, and TODAY.  In its most common usage, VLOOKUP is a database function, meaning that it works with database tables – or more simply, lists of things in an Excel worksheet.  What sort of things?   Well, any sort of thing.  You may have a worksheet that contains a list of employees, or products, or customers, or CDs in your CD collection, or stars in the night sky.  It doesn’t really matter. Here’s an example of a list, or database.  In this case it’s a list of products that our fictitious company sells: Usually lists like this have some sort of unique identifier for each item in the list.  In this case, the unique identifier is in the “Item Code” column.  Note:  For the VLOOKUP function to work with a database/list, that list must have a column containing the unique identifier (or “key”, or “ID”), and that column must be the first column in the table.  Our sample database above satisfies this criterion. The hardest part of using VLOOKUP is understanding exactly what it’s for.  So let’s see if we can get that clear first: VLOOKUP retrieves information from a database/list based on a supplied instance of the unique identifier. Put another way, if you put the VLOOKUP function into a cell and pass it one of the unique identifiers from your database, it will return you one of the pieces of information associated with that unique identifier.  In the example above, you would pass VLOOKUP an item code, and it would return to you either the corresponding item’s description, its price, or its availability (its “In stock” quantity).  Which of these pieces of information will it pass you back?  Well, you get to decide this when you’re creating the formula. If all you need is one piece of information from the database, it would be a lot of trouble to go to to construct a formula with a VLOOKUP function in it.  Typically you would use this sort of functionality in a reusable spreadsheet, such as a template.  Each time someone enters a valid item code, the system would retrieve all the necessary information about the corresponding item. Let’s create an example of this:  An Invoice Template that we can reuse over and over in our fictitious company. First we start Excel… …and we create ourselves a blank invoice: This is how it’s going to work:  The person using the invoice template will fill in a series of item codes in column “A”, and the system will retrieve each item’s description and price, which will be used to calculate the line total for each item (assuming we enter a valid quantity). For the purposes of keeping this example simple, we will locate the product database on a separate sheet in the same workbook: In reality, it’s more likely that the product database would be located in a separate workbook.  It makes little difference to the VLOOKUP function, which doesn’t really care if the database is located on the same sheet, a different sheet, or a completely different workbook. 
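Before we build the formula step by step, it may help to see the general shape of the function we are working towards – this is just the standard VLOOKUP signature, shown for orientation:

    =VLOOKUP(lookup_value, table_array, col_index_num, [range_lookup])

Here lookup_value is the unique identifier we are looking up (an item code), table_array is the range of cells holding the database/list, col_index_num says which column of that range to return, and the optional range_lookup controls whether an approximate or an exact match is required – all of which we will walk through below.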
In order to test the VLOOKUP formula we’re about to write, we first enter a valid item code into cell A11: Next, we move the active cell to the cell in which we want information retrieved from the database by VLOOKUP to be stored.  Interestingly, this is the step that most people get wrong.  To explain further:  We are about to create a VLOOKUP formula that will retrieve the description that corresponds to the item code in cell A11.  Where do we want this description put when we get it?  In cell B11, of course.  So that’s where we write the VLOOKUP formula – in cell B11. Select cell B11: We need to locate the list of all available functions that Excel has to offer, so that we can choose VLOOKUP and get some assistance in completing the formula.  This is found by first clicking the Formulas tab, and then clicking Insert Function:   A box appears that allows us to select any of the functions available in Excel.  To find the one we’re looking for, we could type a search term like “lookup” (because the function we’re interested in is a lookup function).  The system would return us a list of all lookup-related functions in Excel.  VLOOKUP is the second one in the list.  Select it an click OK… The Function Arguments box appears, prompting us for all the arguments (or parameters) needed in order to complete the VLOOKUP function.  You can think of this box as the function is asking us the following questions: What unique identifier are you looking up in the database? Where is the database? Which piece of information from the database, associated with the unique identifier, do you wish to have retrieved for you? The first three arguments are shown in bold, indicating that they are mandatory arguments (the VLOOKUP function is incomplete without them and will not return a valid value).  The fourth argument is not bold, meaning that it’s optional:   We will complete the arguments in order, top to bottom. The first argument we need to complete is the Lookup_value argument.  The function needs us to tell it where to find the unique identifier (the item code in this case) that it should be retuning the description of.  We must select the item code we entered earlier (in A11). Click on the selector icon to the right of the first argument: Then click once on the cell containing the item code (A11), and press Enter: The value of “A11” is inserted into the first argument. Now we need to enter a value for the Table_array argument.  In other words, we need to tell VLOOKUP where to find the database/list.  Click on the selector icon next to the second argument: Now locate the database/list and select the entire list – not including the header line.  The database is located on a separate worksheet, so we first click on that worksheet tab: Next we select the entire database, not including the header line: …and press Enter.  The range of cells that represents the database (in this case “’Product Database’!A2:D7”) is entered automatically for us into the second argument. Now we need to enter the third argument, Col_index_num.  We use this argument to specify to VLOOKUP which piece of information from the database, associate with our item code in A11, we wish to have returned to us.  In this particular example, we wish to have the item’s description returned to us.  If you look on the database worksheet, you’ll notice that the “Description” column is the second column in the database.  
This means that we must enter a value of “2” into the Col_index_num box: It is important to note that we are not entering a “2” here because the “Description” column is in the B column on that worksheet.  If the database happened to start in column K of the worksheet, we would still enter a “2” in this field. Finally, we need to decide whether to enter a value into the final VLOOKUP argument, Range_lookup.  This argument requires either a true or false value, or it should be left blank.  When using VLOOKUP with databases (as is true 90% of the time), the way to decide what to put in this argument can be thought of as follows: If the first column of the database (the column that contains the unique identifiers) is sorted alphabetically/numerically in ascending order, then it’s possible to enter a value of true into this argument, or leave it blank. If the first column of the database is not sorted, or it’s sorted in descending order, then you must enter a value of false into this argument. As the first column of our database is not sorted, we enter false into this argument: That’s it!  We’ve entered all the information required for VLOOKUP to return the value we need.  Click the OK button and notice that the description corresponding to item code “R99245” has been correctly entered into cell B11: The formula that was created for us looks like this: If we enter a different item code into cell A11, we will begin to see the power of the VLOOKUP function:  The description cell changes to match the new item code: We can perform a similar set of steps to get the item’s price returned into cell E11.  Note that the new formula must be created in cell E11.  The result will look like this: …and the formula will look like this: Note that the only difference between the two formulae is that the third argument (Col_index_num) has changed from a “2” to a “3” (because we want data retrieved from the 3rd column in the database). If we decided to buy 2 of these items, we would enter a “2” into cell D11.  We would then enter a simple formula into cell F11 to get the line total: =D11*E11 …which looks like this… Completing the Invoice Template We’ve learned a lot about VLOOKUP so far.  In fact, we’ve learned all we’re going to learn in this article.  It’s important to note that VLOOKUP can be used in other circumstances besides databases.  This is less common, and may be covered in future How-To Geek articles. Our invoice template is not yet complete.  In order to complete it, we would do the following: We would remove the sample item code from cell A11 and the “2” from cell D11.  This will cause our newly created VLOOKUP formulae to display error messages: We can remedy this by judicious use of Excel’s IF() and ISBLANK() functions.  We change our formula from this…       =VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE) …to this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE)) We would copy the formulas in cells B11, E11 and F11 down to the remainder of the item rows of the invoice.  Note that if we do this, the resulting formulas will no longer correctly refer to the database table.  We could fix this by changing the cell references for the database to absolute cell references.  Alternatively – and even better – we could create a range name for the entire product database (such as “Products”), and use this range name instead of the cell references.  
The formula would change from this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE)) …to this…       =IF(ISBLANK(A11),"",VLOOKUP(A11,Products,2,FALSE)) …and then copy the formulas down to the rest of the invoice item rows. We would probably “lock” the cells that contain our formulae (or rather unlock the other cells), and then protect the worksheet, in order to ensure that our carefully constructed formulae are not accidentally overwritten when someone comes to fill in the invoice. We would save the file as a template, so that it could be reused by everyone in our company. If we were feeling really clever, we would create a database of all our customers in another worksheet, and then use the customer ID entered in cell F5 to automatically fill in the customer’s name and address in cells B6, B7 and B8. If you would like to practice with VLOOKUP, or simply see our resulting Invoice Template, it can be downloaded from here.
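As one last illustration, the customer-lookup idea mentioned above would use exactly the same pattern – something like this in cell B6, assuming we had created a range name Customers whose second column holds the customer’s name (the range name and column position are assumptions for the example):

    =IF(ISBLANK(F5),"",VLOOKUP(F5,Customers,2,FALSE))

Cells B7 and B8 would then use the same formula with the column index changed to pick up the address fields.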

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:

    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON

        DECLARE @Output VARCHAR(MAX)
        SET @Output = ''

        SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'

        IF @Output > ''
        BEGIN
            SET @Output = CHAR(13) + CHAR(10) + @Output
            EXEC tSQLt.Fail @Output
        END
    END;

Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  
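One small aside: if you ever want to run the same tests by hand from a query window, rather than through the SQL Test UI or the CI build, tSQLt exposes stored procedures for this – a minimal sketch, using the MyChecks test class created above:

    -- run every tSQLt test installed in the database
    EXEC tSQLt.RunAll;

    -- or run just the tests in one test class
    EXEC tSQLt.Run 'MyChecks';

Either call prints a pass/fail summary in the Messages pane of SSMS.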

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps up a rapid pace of change, focused especially on infrastructure and performance improvements as well as new feature development. This is reflected in a sample of statistics comparing XFS, Ext4+JBD2 and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10 – thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in a production environment. Readahead of log objects during recovery, which speeds up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stability – XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more; I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files – the claim that XFS does not work well for small files is an old story by now. Best choice (maybe) for distributed PB-scale filesystems – e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) – Ext4 does not support a single file larger than around 15.95TB. Scalability – any objection to XFS being best on this point? :) XFS deals with transaction concurrency better than Ext4 – why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc: Ext4 is widely used and has proved fast and stable under various loads and scenarios, XFS just needs more customers, and Btrfs is still on the road to manhood. 
Ceph Introduction (led by Li Wang) This is a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34, and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate metadata and data storage and reduce file access time: a file access needs two rounds of communication – fetch the metadata from the MDS and then get the data from an OSD – and small file access is also limited by network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way they can reduce the overhead of calculating the data offset and save the round trip to the OSD. For this feature they have only run some small-scale testing, but really saw noticeable improvements. Test environment: Intel 2-CPU 12-core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks as if Ceph has not even been rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li) Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is to use SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges in using SSD efficiently for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages with an SSD in use. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs. IO pattern: In most cases the swap IO has an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster is added on top of the previous change; it helps a reclaimer do sequential IO, and the block layer can merge the IO more easily. TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (accessed) bit, then the 'P' (present) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: A page fault does iodepth=1 sync IO, but it's a bit of a waste to issue only a page-sized IO. The obvious solution is doing swap readahead. 
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there). SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to a cluster. Misc – lock contention: With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current state of memory compression. There are now 3 projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but they all have their own pros and cons. ZSWAP: It is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied"; it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: it decompresses the page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages. ZRAM: Acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred on embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are some discussions of merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE: Handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full stripe writes to improve performance. The second he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if you enable its Dynamic Logical Capacity function module, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address - using the standard system call mmap(). He said that it is very useful for database logging now. 
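To illustrate what "directly exposed to the host system via mmap()" means in practice, a userspace sketch might look roughly like this (the device path and mapping size are invented for the example – the real interface depends on the vendor's driver):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical device node exposed by the NV-DRAM driver */
        int fd = open("/dev/nvdram0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        size_t len = 4096;
        /* map the nonvolatile region straight into our address space */
        char *log = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (log == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        /* ordinary stores now land in NV-DRAM - handy for a database log */
        memcpy(log, "commit record", 14);

        munmap(log, len);
        close(fd);
        return 0;
    }

The point is simply that, once mapped, ordinary loads and stores hit the nonvolatile memory with no read()/write() system calls in the data path.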
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially file systems; e.g. journaling may need a redesign to fully utilize this nonvolatile memory. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on a 32/64-node cluster, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work alongside DLM at the same time. It looks as if this idea is inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find and fix bugs as the complexity of Linux keeps growing. Generally, we can do this with the following kinds of tools: Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, there are false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo); Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers. Fuzzers/test suites, e.g. Trinity is a great tool – it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving toward automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases still has too much overhead to run at all times during debugging. Tools to read/understand source, e.g. grep/cscope, work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel) – give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands – but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao) The speaker explained the purpose of each kind of namespace – mount/UTS/IPC/Network/Pid/User – as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than the LXC userspace tools. 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future: the first is audit, though it is not clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is – do we really need it? In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already covered above. Misc There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic. From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted: I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating. Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ... A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ... A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ... The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system. GET /foos/{id} # read a Foo POST /foos/{id} # create a Foo PUT /foos/{id} # update a Foo Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment. 
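To make that guess concrete, a hypermedia-style response for a single Foo might look something like the sketch below – purely illustrative, with field names and structure invented for the example rather than taken from anything Fielding prescribes:

    {
      "name": "Some Foo",
      "links": [
        { "rel": "self",   "href": "/foos/17" },
        { "rel": "parent", "href": "/foos" },
        { "rel": "update", "href": "/foos/17", "method": "PUT" },
        { "rel": "delete", "href": "/foos/17", "method": "DELETE" }
      ]
    }

The client would start at /foos and discover every subsequent action from links like these, rather than from out-of-band documentation of URI patterns.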
Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck. So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying and how do you put it into practice when documenting/implementing REST APIs? Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).

    Read the article

  • Auto blocking attacking IP address

    - by dong
    This is to share my PowerShell code online. I original asked this question on MSDN forum (or TechNet?) here: http://social.technet.microsoft.com/Forums/en-US/winserversecurity/thread/f950686e-e3f8-4cf2-b8ec-2685c1ed7a77 In short, this is trying to find attacking IP address then add it into Firewall block rule. So I suppose: 1, You are running a Windows Server 2008 facing the Internet. 2, You need to have some port open for service, e.g. TCP 21 for FTP; TCP 3389 for Remote Desktop. You can see in my code I’m only dealing with these two since that’s what I opened. You can add further port number if you like, but the way to process might be different with these two. 3, I strongly suggest you use STRONG password and follow all security best practices, this ps1 code is NOT for adding security to your server, but reduce the nuisance from brute force attack, and make sys admin’s life easier: i.e. your FTP log won’t hold megabytes of nonsense, your Windows system log will not roll back and only can tell you what happened last month. 4, You are comfortable with setting up Windows Firewall rules, in my code, my rule has a name of “MY BLACKLIST”, you need to setup a similar one, and set it to BLOCK everything. 5, My rule is dangerous because it has the risk to block myself out as well. I do have a backup plan i.e. the DELL DRAC5 so that if that happens, I still can remote console to my server and reset the firewall. 6, By no means the code is perfect, the coding style, the use of PowerShell skills, the hard coded part, all can be improved, it’s just that it’s good enough for me already. It has been running on my server for more than 7 MONTHS. 7, Current code still has problem, I didn’t solve it yet, further on this point after the code. :)    #Dong Xie, March 2012  #my simple code to monitor attack and deal with it  #Windows Server 2008 Logon Type  #8: NetworkCleartext, i.e. FTP  #10: RemoteInteractive, i.e. RDP    $tick = 0;  "Start to run at: " + (get-date);    $regex1 = [regex] "192\.168\.100\.(?:101|102):3389\s+(\d+\.\d+\.\d+\.\d+)";  $regex2 = [regex] "Source Network Address:\t(\d+\.\d+\.\d+\.\d+)";    while($True) {   $blacklist = @();     "Running... (tick:" + $tick + ")"; $tick+=1;    #Port 3389  $a = @()  netstat -no | Select-String ":3389" | ? { $m = $regex1.Match($_); `    $ip = $m.Groups[1].Value; if ($m.Success -and $ip -ne "10.0.0.1") {$a = $a + $ip;} }  if ($a.count -gt 0) {    $ips = get-eventlog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+10"} | foreach { `      $m = $regex2.Match($_.Message); $ip = $m.Groups[1].Value; $ip; } | Sort-Object | Tee-Object -Variable list | Get-Unique    foreach ($ip in $a) { if ($ips -contains $ip) {      if (-not ($blacklist -contains $ip)) {        $attack_count = ($list | Select-String $ip -SimpleMatch | Measure-Object).count;        "Found attacking IP on 3389: " + $ip + ", with count: " + $attack_count;        if ($attack_count -ge 20) {$blacklist = $blacklist + $ip;}      }      }    }  }      #FTP  $now = (Get-Date).AddMinutes(-5); #check only last 5 mins.     #Get-EventLog has built-in switch for EventID, Message, Time, etc. but using any of these it will be VERY slow.  
$count = (Get-EventLog Security -Newest 1000 | Where-Object {$_.EventID -eq 4625 -and $_.Message -match "Logon Type:\s+8" -and `              $_.TimeGenerated.CompareTo($now) -gt 0} | Measure-Object).count;  if ($count -gt 50) #threshold  {     $ips = @();     $ips1 = dir "C:\inetpub\logs\LogFiles\FPTSVC2" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;       $ips2 = dir "C:\inetpub\logs\LogFiles\FTPSVC3" | Sort-Object -Property LastWriteTime -Descending `       | select -First 1 | gc | select -Last 200 | where {$_ -match "An\+error\+occured\+during\+the\+authentication\+process."} `        | Select-String -Pattern "(\d+\.\d+\.\d+\.\d+)" | select -ExpandProperty Matches | select -ExpandProperty value | Group-Object `        | where {$_.Count -ge 10} | select -ExpandProperty Name;     $ips += $ips1; $ips += $ips2; $ips = $ips | where {$_ -ne "10.0.0.1"} | Sort-Object | Get-Unique;         foreach ($ip in $ips) {       if (-not ($blacklist -contains $ip)) {        "Found attacking IP on FTP: " + $ip;        $blacklist = $blacklist + $ip;       }     }  }        #Firewall change <# $current = (netsh advfirewall firewall show rule name="MY BLACKLIST" | where {$_ -match "RemoteIP"}).replace("RemoteIP:", "").replace(" ","").replace("/255.255.255.255",""); #inside $current there is no \r or \n need remove. foreach ($ip in $blacklist) { if (-not ($current -match $ip) -and -not ($ip -like "10.0.0.*")) {"Adding this IP into firewall blocklist: " + $ip; $c= 'netsh advfirewall firewall set rule name="MY BLACKLIST" new RemoteIP="{0},{1}"' -f $ip, $current; Invoke-Expression $c; } } #>    foreach ($ip in $blacklist) {    $fw=New-object –comObject HNetCfg.FwPolicy2; # http://blogs.technet.com/b/jamesone/archive/2009/02/18/how-to-manage-the-windows-firewall-settings-with-powershell.aspx    $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1; # Potential bug here?    if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*"))      {"Adding this IP into firewall blocklist: " + $ip;         $myrule.RemoteAddresses+=(","+$ip);      }  }    Wait-Event -Timeout 30 #pause 30 secs    } # end of top while loop.   Further points: 1, I suppose the server is listening on port 3389 on server IP: 192.168.100.101 and 192.168.100.102, you need to replace that with your real IP. 2, I suppose you are Remote Desktop to this server from a workstation with IP: 10.0.0.1. Please replace as well. 3, The threshold for 3389 attack is 20, you don’t want to block yourself just because you typed your password wrong 3 times, you can change this threshold by your own reasoning. 4, FTP is checking the log for attack only to the last 5 mins, you can change that as well. 5, I suppose the server is serving FTP on both IP address and their LOG path are C:\inetpub\logs\LogFiles\FPTSVC2 and C:\inetpub\logs\LogFiles\FPTSVC3. Change accordingly. 6, FTP checking code is only asking for the last 200 lines of log, and the threshold is 10, change as you wish. 7, the code runs in a loop, you can set the loop time at the last line. 
To run this code, copy and paste to your editor, finish all the editing, get it to your server, and open an CMD window, then type powershell.exe –file your_powershell_file_name.ps1, it will start running, you can Ctrl-C to break it. This is what you see when it’s running: This is when it detected attack and adding the firewall rule: Regarding the design of the code: 1, There are many ways you can detect the attack, but to add an IP into a block rule is no small thing, you need to think hard before doing it, reason for that may include: You don’t want block yourself; and not blocking your customer/user, i.e. the good guy. 2, Thus for each service/port, I double check. For 3389, first it needs to show in netstat.exe, then the Event log; for FTP, first check the Event log, then the FTP log files. 3, At three places I need to make sure I’m not adding myself into the block rule. –ne with single IP, –like with subnet.   Now the final bit: 1, The code will stop working after a while (depends on how busy you are attacked, could be weeks, months, or days?!) It will throw Red error message in CMD, don’t Panic, it does no harm, but it also no longer blocking new attack. THE REASON is not confirmed with MS people: the COM object to manage firewall, you can only give it a list of IP addresses to the length of around 32KB I think, once it reaches the limit, you get the error message. 2, This is in fact my second solution to use the COM object, the first solution is still in the comment block for your reference, which is using netsh, that fails because being run from CMD, you can only throw it a list of IP to 8KB. 3, I haven’t worked the workaround yet, some ideas include: wrap that RemoteAddresses setting line with error checking and once it reaches the limit, use the newly detected IP to be the list, not appending to it. This basically reset your block rule to ground zero and lose the previous bad IPs. This does no harm as it sounds, because given a certain period has passed, any these bad IPs still not repent and continue the attack to you, it only got 30 seconds or 20 guesses of your password before you block it again. And there is the benefit that the bad IP may turn back to the good hands again, and you are not blocking a potential customer or your CEO’s home pc because once upon a time, it’s a zombie. Thus the ZEN of blocking: never block any IP for too long. 4, But if you insist to block the ugly forever, my other ideas include: You call MS support, ask them how can we set an arbitrary length of IP addresses in a rule; at least from my experiences at the Forum, they don’t know and they don’t care, because they think the dynamic blocking should be done by some expensive hardware. Or, from programming perspective, you can create a new rule once the old is full, then you’ll have MY BLACKLIST1, MY  BLACKLIST2, MY BLACKLIST3, … etc. Once in a while you can compile them together and start a business to sell your blacklist on the market! Enjoy the code! p.s. (PowerShell is REALLY REALLY GREAT!)
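As a postscript, the workaround sketched in point 3 above (stop appending once the rule’s address list is full, and reset it to just the newly detected IP) could look roughly like this – untested, and only a sketch of the idea rather than a drop-in replacement for the loop in the script:

    foreach ($ip in $blacklist) {
      $fw = New-Object -ComObject HNetCfg.FwPolicy2
      $myrule = $fw.Rules | where {$_.Name -eq "MY BLACKLIST"} | select -First 1
      if (-not ($myrule.RemoteAddresses -match $ip) -and -not ($ip -like "10.0.0.*")) {
        try {
          $myrule.RemoteAddresses += ("," + $ip)   # normal case: append
        } catch {
          # the list has hit its limit (or the value was rejected):
          # start the blocklist again from just this IP
          "RemoteAddresses full, resetting blocklist to: " + $ip
          $myrule.RemoteAddresses = $ip
        }
      }
    }

This loses the previously blocked addresses, but as argued above that is no great harm – a persistent attacker simply gets blocked again on the next pass of the loop.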

    Read the article

  • Databind gridview with LINQ

    - by Anders Svensson
    I have two database tables, one for Users of a web site, containing the fields "UserID", "Name" and the foreign key "PageID". And the other with the fields "PageID" (here the primary key), and "Url". I want to be able to show the data in a gridview with data from both tables, and I'd like to do it with databinding in the aspx page. I'm not sure how to do this, though, and I can't find any good examples of this particular situation. Here's what I have so far: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="LinqBinding._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Testing LINQ </h2> <asp:GridView ID="GridView1" runat="server" DataSourceID="LinqDataSourceUsers" AutoGenerateColumns="false"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="UserID" HeaderText="UserID" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="PageID" HeaderText="PageID" /> <asp:TemplateField HeaderText="Pages"> <ItemTemplate <asp:DropDownList ID="DropDownList1" DataSourceID="LinqDataSourcePages" SelectedValue='<%#Bind("PageID") %>' DataTextField="Url" DataValueField="PageID" runat="server"> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:LinqDataSource ID="LinqDataSourcePages" runat="server" ContextTypeName="LinqBinding.UserDataContext" EntityTypeName="" TableName="Pages"> </asp:LinqDataSource> <asp:LinqDataSource ID="LinqDataSourceUsers" runat="server" ContextTypeName="LinqBinding.UserDataContext" EntityTypeName="" TableName="Users"> </asp:LinqDataSource> </asp:Content> But this only works in so far as it gets the user table into the gridview (that's not a problem), and I get the page data into the dropdown, but here's the problem: I of course get ALL the page data in there, not just the pages for each user on each row. So how do I put some sort of "where" constraint on dropdown for each row to only show the pages for the user in that row? (Also, to be honest I'm not sure I'm getting the foreign key relationship right, because I'm not too used to working with relationships). EDIT: I think I have set up the relationship incorrectly. I keep getting the message that "Pages" doesn't exist as a property on the User object. And I guess it can't since the relationship right now is one way. So I tried to create a many-to-many relationship. Again, my database knowledge is a bit limited, but I added a so called "junction table" with the fields UserID and PageID, same as the other tables' primary keys. I wasn't able to make both of these primary keys in the junction table though (which it looked like some people had in examples I've seen...but since it wasn't possible I guessed they shouldn't be). Anyway, I created a relationship from each table and created new LINQ classes from that. But then what do I do? I set the junction table as the Linq data source, since I guessed I had to do this to access both tables, but that doesn't work. Then it complains there is no Name property on that object. So how do I access the related tables? 
Here's what I have now with the many-to-many relationship: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ManyToMany._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Many to many LINQ </h2> <asp:GridView ID="GridView1" runat="server" DataSourceID="LinqDataSource1" AutoGenerateColumns="false"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="UserID" HeaderText="UserID" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="PageID" HeaderText="PageID" /> <asp:TemplateField HeaderText="Pages"> <ItemTemplate> <asp:DropDownList ID="DropDownList1" DataSource='<%#Eval("Pages") %>' SelectedValue='<%#Bind("PageID") %>' DataTextField="Url" DataValueField="PageID" runat="server"> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:LinqDataSource ID="LinqDataSource1" runat="server" ContextTypeName="ManyToMany.UserPageDataContext" EntityTypeName="" TableName="UserPages"> </asp:LinqDataSource> </asp:Content>
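(For reference, what I was trying to end up with in the junction table is something like this – a composite primary key across both foreign keys, assuming integer keys, so purely illustrative:)

    CREATE TABLE UserPages (
        UserID INT NOT NULL,
        PageID INT NOT NULL,
        CONSTRAINT PK_UserPages PRIMARY KEY (UserID, PageID),
        CONSTRAINT FK_UserPages_Users FOREIGN KEY (UserID) REFERENCES Users (UserID),
        CONSTRAINT FK_UserPages_Pages FOREIGN KEY (PageID) REFERENCES Pages (PageID)
    );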

    Read the article

< Previous Page | 599 600 601 602 603 604 605 606 607 608 609 610  | Next Page >