Search Results

Search found 6055 results on 243 pages for 'forms'.

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3, NuGet, IIS Express 7.5, SQL Server Compact Edition 4, Web Deploy and Web Farm Framework 2.0, Orchard 1.0, and WebMatrix 1.0. The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

    ASP.NET MVC 3

    Today we are shipping the final release of ASP.NET MVC 3. You can download and install ASP.NET MVC 3 here. The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features. Some of the improvements include:

    Razor

    ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine). Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months:

    Introducing Razor
    New @model keyword in Razor
    Layouts with Razor
    Server-Side Comments with Razor
    Razor’s @: and <text> syntax
    Implicit and Explicit code nuggets with Razor
    Layouts and Sections with Razor

    Today’s release supports full code intellisense for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

    JavaScript Improvements

    ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach. Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML5 “data-” attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries. ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server. This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends. We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project). Previous releases of ASP.NET MVC included the core jQuery library. ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios). We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

    Improved Validation

    ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
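    To make this concrete, here is a minimal sketch of a model class using validation attributes of the kind described here; the class and property names are hypothetical, not taken from the post. MVC 3's helpers emit the unobtrusive data-* validation attributes for rules like these automatically:

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc; // home of the new [Compare] attribute in MVC 3

        // Hypothetical model, used only to illustrate attribute-driven validation.
        public class RegisterModel
        {
            [Required]
            [StringLength(20, MinimumLength = 3)]
            public string UserName { get; set; }

            [Required]
            public string Password { get; set; }

            // Validates that this property's value matches the Password property.
            [Compare("Password")]
            public string ConfirmPassword { get; set; }
        }

    Client-side and unobtrusive validation can also be toggled per application via the ClientValidationEnabled and UnobtrusiveJavaScriptEnabled appSettings in web.config.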
    Today’s release also includes built-in support for Remote Validation – which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3. This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model. ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4. ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated. This enables richer scenarios where you can validate the current value based on another property of the model. We’ve shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.

    You can use any data access API or technology with ASP.NET MVC. This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications. These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support). The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project. It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers). You can learn more about it – and install it via NuGet today – from Steve Sanderson’s MvcScaffolding blog post.

    Output Caching

    Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing. This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server. The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site. It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future.

    Better Dependency Injection

    ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers. You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects.

    Other Goodies

    ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner. Here are just a few examples:

    Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
    Improved Add->View Scaffolding support that enables the generation of even cleaner view templates.
    New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views.
    Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
    New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models.
    Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller.
    New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
    New Html.Raw() helper to indicate that output should not be HTML encoded.
    New Crypto helpers for salting and hashing passwords.
    And much, much more…

    Learn More about ASP.NET MVC 3

    We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead. Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application (VB and C#), and Building the ASP.NET MVC 3 Music Store. We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

    How to Upgrade Existing Projects

    ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3. The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release. MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes. Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your existing projects.

    Localized Builds

    Today’s ASP.NET MVC 3 release is available in English. We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days. I’ll blog pointers to the localized downloads once they are available.

    NuGet

    Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries). You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable. The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.

    NuGet Gallery

    This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet. The site also now allows developers to optionally submit new packages that they wish to share with others. You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today. We hope to have thousands there in the future.

    IIS Express 7.5

    Today we are also shipping IIS Express 7.5. IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios. It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio today with the full power of IIS. Specifically:

    It’s lightweight and easy to install (less than 5Mb download and a quick install)
    It does not require an administrator account to run/debug applications from Visual Studio
    It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
    It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
    It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
    It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms

    IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk. It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios. You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server. The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express. Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.

    SQL Server Compact Edition 4

    Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4). SQL CE is a free, embedded, database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it. You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine. No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded. SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax. This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE. This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios. With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET). This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments. Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE. It is also totally free.

    Tooling Support with VS 2010 SP1

    Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects. Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.

    Web Deploy and Web Farm Framework 2.0

    Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2. These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework and Automating Deployment with Microsoft Web Deploy. Visit the http://iis.net website to learn more and install them. Both are free.

    Orchard 1.0

    Today we are also releasing Orchard v1.0. Orchard is a free, open source, community based project. It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built into Orchard). Read these tutorials to learn more about how you can set up and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage). Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.

    WebMatrix 1.0

    WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development. It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla). Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites. WebMatrix includes IIS Express, SQL CE 4, and ASP.NET – providing an integrated web-server, database and programming framework combination. It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer. Visit http://microsoft.com/web to download and install it today.

    Summary

    I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better. A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them!

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Integrate Nitro PDF Reader with Windows 7

    - by Matthew Guay
    Would you like a lightweight PDF reader that integrates nicely with Office and Windows 7? Here we look at the new Nitro PDF Reader, a nice PDF viewer that also lets you create and mark up PDF files. Adobe Reader is the de-facto PDF viewer, but it only lets you view PDFs and not much else. Additionally, it doesn’t fully integrate with 64-bit editions of Vista and Windows 7. There are many alternate PDF readers, but Nitro PDF Reader is a new entry into this field that offers more features than most PDF readers. From the creators of the popular free PrimoPDF printer, the new Reader lets you create PDFs from a variety of file formats and mark up existing PDFs with notes, highlights, stamps, and more in addition to viewing PDFs. It also integrates great with Windows 7 using the Office 2010 ribbon interface.

    Getting Started

    Download the free Nitro PDF Reader (link below) and install as normal. Nitro PDF Reader has separate versions for 32 & 64-bit editions of Windows, so download the correct one for your computer. Note: Nitro PDF Reader is still in Beta testing, so only install it if you’re comfortable with using beta software. On first run, Nitro PDF Reader will ask if you want to make it the default PDF viewer. If you don’t want to, make sure to uncheck the box beside Always perform this check to keep it from opening this prompt every time you use it. It will also open an introductory PDF the first time you run it so you can quickly get acquainted with its features.

    Windows 7 Integration

    One of the first things you’ll notice is that Nitro PDF Reader integrates great with Windows 7. The ribbon interface fits right in with native applications such as WordPad and Paint, as well as Office 2010. If you set Nitro PDF Reader as your default PDF viewer, you’ll see thumbnails of your PDFs in Windows Explorer. If you turn on the Preview Pane, you can read full PDFs in Windows Explorer. Adobe Reader lets you do this in 32-bit versions, but Nitro PDF works in 64-bit versions too. The PDF preview even works in Outlook. If you receive an email with a PDF attachment, you can select the PDF and view it directly in the Reading Pane. Click the Preview file button, and you can uncheck the box at the bottom so PDFs will automatically open for preview if you want. Now you can read your PDF attachments in Outlook without opening them separately. This works in both Outlook 2007 and 2010.

    Edit your PDFs

    Adobe Reader only lets you view PDF files, and you can’t save data you enter in PDF forms. Nitro PDF Reader, however, gives you several handy markup tools you can use to edit your PDFs. When you’re done, you can save the final PDF, including information entered into forms. With the ribbon interface, it’s easy to find the tools you want to edit your PDFs. Here we’ve highlighted text in a PDF and added a note to it. We can now save these changes, and they’ll look the same in any PDF reader, including Adobe Reader. You can also enter new text in PDFs. This will open a new tab in the ribbon, where you can select basic font settings. Select the Click To Finish button in the ribbon when you’re finished editing text. Or, if you want to use the text or pictures from a PDF in another application, you can choose to extract them directly in Nitro PDF Reader.

    Create PDFs

    One of the best features of Nitro PDF Reader is the ability to create PDFs from almost any file. Nitro adds a new virtual printer to your computer that creates PDF files from anything you can print.
    Print your file as normal, but select the Nitro PDF Creator (Reader) printer. Enter a name for your PDF, choose whether you want to edit the PDF properties, and click Create. If you choose to edit the PDF properties, you can add your name and information to the file, select the initial view, encrypt it, and restrict permissions. Alternately, you can create a PDF from almost any file by simply drag-and-dropping it into Nitro PDF Reader. It will automatically convert the file to PDF and open it in a new tab in Nitro PDF. Now from the File menu you can send the PDF as an email attachment so anyone can view it. Make sure to save the PDF before closing Nitro, as it does not automatically save the PDF file.

    Conclusion

    Nitro PDF Reader is a nice alternative to Adobe Reader, and offers some features that are only available in the more expensive Adobe Acrobat. With great Windows 7 integration, including full support for 64-bit editions, Nitro fits in with the Windows and Office experience very nicely. If you have tried out Nitro PDF Reader, leave a comment and let us know what you think.

    Link: Download Nitro PDF Reader

    Read the article

  • Moving from Winforms to WPF

    - by Elmex
    I am a long-time, experienced Windows Forms developer, but now it's time to move to WPF because a new WPF project is coming soon to me and I have only a short lead time to prepare myself to learn WPF. What is the best way for an experienced WinForms developer? Can you give me some hints and recommendations to learn WPF in a very short time? Are there simple sample WPF solutions and short (video) tutorials? Which books do you recommend? Is www.windowsclient.net a good starting point? Are there alternatives to the official Microsoft site? Thanks in advance for your help!

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we'd be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you'll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX...

    Read the article

  • Multi-tenancy - single database vs multiple database

    - by RichardW1001
    We have a number of clients whose systems share some functionality, but also have quite a degree of diversity. The number of clients is growing - always a healthy thing! - and the diversity between their businesses is also increasing. At present there is a single ASP.Net (Web Forms) Web Site (as opposed to web project), which has sub-folders for each tenant, with that tenant's non-standard pages. There is a separate model project, which deals with database access and business logic. Which is preferable - and, most importantly, why: (a) one database per client, containing only the features associated with that client; or (b) a single database shared by all clients, where only a subset of tables is used by any one client? The main concerns within the business are over maintenance of multiple assets (backups, version control and the like) and promoting re-use as much as possible. How would you ensure these concerns are addressed, which solution is preferable, and why? (I have also been compiling responses to similar questions.)
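    For the shared-database option, a common pattern is to key every shared table by tenant and filter each query on it. A minimal sketch, with hypothetical table and column names not taken from the question:

        using System.Data.SqlClient;

        // Illustrates option (b): one shared database where every row is scoped
        // by a TenantId column that all queries must filter on.
        public static class OrderStore
        {
            public static int CountOrders(string connectionString, int tenantId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Orders WHERE TenantId = @tenantId", conn))
                {
                    cmd.Parameters.AddWithValue("@tenantId", tenantId);
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }
        }

    Option (a) removes the need for the TenantId filter but multiplies the assets to back up and version; option (b) keeps a single asset to maintain, at the cost of having to enforce the tenant filter on every query.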

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments.

    For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function. It is not the first time Tim has expressed these views, and not the first time I have responded to his assertions. Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken. In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken.

    There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term ‘rule processing’ is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.

    As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous. Rule processing does not, for example, exist in splendid isolation to natural language processing. On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms. Rule processing does not exist in splendid isolation to CEP. On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham). Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic. Rule processing does not even live in splendid isolation to neural networks. For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets.

    What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity.

    Does rule processing offer a ‘one size fits all’? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice.

    Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.
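    To make the form/function distinction concrete, here is a toy sketch of a production rule in C# - not from any particular engine, just an illustration that the 'If...Then' shape is fixed while the condition bound to it can be arbitrarily complex:

        using System;
        using System.Collections.Generic;

        // A toy rule: a declarative condition bound to an action-based consequence.
        public class Rule<TFact>
        {
            public Func<TFact, bool> If { get; set; }   // propositional condition
            public Action<TFact> Then { get; set; }     // consequence

            // Naive forward pass: fire every rule whose condition matches the fact.
            public static void RunAll(IEnumerable<Rule<TFact>> rules, TFact fact)
            {
                foreach (var rule in rules)
                    if (rule.If(fact))
                        rule.Then(fact);
            }
        }

    However sophisticated the predicate behind If becomes, the rule's form stays the same - which is the claim at issue: simplicity of form says nothing about simplicity of function.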

    Read the article

  • Telerik Reporting introduces WPF Report Viewer, the first reporting tool supporting all .NET desktop

    With the release of Telerik Reporting Q1 2010 Service Pack 1 we are proud to announce a very important addition to Telerik Reporting. Finally, our suite of report viewers is now complete, making Telerik Reporting the first reporting tool to support all .NET desktop and web platforms: ASP.NET, Silverlight (incl. out-of-browser support), Windows Forms, and WPF. The newest member of the viewer family is the WPF Report Viewer, which allows developers to deliver reports produced by Telerik Reporting to any rich application developed with WPF. The viewer supports all functionality available in the ASP.NET, Silverlight and WinForms viewers, including printing and exporting to all supported formats. Here is a quick overview of the most important features (in WPF disguise): Different technology, same report - the WPF Viewer is tightly integrated with Telerik Reporting and as such uses the same powerful reporting engine, which guarantees that you will ...

    Read the article

  • RSS Feeds currently on Simple-Talk

    - by Andrew Clarke
    There are a number of news-feeds for the Simple-Talk site, but for some reason they are well hidden. Whilst we set about reorganizing them, I thought it would be a good idea to list some of the more important ones. The most important one for almost all purposes is the Homepage RSS feed, which represents the blogs and articles that are placed on the homepage:

    Main Site Feed representing the Homepage

    which is good for most purposes but won't always have all the blogs, or may occasionally miss an article. If you aren't interested in all the content, you can just use the RSS feeds that are more relevant to your interests (we'll be increasing these categories soon):

    The newsfeed for SQL articles
    The .NET section newsfeed
    The newsfeed for Red Gate books
    The newsfeed for Opinion articles
    The SysAdmin section newsfeed

    If you want to get a more refined feed, you can pick and choose from the feeds for each category so as to make up your custom news-feed:

    In the SQL section: SQL Training, Learn SQL Server, Database Administration, TSQL Programming, SQL Server Performance, Backup and Recovery, SQL Tools, SSIS, SSRS (Reporting Services)
    In .NET: ASP.NET, Windows Forms, .NET Framework, .NET Performance, Visual Studio, .NET tools
    In Sysadmin: Exchange, General, Virtualisation, Unified Messaging, Powershell
    In Opinion: Geek of the Week, Opinion Pieces
    In Books: .NET Books, SQL Books, SysAdmin Books

    And all the blogs have got feeds. So although you can get all the blogs from the Main Blog Feed, you can also get individual RSS feeds: AdamRG's Blog, Alex.Davies's Blog, AliceE's Blog, Andrew Clarke's Blog, Andrew Hunter's Blog, Bart Read's Blog, Ben Adderson's Blog, BobCram's Blog, bradmcgehee's Blog, Brian Donahue's Blog, Charles Brown's Blog, Chris Massey's Blog, CliveT's Blog, Damon's Blog, David Atkinson's Blog, David Connell's Blog, Dr Dionysus's Blog, drsql's Blog, FatherJack's Blog, Flibble's Blog, Gareth Marlow's Blog, Helen Joyce's Blog, James's Blog, Jason Crease's Blog, John Magnabosco's Blog, Laila's Blog, Lionel's Blog, Matt Lee's Blog, mikef's Blog, Neil Davidson's Blog, Nigel Morse's Blog, Phil Factor's Blog, red@work's Blog, reka.burmeister's Blog, Richard Mitchell's Blog, RobbieT's Blog, RobertChipperfield's Blog, Rodney's Blog, Roger Hart's Blog, Simon Cooper's Blog, Simon Galbraith's Blog, TheFutureOfMonitoring's Blog, Tim Ford's Blog, Tom Crossman's Blog, Tony Davis's Blog.

    As well as these blogs, you also have the forums: SQL Server for Beginners Forum, Programming SQL Server Forum, Administering SQL Server Forum, .NET framework Forum, Windows Forms Forum, ASP.NET Forum, ADO.NET Forum.

    Read the article

  • Critical Patch Update For Oracle Fusion Middleware – CPU October 2012 by Daniel Mortimer

    - by JuergenKress
    The latest Critical Patch Update (CPU) has been released for Oracle products. Start your reading here: Patch Set Update and Critical Patch Update October 2012 Availability Document [ID 1477727.1]. It covers:

    Oracle Fusion Middleware 11g Release 2: 11.1.2.0
    Oracle Fusion Middleware 11g Release 1: 11.1.1.4 (Portal, Forms, Reports and Discoverer), 11.1.1.5, 11.1.1.6
    Oracle Application Server 10g Release 3: 10.1.3.5

    Read the full article here.

    WebLogic Partner Community: For regular information become a member in the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID
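    ASP.NET 4.0 addresses this with the new ClientIDMode property on controls. A minimal sketch, assuming a hypothetical page with a runat="server" form; with ClientIDMode.Static the rendered id is exactly the server-side ID, with no naming-container prefix:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class EditPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Renders as <input ... id="txtName" ...> instead of an
                // auto-prefixed id like "ctl00_MainContent_txtName".
                var txtName = new TextBox
                {
                    ID = "txtName",
                    ClientIDMode = ClientIDMode.Static
                };
                Form.Controls.Add(txtName);
            }
        }

    ClientIDMode also offers AutoID (the legacy behavior), Predictable (useful with data-bound controls), and Inherit, and it can be set in markup or at the page level as well as in code.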

    Read the article

  • When should JavaScript generate HTML?

    - by VirtuosiMedia
    I try to generate as little HTML from JavaScript as possible. Instead, I prefer to manipulate existing markup whenever I can and only generate HTML when I need to dynamically insert an element that isn't a good candidate for using Ajax. This, I believe, makes it far easier to maintain the code and quickly make changes to it because the markup is easier to read and trace. My rule of thumb is: HTML is for document structure, CSS is for presentation, JavaScript is for behavior. However, I've seen a lot of JS code that generates mounds of HTML, including entire forms and content-heavy modal dialogs. In general, which method is considered best practice? In what circumstances should JavaScript be used to generate HTML and when should it not?

    Read the article

  • March 21st Links: ASP.NET, ASP.NET MVC, AJAX, Visual Studio, Silverlight

    Here is the latest in my link-listing series. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET: URL Routing in ASP.NET 4 - Scott Mitchell has a nice article that talks about the new URL routing features coming to Web Forms...

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn’t because I didn’t believe in the value of TDD; it was more a matter of not completely understanding how to incorporate “test first” into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren’t exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer.

    I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.

    But then I got lucky – Jonathan McCracken contacted me and asked if I’d review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen’s popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken’s style – it’s fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD.

    The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken’s use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard. McCracken’s emphasis on real world, pragmatic development was clearly demonstrated in every tool choice, straight-forward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken’s thought process is easily understood and appreciated.

    The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.
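    For a flavor of the red-green-refactor loop the book teaches, here is a minimal sketch of a first controller test; NUnit is assumed, and the controller name merely echoes the Quote-O-Matic sample mentioned above, so treat both as hypothetical:

        using NUnit.Framework;
        using System.Web.Mvc;

        public class QuoteController : Controller
        {
            // Green: just enough implementation to satisfy the test below.
            public ActionResult Index()
            {
                return View();
            }
        }

        [TestFixture]
        public class QuoteControllerTests
        {
            [Test]
            public void Index_Returns_Default_View()
            {
                // Red: write this test first, watch it fail,
                // then add the action above to make it pass.
                var controller = new QuoteController();

                var result = controller.Index() as ViewResult;

                Assert.IsNotNull(result);
                Assert.AreEqual(string.Empty, result.ViewName); // default view
            }
        }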

    Read the article

  • Does code-generation increase the code quality?

    - by platzhirsch
    Arguing for code generation, I am looking for some reasons why, if at all, code generation increases code quality or otherwise supports quality assurance. To clarify what I mean by code generation, I can talk only about a project of mine: we use XML files to describe different relationships - in fact, our database schema. These XML files are used to generate our ORM framework and HTML forms which can be used to add, delete and modify entities. To my mind, it increases the quality, as human error is reduced. If something is implemented wrong, it is broken in the model. This is good, because the error might appear a lot faster, as more generated code is broken, too.
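    As a minimal sketch of the idea - the XML vocabulary and names below are invented for illustration, not taken from the question - a generator reads one entity description and emits a class, so a mistake in the model surfaces in every generated artifact at once:

        using System;
        using System.Text;
        using System.Xml.Linq;

        // Toy generator: turns an <entity> description into a C# class.
        public static class EntityCodeGen
        {
            public static string GenerateClass(XElement entity)
            {
                var sb = new StringBuilder();
                sb.AppendLine("public class " + (string)entity.Attribute("name"));
                sb.AppendLine("{");
                foreach (var prop in entity.Elements("property"))
                {
                    sb.AppendLine("    public " + (string)prop.Attribute("type") +
                                  " " + (string)prop.Attribute("name") + " { get; set; }");
                }
                sb.AppendLine("}");
                return sb.ToString();
            }

            public static void Main()
            {
                var xml = XElement.Parse(
                    "<entity name='Customer'>" +
                    "<property name='Id' type='int'/>" +
                    "<property name='Name' type='string'/>" +
                    "</entity>");
                Console.WriteLine(GenerateClass(xml));
            }
        }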

    Read the article

  • ADF Faces Layouts Demo - A Hidden Treasure

    - by shay.shmeltzer
    Laying out pages with ADF Faces containers is sometimes not as simple as we would have liked it to be - especially for people who are just getting started. There are some tricks that can help you achieve the layout that you are looking for. One great way to learn some of those tricks is to look at the new "Visual Design" section of the ADF Faces Hosted demo. For example, look at the Form Layout part - and you'll see nicely aligned forms that contain various UI layout scenarios. Want to learn how this has been achieved? Just click the "page source" link at the top right - and you can see how that layout has been done. Don't forget that you can also download the full demo source here. One other good resource I came across today is the "Designing well known websites with ADF Rich Faces" presentation from Maiko Rocha and George Magessy - it's missing the demo part, but you can still learn a lot from the slides: Designing well known websites with ADF Rich Faces

    Read the article

  • A few announcements for those in the UK

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is a quick post to announce a few upcoming events for those in the UK.

    I'll be presenting in Glasgow, Scotland on March 25th

    I'm doing a free 5 hour presentation in Glasgow on March 25th. I'll be covering VS 2010, ASP.NET 4, ASP.NET Web Forms 4, ASP.NET MVC 2, Silverlight, and potentially show off a few new things that haven't been announced yet. You can learn...

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each other just for raw throughput of the request getting to the endpoint and back out to the client, with as little processing in the actual endpoint logic as possible (aka Hello World!).

    I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so that often it's not that critical to test load performance. This post is not meant to make a point or even come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub

    I looked at this data for these technologies:

    ASP.NET Web API
    ASP.NET MVC
    WebForms
    ASP.NET WebPages
    ASMX AJAX Services (couldn't get AJAX/JSON to run on IIS 8)
    WCF REST
    Raw ASP.NET HttpHandlers

    It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher, as the results were sometimes surprising. What I describe here is meant more to satisfy my curiosity than anything, and I thought it interesting enough to discuss on the blog :-)

    First test: Raw Throughput

    The first thing I did is test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up with for each tech.

    Http Handler

    The first is the lowest level approach, which is an HTTP handler.

        public class Handler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    WebForms

    Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page:

        public partial class HelloWorld_CodeBehind : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
                Response.End();
            }
        }

    while the Markup page only contains some static output via an expression:

        <%@ Page Language="C#" AutoEventWireup="false"
            CodeBehind="HelloWorld_Markup.aspx.cs"
            Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
        Hello World. Time is <%= DateTime.Now %>

    ASP.NET WebPages

    WebPages is the freestanding Razor implementation of ASP.NET. Here's the simple HelloWorld.cshtml page:

        Hello World @DateTime.Now

    WCF REST

    WCF REST was the token REST implementation for ASP.NET before WebAPI, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class WcfService
        {
            [OperationContract]
            [WebGet]
            public Stream HelloWorld()
            {
                var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
                var ms = new MemoryStream(data);
                // Add your operation implementation here
                return ms;
            }
        }

    WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client.

    ASP.NET AJAX (ASMX Services)

    I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box).

    ASP.NET MVC

    Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

        public class MvcPerformanceController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }

            public ActionResult HelloWorldCode()
            {
                return new ContentResult()
                {
                    Content = "Hello World. Time is: " + DateTime.Now.ToString()
                };
            }
        }

    ASP.NET WebAPI

    Next up is WebAPI, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

        public class WebApiPerformanceController : ApiController
        {
            [HttpGet]
            public HttpResponseMessage HelloWorldCode()
            {
                return new HttpResponseMessage()
                {
                    Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                                Encoding.UTF8, "text/plain")
                };
            }
        }

    Testing

    Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line looks like this:

        ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld

    which hits the specified URL 100,000 times with a load factor of 20 concurrent requests, and produces a summary of request counts, requests per second and response timings. It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of variance. You might also want to do an IISRESET to clear the Web Server. Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with:

    100,000 iterations
    Load factor of 20 concurrent connections
    IISReset before starting
    A short warm up run for API and MVC to make sure startup cost is mitigated

    Here is the batch file I used for the test:

        IISRESET

        REM make sure you add
        REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
        REM to your path so ab.exe can be found

        REM Warm up
        ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

        ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
        ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

    I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle, so the performance here should be

    Results

    Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising).

    As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All they really do is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. Even the slowest tech managed 5,700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period. Remember these are not static pages, but dynamic requests that are being served.

Another Test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-)

In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

    public class Person
    {
        public Person()
        {
            Id = 10;
            Name = "Rick";
            Entered = DateTime.Now;
        }

        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime Entered { get; set; }
    }

Here are the updated handler classes that use Person:

Handler

    public class Handler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var action = context.Request.QueryString["action"];
            if (action == "json")
                JsonRequest(context);
            else
                TextRequest(context);
        }

        public void TextRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        }

        public void JsonRequest(HttpContext context)
        {
            var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
            context.Response.ContentType = "application/json";
            context.Response.Write(json);
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

    [ServiceContract(Namespace = "")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class WcfService
    {
        [OperationContract]
        [WebGet]
        public Stream HelloWorld()
        {
            var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
            var ms = new MemoryStream(data);
            // Add your operation implementation here
            return ms;
        }

        [OperationContract]
        [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
        public Person HelloWorldJson()
        {
            // Add your operation implementation here
            return new Person();
        }
    }

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

    public class MvcPerformanceController : Controller
    {
        //
        // GET: /MvcPerformance/
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult HelloWorldCode()
        {
            return new ContentResult()
            {
                Content = "Hello World. Time is: " + DateTime.Now.ToString()
            };
        }

        public JsonResult HelloWorldJson()
        {
            return Json(new Person(), JsonRequestBehavior.AllowGet);
        }
    }

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET MVC internally uses the JavaScriptSerializer.

ASP.NET WebAPI

    public class WebApiPerformanceController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage HelloWorldCode()
        {
            return new HttpResponseMessage()
            {
                Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                            Encoding.UTF8, "text/plain")
            };
        }
        [HttpGet]
        public Person HelloWorldJson()
        {
            return new Person();
        }

        [HttpGet]
        public HttpResponseMessage HelloWorldJson2()
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Content = new ObjectContent<Person>(new Person(),
                GlobalConfiguration.Configuration.Formatters.JsonFormatter);
            return response;
        }
    }

Testing and Results

To run these data requests I used the following ab.exe commands:

    REM JSON RESPONSES
    ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
    ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
    ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. The performance for each technology drops a little bit, except for WebAPI, which is up quite a bit! From this test it appears that WebAPI actually performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length' Failures

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the results show, performance improved significantly with JSON results - from roughly 5,580 to 6,530 requests/second, a 15% improvement (while all the others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed errors on about 10% of the requests in the Failed Requests count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: no, it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, so to make sure I re-ran the test with Fiddler attached, running the ab.exe test with the -X proxy switch:

    ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. Yet ab.exe was reporting the errors. After some closer inspection it turned out that the varying size of the serialized dates altered the response length of the dynamic output. For example, these two results:

    {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
    {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

differ in the length of the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports as a 'Length' failure. I didn't notice it at first, but the same thing happens when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds.

Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.
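If the Length failures bother you, one way to make the output deterministic (a sketch only, not something the test project does) is to pin the serialized date to a fixed-width format so every response has the same byte count. With JSON.NET's IsoDateTimeConverter, serializing the Person class from above, for example:

    // Sketch only: serialize with a fixed-width date so the JSON length is
    // constant across requests and ab.exe's Length check stays quiet.
    // IsoDateTimeConverter ships with JSON.NET; the format string here is an
    // arbitrary fixed-width choice.
    using Newtonsoft.Json;
    using Newtonsoft.Json.Converters;

    public static class FixedLengthJson
    {
        public static string Serialize(Person person)
        {
            var dateConverter = new IsoDateTimeConverter
            {
                // always exactly 3 fractional digits plus a stable local offset
                DateTimeFormat = "yyyy-MM-ddTHH:mm:ss.fffK"
            };
            return JsonConvert.SerializeObject(person, dateConverter);
        }
    }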
Another Interesting Side Note: Perf Drops over Time

As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6,500 req/sec and in subsequent runs it kept dropping until it would stabilize somewhere around 5,900 req/sec, occasionally jumping lower. For these tests this is why I did the IISRESET and warm-up for the individual tests.

This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see whether others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about the suitability for the job and the ease of implementation. The strengths of each technology will matter more than the minor performance differences we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often, old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts - with serialized JSON and XML - WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code?

This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, Web API

    Read the article

  • jQuery DataTable in MVC … extended.

    - by Steve Clements
    There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course! MVC is the way now; web forms are but a memory!! Grids / tables are my focus at the moment. I don’t want to get into writing reams of CSS and HTML, but it’s not acceptable to simply dump a table on the screen; functionality like sorting, paging, a fixed header and perhaps filtering are expected behaviour. What isn’t always required though is the massive functionality like editing etc. that you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don’t need it. That is where the jQuery DataTable plugin comes in. It doesn’t have editing “out of the box” (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc. Take a look here… http://www.datatables.net

    I did in the first instance start looking at the Telerik MVC grid control – I’m a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free…nice! Their grid however is far more than I require. Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won’t cover that here though – mainly because I don’t have a clear answer on the best way to solve it!

    One nice thing about the DataTable above is how easy it is to extend (http://www.datatables.net/examples/plug-ins/plugin_api.html) and there are some nifty examples on the site already… I however have a requirement that wasn’t on the site… I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space, i.e. everything above the grid doesn’t scroll as well. Now a CSS master may have a great solution to this… I’m not that master and so didn’t! The content above the grid can vary, so any kind of fixed positioning is out. So I wrote a little extension for the DataTable and hooked that up to the document.ready and window.resize events.

    Initialising my dataTable(s)…

        $(document).ready(function () {
            var dTable = $(".tdata").dataTable({
                "bPaginate": false,
                "bLengthChange": false,
                "bFilter": true,
                "bSort": true,
                "bInfo": false,
                "bAutoWidth": true,
                "sScrollY": "400px"
            });
        });

    My extension to the API to give me the resizing…

        // **********************************************************************
        // jQuery dataTable API extension to resize grid and adjust column sizes
        //
        $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) {
            var id = oSettings.nTable.id;
            var dt = $("#" + id);
            var top = dt.position().top;
            var winHeight = $(document).height();
            var remain = (winHeight - top) - 83;
            dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;");
            this.fnAdjustColumnSizing();
        }

    This is very much in debug mode, so pretty verbose at the moment – I’ll tidy that up later! You can see the last call is a call to an existing method; as the column widths are fixed and adjusting them normally involves some CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div – I got that from some Firebug action – and I stick in my new height. The –83 is to give me the space at the bottom I require for the fixed footer!
    Finally I hook that up to the load and window resize. I’m actually using jQuery UI tabs as well, so I’ve got that in the show event of the tabs.

        $(document).ready(function () {
            var oTable;
            $("#tabs").tabs({
                "show": function (event, ui) {
                    oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable();
                    if (oTable.length > 0) {
                        oTable.fnSetHeightToBottom();
                    }
                }
            });
            $(window).bind("resize", function () {
                oTable.fnSetHeightToBottom();
            });
        });

    And that’s all there is to it. Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful. I’ve also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website. I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with the column index as a param – I used data tags to simplify that one.

    Read the article

  • How to submit sitemap when your website has partial https? - Error: "Not in Domain"

    - by Ralph N
    My website is an ecommerce site that is set up to do http for the item browsing portion, but https for things like the shopping cart, contact us, etc. (anything that has forms on it). I submitted my website a long time ago to Google Webmaster Tools as http://www.mywebsite.com. I also submitted a sitemap with about 40 links - 8 of them are https. I've noticed that for the longest time, Google Webmaster Tools was reporting that 32 out of the 40 links have been crawled. I tested all the links against my robots.txt and realized that my robots.txt was blocking the https links. Google says those links are "Not In Domain". Is there a way I'm supposed to get around this so that I can have a hybrid-SSL site? I understand the concept that one site is mywebsite.com:80 and the other is mywebsite.com:443, but I'd like to avoid submitting and maintaining two separate websites on Google Webmaster Tools.
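    One detail worth checking here (a suggestion, not a verified fix): crawlers request robots.txt separately per scheme, so https://www.mywebsite.com/robots.txt is fetched independently of the http version. Serving a permissive file on the https binding, for example:

        User-agent: *
        Disallow:

    (an empty Disallow allows everything) would let the 8 https URLs be crawled without registering a second site in Webmaster Tools.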

    Read the article

  • Would you go by WPF or WinForms? [on hold]

    - by Lorem Ipsum
    Consider the following project facts/requirements:

    - Desktop app with tray icon and notification system
    - Forms over data
    - Quick response needed
    - Internal app in a big corporation, planned to be hosted on Windows Vista, later maybe Windows 8.x
    - Operating system slowed down by many group policies (frequently changing GPO)
    - No special graphics requirements

    So, would you go with WPF or WinForms?

    Edit: Please bear in mind the facts/requirements I mentioned above. The application will run on corporate machines which are very slow and lack really good graphical acceleration. The crucial thing is to have really quick response and start times.
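    For what it's worth, the tray-icon-plus-notifications requirement needs no graphics horsepower at all in WinForms, since NotifyIcon is built in (WPF has no built-in equivalent and typically borrows this same WinForms class). A minimal sketch - names and strings are placeholders:

        // Minimal sketch of the tray icon + notification requirement in
        // WinForms. Exit/context-menu handling omitted for brevity.
        using System;
        using System.Drawing;
        using System.Windows.Forms;

        static class TrayApp
        {
            [STAThread]
            static void Main()
            {
                var tray = new NotifyIcon
                {
                    Icon = SystemIcons.Application,  // swap in your own .ico
                    Text = "Internal app",
                    Visible = true
                };

                // Balloon tips give basic notifications even on Vista-era hardware.
                tray.ShowBalloonTip(3000, "Status", "Application started.", ToolTipIcon.Info);

                Application.Run();  // message loop
            }
        }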

    Read the article

  • Not to miss! Today’s web seminar on content integration with Oracle Apps

    - by Lance Shaw
    Hello everyone.  The first web seminar in a three-part series kicks off later today, focused on the value of delivering and controlling the flow of content in the context of your most critical business applications.   If you are using Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne or Siebel CRM, we heartily recommend you investigate the value of centralizing the delivery of scanned images, forms, faxes and digital documents within those processes.  The improvements in efficiency and productivity can result in some impressive cost savings. One customer recently reported that they had realized an impressive ROI of 180% and that the investment in this new technology had paid for itself in a mere 6 months.  We hope you can spare some time today to join us at 1pm Eastern Time / 10am Pacific Time / 18:00 GMT. We think you will find it time well spent.   Click here to attend.  We look forward to seeing you there!

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers&hellip;

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area, however we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to hearing from you.

    Senior SharePoint .NET Developer

    Senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework. Senior developer with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For selected development path, prepare technical specification and build the solution. Assist project manager with defining development task schedule and level-of-effort. Lead technical solution deployment.

    Job Requirements

    Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4). Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business. Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records. Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources). Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services. Experience building custom web part solutions. Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations. Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions). Must have impeccable communication skills, both written and verbal. Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. Applicants must pass a skills assessment test. MCP/MCTS or comparable certification preferred.
    Salary & Travel Negotiable

    SharePoint Project Lead

    Define project task schedule, work breakdown structure and level-of-effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of "developers", but is also expected to engage in development directly as needed.

    Job Requirements

    Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint 2007/2010) and/or Project Server. Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs. General "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top level of skinning to the core of the SharePoint object model. Impeccable communication skills, both written and verbal, and a sense of humor are required. The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. PMP certification, PgMP preferred.

    Salary & Travel Negotiable

    Read the article

  • Sharp HealthCare Reduces Storage Requirements by 50% with Oracle Advanced Compression

    - by [email protected]
    Sharp HealthCare is an award-winning integrated regional health care delivery system based in San Diego, California, with 2,600 physicians and more than 14,000 employees. Sharp HealthCare's data warehouse forms a vital part of the information system's infrastructure and is used to separate business intelligence reporting from time-critical health care transactional systems. Faced with tremendous data growth, Sharp HealthCare decided to replace their existing Microsoft products with a solution based on Oracle Database 11g and to implement Oracle Advanced Compression. Join us to hear directly from the primary DBA for the Data Warehouse Application Team, Kim Nguyen, how the new environment significantly reduced Sharp HealthCare's storage requirements and improved query performance.

    Read the article

  • Web-based CMS for mobile app

    - by JWood
    I'm just about to start developing a mobile app which needs to be fed from a CMS. I started designing the tables when I thought there must be something out there which could save me a load of time and let me concentrate on the mobile side of things. So, I'm looking for a CMS that will let me create hierarchical "pages" which will just be 4-5 database fields, with a simple front-end to allow editing and updating them. I don't mind having to write some code to lay out the database and forms etc.; any saving on starting from scratch would be good. The only requirement is that I be able to access the data via some sort of web service: REST, JSON, XML, anything really... Can anyone suggest anything that might help? Thanks, J
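    For scale, the data model being described is small enough that rolling it by hand is not much work if nothing off the shelf fits. A purely illustrative sketch (all names invented) of hierarchical pages exposed as JSON via an ASP.NET Web API controller:

        // Illustrative sketch only - all names are made up. The "CMS" being
        // asked for boils down to a self-referencing page table exposed as
        // JSON over a Web API controller.
        using System.Collections.Generic;
        using System.Web.Http;

        public class CmsPage
        {
            public int Id { get; set; }
            public int? ParentId { get; set; }          // null = top-level page
            public string Title { get; set; }
            public string Body { get; set; }
            public List<CmsPage> Children { get; set; } // populated from ParentId
        }

        public class PagesController : ApiController
        {
            // GET api/pages/5 -> the page plus its children, serialized as JSON
            public CmsPage Get(int id)
            {
                // real code would query whatever storage backs the CMS
                return new CmsPage
                {
                    Id = id,
                    Title = "Sample page",
                    Body = "…",
                    Children = new List<CmsPage>()
                };
            }
        }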

    Read the article
