Search Results

Search found 24177 results on 968 pages for 'true'.


  • Will it be possible to use a non-pae kernel in 12.10

    - by Roland Taylor
    I know that Ubuntu +1 questions are frowned upon, but I believe this is a fair exception. Currently I have 2 systems running Ubuntu 12.10, and one of them has a Pentium M that doesn't support PAE (strange, I know, but true). In the past this has meant I had to rely on a custom ISO to install Ubuntu on a similar system, and so this time I went with Xubuntu 12.04. My question is twofold, but really one question: Is it/will it be possible to install a non-PAE version of the 12.10 kernel from the standard repositories? If not, how can I get such a kernel? (Is there a PPA with such a kernel available?) NB: Before anyone suggests that I just install this package: http://packages.ubuntu.com/quantal/linux-image-generic, please note that it comes with PAE enabled. P.S. Yes, I have Googled. I haven't found the answer.
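    As a quick sanity check before hunting for a kernel, you can confirm whether a CPU advertises PAE at all (a minimal sketch, assuming a standard install with /proc mounted):

        grep -qw pae /proc/cpuinfo && echo "PAE supported" || echo "PAE not supported"

    On the Pentium M described above, the second message would be the expected output.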

    Read the article

  • can't start vino VNC service on Ubuntu 12.04

    - by user1689961
    I just installed vino, but when I run it, I get the following error. # ./start_vnc (vino-server:2502): EggSMClient-CRITICAL **: egg_sm_client_set_mode: assertion `global_client == NULL || global_client_mode == EGG_SM_CLIENT_MODE_DISABLED' failed ** Message: The desktop sharing service is already running, exiting. MORE DETAILS: An UltraVNC client running on a Windows computer can log in and shows the Ubuntu desktop, and controls the Ubuntu mouse, but the VNC client view on the Windows computer does NOT show any changes to the display on the Ubuntu desktop, only the original desktop view from the time of VNC client login. UPDATE: I solved it by following the askubuntu post "VNC session very slow in 12.04 compared to older versions", which said to do this: gsettings set org.gnome.Vino disable-xdamage true ...and it worked. But should I be concerned about the error messages?
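    For reference, one way to apply that fix and restart the already-running service in one go (a sketch; /usr/lib/vino/vino-server is the usual binary location on Ubuntu 12.04 but may differ on other installs):

        gsettings set org.gnome.Vino disable-xdamage true
        pkill vino-server             # stop the instance that is "already running"
        /usr/lib/vino/vino-server &   # start a fresh one that picks up the setting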

    Read the article

  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
    Caching – There are 2 ways to set HTTP caching: 1) the max-age property; 2) the Expires header. Doing the changes via the IIS Console:
    1. Select the website for which you want to enable caching and then select HTTP Response Headers in the Features view.
    2. Choose "Expire Web content" and, by changing the "After" setting, you can generate the max-age property for the cache control.
    3. (Screenshot of the resulting headers.)
    Then you can use a tool like Fiddler and see the 304 response coming from the server.
    Doing it the web.config way – we can add a static content section in the system.webServer section:
    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
    </system.webServer>
    Compression – By default static compression is enabled on IIS 7.0, but the only content type relevant here is CSS, which is not enough for most websites using lots of JavaScript. If you thought that just enabling dynamic compression would fix this, you are wrong, so follow these steps.
    On some machines dynamic compression is not enabled; these are the steps to install it:
    1. Open Server Manager.
    2. Roles > Web Server (IIS) > Role Services (scroll down) > Add Role Services.
    3. Add the desired role (Web Server > Performance > Dynamic Content Compression).
    4. Next, Install, wait… done!
    To enable it: Open Server Manager > Roles > Web Server (IIS) > Internet Information Services (IIS) Manager. Next pane: Sites > Default Web Site > Your Web Site. Main pane: IIS > Compression.
    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The mime map is as follows:
    <mimeMap fileExtension=".js" mimeType="application/javascript" />
    So the corresponding entry in the staticTypes section should be changed to:
    <add mimeType="application/javascript" enabled="true" />
    Doing it the web.config way – we can add the following to the system.webServer section:
    <urlCompression doDynamicCompression="false" doStaticCompression="true" />
    More Information/References –
    · http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    · http://www.west-wind.com/weblog/posts/98538.aspx
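    Putting both halves together, a consolidated web.config fragment might look like this (a sketch only; note that the <httpCompression>/<staticTypes> list is normally locked at the applicationHost.config level, so that element may need to stay in the server-level file):

    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
      <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      <httpCompression>
        <staticTypes>
          <add mimeType="application/javascript" enabled="true" />
        </staticTypes>
      </httpCompression>
    </system.webServer>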

    Read the article

  • Surface development: it's just like software development

    - by Dennis Vroegop
    Surface is magic. Everyone using it seems to think that way. And I have to be honest, after working for almost 2 years with the platform I still get that special feeling the moment I turn on the unit to do some more work. The whole user experience, the rich environment of the SDK, the touch, even the look and feel of the Surface environment is so much different from the stuff I've been working on all my career that I am still bewildered by it. But… and this is a big but… in the end we're still talking about a computer, and that needs software to become useful. Deep down in the magic of the Surface unit there is a PC somewhere, running Windows Vista and the .NET Framework 3.5. When you write that magic software that makes the platform come alive you're still working with .NET, WPF/XNA, C#, VB.NET and all those other tools and technologies you know so well. Sure, the whole user experience is different from what you've known. And the way of thinking about users, their interaction and the positioning of screen elements requires a whole new paradigm. And that takes time. It took me about half a year before I had the feeling I had it nailed down. But when that moment came (about 18 months ago…) I realized that everything I had learned so far about software development still held true when it comes to Surface. For the last 6 months I have been working with some people with a different background to start a new company. The idea was that the new company would be focusing on Surface and Surface only. These people come from a marketing background and had some good ideas for some applications. And I have to admit: their ideas were good. Very good. Where it all fell down, of course, is that these ideas need to be implemented in a piece of software. And creating great software takes skilled developers and a lot of time and money. That's where things went wrong: the marketing guys didn't realize, and didn't want to realize, that software development is a job that takes skill. You can't just hire a bunch of developers and expect them to deliver the best sort of software, especially not when it comes to Surface. I tried to explain that yes, their user interface in Photoshop looked great, but no: I couldn't develop an application like that in a week's time. Even worse: the whole backend of the software (WCF for communications, SQL Server for the database, etc.) would take a lot more time than the frontend. They didn't understand. It took them a couple of days to draw the UI in Photoshop, so in Blend I should be able to build the software in about the same amount of time. Well, you and I know that it doesn't work that way. Software is hard to write, and even harder to write well, and it takes skill and dedication. It's not something you can do as fast as you can draw a mock-up for a Surface application in Photoshop. The same holds true for web applications, of course. A lot of designers there fail to appreciate the hard work that goes into writing the plumbing for a good web app that can handle thousands of users. Although the UI is very important, it's not all there is to it. And in Surface development this is the same. The UI should create the feeling of magic, but the software behind it is what makes it come alive. And that takes time. A lot of time. So brush off your skills and don't throw them away if you start developing for Surface. Because projects (and collaborations) can fail there as hard as they can in any other area of software development.
    On a side note: we decided to stop the collaboration (something the other parties involved didn't appreciate and were very angry about) and decided to hire a designer for the Surface projects. The focus is back where it belongs: on the software development we know so well and have been doing very well for 13 years. UI is just a part of the whole project and not the end product. So my company Detrio is still going strong when it comes to delivering Surface solutions, but once again from a technological background, not a marketing background.

    Read the article

  • Oracle Linux at Oracle Openworld 2011

    - by Zeynep Koch
    In the Oracle Linux track, you'll learn how organizations of all sizes, in all industries, worldwide, are realizing the true benefits of complete and integrated solutions with Oracle Linux and Oracle's world-class Linux support program. Find out what Oracle is doing to simplify the development, deployment, and management of Linux solutions via significant testing initiatives, including the Oracle Validated Configurations program. Also discover how Oracle is driving the enterprise Linux technology roadmap with new features and enhancements, making Linux a faster, better operating system for all. Meet Oracle's Linux engineers, experts, customers, and partners, and get answers to all your Linux questions. Here are the Linux sessions and demos that you don't want to miss:
    · Oracle Linux Strategy and Roadmap
    · New Features in Oracle Linux
    · End-to-End Data Integrity Solution for Linux
    · Debugging and Configuration Best Practices for Oracle Linux
    · Demos
    · Hands-on Labs
    Register by July 29 and get a $500 discount: http://bit.ly/kSjDMD

    Read the article

  • Sam Abraham To Speak about MVC & MVVM at InterClick on May 19th

    - by Sam Abraham
    My next speaking engagement will be taking place at InterClick in Boca Raton, FL on Wednesday May 19th 2010. Here is a quick abstract of what I will be blabbing about: MVC & MVVM are two of many buzzwords under the Architecture Spotlight, as a means of achieving true separation of concerns between the data, business logic and UI layers. In this session, we will discuss the basic concepts of Microsoft MVC and demonstrate the ease of creating a new MVC project and related Unit Tests within VS2010. We will then introduce MVVM as a design paradigm and incorporate it into an MS MVC application structure. Next, we will take a look at MVVM in the context of a sample Silverlight application. Throughout the talk we will be demonstrating various features of the latest and greatest VS2010. You can get more information about the event and the speaker, as well as register to attend, at this link: http://sherstaff.com/EventSignUp.aspx?EventID=777 I look forward to seeing you all there.

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little more memory-hungry than other Java processes. If the XSLT scalable option is not set and, at the same time, your RTF template is not well optimized, you are definitely going to hit an Out of Memory exception while working with large volumes of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL queries. Check the current OPP JVM heap size:
    SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    DEVELOPER_PARAMETERS
    -----------------------------------------------------
    J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m
    Set the JVM heap size:
    SQL> update FND_CP_SERVICES set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m' where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');
    SQL> commit;
    You need to restart the Concurrent Manager to make it effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Enable Claims based Auth on a SP2010 website, after it has been provisioned

    - by Sahil Malik
    When you provision a web app in SP2010, you can choose whether it uses Claims Based Auth or Classic Auth right through the GUI. However, after you have provisioned a web app, there is no GUI to switch from Classic to Claims based. So the PowerShell script below will let you convert a SP2010 web application to claims based auth after it has been provisioned.
    $w = Get-SPWebApplication "http://sp2010"
    $w.UseClaimsAuthentication = "True"
    $w.Update()
    The user running the above script should be a member of the SharePoint_Shell_Access role on the config DB, and a member of the WSS_ADMIN_WPG local group.

    Read the article

  • DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable while using treetable and datatable at the same time

    - by Nyaro
    DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable while using treetable and datatable at the same time. Here is my code: <script src="jquery-1.7.2.min.js"></script> <script src='jquery.dataTables.min.js'></script> <script src="jquery.treetable.js"></script> <script> $("#example-advanced").treetable({ expandable: true }); </script> <script> $('#example-advanced').dataTable( { "bSort": false } ); </script> Actually I wanted to get rid of the sorting part of the datatable because it was giving an error in the treetable display, so I want the sorting part of the datatable out while keeping other functions like search and pagination. Please help me out. Thanks in advance.
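    One way to avoid the reinitialisation warning is to do both initialisations in a single ready handler and pass bRetrieve, so DataTables returns the existing instance instead of warning (a sketch, untested against this exact markup):

    <script>
    $(document).ready(function () {
      // DataTables first, with sorting off so the tree's row order survives
      $('#example-advanced').dataTable({
        "bSort": false,
        "bRetrieve": true  // reuse an existing instance instead of re-initialising
      });
      // then layer the tree behaviour on top
      $('#example-advanced').treetable({ expandable: true });
    });
    </script>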

    Read the article

  • juju spends bootstrap-timeout with a final message it cannot find /var/lib/juju/nonce.txt

    - by user285199
    I built two VMware machines. The first one runs MAAS, the second one is a fresh installation from MAAS. The region controller was installed with Ubuntu 12.04 and upgraded. The compute node was installed from MAAS with Quantal 12.10. Juju was installed and upgraded to 1.18 (from the ppa:juju/stable repository). MAAS was upgraded from the cloud-archive:tools repository. In debug mode, I saw how Juju connects to the node. Then I ran the same instruction: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/lliurex/.juju/ssh/juju_id_rsa -i /home/lliurex/.ssh/id_rsa [email protected] /bin/bash It worked (with and without /bin/bash). When Juju exhausts the whole bootstrap-timeout, it reports that it has not found the /var/lib/juju/nonce.txt file. It's true, the file doesn't exist. It doesn't matter if you set a timeout of 1800, 3600 or 72000, it always finishes the same way.

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
    ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength="<number here>" – This should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the URL without the query string. <httpRuntime maxQueryStringLength="<number here>" – This should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters="list of characters you need included in ASP.NET's validation checks" – By default the characters are "<,>,*,%,&,:,\,?". However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character '!' to be included in ASP.NET's URL validation logic, so I set the following: <httpRuntime requestPathInvalidCharacters="<,>,*,%,&,:,\,?,!">. A character could also be specified in its XML-encoded form ('&lt;' would mean the '<' sign). I could specify the '!' in its XML-encoded Unicode format, as in requestPathInvalidCharacters="<,>,*,%,&,:,\,?,&#x0021;", or in its URL-encoded Unicode form, as in the "<,>,*,%,&,:,\,?,%u0021" format. These settings can be applied at the root web.config level, app web.config level, folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" /> </system.web> </location> If any of the above settings fail request validation, an HTTP 400 "Bad Request" HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax. Also, a new attribute in <httpRuntime /> called "relaxedUrlToFileSystemMapping" has been added, with a default of false: <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound URLs still need to be valid NTFS file paths. For example, URLs (sans query string) need to be less than 260 characters; no path segment within a URL can use old-style DOS device names (LPT1, COM1, etc.); URLs must be valid Windows file paths. A URL like "http://digg.com/http://cnn.com" should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters above). Managed configuration for non-NTFS-compliant URLs is determined from the first valid configuration path found when walking up the path segments of the URL.
    For example, if the request URL is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request URL like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH".
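    Pulling the four attributes together, a web.config fragment exercising them all might look like this (a sketch; the values are the defaults, plus the '!' character and relaxedUrlToFileSystemMapping flipped to true for the digg.com-style URL example above):

    <system.web>
      <httpRuntime maxUrlLength="260"
                   maxQueryStringLength="2048"
                   requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,:,\,?,!"
                   relaxedUrlToFileSystemMapping="true" />
    </system.web>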

    Read the article

  • Aero Isn’t Gone in Windows 8: 6 Aero Features You Can Still Use

    - by Chris Hoffman
    Many people think Aero is completely gone in Windows 8, but this isn't true. Microsoft hasn't helped matters by saying they've "moved beyond Aero" in several blog posts. However, hardware acceleration and most Aero features are still present. Aero is more than Glass. What's actually gone is the Aero branding and the Aero Glass theme with transparent, blurred window borders. The Flip 3D feature, which wasn't used by many Windows users, has also been removed.

    Read the article

  • Microsoft SDE Interview vs Microsoft SDET Interview and Resources to Study

    - by vinayvasyani
    I have always heard that SDE interviews are much harder to crack than SDET ones. Is it really true? I have also heard that if a candidate doesn't do well in an SDE interview he is sometimes offered an SDET position instead. How much truth is there to these claims? I would highly appreciate it if someone could share good resources and guidelines for preparing for Microsoft interviews: which books to read, which notes, online programming question websites, etc. Give as much info as possible. Thanks in advance to everyone for your valuable help and contribution.

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world. A Big Question? Here are a few questions I have often received since the beginning of the Big Data series: Does the relational database have no place in the story of Big Data? Is the relational database no longer relevant as Big Data evolves? Is the relational database not capable of handling Big Data? Is it true that one no longer has to learn about relational data if Big Data is the final destination? Well, every single time I hear that someone wants to learn about Big Data and is no longer interested in learning about relational databases, I find that a bit far-fetched. I am not here to give the ambiguous answer of It Depends. I am personally very clear that anyone who aspires to become a Big Data Scientist or Big Data Expert should learn about relational databases. NoSQL Movement – The reason for the NoSQL movement in recent times is two important advantages of NoSQL databases: Performance and Flexible Schema. In my personal experience I have found both of the above listed advantages when I use a NoSQL database. There are instances when I have found relational databases too restrictive, when my data is unstructured or has a datatype my relational database does not support. It is the same case when I have found NoSQL solutions performing much better than relational databases. I must say that I am a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where a relational database is still the perfect fit, even though the database is growing rapidly and has all the symptoms of big data. Situations Where the Relational Database Outperforms – Ad hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. For example, reporting queries often need to aggregate on columns which are not indexed and are chosen while the report is running; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. In the case of ad hoc reporting I have often found it much easier to work with relational databases. SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time in the future. There are plenty of tools, connectors and awareness of the SQL language in the industry. Pretty much every programming language has written drivers for the SQL language, and most developers have learned this language during their school/college time. In many cases, writing a query in SQL is much easier than writing queries in NoSQL-supported languages. I believe this is the current situation, but in the future this situation can reverse when NoSQL query languages are equally popular. ACID (Atomicity, Consistency, Isolation, Durability) – Not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example banking transactions, eCommerce shopping carts etc.) where without ACID the operations can be invalid and database integrity can be at risk. Even though the data volume may indeed qualify as Big Data, there are always operations in the application which absolutely need an ACID-compliant, mature language.
    The Mixed Bag – I have often heard the argument that all the big social media sites have nowadays moved away from relational databases. Actually, this is not entirely true. While researching Big Data and relational databases, I have found that many of the popular social media sites use Big Data solutions along with relational databases. Many use relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples: Facebook uses MySQL to display the timeline. (Reference Link) Twitter uses MySQL. (Reference Link) Tumblr uses sharded MySQL. (Reference Link) Wikipedia uses MySQL for data storage. (Reference Link) There are many more prominent organizations running large-scale applications that use relational databases along with various Big Data frameworks to satisfy their various business needs. Summary – I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream: there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone's taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. Just the same, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases where there is a need for relational database concepts and their ideology. It is extremely important to accept the relational database as one of the key components of Big Data instead of treating it as a substandard technology. Ray of Hope – NewSQL – In this module we discussed that there are places where we need ACID compliance in our Big Data application, and NoSQL will not support that out of the box. There is a new term coined for the applications/tools which support most of the properties of the traditional RDBMS and support Big Data infrastructure: NewSQL. Tomorrow – In tomorrow's blog post we will discuss NewSQL. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SanjayP's venture after Microsoft involves no Microsoft

    - by eddraper
    When I was at Microsoft, I always found Sanjay Parthasarathy to be a bright and passionate leader. While he was a bit disconnected at times with what was really going on out in the trenches, I always thought he was a true believer in what we in Developer Platform and Evangelism (DPE) were doing. He got it. He had started DPE and kicked a lot of doors down up in Redmond to make it happen. Back in the early 2000s, battles over platform choices at large customers were trench warfare… bayonets and hand grenades at the P-Code level. This model was not at all suited to Microsoft's org structure at the time. While there were plenty of people fully able to have competitive conversations around Windows Server, or AD, or Exchange, or the desktop, there weren't many that could have deep technical conversations around Java vs .NET and the platform "stack" as a cohesive, unified unit of value. This task fell to DPE. Sanjay ended up leaving Microsoft a number of months before me in 2009 and I remember thinking these exact words: "holy shit, SanjayP left Microsoft." When SanjayP left DPE years before that, Sheila Gulati had stepped into his shoes and I thought we were starting to miss a beat. Sheila had built an amazing business at Microsoft India, but I don't recall being inspired by her as a leader. SanjayP's talks felt like the opening scene of "Patton" with George C. Scott pacing in front of the American flag. Sheila was a voice on a con-call. When she moved on in 2007, Walid Abu-Hadba was given the reins. Personally, I don't ever recall even seeing his face. I think I might recall hearing his voice on some con-calls, but for all intents and purposes he was invisible to me. Perhaps this was the beginning of my carelessness around seeking "visibility." Fast forward to Build 2011. First off, we have no PDC – we have Build. Microsoft had made an 11 year investment by this time in building an organization to make its technology relevant to developers. One would think such an org would be in the driver's seat of such an event, but we see Windows product group people on the podiums. Watching, I could see the messaging unfold… but no story. It was like the old days. Demos and PowerPoints by team members building the tech, and in many cases VPs. The ensuing confusion is almost legendary now. Windows 8 was, and is, a pretty big deal… but who is telling the story – not just features and benefits, but the story around how it all fits together? Having been out of Microsoft for two years now, and looking in, I can only conclude that the "DPE of old" has at best been emasculated, and at worst been completely marginalized by internal politics, or perhaps the eternal march of the corporate entropy generator that resides at all large companies. I don't think this is a good thing for anyone. And now, back to Sanjay, the father of Microsoft DPE… I noticed that he has moved back to India and is doing start-up work. His current company Indix looks to be doing some interesting things with "big data", and here's their stack (stack listing not reproduced here): Nary a trace of anything Microsoft. What could account for this? I wonder…. Better availability of labor and expertise in India for this stack? Donno, but even in India, leet R and Hadoop skills have to be hard to find. Technical superiority? This, I sincerely doubt. This stack, with SanjayP's name as CEO, leaves me with an unsettling feeling. If he did believe, he no longer does. One doesn't place bets with real money on things they don't believe in.
Perhaps he never did believe, and was a corporate creature seeking to find a niche for himself after which he manipulated me and others.  Or perhaps… anger… be it passive aggression or an outright “in your face F*** you” to his former masters. I guess in the end, only he knows the true reason… But I have my theory...

    Read the article

  • CQRS – Questions and Concerns

    - by Dylan Smith
    I’ve been doing a lot of learning on CQRS and Event Sourcing over the last little while and I have a number of questions that I haven’t been able to answer. 1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands (other than scalability)? 2. When using CQRS, what do you do with complex query-based logic? I’m going to elaborate on #1 in this blog post and I’ll do a follow-up post on #2. I watched Greg Young’s video on the business benefits of CQRS + Event Sourcing and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits of this approach to architecture (I watched it twice in a row, I enjoyed it so much!). But it didn’t answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I’m completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I’m not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + typical DDD architecture. (Diagram: architecture with Event Sourcing + Commands on the left, CQRS on the right.) Greg talks about how the stereotypical architecture doesn’t support DDD, but is that only because his diagram shows DTO’s coming up from the client? If we use the same diagram but allow the client to send commands, doesn’t that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS). We can create a rich domain model (as opposed to an anemic domain model). Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability. Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel, letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don’t you already have 2 separate modules that you can split your dev efforts between: the client that sends commands/queries and receives DTO’s, and the Domain which accepts commands/queries and generates events/DTO’s? If this is true it’s not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling, which while great doesn’t sound nearly as dramatic (“I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months”). Making the Query side “stupid simple” such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I’m going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how “stupid-simple” it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables.
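    To make the verb-based command point concrete, here is a minimal sketch (my own illustration, not code from Greg's talk) contrasting a property-bag DTO with a command that captures intent, loosely modeled on the Deactivate Inventory Item example:

    using System;

    // Anemic DTO: the update says nothing about *why* the state changed
    public class InventoryItemDto
    {
        public Guid Id { get; set; }
        public bool IsActive { get; set; }
    }

    // Verb-based command: the intent travels with the request and can be event-sourced
    public class DeactivateInventoryItem
    {
        public readonly Guid ItemId;
        public readonly string Comment;   // e.g. "damaged in warehouse"

        public DeactivateInventoryItem(Guid itemId, string comment)
        {
            ItemId = itemId;
            Comment = comment;
        }
    }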
    It sounds like Greg suggests that doing CQRS is what allows us to apply Event Sourcing when we otherwise wouldn’t be able to (~33:30 in the video). I don’t believe this is true. I don’t see why you wouldn’t be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database, but you’d still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state, and I don’t see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state, you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS). Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller, since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides), and also commands that can be handled in a Transaction Script style manner by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability. If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:

    Read the article

  • Finding the problem on a partially succeeded build

    - by Martin Hinshelwood
    Now that I have the build failing because of a genuine bug and not just because of a test framework failure, let's see if we can trace through to find why the first test in our new application failed. Let's look at the build and see why there is a red cross on it. First, let's open the build list. In Team Explorer, expand your Team Project Collection | Team Project and then Builds, and double click the offending build. Figure: Opening the Build list is a key way to see what the current state of your software is. Figure: A test is failing, but we can now view the Test Results to find the problem. Figure: You can quite clearly see that the test has failed with "The device is not ready". To me "The device is not ready" smacks of a System.IO exception, but it passed on my local computer, so why not on the build server? It's a FaultException, so it is most likely coming from the service and not the client, so let's take a look at the client method that the test is calling:
    bool IProfileService.SaveDefaultProjectFile(string strComputerName)
    {
        ProjectFile file = new ProjectFile()
        {
            ProjectFileName = strComputerName + "_" + System.DateTime.Now.ToString("yyyyMMddhhmmsss") + ".xml",
            ConnectionString = "persist security info=False; pooling=False; data source=(local); application name=SSW.SQLDeploy.vshost.exe; integrated security=SSPI; initial catalog=SSWSQLDeployNorthwindSample",
            DateCreated = System.DateTime.Now,
            DateUpdated = System.DateTime.Now,
            FolderPath = @"C:\Program Files\SSW SQL Deploy\SampleData\",
            IsComplete = false,
            Version = "1.3",
            NewDatabase = true,
            TimeOut = 5,
            TurnOnMSDE = false,
            Mode = "AutomaticMode"
        };
        string strFolderPath = "D:\\"; //LocalSettings.ProjectFileBasePath;
        string strFileName = strFolderPath + file.ProjectFileName;
        try
        {
            using (FileStream fs = new FileStream(strFileName, FileMode.Create))
            {
                DataContractSerializer serializer = new DataContractSerializer(typeof(ProjectFile));
                using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs))
                {
                    serializer.WriteObject(writer, file);
                }
            }
        }
        catch (Exception ex)
        {
            //TODO: Log the exception
            throw ex;
            return false;
        }
        return true;
    }
    Figure: You can see (in the FolderPath and strFolderPath assignments) that there are calls being made to specific folders and disks. What is wrong with this code? What mistaken assumptions could the developer have made to make this look OK?
    That every install would be to "C:\Program Files\SSW SQL Deploy"
    That every computer would have a "D:\\"
    That checking in code at 6pm because they had to go home was a good idea.
    Let's solve each of these problems:
    We are in a web service… let's store data within the web root. So we can call Server.MapPath("~/App_Data/SSW SQL Deploy\SampleData") instead.
    Never reference an explicit path. If you need some storage for your application, use IsolatedStorage.
    Shelve your code instead.
    What else could have been done? Code review before check-in – the developer should have shelved their code and asked another dev to look at it. Use defensive programming – make sure that any code that has the possibility of failing has checks. Any more options? Let me know and I will add them. (A sketch of the first fix appears at the end of this post.) What do we do? The correct thing to do is to add a Bug to the backlog, but as this is probably going to be fixed in sprint, I will add it directly to the sprint backlog. Right click on the failing test and select "Create Work Item | Bug". Figure: Create an associated bug to add to the backlog. Set the values for the Bug, making sure that it goes into the right sprint and Area.
    Make your steps to reproduce as explicit as possible, but "See test" is valid under these circumstances. Figure: Add it to the correct Area and set the Iteration to the Area name, or to the Sprint if you think it will be fixed in the Sprint, and make sure you bring it up at the next Scrum Meeting. Note: make sure you leave the "Assigned To" field blank; in Scrum, team members sign up for work, you do not give it to them. The developer who broke the test will most likely either sign up for the bug, or say that they are stuck and need help. Note: Visual Studio has taken care of associating the failing test with the Bug. Save… Technorati Tags: WCF,MSTest,MSBuild,Team Build 2010,Team Test 2010,Team Build,Team Test
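    Coming back to the first fix suggested above (store under the web root instead of a hard-coded drive), the relevant portion of SaveDefaultProjectFile might look like this. This is a sketch only: it assumes the service is hosted under ASP.NET so HostingEnvironment.MapPath is available, and it reuses the ProjectFile object from the listing above.

    using System.IO;
    using System.Runtime.Serialization;
    using System.Web.Hosting;
    using System.Xml;

    // Resolve a per-application data folder instead of assuming C:\ or D:\ exists
    string strFolderPath = HostingEnvironment.MapPath("~/App_Data/SSWSQLDeploy/SampleData/");
    Directory.CreateDirectory(strFolderPath); // defensive: create the folder if missing
    string strFileName = Path.Combine(strFolderPath, file.ProjectFileName);

    using (FileStream fs = new FileStream(strFileName, FileMode.Create))
    {
        DataContractSerializer serializer = new DataContractSerializer(typeof(ProjectFile));
        using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs))
        {
            serializer.WriteObject(writer, file);
        }
    }
    // If an exception still occurs, log it and rethrow with "throw;" rather than
    // "throw ex;" so the original stack trace is preserved.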

    Read the article

  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper you should know these tips:
    1. Firebug is your friend
    2. Uncompress the css style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file
    1. Firebug is your friend – Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML dom nodes on your page and you can see the css styles applied to each dom node.
    2. Uncompress the css style classes – By default the styleclasses that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin css file you have styleclasses like af|inputText::content or af|panelBox::header. It is easier for you to develop and debug a skin with Firebug if you see the uncompressed styleclasses. To do this: a. open web.xml; b. add <context-param> <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name> <param-value>true</param-value> </context-param>; c. save; d. restart the server and re-run your page.
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away – For performance's sake, the ADF Faces framework does not check whether your skin .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this: a. open web.xml; b. add <context-param> <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description> <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name> <param-value>true</param-value> </context-param>; c. save; d. restart the server and re-run your page; e. from then on, you can change your skin's .css file, save it, refresh your page and you should see the changes right away.
    4. View the generated CSS file – There are different ways to view the generated CSS file, which is your skin's css file merged with all the skins it extends, processed, generated to the filesystem and linked to your generated html page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual css file, because Firebug may do some extra processing. I like to view the generated css file by: Right click on your page in the browser…
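    For convenience, here are the two web.xml context-params from tips 2 and 3 side by side (a fragment sketch; both go inside <web-app>, and CHECK_FILE_MODIFICATION should be flipped back to false before deploying):

    <context-param>
      <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
      <param-value>true</param-value>
    </context-param>
    <context-param>
      <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
      <param-value>true</param-value>
    </context-param>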

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model; 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice, so I evaluated them against creating my own WCF services to do what I needed. The current project I'm working on involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically 2 layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them into my coding practices). I was to deliver this DLL to the application team and they would simply call the methods exposed by it and voila! it would do some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service. I should mention there is another layer underneath the data layer: in referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So for example, for Lists.asmx I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service and so on. For Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but interacting with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:
    class DocumentRetrieval
    {
        private ListsSoapClient _service;
        private bool _impersonation;

        public DocumentRetrieval(bool impersonation, string endpt)
        {
            _service = new ListsSoapClient();
            this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
            _impersonation = impersonation;
            if (_impersonation)
            {
                _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                    System.Security.Principal.TokenImpersonationLevel.Impersonation;
            }
        }

        private void SetEndPoint(string p)
        {
            _service.Endpoint.Address = new EndpointAddress(p);
        }

        /// <summary>
        /// Creates a document library with a specific name and template ID
        /// </summary>
        /// <param name="listName">New list name</param>
        /// <param name="templateID">Template ID</param>
        /// <returns></returns>
        public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
        {
            XmlDocument sample = new XmlDocument();
            XmlElement viewCol = sample.CreateElement("Empty");
            try
            {
                _service.Open();
                viewCol = _service.AddList(listName, "", templateID);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            finally
            {
                _service.Close();
            }
            return viewCol;
        }
    }
    There was a lot more in this class (that I am not including) because I was reusing the grunt work and making other operations with Lists.asmx; for example, updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project) I wanted to expose an IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:
    public bool DocLibExists(string listName, ref ExceptionContract exContract)
    {
        try
        {
            var allLists = _service.GetListCollection();
            return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
        }
        catch (Exception ex)
        {
            exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
        }
        return false;
    }
    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little getting used to: now, instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find whether the attribute "Title" existed with a value of the list name; if so it would return true, if not false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project" or document library should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service call, you could configure views, versioning, even content types, etc…; the only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all; I was not sure if it was a security, performance or code-based issue. It was also difficult to work with because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand at times which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that the template builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int, but it only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.
    public ServiceResult CreateProject(string name, string templateName, string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        using (SPSite site = new SPSite(siteurl))
        {
            using (SPWeb rootweb = site.RootWeb)
            {
                SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
            } // SPWeb
        } // SPSite
        return _globalResult;
    }

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
        }
    }

    private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
    {
        using (SPSite sitecollection = new SPSite(siteurl))
        {
            using (SPWeb web = sitecollection.AllWebs[webguid])
            {
                action(web);
            }
        }
    }
    This is actually some of the code I implemented for the service. There was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web. This allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library with a template, and it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application. We want SharePoint to do the same." This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through the property bag. I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties.
The following code enhances and replaces the Act_CreateProject method above:

private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
{
    var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
    if (temp != null)
    {
        SPLimitedWebPartManager wpmgr = null;

        try
        {
            Guid listGuid = targetsite.Lists.Add(name, "", temp);
            SPList newList = targetsite.Lists[listGuid];
            SPFolder rootFolder = newList.RootFolder;
            rootFolder.Properties.Add(KEY, projectId);
            rootFolder.Update();
            if (rootFolder.ParentWeb != targetsite)
                rootFolder.ParentWeb.Dispose();
            if (!templateName.Contains("Natural"))
            {
                SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                alldocfile.Update();
            }
            if (newList.ParentWeb != targetsite)
                newList.ParentWeb.Dispose();
            _globalResult = new ServiceResult(true, "Success", "Success");
        }
        catch (Exception ex)
        {
            _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
        }
        finally
        {
            if (wpmgr != null)
            {
                wpmgr.Web.Dispose();
                wpmgr.Dispose();
            }
        }
    }
}

private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
{
    var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
    if (wp != null)
    {
        (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
        mgr.SaveChanges(wp);
    }
}

This shows how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My DocLibExists method will now need to change, because a web service is not set up to look at property bags. I had to write another method on the service to do the equivalent lookup with IDs instead of names (a sketch follows below). The second thing you will notice about the code is the use of the web part manager. I have seen several examples online, and also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did.
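As a rough illustration of that replacement lookup, here is a minimal sketch, not the production code: it assumes the method lives on the same WCF service and that KEY is the same property bag key used at creation time above.

private SPList FindProjectById(SPWeb web, string projectId)
{
    // Scan each list's root folder property bag for the project ID stamped at creation time.
    return web.Lists.Cast<SPList>().FirstOrDefault(l =>
        l.RootFolder.Properties.ContainsKey(KEY) &&
        string.Equals(l.RootFolder.Properties[KEY] as string, projectId));
}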
CONCLUSION This is a long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and are only doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I wrote it on a bus during a 12-hour trip to Iowa, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • ASP.NET List Control

    - by Ricardo Peres
Today I developed a simple control for generating lists in ASP.NET, something that the base class library does not contain; it allows for nested lists where the list item types and images can be configured on a list-by-list basis. Since it was great fun to develop, I'd like to share it here. Here is the code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

[ParseChildren(true)]
[PersistChildren(false)]
public class List : WebControl
{
    public List() : base("ul")
    {
        this.Items = new List<ListItem>();
        this.ListStyleType = ListStyleType.Auto;
        this.ListStyleImageUrl = String.Empty;
        this.CommonCssClass = String.Empty;
        this.ContainerCssClass = String.Empty;
    }

    [DefaultValue(ListStyleType.Auto)]
    public ListStyleType ListStyleType { get; set; }

    [DefaultValue("")]
    [UrlProperty("*.png;*.gif;*.jpg")]
    public String ListStyleImageUrl { get; set; }

    [DefaultValue("")]
    [CssClassProperty]
    public String CommonCssClass { get; set; }

    [DefaultValue("")]
    [CssClassProperty]
    public String ContainerCssClass { get; set; }

    [Browsable(false)]
    [PersistenceMode(PersistenceMode.InnerProperty)]
    public List<ListItem> Items { private set; get; }

    protected override void Render(HtmlTextWriter writer)
    {
        String cssClass = String.Join(" ", new String[] { this.CssClass, this.ContainerCssClass });
        if (cssClass.Trim().Length != 0)
        {
            this.CssClass = cssClass;
        }

        if (String.IsNullOrEmpty(this.ListStyleImageUrl) == false)
        {
            this.Style[HtmlTextWriterStyle.ListStyleImage] = String.Format("url('{0}')", this.ResolveClientUrl(this.ListStyleImageUrl));
        }

        if (this.ListStyleType != ListStyleType.Auto)
        {
            switch (this.ListStyleType)
            {
                case ListStyleType.Circle:
                case ListStyleType.Decimal:
                case ListStyleType.Disc:
                case ListStyleType.None:
                case ListStyleType.Square:
                    this.Style[HtmlTextWriterStyle.ListStyleType] = this.ListStyleType.ToString().ToLower();
                    break;
                case ListStyleType.LowerAlpha:
                    this.Style[HtmlTextWriterStyle.ListStyleType] = "lower-alpha";
                    break;
                case ListStyleType.LowerRoman:
                    this.Style[HtmlTextWriterStyle.ListStyleType] = "lower-roman";
                    break;
                case ListStyleType.UpperAlpha:
                    this.Style[HtmlTextWriterStyle.ListStyleType] = "upper-alpha";
                    break;
                case ListStyleType.UpperRoman:
                    this.Style[HtmlTextWriterStyle.ListStyleType] = "upper-roman";
                    break;
            }
        }

        base.Render(writer);
    }

    protected override void RenderChildren(HtmlTextWriter writer)
    {
        foreach (ListItem item in this.Items)
        {
            this.writeItem(item, this, 0);
        }

        base.RenderChildren(writer);
    }

    private void writeItem(ListItem item, Control control, Int32 depth)
    {
        HtmlGenericControl li = new HtmlGenericControl("li");
        control.Controls.Add(li);

        if (String.IsNullOrEmpty(this.CommonCssClass) == false)
        {
            // Each nesting level gets an extra class, e.g. "item item0", "item item1", ...
            String cssClass = String.Join(" ", new String[] { this.CommonCssClass, this.CommonCssClass + depth });
            li.Attributes["class"] = cssClass;
        }

        foreach (String key in item.Attributes.Keys)
        {
            li.Attributes[key] = item.Attributes[key];
        }

        li.InnerText = item.Text;

        if (item.ChildItems.Count != 0)
        {
            HtmlGenericControl ul = new HtmlGenericControl("ul");
            li.Controls.Add(ul);

            if (String.IsNullOrEmpty(this.ContainerCssClass) == false)
            {
                ul.Attributes["class"] = this.ContainerCssClass;
            }

            if ((item.ListStyleType != ListStyleType.Auto) || (String.IsNullOrEmpty(item.ListStyleImageUrl) == false))
            {
                if (String.IsNullOrEmpty(item.ListStyleImageUrl) == false)
                {
                    ul.Style[HtmlTextWriterStyle.ListStyleImage] = String.Format("url('{0}')", this.ResolveClientUrl(item.ListStyleImageUrl));
                }

                if (item.ListStyleType != ListStyleType.Auto)
                {
                    // Switch on the item's own style, not the parent list's.
                    switch (item.ListStyleType)
                    {
                        case ListStyleType.Circle:
                        case ListStyleType.Decimal:
                        case ListStyleType.Disc:
                        case ListStyleType.None:
                        case ListStyleType.Square:
                            ul.Style[HtmlTextWriterStyle.ListStyleType] = item.ListStyleType.ToString().ToLower();
                            break;
                        case ListStyleType.LowerAlpha:
                            ul.Style[HtmlTextWriterStyle.ListStyleType] = "lower-alpha";
                            break;
                        case ListStyleType.LowerRoman:
                            ul.Style[HtmlTextWriterStyle.ListStyleType] = "lower-roman";
                            break;
                        case ListStyleType.UpperAlpha:
                            ul.Style[HtmlTextWriterStyle.ListStyleType] = "upper-alpha";
                            break;
                        case ListStyleType.UpperRoman:
                            ul.Style[HtmlTextWriterStyle.ListStyleType] = "upper-roman";
                            break;
                    }
                }
            }

            foreach (ListItem childItem in item.ChildItems)
            {
                this.writeItem(childItem, ul, depth + 1);
            }
        }
    }
}

[Serializable]
[ParseChildren(true, "ChildItems")]
public class ListItem : IAttributeAccessor
{
    public ListItem()
    {
        this.ChildItems = new List<ListItem>();
        this.Attributes = new Dictionary<String, String>();
        this.Text = String.Empty;
        this.Value = String.Empty;
        this.ListStyleType = ListStyleType.Auto;
        this.ListStyleImageUrl = String.Empty;
    }

    [DefaultValue(ListStyleType.Auto)]
    public ListStyleType ListStyleType { get; set; }

    [DefaultValue("")]
    [UrlProperty("*.png;*.gif;*.jpg")]
    public String ListStyleImageUrl { get; set; }

    [DefaultValue("")]
    public String Text { get; set; }

    [DefaultValue("")]
    public String Value { get; set; }

    [Browsable(false)]
    public List<ListItem> ChildItems { get; private set; }

    [Browsable(false)]
    public Dictionary<String, String> Attributes { get; private set; }

    String IAttributeAccessor.GetAttribute(String key)
    {
        return (this.Attributes[key]);
    }

    void IAttributeAccessor.SetAttribute(String key, String value)
    {
        this.Attributes[key] = value;
    }
}

[Serializable]
public enum ListStyleType
{
    Auto = 0,
    Disc,
    Circle,
    Square,
    Decimal,
    LowerRoman,
    UpperRoman,
    LowerAlpha,
    UpperAlpha,
    None
}
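For anyone wanting to try it, usage might look something like this. This is a sketch; the tag prefix registration, namespace and assembly name are assumptions, so adjust them to your project:

<%@ Register TagPrefix="rp" Namespace="MyControls" Assembly="MyControls" %>
<rp:List runat="server" ID="list" ListStyleType="Square" CommonCssClass="item" ContainerCssClass="container">
    <Items>
        <rp:ListItem Text="First item" />
        <rp:ListItem Text="Second item" ListStyleType="Circle">
            <rp:ListItem Text="Nested item" />
        </rp:ListItem>
    </Items>
</rp:List>

Nested items work because ListItem declares ChildItems as its default parsed collection, so child tags placed inside an item become its sub-list.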

    Read the article

  • JavaScript Intellisense with Telerik in ASP.NET Master Page Project with VS 2010

    - by Otto Neff
Today I was looking for a solution to finally get the JScript/JavaScript/jQuery IntelliSense feature working with my ASP.NET WebForms project. I found some good articles:

- JScript IntelliSense Overview
- JScript IntelliSense: A Reference for the "Reference" Tag
- Enabling JavaScript intellisense in VS.NET 2010 to work with SharePoint 2010
- Rich IntelliSense for jQuery

BUT, none of the suggested solutions worked properly with my master-page-based Visual Studio 2010 solution. They only worked with physical JavaScript files (Telerik embeds certain JavaScript files, like jQuery, as assembly resources), and/or required configuring a new ASP.NET ScriptManager/RadScriptManager on every page derived from the master page, which wasn't exactly what I was looking for. So I came up with the following simple solution to trick VS2010 while still having the project run with what are technically multiple runat="server" script managers.

In short: a new ASP.NET control derived from RadScriptManager with empty overridden lifecycle methods, used as an empty wrapper purely for VS2010's benefit.

In detail:

New RadScriptManager class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Telerik.Web.UI;

namespace IntellisenseJavascript.Controls
{
    public class IntelliJS : RadScriptManager
    {
        protected override void OnInit(EventArgs e) { }
        protected override void OnPreRender(EventArgs e) { }
        protected override void OnLoad(EventArgs e) { }
        protected override void Render(System.Web.UI.HtmlTextWriter writer) { }
        public override void RenderControl(System.Web.UI.HtmlTextWriter writer) { }
    }
}

web.config:

<configuration>
  ...
  <system.web>
    ...
    <pages>
      <controls>
        <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
        <add tagPrefix="VSFix" namespace="IntellisenseJavascript.Controls" assembly="IntellisenseJavascript"/>
      </controls>
    </pages>
    ...

Master Page:

<%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="IntellisenseJavascript.Site" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head id="head" runat="server">
    <title></title>
    <telerik:RadStyleSheetManager ID="radStyleSheetManager" runat="server" />
</head>
<body>
    <form id="form" runat="server">
        <telerik:RadScriptManager ID="radScriptManager" runat="server">
            <Scripts>
                <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" />
                <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" />
                <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" />
            </Scripts>
        </telerik:RadScriptManager>
        <telerik:RadAjaxManager ID="radAjaxManager" runat="server">
        </telerik:RadAjaxManager>
        <div>
            #MASTER CONTENT#
            <asp:ContentPlaceHolder ID="contentPlaceHolder" runat="server">
            </asp:ContentPlaceHolder>
        </div>
    </form>
    <script type="text/javascript">
        $(function () {
            // Masterpage ready
            $('body').css('margin', '50px');
        });
    </script>
</body>
</html>

ASPX Page:

<%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="IntellisenseJavascript.Default" %>
<asp:Content ID="Content1" ContentPlaceHolderID="contentPlaceHolder" runat="server">
    <VSFix:IntelliJS runat="server" ID="intelliJS">
        <Scripts>
            <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.Core.js" />
            <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQuery.js" />
            <asp:ScriptReference Assembly="Telerik.Web.UI, Version=2011.3.1115.40, Culture=neutral, PublicKeyToken=121fae78165ba3d4" Name="Telerik.Web.UI.Common.jQueryInclude.js" />
        </Scripts>
    </VSFix:IntelliJS>
    <div style="border: 5px solid #FF9900;">
        #PAGE CONTENT#
    </div>
    <script type="text/javascript">
        $(function () {
            // Page ready
            $('body').css('border', '5px solid #888');
        });
    </script>
</asp:Content>

The Result

I know this is not the way it was meant to be... but now at least you can have a main ScriptManager for all common scripts and settings, inject page-specific JavaScript in the PageLoad event in normal ASPX files, and have JavaScript IntelliSense for scripts defined in JS files or assembly resources in your content. Maybe vNext will fix this.

    Read the article

  • iPhone Peripherals for Retailers

    - by David Dorf
I saw RedLaser on the latest "Shopper" iPhone commercial on TV. It works great for consumers, but retailers will be more interested in a true barcode reader from someone like Infinite Peripherals, which also comes with a magstripe reader. I previously mentioned the offerings from Square, Verifone, and Mophie that allow swiping credit cards with an iPhone as well. So what's next? There's a decent list at WireLust that includes an IR dongle that turns your iPhone into a TV remote, armband monitors for use when exercising, and most recently an NFC/RFID reader. iCarte from Canadian firm Wireless Dynamics looks interesting. This device can be used for NFC payments and for reading RFID tags. The Canon printer I just bought for home has an iPhone app that lets me send iPhone pictures directly to the printer for printing. In that same vein, it seems like retailers could use Bluetooth to print receipts on strategically placed printers on the floor. I can't wait to see what they come up with for the iPad.

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
Whenever we format a disk volume, it is a good idea to name the label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, exFAT and TFAT, the transaction-safe FAT) and many features that let you scan and even defragment the volume, but not label it. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all; it neither reports the label to the OS nor changes the label on the FAT volume.

So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some online resources that are handy for parsing FAT:

http://en.wikipedia.org/wiki/File_Allocation_Table
http://www.pjrc.com/tech/8051/ide/fat32.html
http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx

You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h or the above links for the fields we discuss here. The first sector of a FAT volume (which is not necessarily the first sector of a disk) is the FAT boot sector and BPB (BIOS Parameter Block). At offset 43, bgbsVolumeLabel (or bsVolumeLabel on FAT16) stores the volume label, but note the spec also indicates "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created." So we can't just simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h).

Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is a boot sector (the Volume ID in the figure) followed by reserved sectors (1 on FAT12/16 and 32 on FAT32), then one or two FAT chain tables, after that the root directory (on FAT12/16; it is not shown in the figure), and then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32 this rule no longer holds: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (offset 44 in the boot sector). Although this field is usually 0x00000002 (which is how CE initializes the root directory after formatting a volume; note we should never assume it is always true), the root directory is not contiguous the way it is in FAT12/16. It is just like a regular file: it can be fragmented. So we may need to access the root directory (on FAT32) hopping from one cluster to another by traversing the FAT table.

Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. (Be aware the public code only provides the formatter for FAT12/16/32; for exFAT it is still not available.) FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily.
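As a side note, the FAT12/16 root directory location mentioned above can be computed directly from the BPB fields. A minimal sketch in C# (field names follow the FAT spec, not the CE headers):

// First sector of the root directory on FAT12/16, relative to the volume start.
// On FAT32 there is no fixed root area; instead, follow the cluster chain
// starting at BGBPB_RootDirStrtClus through the FAT.
static int RootDirStartSector(int reservedSectors, int numberOfFats, int sectorsPerFat)
{
    return reservedSectors + numberOfFats * sectorsPerFat;
}

// Size of the fixed root area in sectors (FAT12/16 only); entries are 32 bytes each.
static int RootDirSectors(int rootEntryCount, int bytesPerSector)
{
    return ((rootEntryCount * 32) + (bytesPerSector - 1)) / bytesPerSector;
}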
Back in the formatter, I would like to discuss the following code pieces:

    dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;
    dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries;

Note that dwReservedSectors is 32 in FAT32 and 1 in FAT12/16. The root entry count is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size for FAT12/16 (usually 512, defined as DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:

    memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel));

This sets the volume label to an empty (space-padded) string. Now let's carry on to the next section, writing the root directory:

    if (fo.dwFatVersion == 32) {
        if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {
            dwRootSectors = dwSectorsPerCluster;
        }
        else {
            DIRENTRY    dirEntry;
            DWORD       offset;
            int         iVolumeNo;

            memset(pbBlock, 0, pdi->di_bytes_per_sect);
            memset(&dirEntry, 0, sizeof(DIRENTRY));

            dirEntry.de_attr = ATTR_VOLUME_ID;
            // the first one is the volume label
            memcpy(dirEntry.de_name, "TFAT       ", sizeof(dirEntry.de_name));
            memcpy(pbBlock, &dirEntry, sizeof(dirEntry));
            ...
            // Skip the next step of zeroing out clusters
            dwCurrentSec += dwSectorsPerCluster;
            dwRootSectors = 0;
        }
    }
    // Each new root directory sector needs to be zeroed.
    memset(pbBlock, 0, cbSizeBlk);
    iRootSec = 0;
    while (iRootSec < dwRootSectors) {

Basically, the code zeroes out each sector of the root directory, depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors needed for the root entries (512 entries in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, the code initializes the root directory with a special volume label entry for TFAT's own purposes. Despite its unusual initialization process, the TFAT path does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter ourselves, remembering to also sync the label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
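To make that concrete, here is a minimal sketch of what the label entry amounts to, written as C# byte manipulation rather than the CE formatter's C structures. The 32-byte entry layout and the 0x08 attribute value come from the FAT spec; the label string itself is just a made-up example:

// Build the 32-byte root directory entry for a volume label.
byte[] entry = new byte[32];
byte[] name = System.Text.Encoding.ASCII.GetBytes("MYVOLUME   "); // 11 chars, space padded, no 8.3 dot
Array.Copy(name, 0, entry, 0, 11);   // bytes 0-10: the label
entry[11] = 0x08;                    // ATTR_VOLUME_ID; all other fields stay zero for a label entry
// Write 'entry' into the first free slot of the root directory, and copy the same
// 11 bytes over bsVolumeLabel/bgbsVolumeLabel in the boot sector so the two stay
// in sync, as the spec requires.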

    Read the article

  • Upgrading VSIX extensions from VS2012 to VS2013

    - by Tarun Arora [Microsoft MVP]
Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/27/upgrading-vsix-extensions-from-vs2012-to-vs2013.aspx

As consumers of your Visual Studio extensions start to move over to VS 2013, you will have to upgrade the extensions you built for Visual Studio 2012 to Visual Studio 2013 and republish them to the Visual Studio extension gallery. Failing that, it will not be possible for your consumers to install and use your extensions on Visual Studio 2013.

Objective

In this blog post, I'll show you how simple it is to upgrade your Visual Studio 2012 extension to Visual Studio 2013. There aren't any reported breaking changes between the VS 2012 SDK and the VS 2013 SDK; the upgrade usually involves rebuilding the extension against the VS 2013 SDK and updating the VSIX manifest file.

Walkthrough

1. Download the Visual Studio 2013 SDK. You will need it in order to open the Visual Studio extension project in Visual Studio 2013. The SDK can be downloaded from here. Install the SDK before you proceed.

2. Once the VS 2013 SDK has been installed, open up your package project. For the purposes of this blog post, I'll open the Avanade Extension (Software Inventory) in Visual Studio 2013. You will notice that Visual Studio doesn't load the project but lets you know that the project needs to be migrated.

3. Right click the project and choose the option 'Reload Project' from the context menu.

4. Choosing the Reload Project option brings up an upgrade window, telling you that the upgrade is one-way only, i.e. the project will be changed to work with Visual Studio 2013 and you will not be able to open it in Visual Studio 2012 any more. My recommendation would be to create a Visual Studio 2013 branch and upgrade the project in that branch only, so if you need to go back to the Visual Studio 2012 project at some point, you have a handy reference in a separate branch.

5. Upon clicking OK, the project is updated. The following changes are made at the time of the upgrade:
   - The runtime version is updated in the Resources.Designer.cs file.
   - The minimum version of Visual Studio in the package project file is changed from 11.0 to 12.0.

6. Reference the VS 2013 DLLs rather than the VS 2012 DLLs. So reference Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Controls.dll from C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0 and C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v4.5. If you have any other API references, change them to point to VS 2013 instead of VS 2012.

7. Rebuild your solution to ensure there are no breaking changes. Success!

8. Update the VSIX manifest file (source.extension.vsixmanifest contains the metadata for your VSIX):
   - Update the install targets from 11.0 to 12.0. This enforces that the extension can only be installed on Visual Studio 2013.
   - Update the dependency from Visual Studio MPF 11.0 to Visual Studio MPF 12.0.

9. Rebuild the solution, then open the bin folder for the package project and look for the *.vsix file (the Microsoft Visual Studio Extension).
   - This is basically the installer for your extension.
   - Double click the installer to launch the installation wizard. Voila! The package installation wizard opens up and gives you the option to install the extension for Visual Studio 2013.
   - Click Install to continue.
   - Note: if you run into the exception "23/06/2013 10:42:18 - Install Error : Microsoft.VisualStudio.ExtensionManager.InstallByMsiException: The InstalledByMSI element in extension Avanade Extensions cannot be 'true' when installing an extension through the Extensions and Updates Installer.  The element can only be 'true' when an MSI lays down the extension manifest file.", ensure you have the option "This VSIX is installed by Windows Installer" unchecked in the Install Targets tab.

10. Verify that the extension has installed correctly.
   - Open the Extension Manager and verify that the installed extension shows up in its list of installed VSIX packages.

11. First look at the updated extension.
   - The links have now been moved to the context menu, so to see the navigation links, you'll have to right click on the icon and select the option from the context menu.

Note: the Avanade Extension used in the demo was developed by Utkarsh and Tarun. The Software Inventory extension for Visual Studio 2012 allows you to see the list of software installed on the hosted build server right from within Visual Studio; the extension also allows you to export this list to Excel. More details on how this has been implemented can be found here.

I hope you found this useful. In case you have any questions or feedback, feel free to reach out on the Visual Studio extensibility MSDN forums or via the Microsoft Visual Studio feedback forum. Thank you for taking the time to read this blog post. If you enjoyed it, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
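For reference, the manifest change from step 8 looks roughly like this in source.extension.vsixmanifest. This is only a sketch for a version 2 manifest; the product Id shown is an assumption, so use whichever Visual Studio SKUs your extension actually targets:

<Installation InstalledByMsi="false">
  <!-- Before the upgrade this read Version="[11.0]" (VS 2012); [12.0] targets VS 2013 -->
  <InstallationTarget Id="Microsoft.VisualStudio.Pro" Version="[12.0]" />
</Installation>

The InstalledByMsi="false" attribute is what the error message in the note above is complaining about when it is set to true for a VSIX installed through Extensions and Updates.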

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
A few days ago I had a curious thought: with all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still, I was curious to see how the various technologies performed relative to each other, just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!).

I want to clarify that this is merely an informal test for my own curiosity, and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads, so it's often not that critical to test load performance. This post is not meant to make a point or come to a conclusion about which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub.

I looked at this data for these technologies:

- ASP.NET Web API
- ASP.NET MVC
- WebForms
- ASP.NET WebPages
- ASMX AJAX services (couldn't get AJAX/JSON to run on IIS 8)
- WCF REST
- Raw ASP.NET HttpHandlers

It's quite a mixed bag, of course, and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity than anything, and I thought it interesting enough to discuss on the blog :-)

First test: Raw Throughput

The first thing I did is test raw throughput for the various technologies. This is the least practical test of course, since you're unlikely to ever create the equivalent of a 'Hello World' request in a real-life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this I created the simplest Hello World request that I could come up with for each tech.

Http Handler

The first is the lowest level approach, an HTTP handler:

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

WebForms

Next I added a couple of ASPX pages: one using CodeBehind and one using only a markup page. The CodeBehind page simply does this in CodeBehind without any markup in the ASPX page:

public partial class HelloWorld_CodeBehind : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
        Response.End();
    }
}

while the markup page only contains some static output via an expression:

<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %>
Hello World. Time is <%= DateTime.Now %>

ASP.NET WebPages

WebPages is the freestanding Razor implementation of ASP.NET.
Here's the simple HelloWorld.cshtml page:

Hello World @DateTime.Now

WCF REST

WCF REST was the token REST implementation for ASP.NET before Web API, and the in-between step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class:

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString());
        var ms = new MemoryStream(data);
        return ms;
    }
}

WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns it back to the client.

ASP.NET AJAX (ASMX Services)

I also wanted to test ASP.NET AJAX services because prior to Web API this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012 removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work; I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So I didn't test this (but the code is there; you might be able to test it on a Windows 7 box).

ASP.NET MVC

Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller:

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }
}

ASP.NET Web API

Next up is Web API, which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response:

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }
}
Just make sure you do a short test run to warm up the server first, otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test:

- 100,000 iterations
- Load factor of 20 concurrent connections
- IISRESET before starting
- A short warm-up run for API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine; results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, a Dell XPS with a quad-core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you could try running these tests from a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be representative.

Results

OK, let's cut straight to the chase. Below are the results from the tests…

It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially WebForms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and Web API are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All they really do is give you an idea of the raw throughput for the technology from the time a request arrives at the endpoint until minimal text data is returned to the client, which indicates full round-trip performance. But it's still interesting to see that WebForms performs better in throughput than MVC, Web API or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24-hour period. Remember these are not static pages, but dynamic requests that are being served.

Another test: JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning text from the APIs didn't make a ton of sense either :-)

In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];
        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        return new Person();
    }
}

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer.

ASP.NET Web API

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are a bit interesting in that the Web API test improved performance significantly over returning plain string content. Here are the results:

The performance for each technology drops a little bit, except for Web API, which is up quite a bit! From this test it appears that Web API actually performs significantly better returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance improved significantly with JSON results, from 5580 to 6530 or so, which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the Web API test reports showed a bunch of errors on about 10% of the requests. Check out this report:

Notice the failed request count. What the hey? Is Web API failing on roughly 10% of requests when sending JSON? Turns out: no, it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, so to make sure I re-ran the test with Fiddler attached, running the ab.exe test through the proxy using the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in the dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

are different in length because of the fractional seconds, which results in 68 and 69 bytes respectively. The same URL produces different result lengths, which is what ab.exe reports as a failure. I didn't notice it at first, but the same thing happens when running the ASHX handler with the JSON.NET result, since it uses the same serializer that varies the milliseconds. Moral: you can typically ignore length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.

Another interesting Side Note: Perf drops over Time

As I was running these tests repeatedly I found that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it kept dropping until it would stabilize somewhere around 5900 req/sec, occasionally jumping lower.
This is why I did the IISRESET and warm-up for the individual tests. It is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies (handlers, WebForms, MVC, Web API); I'm curious whether others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these tests were informal, to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about suitability for the job and ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often, old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding Web API's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, Web API actually performs better. But I do wonder what would make generic string content slower than serialized code. This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time…

I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API

    Read the article

< Previous Page | 335 336 337 338 339 340 341 342 343 344 345 346  | Next Page >