Search Results

Search found 133768 results on 5351 pages for 'using directive'.


  • Using 12.04 installation as a persistent pen drive

    - by Cawas
    Disclaimer: I aim to build a self-contained pen drive with my application inside, so updates don't matter. Maybe I'm looking at the wrong Linux distribution to do this... Please let me know if you think so. I've tried Knoppix and even Lubuntu, but they don't come with enough "drivers" for Unity3D to work. Creating a custom live persistent pen drive is a real pain, and I've been trying for a day without any success. Sure, being able to do it would probably be ideal and occupy the minimum space. Using the installation image on a pen drive, however, is good enough and is really easy to create. We can even do it from any OS, using UNetbootin, LiLi USB Creator or some other method. Straightforward. Some recommend installing it on a pen drive, but that requires a lot of space and, I believe, it won't behave as well as something meant to be installed on a USB disk, because of memory management. So, there are only a few negative points on using the installation image that I can think of. The question here is how to remove those drawbacks: Having to press "Try Ubuntu". That's the big one; I couldn't find a way around it. Unable to load everything into memory and keep running without the pen drive (like this). Unable to remove the "Install Ubuntu 12.04 LTS" app. Setting the ISO to use the maximum amount of space for the OS leaves the pen drive with zero space left, and any file saved to it from Ubuntu is inaccessible from the outside (when plugging in the pen drive without booting from it). Am I missing something? Can those points be fixed?

    Read the article

  • How to run multiple distros using lvm

    - by Mark
    I've seen quite a few posts about running multiple distros, but I'm not sure they apply to using LVM (and without Windows). I'm using a machine that's about 3 years old. Setup: Intel Core i7 2.8GHz, 8GB RAM, 1TB SATA HDD. At this point, I'd like to install 12.10 and Mint 14, leaving the option open to install additional distros down the road. I could be way off, but I'm thinking about creating at least 2 primary /boot partitions (1 for 12.10 and 1 for Mint) and another partition for LVM, leaving room for additional /boot partitions, then creating a VG and separate LVs for Ubuntu 12.10 and Linux Mint 14. I understand I can share partitions between the 2 installs, but I'm only using this for testing and I have tons of space to play with. LVM seemed logical considering I may want to install and test additional distros. I guess I could share the swap partition across the board without problems, right? I'm unclear about GRUB2. How do I handle the bootloader situation? Install 12.10 and get it running, then make changes to grub.cfg after installing Mint? And do I not install GRUB for Mint, or do I install it in a different location? Any guidance would be appreciated.

    Read the article

  • Calculating up-vector to avoid gimbal lock using euler angles

    - by jessejuicer
    I wish to orbit a camera around a sphere, yet the problem is that when the camera rotates so that it is at the north pole (and pointing down) or the south pole (and pointing up) of the sphere, the camera doesn't handle itself very well. It spins rapidly until it arrives 180 degrees in the opposite direction. I believe this is known as gimbal lock. I understand you can avoid this problem using quaternions. But I also read in another forum that it's possible to avoid it easily using Euler angles as well, which I would prefer to do. It was said that all you need to do is "calculate a proper up-vector every frame, and that avoids the problem entirely." Well, I tried aligning the up-vector with the vertical axis of the camera whenever the camera changed orientation, but this didn't seem to work, meaning that the up-vector followed exactly the orientation of the camera's y-axis (or its up vector), instead of using a constant up-vector aligned to the up-vector of the world (0, 1, 0). How exactly do I go about calculating a proper up-vector as my camera orientation changes to avoid the gimbal lock problem mentioned above?
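
    A minimal sketch of that approach in C# (System.Numerics types are used purely for illustration; the pitch clamp value and handedness convention are assumptions, not taken from the question): rebuild the camera basis every frame from the constant world up rather than from the camera's own rotated y-axis.

        using System;
        using System.Numerics;

        static class OrbitCamera
        {
            // Build eye/forward/up for a camera orbiting a target. The up vector is
            // re-derived every frame from the fixed world up (0, 1, 0), and pitch is
            // clamped so the view direction never becomes parallel to world up.
            public static void Orbit(Vector3 target, float radius, float yaw, float pitch,
                                     out Vector3 eye, out Vector3 forward, out Vector3 up)
            {
                const float limit = MathF.PI / 2f - 0.01f;   // stay just off the poles
                pitch = Math.Clamp(pitch, -limit, limit);

                // Spherical coordinates around the target.
                eye = target + new Vector3(
                    radius * MathF.Cos(pitch) * MathF.Sin(yaw),
                    radius * MathF.Sin(pitch),
                    radius * MathF.Cos(pitch) * MathF.Cos(yaw));

                forward = Vector3.Normalize(target - eye);

                Vector3 worldUp = Vector3.UnitY;              // constant, never the camera's own y-axis
                Vector3 right = Vector3.Normalize(Vector3.Cross(worldUp, forward));
                up = Vector3.Cross(forward, right);           // world up projected perpendicular to forward
            }
        }

    Feeding that up vector into a LookAt-style view matrix each frame keeps the camera stable near the poles without switching to quaternions.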

    Read the article

  • How to do a 3-tier using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP web application to be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that hosts the business objects but sits behind the firewall. Then those objects connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I create my middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the DMZ first-tier web server. The second reason is scalability: to have multiple servers on different tiers that can handle the requests. The last reason is deployment: it is easier if I can take one set of servers offline for testing before putting them back in production. Is there an open-source project that shows how to do this? The only example I can find is the web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.
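
    Since the poster mentions having built this in .NET, a minimal sketch of that boundary for comparison (hypothetical names and middle-tier URL): the DMZ front tier holds no connection strings and only calls REST endpoints on the middle tier behind the firewall, which is the same shape a PHP front end calling a middle-tier service would take.

        using System.Net.Http;
        using System.Threading.Tasks;

        // Front (DMZ) tier: no database credentials anywhere, only HTTP calls
        // to the business-object server behind the firewall.
        public class CustomerProxy
        {
            private static readonly HttpClient client = new HttpClient();

            public Task<string> GetCustomerJsonAsync(int id)
            {
                // "middle-tier.internal" is a placeholder for the middle-tier host.
                return client.GetStringAsync("https://middle-tier.internal/api/customers/" + id);
            }
        }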

    Read the article

  • How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)

    - by Orgad Kimchi
    Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab. This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model. We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job. Summary of Lab Exercises: this hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies: install Hadoop; edit the Hadoop configuration files; configure the Network Time Protocol; create the virtual network interfaces (VNICs); create the NameNode and the secondary NameNode zones; set up the DataNode zones; configure the NameNode; set up SSH; format HDFS from the NameNode; start the Hadoop cluster; run a MapReduce job; secure data at rest using ZFS encryption; and use Oracle Solaris DTrace for performance monitoring. Read it now

    Read the article

  • External TV only allows 640x480 using VGA cable

    - by EdC
    I have Ubuntu Precise installed on a MacBook Pro. When connecting to an external television using a VGA-style cable, the highest resolution available seems to be 640x480. I'm using the proprietary NVIDIA accelerated graphics driver (version: current) and attempting to set up the TV using the nvidia-settings tool. I know all the equipment supports higher resolutions because of the following: prior to rebuilding my machine with Precise, the same TV, cable & adapter gave higher resolutions; the same machine & TV when used with an HDMI cable allow higher resolutions; Windows laptops with the same cable & TV are able to achieve higher resolutions. I'm not sure what to look for next. There are some similar FAQ questions, but I'm not sufficiently familiar with the areas involved to know if these are my issue or are safe to try. Specifically: "colord problem & my monitor resolution 640*480 with no other options!" is the closest, but as I'm having no problems with the main laptop monitor, I'm not sure the described symptoms apply, and I can't work out from the answer or the Bumblebee site how to determine if NVIDIA Optimus applies to my laptop. "Why isn't my monitor's native resolution appearing as an option?" suggests configuring X by hand, but doesn't entirely match the symptoms I'm seeing.

    Read the article

  • Start building websites using Java

    - by Alex coady
    I'm a web developer and I've used PHP, along with MySQL, JavaScript, jQuery, etc., all of which I'm very confident with and can use on a professional level. What I want to do, however, is start using Java instead of PHP, as I'm creating sites that need to be scalable. I really need a starting point. I can program in Java and I understand the language, but I have never used it for the web. Links, tutorials and examples would be a great help. I have also considered using Scala (the language Twitter is written in), but don't really understand the benefits. Please, no jargon; I need clear information that essentially takes me through the process of creating a very simple website using Java; I'm more than capable of building the site once that's done... I think. I've previously used Eclipse, so I assume I can stick with that. Thanks in advance!

    Read the article

  • How to start WebLogic Server using default scripts?

    - by Luz Mestre-Oracle
    There are a few common issues reported when starting WebLogic Server using scripts: 1. The user is not able to access the WebLogic console. 2. After a few days/hours WebLogic Server stops abruptly. 3. When the user closes PuTTY, they are not able to connect to WebLogic Server anymore. 4. When the user closes the Windows command prompt, they are not able to connect to WebLogic Server anymore. 5. WebLogic is started using startManagedWebLogic.cmd/startManagedWebLogic.sh. By default, WebLogic Server does not run in background mode, so after you close the window the process finishes as well. On Linux/Unix-based platforms, you need to use: nohup ./startManagedWebLogic.sh <Server> <URL> & On Windows platforms, you need to start Managed Servers using Windows services: How to Install MS Windows Services For FMW 11g WebLogic Domain Admin and Managed Servers (Doc ID 1060058.1) http://docs.oracle.com/cd/E23943_01/web.1111/e13708/winservice.htm There are a few more reasons that could cause similar symptoms, like a JVM crash, signals sent by the operating system, and many others, but the steps above are the first place to start. Enjoy!

    Read the article

  • Simplifying data search using .NET

    - by Peter
    A tutorial on the asp.net site has an example of using LINQ to create a search feature for a movie site using MVC. The code looks like this - public ActionResult Index(string movieGenre, string searchString) { var GenreLst = new List<string>(); var GenreQry = from d in db.Movies orderby d.Genre select d.Genre; GenreLst.AddRange(GenreQry.Distinct()); ViewBag.movieGenre = new SelectList(GenreLst); var movies = from m in db.Movies select m; if (!String.IsNullOrEmpty(searchString)) { movies = movies.Where(s => s.Title.Contains(searchString)); } if (!string.IsNullOrEmpty(movieGenre)) { movies = movies.Where(x => x.Genre == movieGenre); } return View(movies); } I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop and maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant and repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable", which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like: if (!String.IsNullOrEmpty(searchString)) { movies = movies.Where(s => s.SearchColumn.Contains(searchString)); } What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g. CREATE PROCEDURE [dbo].[search_music] @title as varchar(50), @genre as varchar(50) AS -- set the variables to null if they are empty IF @title = '' SET @title = null IF @genre = '' SET @genre = null SELECT m.* FROM view_Music as m WHERE (title = @title OR @title IS NULL) AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL) ORDER BY Id desc OPTION (RECOMPILE) Any suggestions? Tips?
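
    One common way to keep this from growing a new if-block per criterion is a small conditional-filter extension; a sketch (the WhereIf name and shape are illustrative, not part of the tutorial):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            // Apply the predicate only when the criterion was actually supplied.
            public static IQueryable<T> WhereIf<T>(this IQueryable<T> source,
                                                   bool condition,
                                                   Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

        // Usage inside the action method from the excerpt:
        // var movies = db.Movies
        //     .WhereIf(!String.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
        //     .WhereIf(!String.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre);

    Each new criterion then becomes a single line, and Entity Framework still composes the chained filters into one SQL query.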

    Read the article

  • Building My First JavaFX Application Using Netbeans 7.1

    - by javafx4you
    Angela Caicedo, Oracle Technology Evangelist for JavaFX, has released a series of videos demonstrating how to build a simple JavaFX application using NetBeans 7.1. This video series is a great introduction to using JavaFX and takes the viewer through easy-to-follow, step-by-step instructions, including example code and showing the results along the way. These videos are highly recommended to anyone who is new to JavaFX and is looking for a quick getting-started guide. The first video provides an introduction to the application and shows viewers the end result of the exercises that will be demonstrated throughout the series. The second video (Part 1) demonstrates how to get started, from an empty project to your first JavaFX application in NetBeans 7.1. Instructions include defining the components, creating and inserting image packages, adding the resources you want to make available in your application, setting the clipping area for your image, and setting the style for your image to create a transparent window. The third video (Part 2) demonstrates how to handle events and binding in the sample application using JavaFX. Watch the first 3 episodes now and stay tuned for future installments in this series on the Java YouTube channel.

    Read the article

  • Very slow wireless connection (unrelated to power management) using Netgear N150 WNA1100 in 12.04

    - by vgaldikas
    I am using a Netgear N150 WNA1100 USB adapter. The network driver is ath9k_htc. When I use Ubuntu 12.04 my wireless connection is much slower than when I use the WNA1100 with my Windows 7 installation on the same machine. I have been researching this, and the most common reason seems to be related to power management. However, for me this is off by default. I had entered a bunch of different commands and, at one stage, I got it to work at full power. But then I rebooted the machine and it went back to slow again. After entering the different commands I was afraid that I might have messed something up, so I performed a fresh install of Ubuntu. I still have the same problem. My wireless connection is still much slower when using Ubuntu than when using Windows 7. I am not sure what other info could help... EDIT: OK, I have made a live USB from the ISO file I used to install Ubuntu on this PC, and ran it on my laptop. Still the same problem. However, the laptop works grand on cable. I suppose this means that my wireless adapter or network driver can be eliminated as the root of the problem. So maybe it's to do with the router itself? I use a ZyXEL P-660HW-D1.

    Read the article

  • VCS for single user using file sync service

    - by StackUnder
    I'm trying to set up version control for my one-man project. My project files are kept in sync thanks to Live Mesh (but I could be using Dropbox for that matter) between my laptop, my home PC and my office PC. I'm now using NetBeans with local file history. Sometimes it helps to revert to a previous state of one file. But imagine a situation where multiple files have problems. Correct me if I'm wrong, but I would have to go to every file and revert to its previous "safe" state. I don't like this approach, so I'm considering using version control, choosing between SVN and Git. I have some previous experience with SVN (TortoiseSVN) and I know that I can create a file:// repo. So, what I want to do is set up a VCS inside my synced folder just to have the ability to "revert" to a previous version if something goes wrong. Since everything's been synced to all computers, I wouldn't ever need to run an update. The file tree organization would be the following: C:...\SyncedFolder\MyProject\ Inside the MyProject folder are all the project files plus a directory that has the SVN or Git info of my project (the repo/master). Which VCS is best for this situation: SVN or Git? Does SVN need to store all files from the HEAD revision, thus "duplicating" my entire project inside my synced folder? Does Git eliminate this problem? Is this the best approach?

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publically facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also works with all versions of ASP.NET (and even work with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.  
This is convenient – but means that by default this content is available via two different publically exposed URLs (which is bad).  For example: http://scottgu.com/ http://scottgu.com/default.aspx SEO Problem #2: Different URL Casings Web developers often don’t realize URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx SEO Problem #3: Trailing Slashes Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ SEO Problem #4: Canonical Host Names Sometimes sites support scenarios where they support a web-site with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://scottgu.com/albums.aspx/ http://www.scottgu.com/albums.aspx/ How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.  You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine: Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool: Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site: Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.  Scenario 1: Handling Default Document Scenarios One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example: http://scottgu.com/ http://scottgu.com/default.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let’s look at how we can create such a rule.  We’ll begin by clicking the “Add Rule” link in the screenshot above.  
This will cause the below dialog to display: We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule.  This will display an empty pane like below: Don’t worry – setting up the above rule is easy.  The following 4 steps explain how to do so: Step 1: Name the Rule Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let’s name this rule our “Default Document URL Rewrite” rule: Step 2: Setup the Regular Expression that Matches this Rule Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.   Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule: (.*?)/?Default\.aspx$ This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx.  Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.   One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring: Above I've added a “products/default.aspx” URL and clicked the “Test” button.  This will give me immediate feedback on whether the rule will execute for it.  Step 3: Setup a Permanent Redirect Action We’ll then setup an action to occur when our regular expression pattern matches the incoming URL: In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action.  The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be: {R:1}/ This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/ The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
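
If you want to sanity-check the pattern and its back-references outside of IIS, the same regular expression can be exercised with .NET's Regex class (a quick test sketch; the URL Rewrite module itself evaluates the pattern at runtime, this just mirrors it):

    using System;
    using System.Text.RegularExpressions;

    class RewritePatternTest
    {
        static void Main()
        {
            var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

            foreach (var url in new[] { "Default.aspx", "products/Default.aspx", "products/default.aspx" })
            {
                Match m = pattern.Match(url);
                // m.Groups[0] corresponds to {R:0}, m.Groups[1] to {R:1}.
                Console.WriteLine("{0} -> redirect to '{1}/'", url, m.Groups[1].Value);
            }
        }
    }
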
Step 4: Apply and Save the Rule Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section): <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy. Step 5: Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/ http://scottgu.com/default.aspx Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well. Scenario 2: Different URL Casing Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template.  When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs: When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically send users to a lower-case version of the URL: We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.  
The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy.  We simply need to expand the “Conditions” section of the form above We can then click the “Add” button to add a condition clause.  This will bring up the “Add Condition” dialog: Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”.  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case. Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add. When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com/Albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.  Scenario 3: Trailing Slashes Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings: http://scottgu.com http://scottgu.com/ We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present: Like within our previous lower-casing rewrite rule we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect for happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://scottgu.com http://scottgu.com/ Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. Scenario 4: Canonical Host Names The final SEO problem I discussed earlier are scenarios where a site works with both a leading “www” hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split search rankling: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again.  This will cause the “Add Rule” dialog to appear again: The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.  
When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL: Above I’m entering the primary URL address I want to expose to the web: scottgu.com.  When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file: <configuration>     <system.webServer>         <rewrite>             <rules>                 <rule name="Cannonical Hostname">                     <match url="(.*)" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />                     </conditions>                     <action type="Redirect" url="http://scottgu.com/{R:1}" />                 </rule>                 <rule name="Default Document" stopProcessing="true">                     <match url="(.*?)/?Default\.aspx$" />                     <action type="Redirect" url="{R:1}/" />                 </rule>                 <rule name="Lower Case URLs" stopProcessing="true">                     <match url="[A-Z]" ignoreCase="false" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{ToLower:{URL}}" />                 </rule>                 <rule name="Trailing Slash" stopProcessing="true">                     <match url="(.*[^/])$" />                     <conditions logicalGrouping="MatchAll" trackAllCaptures="false">                         <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />                         <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />                         <add input="{URL}" pattern="WebResource.axd" negate="true" />                     </conditions>                     <action type="Redirect" url="{R:1}/" />                 </rule>             </rules>         </rewrite>     </system.webServer> </configuration> Try the Rule Out Now that we’ve saved the rule, let’s try it out on our site.  Try the following two URLs on my site: http://www.scottgu.com/albums.aspx http://scottgu.com/albums.aspx Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking. 4 Simple Rules for Improved SEO The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it. 
Customizing your URL Rewriting rules further is easy to-do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application: Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further. Summary Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well.  I’ll be covering these additional capabilities more in future blog posts. Hope this helps, Scott

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart(“build”, true); system.debug(true); //>>excludeEnd(“build”); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three page: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to lookup and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different age to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate that Durandal does not attempt to re-invent the wheel; instead, it leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These are powerful libraries, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use them together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Durandal also supports creating custom widgets (client-side controls) by composing a widget from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

    Read the article

  • Beware when using .NET's named pipes in a windows forms application

    - by FransBouma
    Yesterday a user of our .net ORM Profiler tool reported that he couldn't get the snapshot recording from code feature working in a windows forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you're done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a windows forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little. Here's an example piece of code to record the snapshot. This piece of code works OK in a console application, but results in an empty snapshot in a windows forms application: var snapshot = new Snapshot(); snapshot.Record(); using(var ctx = new ORMProfilerTestDataContext()) { var customers = ctx.Customers.Where(c => c.Country == "USA").ToList(); } InterceptorCore.Flush(); snapshot.Stop(); string error=string.Empty; if(!snapshot.IsEmpty) { snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error); } if(!string.IsNullOrEmpty(error)) { Console.WriteLine("Save error: {0}", error); } (the Console.WriteLine doesn't do anything in a windows forms application, but you get the idea). ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which when receiving a piece of data begins reading it, asynchronically, and when properly read, it will signal observers that new data has arrived so they can store it in a repository. In this case, the snapshot will be the observer and will store the data in its own repository. The reason the above code doesn't work in windows forms is because windows forms is a wrapper around Win32 and its WM_* message based system. Named pipes in .NET are wrappers around Windows named pipes which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the windows forms application, because it doesn't handle the WM_* messages in its message queue till after the method is over, as the message pump of a windows forms application is handled by the only thread of the windows forms application, so it will handle WM_* messages when the application idles. The fix is easy though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the windows forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper will react on that and the whole process will complete as if nothing happened. It's not that simple to just say 'why didn't you use a worker thread to create the snapshot here?', because a thread doesn't get its own message pump: the messages would still be posted to the window's message pump. A hidden form would create its own message pump, so the additional thread should also create a window to get the WM_* messages of the named pipe posted to a different message pump than the one of the main window. This WM_* messages pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they're implemented, a lot of APIs are leaky abstractions, they bleed the characteristics of the OS objects they hide away through to the .NET code. Be aware of that fact when using them :)
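For reference, here is the recording fragment from above with the suggested fix applied in a Windows Forms context. The Snapshot and InterceptorCore calls are taken verbatim from the original snippet (they come from the ORM Profiler interceptor assembly); only the Application.DoEvents() call and the message box are additions:

var snapshot = new Snapshot();
snapshot.Record();
using(var ctx = new ORMProfilerTestDataContext())
{
    var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
}
InterceptorCore.Flush();
// Pump the queued WM_* messages so the named pipe's BeginRead callbacks
// complete before the snapshot is stopped.
Application.DoEvents();
snapshot.Stop();
string error = string.Empty;
if(!snapshot.IsEmpty)
{
    snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
}
if(!string.IsNullOrEmpty(error))
{
    MessageBox.Show(string.Format("Save error: {0}", error));
}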

    Read the article

  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how do view/debug on-the-wire messages, or frames as they are called in WebSocket parlance, over this upgraded connection. This blog will use the application built in TOTD #183. First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183 then WebSocket is combination of Protocol and JavaScript API. A browser supporting WebSocket, or not, means they understand your web pages with the WebSocket JavaScript. caniuse.com/websockets provide a current status of WebSocket support in different browsers. Most of the major browsers such as Chrome, Firefox, Safari already support WebSocket for the past few versions. As of this writing, IE still does not support WebSocket however its planned for a future release. Viewing WebSocket farmes require special settings because all the communication happens over an upgraded HTTP connection over a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages. Lets get started! Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab and selecting the endpoint hosting the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received. You need to refresh the panel by clicking on the endpoint again. To see more detailed information about the WebSocket frames, you need to type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address to your WebSocket endpoint and see some basic information about connection and bytes exchanged between the client and the endpoint. Clicking on the blue text "source dependency ..." shows more details about the handshake. If you are interested in viewing the exact payload of WebSocket messages then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more details about the raw messages exchanged over the network. However because they provide lot more information so they need to be configured in order to view the relevant information. Wireshark (nee Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For this blog purpose, we'll assume that the WebSocket endpoint is hosted on the local machine. These tools do allow to sniff traffic across the network though. Wireshark is quite a comprehensive tool and we'll capture traffic on the loopback address. Start wireshark, select "loopback" and click on "Start". By default, all traffic information on the loopback address is displayed. That includes tons of TCP protocol messages, applications running on your local machines (like GlassFish or Dropbox on mine), and many others. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on "Say Hello" button once. 
The output in Wireshark looks like the following. Here is a description of the messages exchanged:
Message #4: Initial HTTP request for the JSP page
Message #6: Response returning the JSP page
Message #16: HTTP Upgrade request
Message #18: Upgrade request accepted
Message #20: Request for the favicon
Message #22: Response that the favicon was not found
Message #24: Browser sending a WebSocket message to the endpoint
Message #26: WebSocket endpoint responding back
You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages? Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds); TOTD #183 - Getting Started with WebSocket in GlassFish. Subsequent blogs will discuss the following topics (not necessarily in that order): binary data as payload, custom payloads using encoder/decoder, error handling, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.
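If you just want to generate a few frames to inspect without the full sample application, a minimal page script such as the one below works in any WebSocket-capable browser. The endpoint URL is a placeholder – substitute the URL of the endpoint you built in TOTD #183:

// Open a connection, send one text frame, and log whatever comes back.
var ws = new WebSocket("ws://localhost:8080/websockets/hello");  // placeholder URL
ws.onopen = function () {
    ws.send("Hello from the client");   // shows up as a data frame in the tools above
};
ws.onmessage = function (event) {
    console.log("Frame received: " + event.data);
};
ws.onclose = function () {
    console.log("Connection closed");
};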

    Read the article

  • Building Extensions Using E-Business Suite SDK for Java

    - by Sara Woodhull
    We’ve just released Version 2.0.1 of Oracle E-Business Suite SDK for Java.  This new version has several great enhancements added after I wrote about the first version of the SDK in 2010.  In addition to the AppsDataSource and Java Authentication and Authorization Service (JAAS) features that are in the first version, the Oracle E-Business Suite SDK for Java now provides: Session management APIs, so you can share session information with Oracle E-Business Suite Setup script for UNIX/Linux for AppsDataSource and JAAS on Oracle WebLogic Server APIs for Message Dictionary, User Profiles, and NLS Javadoc for the APIs (included with the patch) Enhanced documentation included with Note 974949.1 These features can be used with either Release 11i or Release 12.  References AppsDataSource, Java Authentication and Authorization Service, and Utilities for Oracle E-Business Suite (Note 974949.1) FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1) What's new in those references? Note 974949.1 is the place to look for the latest information as we come out with new versions of the SDK.  The patch number changes for each release.  Version 2.0.1 is contained in Patch 13882058, which is for both Release 11i and Release 12.  Note 974949.1 includes the following topics: Applying the latest patch Using Oracle E-Business Suite Data Sources Oracle E-Business Suite Implementation of Java Authentication and Authorization Service (JAAS) Utilities Error loggingSession management  Message Dictionary User profiles Navigation to External Applications Java EE Session Management Tutorial For those of you using the SDK with Oracle ADF, besides some Oracle ADF-specific documentation in Note 974949.1, we also updated the ADF Integration FAQ as well. EBS SDK for Java Use Cases The uses of the Oracle E-Business Suite SDK for Java fall into two general scenarios for integrating external applications with Oracle E-Business Suite: Application sharing a session with Oracle E-Business Suite Independent application (not shared session) With an independent application, the external application accesses Oracle E-Business  Suite data and server-side APIs, but it has a completely separate user interface. The external application may also launch pages from the Oracle E-Business Suite home page, but after the initial launch there is no further communication with the Oracle E-Business Suite user interface. Shared session integration means that the external application uses an Oracle E-Business Suite session (ICX session), shares session context information with Oracle E-Business Suite, and accesses Oracle E-Business Suite data. The external application may also launch pages from the Oracle E-Business Suite home page, or regions or pages from the external application may be embedded as regions within Oracle Application Framework pages. Both shared session applications and independent applications use the AppsDataSource feature of the Oracle E-Business Suite SDK for Java. Independent applications may also use the Java Authentication and Authorization (JAAS) and logging features of the SDK. Applications that are sharing the Oracle E-Business Suite session use the session management feature (instead of the JAAS feature), and they may also use the logging, profiles, and Message Dictionary features of the SDK.  
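Regardless of the scenario, once the AppsDataSource has been registered on Oracle WebLogic Server via the setup script, your code consumes it like any other JDBC data source. The sketch below is a generic Java EE illustration only – the JNDI name and the query are assumptions for illustration, not the SDK's documented API (see Note 974949.1 for the actual class names and setup):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class EbsLookupSample {
    // Assumes this runs inside the container where the data source is registered.
    public void printSomeUsers() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/EBSAppsDS");   // hypothetical JNDI name
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT user_name FROM fnd_user WHERE ROWNUM <= ?")) {
            ps.setInt(1, 10);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("user_name"));
                }
            }
        }
    }
}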
The session management APIs allow you to create, retrieve, validate and cancel an Oracle E-Business Suite session (ICX session) from your external application.  Session information and context can travel back and forth between Oracle E-Business Suite and your application, allowing you to share session context information across applications. Note: Generally you would use the Java Authentication and Authorization (JAAS) feature of the SDK or the session management feature, but not both together. Send us your feedback Since the Oracle E-Business Suite SDK for Java is still pretty new, we’d like to know about who is using it and what you are trying to do with it.  We’d like to get this type of information: customer name and brief use case configuration and technologies (Oracle WebLogic Server or OC4J, plain Java, ADF, SOA Suite, and so on) project status (proof of concept, development, production) any other feedback you have about the SDK You can send me your feedback directly at Sara dot Woodhull at Oracle dot com, or you can leave it in the comments below.  Please keep in mind that we cannot answer support questions, so if you are having specific issues, please log a service request with Oracle Support. Happy coding! Related Articles New Whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express To Customize or Not to Customize? New Whitepaper: Upgrading your Customizations to Oracle E-Business Suite Release 12 ATG Live Webcast: Upgrading your EBS 11i Customizations to Release 12

    Read the article

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment, and reduce the overall cost. Since Oracle Data Mining provides models as first class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package. Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account. grant create any directory to dmuser; Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a linux machine, we have the directory /scratch/oracle. CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle'; If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query: select model_name from user_mining_models; There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls: BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp','dmdir','name =''MY_DT_MODEL'''); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp','dmdir',              'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')'); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp','dmdir'); END; A .dmp file can be imported into another schema or database using the following procedure call, for example: BEGIN   DBMS_DATA_MINING.IMPORT_MODEL('MY_MODELS.dmp', 'dmdir'); END; As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.
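After the import completes, a quick way to confirm that the models (and any ADP/EDP transformations embedded in them) arrived in the target schema is to query the same data dictionary view used earlier; the additional columns shown here exist in recent releases but may vary slightly by version:

-- Run as the importing user in the target database
select model_name, mining_function, algorithm, creation_date
from user_mining_models
order by creation_date desc;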

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
    Customize SharePoint list using InfoPath2010 form Part1 Customize SharePoint list using InfoPath2010 form Part2 Customize SharePoint list using InfoPath2010 form Part3 In this post, I’d like to show you how to create print functionality in InfoPath for SharePoint list. The print functionality is provided out of box in InfoPath form library; however it is not available in SharePoint list. Here are the steps to create the print functionality.You can download the new form here. 1. Create print page in the list by first copy and paste the displayifs.aspx and rename the file to Printifs.aspx. 2. Open the page in the SharePoint designer and copy the following javascript to the PlaceHolderTitleAreaClass ContentPlaceHolder. <script type="text/javascript"> $(document).ready(function(){ $("[id^='Ribbon']").hide(); $(".s4-title").hide(); $("[id='s4-leftpanel']").hide(); $("[id='s4-ribbonrow']").hide(); $("[id='s4-titlerow']").hide(); $("[id='s4-titlerow']").css("height", "0px"); $("body").css("background-color", "white"); $("body").css("zoom", "135%"); $("[id='MSO_ContentTable']").css("margin-left", "0px"); $("[id='MT-BodyContent']").css("width", "900px"); $(".MT-BodyArea").css("width", "900px"); $("[id='MT-Layout']").css("width", "900px"); $(".ms-bodyareacell").css("width", "900px"); $(".s4-wpTopTable").css("border", "none"); $("[id$='XmlFormView']").css("margin-left", "-80px"); $("body").css("margin-top", "-30px"); $(":contains('CAPEX')").css("border", "5px solid #FFCC00"); window.print(); }); </script> 3. Open InfoPath form for the list and create a field called PrintLink 4. Set the default value of printLink that points to the print page I just created before with the query string id.You can download the formula for the default value here. 5. Add a new image that looks like Print button on the display view, then I can set the url to the Print link Field. (The reason I did not use button is that you cannot set the navigate url for the button). 6.Set the url of the image to the PrintLInk field. 7.Next , create the print view. 8. Copy the contents from the display view to print view 9. Finally, go to the printifs.aspx and edit the InfoPath web part to set the view to PrintView. 9. Republish you form you will see the form as shown below 10. If you click the Print button, you will see the print page and print dialog,you can also add the company logo in the print page using css as well. 11.To deploy the customization,you can use the backup and restore content database approach , you can get more details from my previous blog post here.
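As an example of the company-logo idea mentioned in step 10, a small, hypothetical addition to the same print-page script could prepend a logo image to the printable content area. Place it inside the existing $(document).ready handler, before the window.print() call; the image path below is a placeholder for one of your own site assets:

// Inside the existing $(document).ready handler, before window.print():
$("[id='MSO_ContentTable']").prepend(
    "<img src='/SiteAssets/company-logo.png' alt='Company logo' style='height:60px;margin-bottom:10px' />");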

    Read the article

  • How do I rotate my monitor using xorg?

    - by user1106405
    I have just installed KUbuntu 12.10, and I am attempting to rotate my monitor 90 deg to the left. When I add the option to rotate, the monitor seems to ignore the directive. I'm currently using dual 02:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) 03:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) and NVidia driver version 310 My xorg.conf is as follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012 # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 310.14 ([email protected]) Tue Oct 9 13:04:01 PDT 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 1280 0 Screen 1 "Screen1" RightOf "Screen0" Screen 2 "Screen2" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Samsung SyncMaster" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 60.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1908WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 1 EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:3:0:0" EndSection Section "Screen" # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select @1920x1080 +0+0; DFP-0: nvidia-auto-select @1920x720 +0+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-2: nvidia-auto-select +0+0" 
Option "Rotate" "left" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Enable" EndSection Edit: If I delete the xorg.conf and reboot, I am able to rotate my monitor, however, my third monitor is not recognized: Screen 0: minimum 8 x 8, current 3360 x 1200, maximum 16384 x 16384 DVI-I-0 disconnected (normal left inverted right x axis y axis) DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 connected 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm 1920x1200 60.0*+ 1600x1200 60.0 1280x1024 60.0 1280x960 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 connected 1440x900+1920+0 (normal left inverted right x axis y axis) 408mm x 255mm 1440x900 59.9*+ 75.0 1280x1024 75.0 60.0 1280x800 59.8 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 640x480 75.0 72.8 59.9

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap graphics. Raster images consist of tiny squares of color information referred to as pixels that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics using a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel.. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:  R> library(ORE) R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)  R> ore.doEval(function() image(volcano, col=terrain.colors(30))) Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device. Option #1: Raise Remote Desktop Color Depth setting In a Remote Desktop session, all environment variables, including display variables determining Color Depth, are determined by the RCP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15 bits, 16 bits, 24 bits, or 32 bits per pixel. To raise the Remote Desktop color depth: On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.Under Connections, right click on RDP-Tcp and select Properties.On the Client Settings tab either uncheck LimitMaximum Color Depth or set it to 32 bits per pixel. Click Apply, then OK, log out of the remote session and reconnect.After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel.  Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot. Option #2: Direct plot output to alternate device Plotting to a non-windows device is a good option if it's not possible to increase Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic.  The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.  
R> options(device = "CairoWin") # use Cairo device for plotting during the session
R> library(Cairo) # load Cairo, grid and png libraries
R> library(grid)
R> library(png)
R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30))) # create embedded R plot
R> img <- ore.pull(res, graphics = TRUE)$img[[1]] # extract image
R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE) # generate raster graph
R> dev.off() # turn off first device
By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:
R> help(Devices)

    Read the article

  • Implementing History Support using jQuery for AJAX websites built on asp.net AJAX

    - by anil.kasalanati
    Problem Statement: Most modern day website use AJAX for page navigation and gone are the days of complete HTTP redirection so it is imperative that we support back and forward buttons on the browser so that end users navigation is not broken. In this article we discuss about solutions which are already available and problems with them. Microsoft History Support: Post .Net 3.5 sp1 Microsoft’s Script manager supports history for websites using Update panels. This is achieved by enabling the ENABLE HISTORY property for the script manager and then the event “Page_Browser_Navigate” needs to be handled. So whenever the browser buttons are clicked the event is fired and the application can write code to do the navigation. The following articles provide good tutorials on how to do that http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx And Microsoft api internally creates an IFrame and changes the bookmark of the url. Unfortunately this has a bug and it does not work in Ie6 and 7 which are the major browsers but it works in ie8 and Firefox. And Microsoft has apparently fixed this bug in .Net 4.0. Following is the blog http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx For solutions which are still running on .net 3.5 sp1 there is no solution which Microsoft offers so there is  are two way to solve this o   Disable the back button. o   Develop custom solution.   Disable back button Even though this might look like a very simple thing to do there are issues around doing this  because there is no event which can be manipulated from the javascript. The browser does not provide an api to do this. So most of the technical solution on internet offer work arounds like doing a history.forward(1) so that even if the user clicks a back button the destination page redirects the user to the original page. This is not a good customer experience and does not work for asp.net website where there are different views in the same page. There are other ways around detecting the window unload events and writing code there. So there are 2 events onbeforeUnload and onUnload and we can write code to show a confirmation message to the user. If we write code in onUnLoad then we can only show a message but it is too late to stop the navigation. And if we write on onBeforeUnLoad we can stop the navigation if the user clicks cancel but this event would be triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this but the website has to be checked properly to ensure there are no links where href is not # otherwise the user would see a popup message saying “you are leaving the website”. Believe me after doing a lot of research on the back button disable I found it easier to support it rather than disabling the button. So I am going to discuss a solution which work  using jQuery with some tweaking. Custom Solution JQuery already provides an api to manage the history of a AJAX website - http://plugins.jquery.com/project/history We need to integrate this with Microsoft Page request manager so that both of them work in tandem. The page state is maintained in the cookie so that it can be passed to the server and I used jQuery cookie plug in for that – http://plugins.jquery.com/node/1386/release Firstly when the page loads there is a need to hook up all the events on the page which needs to cause browser history and following is the code to that. 
jQuery(document).ready(function() {             // Initialize history plugin.             // The callback is called at once by present location.hash.             jQuery.history.init(pageload);               // set onlick event for buttons             jQuery("a[@rel='history']").click(function() {                 //                 var hash = this.page;                 hash = hash.replace(/^.*#/, '');                 isAsyncPostBack = true;                 // moves to a new page.                 // pageload is called at once.                 jQuery.history.load(hash);                 return true;             });         }); The above scripts basically gets all the DOM objects which have the attribute rel=”history” and add the event. In our test page we have the link button  which has the attribute rel set to history. <asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton> <asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton> <asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>   And you can see that there is an hidden HistoryLinkButton which used to send a sever side postback in case of browser back or previous buttons. And note that we need to use display:none and not visible= false because asp.net AJAX would disallow any post backs if visible=false. And in general the pageload event get executed on the client side when a back or forward is pressed and the function is shown below function pageload(hash) {                   if (hash) {                         if (!isAsyncPostBack) {                           jQuery.cookie("page", hash);                     __doPostBack("HistoryLinkButton", "");                 }                isAsyncPostBack = false;                             } else {                 // start page             jQuery("#load").empty();             }         }   As you can see in case there is an hash in the url we are basically do an asp.net AJAX post back using the following statement __doPostBack("HistoryLinkButton", ""); So whenever the user clicks back or forward the post back happens using the event statement we provide and Previous event code is invoked in the code behind.  We need to have the code to use the pageId present in the url to change the page content. And there is an important thing to note – because the hash is worked out using the pageId’s there is a need to recalculate the hash after every AJAX post back so following code is plugged in function ReWorkHash() {             jQuery("a[@rel='history']").unbind("click");             jQuery("a[@rel='history']").click(function() {                 //                 var hash = jQuery(this).attr("page");                 hash = hash.replace(/^.*#/, '');                 jQuery.cookie("page", hash);                 isAsyncPostBack = true;                                   // moves to a new page.                 // pageload is called at once.                 
jQuery.history.load(hash);                 return true;             });        }   This code is executed from the code behind using ScriptManager RegisterClientScriptBlock as shown below –       ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);   A sample application is available to be downloaded at the following location – http://techconsulting.vpscustomer.com/Source/HistoryTest.zip And a working sample is available at – http://techconsulting.vpscustomer.com/Samples/Default.aspx
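On the server side, the HistoryOnClick handler wired to the hidden HistoryLinkButton can read the page id that pageload()/ReWorkHash() stored in the "page" cookie and switch the view accordingly. The handler below is a sketch – ShowView() stands in for whatever view-switching logic your page actually uses:

protected void HistoryOnClick(object sender, EventArgs e)
{
    // Written on the client by jQuery.cookie("page", hash)
    HttpCookie pageCookie = Request.Cookies["page"];
    string pageId = (pageCookie != null) ? pageCookie.Value : string.Empty;

    if (!string.IsNullOrEmpty(pageId))
    {
        ShowView(pageId);   // hypothetical helper that activates the requested view
    }
}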

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.
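As a sketch of that one-off reclaim step (not something to run on the nightly schedule), using the collection and table names from the examples earlier – in practice you would rebuild indexes across the whole database, for example via the maintenance plan mentioned above:

USE [Tfs_DefaultCollection];
GO
DBCC SHRINKDATABASE (N'Tfs_DefaultCollection');
GO
-- The shrink leaves indexes heavily fragmented; rebuild (not just reorganize) afterwards.
ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD;
GO
ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;
GO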

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating an report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report and add a ReportViewer web control to render the report in a ASP.NET web page. The report will have a chart control. Different results will be generated by changing filter criteria. At the end of the walkthrough, you should have a UI like the following.  From the UI below, a user is able to view the product list and can see a chart with the sum of Unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed.  This demo uses Visual Studio 2010 RTM. This post is split into three parts. The last part has the sample code attached. Creating an ASP.NET report using Visual Studio 2010 - Part 2 Creating an ASP.NET report using Visual Studio 2010 - Part 3   Lets start by creating a new ASP.NET empty web application called “NorthwindReports” Creating the Data Access Layer (DAL) Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting “Add item..” . Create a folder called “DAL”. We will store all our data access methods and any data transfer objects in here.   Right click on the DAL folder and add a ADO.NET Entity data model called Northwind. Select “Generate from database” and click Next. Create a connection to your database containing the Northwind sample database and click Next.   From the table list, select Categories, Products and Suppliers and click next. Our Entity data model gets created and looks like this:    Adding data transfer objects Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains properties we need to render our report. public class ProductViewModel { public int? ProductID { get; set; } public string ProductName { get; set; } public System.Nullable<decimal> UnitPrice { get; set; } public string CategoryName { get; set; } public int? CategoryID { get; set; } public int? SupplierID { get; set; } public bool Discontinued { get; set; } } Add a SupplierViewModel class. This will be used to render the supplier DropDownlist. public class SupplierViewModel { public string CompanyName { get; set; } public int SupplierID { get; set; } } Add a CategoryViewModel class. public class CategoryViewModel { public string CategoryName { get; set; } public int CategoryID { get; set; } } Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model.  This step is not needed but follows the repository pattern. interface IProductRepository { IQueryable<Product> GetProducts(); IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID); IQueryable<SupplierViewModel> GetSuppliers(); IQueryable<CategoryViewModel> GetCategories(); } Create a ProductRepository class that implements the IProductReposity above. The methods available in this class are as follows: GetProducts – returns an IQueryable of all products. GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierId and CategoryId if any. It then projects the result into the ProductViewModel. 
GetSuppliers() – returns an IQueryable of all suppliers projected into a SupplierViewModel GetCategories() – returns an IQueryable of all categories projected into a CategoryViewModel  public class ProductRepository : IProductRepository { /// <summary> /// IQueryable of all Products /// </summary> /// <returns></returns> public IQueryable<Product> GetProducts() { var dataContext = new NorthwindEntities(); var products = from p in dataContext.Products select p; return products; }   /// <summary> /// IQueryable of Projects projected /// into the ProductViewModel class /// </summary> /// <returns></returns> public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID) { var projectedProducts = from p in GetProducts() select new ProductViewModel { ProductID = p.ProductID, ProductName = p.ProductName, UnitPrice = p.UnitPrice, CategoryName = p.Category.CategoryName, CategoryID = p.CategoryID, SupplierID = p.SupplierID, Discontinued = p.Discontinued }; // Filter on SupplierID if (supplierID.HasValue) { projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID); }   return projectedProducts; }     public IQueryable<SupplierViewModel> GetSuppliers() { var dataContext = new NorthwindEntities(); var suppliers = from s in dataContext.Suppliers select new SupplierViewModel { SupplierID = s.SupplierID, CompanyName = s.CompanyName }; return suppliers; }   public IQueryable<CategoryViewModel> GetCategories() { var dataContext = new NorthwindEntities(); var categories = from c in dataContext.Categories select new CategoryViewModel { CategoryID = c.CategoryID, CategoryName = c.CategoryName }; return categories; } } Your solution explorer should look like the following. Build your project and make sure you don’t get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard.   Creating an ASP.NET report using Visual Studio 2010 - Part 2
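As a quick smoke test of the repository before the report wizard steps in Part 2, you could call it from a page code-behind (for example index.aspx.cs). The filter values below are arbitrary examples – category 1 is Beverages in the Northwind sample data:

// Requires using System.Linq; using System.Collections.Generic; and the namespace containing the DAL classes.
var repository = new ProductRepository();
List<ProductViewModel> beverages = repository
    .GetProductsProjected(supplierID: null, categoryID: 1)
    .OrderBy(p => p.ProductName)
    .ToList();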

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, starting with Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot) and SharePoint 2010, and many other applications are in the works.*

This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post.
Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Create the Web Application

File -> New -> Project, and select "ASP.NET Empty Web Application".

Add the Entity Data Model

Right click on the web application in the Solution Explorer and select "Add New Item...". Select "ADO.NET Entity Data Model" under "Data", name the model "Northwind" and click "Add". In the "Choose Model Contents" screen, select "Generate Model From Database" and click "Next". Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select "Products" in the "Choose Your Database Objects" screen and click "Finish". We are done creating our Entity Data Model; save the Northwind.edmx file that was created.

Add the WCF Data Service

Right click on the web application in the Solution Explorer and select "Add New Item...". Select "WCF Data Service" from the list and call the service "DataService" (creative, huh?). Click "Add".

Enable Access to the Data Service

Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

public class DataService : DataService< /* TODO: put your data source class name here */ >
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Replace the comment that starts with "/* TODO:" with "NorthwindEntities" (the entity container name of the model we created earlier). WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly allowing it. You have to explicitly specify which entity sets you wish to expose and what rights are allowed by using SetEntitySetAccessRule; SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity set we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.

public class DataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

We are done setting up our OData feed! Compile your project, then right click on DataService.svc and select "View in Browser" to see the OData feed. To view the feed in IE, make sure that "Feed Reading View" is turned off; you set this under Tools -> Internet Options -> Content tab. If you navigate to "Products", you should see the Products feed. Note that URIs are case sensitive, i.e. Products works but products doesn't.

Filtering our data

OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

/DataService.svc/Products?$filter=CategoryID eq 2

At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

Pre-filtering our data using Query Interceptors

The Products feed currently returns all products. We want to change that so that it contains only products that have not been discontinued. WCF Data Services introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a data service. We will use a QueryInterceptor to pre-filter the data so that it returns only products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below.

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    return o => o.Discontinued == false;
}

Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework.

SELECT
    [Extent1].[ProductID] AS [ProductID],
    ...
    [Extent1].[Discontinued] AS [Discontinued]
FROM [dbo].[Products] AS [Extent1]
WHERE 0 = [Extent1].[Discontinued]

Other examples of query/change interceptors can be seen here, including an example that filters data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data.
Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Foot Notes
* http://msdn.microsoft.com/en-us/data/aa937697.aspx
† $format did not work for me. The way to get a JSON response is to include the header "Accept: application/json, text/javascript, */*" when making the request. This is easily done with most JavaScript libraries.
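The post mentions that interceptors can also filter data based on the identity of the authenticated user. As a supplement (not from the original post), here is a minimal, hypothetical sketch of what such a query interceptor could look like in DataService.svc.cs. It assumes the service runs under ASP.NET with compatibility enabled so that HttpContext.Current is populated, and that an "Admin" role exists; only users in that role would see discontinued products.

// Hypothetical replacement for OnReadProducts inside the DataService class.
// Requires: using System; using System.Data.Services;
//           using System.Linq.Expressions; using System.Web;
[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    // Evaluated per request; assumes HttpContext.Current is available
    // (ASP.NET compatibility) and that an "Admin" role is defined.
    bool isAdmin = HttpContext.Current != null &&
                   HttpContext.Current.User != null &&
                   HttpContext.Current.User.IsInRole("Admin");

    // Admins see every product; everyone else only sees active ones.
    return p => isAdmin || p.Discontinued == false;
}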

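Finally, a short, hypothetical sketch of consuming the finished feed from a .NET console client. It assumes you have added a service reference to DataService.svc, which generates a client-side NorthwindEntities context and Product type; the service URL, port and proxy namespace are made up for illustration.

// Hypothetical OData client; the URL, port and "NorthwindClient" namespace
// are assumptions made for this sketch.
using System;
using System.Linq;
using NorthwindClient; // assumed namespace of the generated service reference

class FeedClient
{
    static void Main()
    {
        var context = new NorthwindEntities(
            new Uri("http://localhost:1234/DataService.svc"));

        // Translates to /DataService.svc/Products?$filter=CategoryID eq 2.
        // The server-side OnReadProducts interceptor additionally removes
        // discontinued products before the results are returned.
        var products = from p in context.Products
                       where p.CategoryID == 2
                       select p;

        foreach (var product in products)
        {
            Console.WriteLine("{0} - {1}", product.ProductName, product.UnitPrice);
        }
    }
}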
    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >