Search Results

Search found 5044 results on 202 pages for 'clausal logic'.

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files that I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge… and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling and different phases mentioned on it. Nothing weird there. When the merging is done (and if I don't blink), I can see the Layers palette populated with the chosen files and, judging quickly from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was before starting the Photomerge; no file has been changed, and I can continue to use Photoshop normally. This is what I've tried so far:
    - Loaded a folder with 38 JPG images, 4272 × 2848 and ~5 MB per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19 JPG images, 4272 × 2848 and ~5 MB per file
    - Loaded 10 JPG images, see above
    - Loaded 5 JPG images, see above
    - Loaded 3 JPG images, see above
    - Scaled the images to 2256 × 1504 and under ~1 MB per file; loaded them in sets of 38, 19, 10, 5, and 3
    The following steps were tested with these smaller files and with a set of 5 images:
    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80 % to 50 % (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images to 800 × 533 and ~100 KB per file, and loaded a set of 5
    - Read more unanswered threads around the internet
    In between each test I closed and reopened Photoshop. This is the first time I've ever tried using Photomerge. Am I doing something wrong? How can I locate the problem? How do I fix this? Photoshop is the 64-bit CS5 Extended version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.
    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which is effectively the same as Photomerge (even the dialog looks similar), works, even with the original JPGs and without any issues. This doesn't fix Photomerge, though.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system; partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run DiskWarrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value: wide open, 777
    - remove any system data (applications, system files, including hidden files, to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index after adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble, so it appears it may not be the content but the quantity or a specific combination of data that causes the problem)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor → All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md… processes from Activity Monitor to be able to eject it without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.
    So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used both USB and FireWire drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are; the data on the volume dates back to music projects and compositions from 2003 and before, and he needs to be able to query for results. Anyone got any ideas? Thanks, M
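
    For reference, the stop-and-reindex cycle described above can also be driven from the command line with mdutil; this is only a sketch, and the mount point /Volumes/MusicArchive is a hypothetical name, not one from the original report:

        # Show the current indexing status of the volume (hypothetical mount point)
        mdutil -s /Volumes/MusicArchive

        # Disable indexing, remove the stale index, then re-enable and force a rebuild
        sudo mdutil -i off /Volumes/MusicArchive
        sudo rm -rf /Volumes/MusicArchive/.Spotlight-V100
        sudo mdutil -i on /Volumes/MusicArchive
        sudo mdutil -E /Volumes/MusicArchive

    This reproduces the manual privacy-list trick in scriptable form, which makes it easier to repeat between experiments.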

    Read the article

  • How to make Nginx fire a 504 immediately if the server is not available?

    - by Georgiy Ivankin
    I have Nginx set up as a load balancer with cookie-based stickiness. The logic is:
    - If the cookie is NOT there, use round-robin to choose a server from the cluster.
    - If the cookie is there, go to the server that is associated with the cookie value. The server is then responsible for setting the cookie.
    What I want to add is this: if the cookie is there but that server is down, fall back to the round-robin step to choose the next available server. So I already have load balancing and want to add failover support on top of it. I have managed to do that with the help of the error_page directive, but it doesn't work as I expected it to. The problem: the 504 (and the fallback associated with it) fires only after a 30 s timeout, even if the server is not physically available. What I want Nginx to do is fire a 504 (or any other error, it doesn't matter) immediately; I suppose this means: when the TCP connection fails. This is the behavior we can see in browsers: if we go directly to a server when it is down, the browser immediately tells us that it can't connect. Moreover, Nginx seems to be doing this for the 502 error: if I intentionally misconfigure my servers, Nginx fires a 502 immediately. Configuration (stripped down to basics):

        http {
            upstream my_cluster {
                server 192.168.73.210:1337;
                server 192.168.73.210:1338;
            }

            map $cookie_myCookie $http_sticky_backend {
                default 0;
                value1  192.168.73.210:1337;
                value2  192.168.73.210:1338;
            }

            server {
                listen 8080;

                location @fallback {
                    proxy_pass http://my_cluster;
                }

                location / {
                    error_page 504 = @fallback;

                    # Create a map of choices
                    # see https://gist.github.com/jrom/1760790
                    set $test HTTP;

                    if ($http_sticky_backend) {
                        set $test "${test}-STICKY";
                    }

                    if ($test = HTTP-STICKY) {
                        proxy_pass http://$http_sticky_backend$uri?$args;
                        break;
                    }

                    if ($test = HTTP) {
                        proxy_pass http://my_cluster;
                        break;
                    }

                    return 500 "Misconfiguration";
                }
            }
        }

    Disclaimer: I am pretty far from systems administration of any kind, so there may be some basics that I miss here. EDIT: I'm interested in a solution with the standard free version of Nginx, not Nginx Plus. Thanks.
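
    A possible direction, offered as an untested sketch rather than a verified fix: when Nginx proxies to a single address (as in the sticky branch above), the wait before the 504 is governed by proxy_connect_timeout, which defaults to 60 s. Lowering it should make a backend that refuses or drops TCP connections trip the error_page fallback almost immediately. The 2 s value below is an arbitrary placeholder, not a recommendation:

        location / {
            # Treat both bad-gateway and gateway-timeout as failover triggers
            error_page 502 504 = @fallback;

            # Fail fast when the sticky backend does not accept the TCP connection
            proxy_connect_timeout 2s;

            # ... sticky / round-robin logic as above ...
        }

    If mid-request failures also need to be covered, proxy_read_timeout and (for upstream groups) proxy_next_upstream are the related knobs to investigate.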

    Read the article

  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:
    1. when the semaphore file exists, and
    2. the request is not for /style.css or /favicon.ico,
    3. show the content of /closed.html.
    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico is requested. This is my most recent attempt:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:

        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css and favicon.ico; in those cases I exceed the loop limits. What am I missing here that would cause the rewrite iteration to stop when someone requests style.css or favicon.ico?
    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.

        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
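
    One possibility, offered as an unverified sketch rather than a confirmed fix: in per-server (VirtualHost) context, mod_rewrite re-injects the rewritten URL and runs the ruleset again, so the rewrite target itself must also be excluded or a later pass can try to rewrite it once more. Folding all the exceptions into a single condition, with the dots escaped so they match literally, would look like this:

        RewriteEngine on
        # Skip the two assets and the closed page itself, so the
        # internal re-run leaves /closed.html alone
        RewriteCond %{REQUEST_URI} !^/(style\.css|favicon\.ico|closed\.html)$
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^ /closed.html [L]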

    Read the article

  • ASP.NET Podcast Show #148 - ASP.NET WebForms to build a Mobile Web Application

    - by Wallym
    Check the podcast site for the original URL. This is the video and source code for an ASP.NET WebForms app that I wrote, optimized for the iPhone and mobile environments.
    - Subscribe to everything.
    - Subscribe to WMV.
    - Subscribe to M4V for iPhone/iPad.
    - Subscribe to MP3.
    - Download WMV.
    - Download M4V for iPhone/iPad.
    - Download MP3.
    - Link to iWebKit.
    Source Code:

        <%@ Page Title="MapSplore" Language="C#" MasterPageFile="iPhoneMaster.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="AT_iPhone_Default" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="Content" Runat="Server" ClientIDMode="Static">
            <asp:ScriptManager ID="sm" runat="server"
                EnablePartialRendering="true" EnableHistory="false" EnableCdn="true" />
            <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
            <script language="javascript" type="text/javascript">
            <!--
            Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandle);
            function endRequestHandle(sender, Args) {
                setupMapDiv();
                setupPlaceIveBeen();
            }
            // Center a map on the "place I've been" marker when that panel is visible
            function setupPlaceIveBeen() {
                var mapPlaceIveBeen = document.getElementById('divPlaceIveBeen');
                if (mapPlaceIveBeen != null) {
                    var PlaceLat = document.getElementById('<%=hdPlaceIveBeenLatitude.ClientID %>').value;
                    var PlaceLon = document.getElementById('<%=hdPlaceIveBeenLongitude.ClientID %>').value;
                    var PlaceTitle = document.getElementById('<%=lblPlaceIveBeenName.ClientID %>').innerHTML;
                    var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);
                    var myOptions = {
                        zoom: 14,
                        center: latlng,
                        mapTypeId: google.maps.MapTypeId.ROADMAP
                    };
                    var map = new google.maps.Map(mapPlaceIveBeen, myOptions);
                    var marker = new google.maps.Marker({
                        position: new google.maps.LatLng(PlaceLat, PlaceLon),
                        map: map,
                        title: PlaceTitle,
                        clickable: false
                    });
                }
            }
            // Center a map on the "I'm here" marker when that panel is visible
            function setupMapDiv() {
                var mapdiv = document.getElementById('divImHere');
                if (mapdiv != null) {
                    var PlaceLat = document.getElementById('<%=hdPlaceLat.ClientID %>').value;
                    var PlaceLon = document.getElementById('<%=hdPlaceLon.ClientID %>').value;
                    var PlaceTitle = document.getElementById('<%=hdPlaceTitle.ClientID %>').value;
                    var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);
                    var myOptions = {
                        zoom: 14,
                        center: latlng,
                        mapTypeId: google.maps.MapTypeId.ROADMAP
                    };
                    var map = new google.maps.Map(mapdiv, myOptions);
                    var marker = new google.maps.Marker({
                        position: new google.maps.LatLng(PlaceLat, PlaceLon),
                        map: map,
                        title: PlaceTitle,
                        clickable: false
                    });
                }
            }
            -->
            </script>
            <asp:HiddenField ID="Latitude" runat="server" />
            <asp:HiddenField ID="Longitude" runat="server" />
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
            <script language="javascript" type="text/javascript">
                $(document).ready(function () {
                    GetLocation();
                    setupMapDiv();
                    setupPlaceIveBeen();
                });
                function GetLocation() {
                    if (navigator.geolocation
!= null) {                navigator.geolocation.getCurrentPosition(getData);            }            else {                var mess = document.getElementById('<%=Message.ClientID %>');                mess.innerHTML = "Sorry, your browser does not support geolocation. " +                    "Try the latest version of Safari on the iPhone, Android browser, or the latest version of FireFox.";            }        }        function UpdateLocation_Click() {            GetLocation();        }        function getData(position) {            var latitude = position.coords.latitude;            var longitude = position.coords.longitude;            var hdLat = document.getElementById('<%=Latitude.ClientID %>');            var hdLon = document.getElementById('<%=Longitude.ClientID %>');            hdLat.value = latitude;            hdLon.value = longitude;        }    </script>    <asp:Label ID="Message" runat="server" />    <asp:UpdatePanel ID="upl" runat="server">        <ContentTemplate>    <asp:Panel ID="pnlStart" runat="server" Visible="true">    <div id="topbar">        <div id="title">MapSplore</div>    </div>    <div id="content">        <ul class="pageitem">            <li class="menu">                <asp:LinkButton ID="lbLocalDeals" runat="server" onclick="lbLocalDeals_Click">                <asp:Image ID="imLocalDeals" runat="server" ImageUrl="~/Images/ArtFavor_Money_Bag_Icon.png" Height="30" />                <span class="name">Local Deals.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbLocalPlaces" runat="server" onclick="lbLocalPlaces_Click">                <asp:Image ID="imLocalPlaces" runat="server" ImageUrl="~/Images/Andy_Houses_on_the_horizon_-_Starburst_remix.png" Height="30" />                <span class="name">Local Places.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbWhereIveBeen" runat="server" onclick="lbWhereIveBeen_Click">                <asp:Image ID="imImHere" runat="server" ImageUrl="~/Images/ryanlerch_flagpole.png" Height="30" />                <span class="name">I've been here.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbMyStats" runat="server">                <asp:Image ID="imMyStats" runat="server" ImageUrl="~/Images/Anonymous_Spreadsheet.png" Height="30" />                <span class="name">My Stats.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbAddAPlace" runat="server" onclick="lbAddAPlace_Click">                <asp:Image ID="imAddAPlace" runat="server" ImageUrl="~/Images/jean_victor_balin_add.png" Height="30" />                <span class="name">Add a Place.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="button">                <input type="button" value="Update Your Current Location" onclick="UpdateLocation_Click()">                </li>        </ul>    </div>    </asp:Panel>    <div>    <asp:Panel ID="pnlCoupons" runat="server" Visible="false">        <div id="topbar">        <div id="title">MapSplore</div>        <div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"        
         ID="ReturnFromDeals" OnClick="ReturnFromDeals_Click" /></div></div>    <div class="content">    <asp:ListView ID="lvCoupons" runat="server">        <LayoutTemplate>            <ul class="pageitem" runat="server">                <asp:PlaceHolder ID="itemPlaceholder" runat="server" />            </ul>        </LayoutTemplate>        <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Place.Name") %>' OnClick="lbBusiness_Click">                    <span class="comment">                    <asp:Label ID="lblAddress" runat="server" Text='<%#Eval("Place.Address1") %>' />                    <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Place.Distance"))) + " meters" %>' CssClass="smallText" />                    <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                    <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                    </span>                    <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView><asp:GridView ID="gvCoupons" runat="server" AutoGenerateColumns="false">            <HeaderStyle BackColor="Silver" />            <AlternatingRowStyle BackColor="Wheat" />            <Columns>                <asp:TemplateField AccessibleHeaderText="Business" HeaderText="Business">                    <ItemTemplate>                        <asp:Image ID="imPlaceType" runat="server" Text='<%#Eval("Type") %>' ImageUrl='<%#Eval("Image") %>' />                        <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Name") %>' OnClick="lbBusiness_Click" />                        <asp:LinkButton ID="lblAddress" runat="server" Text='<%#Eval("Address1") %>' CssClass="smallText" />                        <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>' CssClass="smallText" />                        <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                        <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                        <asp:Label ID="lblInfo" runat="server" Visible="false" />                    </ItemTemplate>                </asp:TemplateField>            </Columns>        </asp:GridView>    </div>    </asp:Panel>    <asp:Panel ID="pnlPlaces" runat="server" Visible="false">    <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"                 ID="ReturnFromPlaces" OnClick="ReturnFromPlaces_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvPlaces" runat="server">            <LayoutTemplate>                <ul id="ulPlaces" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                    <li class="menu">                        <asp:LinkButton ID="lbNotListed" runat="server" CssClass="name"                            OnClick="lbNotListed_Click">                            Place not listed                            <span class="arrow"></span>                            </asp:LinkButton>                    </li>                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbImHere" runat="server" CssClass="name"                  
   OnClick="lbImHere_Click">                <%#DisplayName(Eval("Name")) %>&nbsp;                <%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>                <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView>    </div>    </asp:Panel>    <asp:Panel ID="pnlImHereNow" runat="server" Visible="false">        <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Places"                 ID="lbImHereNowReturn" OnClick="lbImHereNowReturn_Click" /></div></div>            <div id="rightbutton">            <asp:LinkButton runat="server" Text="Beginning"                ID="lbBackToBeginning" OnClick="lbBackToBeginning_Click" />            </div>        <div id="content">        <ul class="pageitem">        <asp:HiddenField ID="hdPlaceId" runat="server" />        <asp:HiddenField ID="hdPlaceLat" runat="server" />        <asp:HiddenField ID="hdPlaceLon" runat="server" />        <asp:HiddenField ID="hdPlaceTitle" runat="server" />        <asp:Button ID="btnImHereNow" runat="server"             Text="I'm here" OnClick="btnImHereNow_Click" />             <asp:Label ID="lblPlaceTitle" runat="server" /><br />        <asp:TextBox ID="txtWhatsHappening" runat="server" TextMode="MultiLine" Rows="2" style="width:300px" /><br />        <div id="divImHere" style="width:300px; height:300px"></div>        </div>        </ul>    </asp:Panel>    <asp:Panel runat="server" ID="pnlIveBeenHere" Visible="false">        <div id="topbar">        <div id="title">            Where I've been</div><div id="leftbutton">            <asp:LinkButton ID="lbIveBeenHereBack" runat="server" Text="Back" OnClick="lbIveBeenHereBack_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvWhereIveBeen" runat="server">            <LayoutTemplate>                <ul id="ulWhereIveBeen" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu" runat="server">                <asp:LinkButton ID="lbPlaceIveBeen" runat="server" OnClick="lbPlaceIveBeen_Click" CssClass="name">                    <asp:Label ID="lblPlace" runat="server" Text='<%#Eval("PlaceName") %>' /> at                    <asp:Label ID="lblTime" runat="server" Text='<%#Eval("ATTime") %>' CssClass="content" />                    <asp:HiddenField ID="hdATID" runat="server" Value='<%#Eval("ATID") %>' />                    <span class="arrow"></span>                </asp:LinkButton>            </li>            </ItemTemplate>        </asp:ListView>        </div>        </asp:Panel>    <asp:Panel runat="server" ID="pnlPlaceIveBeen" Visible="false">        <div id="topbar">        <div id="title">            I've been here        </div>        <div id="leftbutton">            <asp:LinkButton ID="lbPlaceIveBeenBack" runat="server" Text="Back" OnClick="lbPlaceIveBeenBack_Click" />        </div>        <div id="rightbutton">            <asp:LinkButton ID="lbPlaceIveBeenBeginning" runat="server" Text="Beginning" OnClick="lbPlaceIveBeenBeginning_Click" />        </div>        </div>        <div id="content">            <ul class="pageitem">            <li>            <asp:HiddenField ID="hdPlaceIveBeenPlaceId" runat="server" />            <asp:HiddenField 
ID="hdPlaceIveBeenLatitude" runat="server" />            <asp:HiddenField ID="hdPlaceIveBeenLongitude" runat="server" />            <asp:Label ID="lblPlaceIveBeenName" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenAddress" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCity" runat="server" />,             <asp:Label ID="lblPlaceIveBeenState" runat="server" />            <asp:Label ID="lblPlaceIveBeenZipCode" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCountry" runat="server" /><br />            <div id="divPlaceIveBeen" style="width:300px; height:300px"></div>            </li>            </ul>        </div>                </asp:Panel>         <asp:Panel ID="pnlAddPlace" runat="server" Visible="false">                <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton ID="lbAddPlaceReturn" runat="server" Text="Back" OnClick="lbAddPlaceReturn_Click" /></div><div id="rightnav"></div></div><div id="content">    <ul class="pageitem">        <li id="liPlaceAddMessage" runat="server" visible="false">        <asp:Label ID="PlaceAddMessage" runat="server" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtPlaceName" runat="server" placeholder="Name of Establishment" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtAddress1" runat="server" placeholder="Address 1" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtCity" runat="server" placeholder="City" />        </li>        <li class="select">        <asp:DropDownList ID="ddlProvince" runat="server" placeholder="Select State" />          <span class="arrow"></span>              </li>        <li class="bigfield">        <asp:TextBox ID="txtZipCode" runat="server" placeholder="Zip Code" />        </li>        <li class="select">        <asp:DropDownList ID="ddlCountry" runat="server"             onselectedindexchanged="ddlCountry_SelectedIndexChanged" />        <span class="arrow"></span>        </li>        <li class="bigfield">        <asp:TextBox ID="txtPhoneNumber" runat="server" placeholder="Phone Number" />        </li>        <li class="checkbox">            <span class="name">You Here Now:</span> <asp:CheckBox ID="cbYouHereNow" runat="server" Checked="true" />        </li>        <li class="button">        <asp:Button ID="btnAdd" runat="server" Text="Add Place"             onclick="btnAdd_Click" />        </li>    </ul></div>        </asp:Panel>        <asp:Panel ID="pnlImHere" runat="server" Visible="false">            <asp:TextBox ID="txtImHere" runat="server"                 TextMode="MultiLine" Rows="3" Columns="40" /><br />            <asp:DropDownList ID="ddlPlace" runat="server" /><br />            <asp:Button ID="btnHere" runat="server" Text="Tell Everyone I'm Here"                 onclick="btnHere_Click" /><br />        </asp:Panel>     </div>    </ContentTemplate>    </asp:UpdatePanel> </asp:Content> Code Behind .cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using LocationDataModel; public partial class AT_iPhone_Default : ViewStatePage{    private iPhoneDevice ipd;     protected void Page_Load(object sender, EventArgs e)    {        LocationDataEntities lde = new LocationDataEntities();        if (!Page.IsPostBack)        {            var Countries = from c in lde.Countries select c;            foreach (Country co in 
Countries)            {                ddlCountry.Items.Add(new ListItem(co.Name, co.CountryId.ToString()));            }            ddlCountry_SelectedIndexChanged(ddlCountry, null);            if (AppleIPhone.IsIPad())                ipd = iPhoneDevice.iPad;            if (AppleIPhone.IsIPhone())                ipd = iPhoneDevice.iPhone;            if (AppleIPhone.IsIPodTouch())                ipd = iPhoneDevice.iPodTouch;        }    }    protected void btnPlaces_Click(object sender, EventArgs e)    {    }    protected void btnAdd_Click(object sender, EventArgs e)    {        bool blImHere = cbYouHereNow.Checked;        string Place = txtPlaceName.Text,            Address1 = txtAddress1.Text,            City = txtCity.Text,            ZipCode = txtZipCode.Text,            PhoneNumber = txtPhoneNumber.Text,            ProvinceId = ddlProvince.SelectedItem.Value,            CountryId = ddlCountry.SelectedItem.Value;        int iProvinceId, iCountryId;        double dLatitude, dLongitude;        DataAccess da = new DataAccess();        if ((!String.IsNullOrEmpty(ProvinceId)) &&            (!String.IsNullOrEmpty(CountryId)))        {            iProvinceId = Convert.ToInt32(ProvinceId);            iCountryId = Convert.ToInt32(CountryId);            if (blImHere)            {                dLatitude = Convert.ToDouble(Latitude.Value);                dLongitude = Convert.ToDouble(Longitude.Value);                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber,                    dLatitude, dLongitude);            }            else            {                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber);            }            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Awesome, your place has been added. 
Add Another!";            txtPlaceName.Text = String.Empty;            txtAddress1.Text = String.Empty;            txtCity.Text = String.Empty;            ddlProvince.SelectedIndex = -1;            txtZipCode.Text = String.Empty;            txtPhoneNumber.Text = String.Empty;        }        else        {            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Please select a State and a Country.";        }    }    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)    {        string CountryId = ddlCountry.SelectedItem.Value;        if (!String.IsNullOrEmpty(CountryId))        {            int iCountryId = Convert.ToInt32(CountryId);            LocationDataModel.LocationDataEntities lde = new LocationDataModel.LocationDataEntities();            var prov = from p in lde.Provinces where p.CountryId == iCountryId                        orderby p.ProvinceName select p;                        ddlProvince.Items.Add(String.Empty);            foreach (Province pr in prov)            {                ddlProvince.Items.Add(new ListItem(pr.ProvinceName, pr.ProvinceId.ToString()));            }        }        else        {            ddlProvince.Items.Clear();        }    }    protected void btnImHere_Click(object sender, EventArgs e)    {        int i = 0;        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value),            Lon = Convert.ToDouble(Longitude.Value);        List<Place> lp = da.NearByLocations(Lat, Lon);        foreach (Place p in lp)        {            ListItem li = new ListItem(p.Name, p.PlaceId.ToString());            if (i == 0)            {                li.Selected = true;            }            ddlPlace.Items.Add(li);            i++;        }        pnlAddPlace.Visible = false;        pnlImHere.Visible = true;    }    protected void lbImHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        ListViewItem lvi = (ListViewItem)(((LinkButton)sender).Parent);        HiddenField hd = (HiddenField)lvi.FindControl("hdPlaceId");        long PlaceId = Convert.ToInt64(hd.Value);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        Place pl = da.GetPlace(PlaceId);        pnlImHereNow.Visible = true;        pnlPlaces.Visible = false;        hdPlaceId.Value = PlaceId.ToString();        hdPlaceLat.Value = pl.Latitude.ToString();        hdPlaceLon.Value = pl.Longitude.ToString();        hdPlaceTitle.Value = pl.Name;        lblPlaceTitle.Text = pl.Name;    }    protected void btnHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        string WhatsH = txtImHere.Text;        long PlaceId = Convert.ToInt64(ddlPlace.SelectedValue);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsH,            dLatitude, dLongitude);    }    protected void btnLocalCoupons_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();     }    protected void lbBusiness_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        GridViewRow gvr = 
(GridViewRow)(((LinkButton)sender).Parent.Parent);        HiddenField hd = (HiddenField)gvr.FindControl("hdPlaceId");        string sPlaceId = hd.Value;        Int64 PlaceId;        if (!String.IsNullOrEmpty(sPlaceId))        {            PlaceId = Convert.ToInt64(sPlaceId);        }    }    protected void lbLocalDeals_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        pnlCoupons.Visible = true;        pnlStart.Visible = false;        List<GeoPromotion> lgp = da.NearByDeals(dLatitude, dLongitude);        lvCoupons.DataSource = lgp;        lvCoupons.DataBind();    }    protected void lbLocalPlaces_Click(object sender, EventArgs e)    {        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value);        double Lon = Convert.ToDouble(Longitude.Value);        List<LocationDataModel.Place> places = da.NearByLocations(Lat, Lon);        lvPlaces.DataSource = places;        lvPlaces.SelectedIndex = -1;        lvPlaces.DataBind();        pnlPlaces.Visible = true;        pnlStart.Visible = false;    }    protected void ReturnFromPlaces_Click(object sender, EventArgs e)    {        pnlPlaces.Visible = false;        pnlStart.Visible = true;    }    protected void ReturnFromDeals_Click(object sender, EventArgs e)    {        pnlCoupons.Visible = false;        pnlStart.Visible = true;    }    protected void btnImHereNow_Click(object sender, EventArgs e)    {        long PlaceId = Convert.ToInt32(hdPlaceId.Value);        string UserName = Membership.GetUser().UserName;        string WhatsHappening = txtWhatsHappening.Text;        double UserLat = Convert.ToDouble(Latitude.Value);        double UserLon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsHappening,             UserLat, UserLon);    }    protected void lbImHereNowReturn_Click(object sender, EventArgs e)    {        pnlImHereNow.Visible = false;        pnlPlaces.Visible = true;    }    protected void lbBackToBeginning_Click(object sender, EventArgs e)    {        pnlStart.Visible = true;        pnlImHereNow.Visible = false;    }    protected void lbWhereIveBeen_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        pnlStart.Visible = false;        pnlIveBeenHere.Visible = true;        DataAccess da = new DataAccess();        lvWhereIveBeen.DataSource = da.UserATs(UserName, 0, 15);        lvWhereIveBeen.DataBind();    }    protected void lbIveBeenHereBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = false;        pnlStart.Visible = true;    }     protected void lbPlaceIveBeen_Click(object sender, EventArgs e)    {        LinkButton lb = (LinkButton)sender;        ListViewItem lvi = (ListViewItem)lb.Parent.Parent;        HiddenField hdATID = (HiddenField)lvi.FindControl("hdATID");        Int64 ATID = Convert.ToInt64(hdATID.Value);        DataAccess da = new DataAccess();        pnlIveBeenHere.Visible = false;        pnlPlaceIveBeen.Visible = true;        var plac = da.GetPlaceViaATID(ATID);        hdPlaceIveBeenPlaceId.Value = plac.PlaceId.ToString();        hdPlaceIveBeenLatitude.Value = plac.Latitude.ToString();        hdPlaceIveBeenLongitude.Value = plac.Longitude.ToString();        lblPlaceIveBeenName.Text = plac.Name;        lblPlaceIveBeenAddress.Text = plac.Address1;        lblPlaceIveBeenCity.Text = 
plac.City;        lblPlaceIveBeenState.Text = plac.Province.ProvinceName;        lblPlaceIveBeenZipCode.Text = plac.ZipCode;        lblPlaceIveBeenCountry.Text = plac.Country.Name;    }     protected void lbNotListed_Click(object sender, EventArgs e)    {        SetupAddPoint();        pnlPlaces.Visible = false;    }     protected void lbAddAPlace_Click(object sender, EventArgs e)    {        SetupAddPoint();    }     private void SetupAddPoint()    {        double lat = Convert.ToDouble(Latitude.Value);        double lon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        var zip = da.WhereAmIAt(lat, lon);        if (zip.Count > 0)        {            var z0 = zip[0];            txtCity.Text = z0.City;            txtZipCode.Text = z0.ZipCode;            ddlProvince.ClearSelection();            if (z0.ProvinceId.HasValue == true)            {                foreach (ListItem li in ddlProvince.Items)                {                    if (li.Value == z0.ProvinceId.Value.ToString())                    {                        li.Selected = true;                        break;                    }                }            }        }        pnlAddPlace.Visible = true;        pnlStart.Visible = false;    }    protected void lbAddPlaceReturn_Click(object sender, EventArgs e)    {        pnlAddPlace.Visible = false;        pnlStart.Visible = true;        liPlaceAddMessage.Visible = false;        PlaceAddMessage.Text = String.Empty;    }    protected void lbPlaceIveBeenBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = true;        pnlPlaceIveBeen.Visible = false;            }    protected void lbPlaceIveBeenBeginning_Click(object sender, EventArgs e)    {        pnlPlaceIveBeen.Visible = false;        pnlStart.Visible = true;    }    protected string DisplayName(object val)    {        string strVal = Convert.ToString(val);         if (AppleIPhone.IsIPad())        {            ipd = iPhoneDevice.iPad;        }        if (AppleIPhone.IsIPhone())        {            ipd = iPhoneDevice.iPhone;        }        if (AppleIPhone.IsIPodTouch())        {            ipd = iPhoneDevice.iPodTouch;        }        return (iPhoneHelper.DisplayContentOnMenu(strVal, ipd));    }} iPhoneHelper.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; public enum iPhoneDevice{    iPhone, iPodTouch, iPad}/// <summary>/// Summary description for iPhoneHelper/// </summary>/// public class iPhoneHelper{ public iPhoneHelper() {  //  // TODO: Add constructor logic here  // } // This code is stupid in retrospect. 
Use css to solve this problem      public static string DisplayContentOnMenu(string val, iPhoneDevice ipd)    {        string Return = val;        string Elipsis = "...";        int iPadMaxLength = 30;        int iPhoneMaxLength = 15;        if (ipd == iPhoneDevice.iPad)        {            if (Return.Length > iPadMaxLength)            {                Return = Return.Substring(0, iPadMaxLength - Elipsis.Length) + Elipsis;            }        }        else        {            if (Return.Length > iPhoneMaxLength)            {                Return = Return.Substring(0, iPhoneMaxLength - Elipsis.Length) + Elipsis;            }        }        return (Return);    }}  Source code for the ViewStatePage: using System;using System.Data;using System.Data.SqlClient;using System.Configuration;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls; /// <summary>/// Summary description for BasePage/// </summary>#region Base class for a page.public class ViewStatePage : System.Web.UI.Page{     PageStatePersisterToDatabase myPageStatePersister;        public ViewStatePage()        : base()    {        myPageStatePersister = new PageStatePersisterToDatabase(this);    }     protected override PageStatePersister PageStatePersister    {        get        {            return myPageStatePersister;        }    } }#endregion #region This class will override the page persistence to store page state in a database.public class PageStatePersisterToDatabase : PageStatePersister{    private string ViewStateKeyField = "__VIEWSTATE_KEY";    private string _exNoConnectionStringFound = "No Database Configuration information is in the web.config.";     public PageStatePersisterToDatabase(Page page)        : base(page)    {    }     public override void Load()    {         // Get the cache key from the web form data        System.Int64 key = Convert.ToInt64(Page.Request.Params[ViewStateKeyField]);         Pair state = this.LoadState(key);         // Abort if cache object is not of type Pair        if (state == null)            throw new ApplicationException("Missing valid " + ViewStateKeyField);         // Set view state and control state        ViewState = state.First;        ControlState = state.Second;    }     public override void Save()    {         // No processing needed if no states available        if (ViewState == null && ControlState != null)            return;         System.Int64 key;        IStateFormatter formatter = this.StateFormatter;        Pair statePair = new Pair(ViewState, ControlState);         // Serialize the statePair object to a string.        string serializedState = formatter.Serialize(statePair);         // Save the ViewState and get a unique identifier back.        key = SaveState(serializedState);         // Register hidden field to store cache key in        // Page.ClientScript does not work properly with Atlas.        
//Page.ClientScript.RegisterHiddenField(ViewStateKeyField, key.ToString());        ScriptManager.RegisterHiddenField(this.Page, ViewStateKeyField, key.ToString());    }     private System.Int64 SaveState(string PageState)    {        System.Int64 i64Key = 0;        string strConn = String.Empty,            strProvider = String.Empty;         string strSql = "insert into tblPageState ( SerializedState ) values ( '" + SqlEscape(PageState) + "');select scope_identity();";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);            sqlCn.Open();            i64Key = Convert.ToInt64(sqlCm.ExecuteScalar());            if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return i64Key;    }     private Pair LoadState(System.Int64 iKey)    {        string strConn = String.Empty,            strProvider = String.Empty,            SerializedState = String.Empty,            strMinutesInPast = GetMinutesInPastToDelete();        Pair PageState;        string strSql = "select SerializedState from tblPageState where tblPageStateID=" + iKey.ToString() + ";" +            "delete from tblPageState where DateUpdated<DateAdd(mi, " + strMinutesInPast + ", getdate());";        SqlConnection sqlCn;        SqlCommand sqlCm;        try        {            GetDBConnectionString(ref strConn, ref strProvider);            sqlCn = new SqlConnection(strConn);            sqlCm = new SqlCommand(strSql, sqlCn);             sqlCn.Open();            SerializedState = Convert.ToString(sqlCm.ExecuteScalar());            IStateFormatter formatter = this.StateFormatter;             if ((null == SerializedState) ||                (String.Empty == SerializedState))            {                throw (new ApplicationException("No ViewState records were returned."));            }             // Deserilize returns the Pair object that is serialized in            // the Save method.            
PageState = (Pair)formatter.Deserialize(SerializedState);             if (sqlCn.State != ConnectionState.Closed)            {                sqlCn.Close();            }            sqlCn.Dispose();            sqlCm.Dispose();        }        finally        {            sqlCn = null;            sqlCm = null;        }        return PageState;    }     private string SqlEscape(string Val)    {        string ReturnVal = String.Empty;        if (null != Val)        {            ReturnVal = Val.Replace("'", "''");        }        return (ReturnVal);    }    private void GetDBConnectionString(ref string ConnectionStringValue, ref string ProviderNameValue)    {        if (System.Configuration.ConfigurationManager.ConnectionStrings.Count > 0)        {            ConnectionStringValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;            ProviderNameValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ProviderName;        }        else        {            throw new ConfigurationErrorsException(_exNoConnectionStringFound);        }    }    private string GetMinutesInPastToDelete()    {        string strReturn = "-60";        if (null != System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"])        {            strReturn = System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"].ToString();        }        return (strReturn);    }}#endregion AppleiPhone.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; /// <summary>/// Summary description for AppleIPhone/// </summary>public class AppleIPhone{ public AppleIPhone() {  //  // TODO: Add constructor logic here  // }     static public bool IsIPhoneOS()    {        return (IsIPad() || IsIPhone() || IsIPodTouch());    }     static public bool IsIPhone()    {        return IsTest("iPhone");    }     static public bool IsIPodTouch()    {        return IsTest("iPod");    }     static public bool IsIPad()    {        return IsTest("iPad");    }     static private bool IsTest(string Agent)    {        bool bl = false;        string ua = HttpContext.Current.Request.UserAgent.ToLower();        try        {            bl = ua.Contains(Agent.ToLower());        }        catch { }        return (bl);        }} Master page .cs: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls; public partial class MasterPages_iPhoneMaster : System.Web.UI.MasterPage{    protected void Page_Load(object sender, EventArgs e)    {            HtmlHead head = Page.Header;            HtmlMeta meta = new HtmlMeta();            if (AppleIPhone.IsIPad() == true)            {                meta.Content = "width=400,user-scalable=no";                head.Controls.Add(meta);             }            else            {                meta.Content = "width=device-width, user-scalable=no";                meta.Attributes.Add("name", "viewport");            }            meta.Attributes.Add("name", "viewport");            head.Controls.Add(meta);            HtmlLink cssLink = new HtmlLink();            HtmlGenericControl script = new HtmlGenericControl("script");            script.Attributes.Add("type", "text/javascript");            script.Attributes.Add("src", ResolveUrl("~/Scripts/iWebKit/javascript/functions.js"));            head.Controls.Add(script);            
cssLink.Attributes.Add("rel", "stylesheet");            cssLink.Attributes.Add("href", ResolveUrl("~/Scripts/iWebKit/css/style.css") );            cssLink.Attributes.Add("type", "text/css");            head.Controls.Add(cssLink);            HtmlGenericControl jsLink = new HtmlGenericControl("script");            //jsLink.Attributes.Add("type", "text/javascript");            //jsLink.Attributes.Add("src", ResolveUrl("~/Scripts/jquery-1.4.1.min.js") );            //head.Controls.Add(jsLink);            HtmlLink appleIcon = new HtmlLink();            appleIcon.Attributes.Add("rel", "apple-touch-icon");            appleIcon.Attributes.Add("href", ResolveUrl("~/apple-touch-icon.png"));            HtmlMeta appleMobileWebAppStatusBarStyle = new HtmlMeta();            appleMobileWebAppStatusBarStyle.Attributes.Add("name", "apple-mobile-web-app-status-bar-style");            appleMobileWebAppStatusBarStyle.Attributes.Add("content", "black");            head.Controls.Add(appleMobileWebAppStatusBarStyle);    }     internal string FindPath(string Location)    {        string Url = Server.MapPath(Location);        return (Url);    }}

    Read the article

  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
    MvcContrib contains a feature called Portable Areas which I've recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature, but once you start building robust portable areas, you'll also want to be able to access other external files like css and javascript. After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I'd like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies, and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we'd want the same case-sensitivity handling to be taken into account for any embedded resource, so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios:
    - External image files that are referenced in an <img> tag
    - External files referenced like css or JavaScript files
    - Image files referenced inside css files

    Embedded Resources Walkthrough

    This post will describe a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial "Quick Links" widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt in if you want to use it. To do this, you simply "register" it by providing a route in your area registration that uses it, like this:

        1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
        2:     new { controller = "EmbeddedResource", action = "Index" },
        3:     new string[] { "MvcContrib.PortableAreas" });

    First, notice that I can specify any route I want (e.g., "quicklinks/resources/…"). Second, notice that I need to include the "MvcContrib.PortableAreas" namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources has now been merged. Therefore, the call to:

        1: RegisterTheViewsInTheEmmeddedViewEngine(GetType());

    has now been removed (a breaking change). It has been replaced with:

        1: RegisterAreaEmbeddedResources();

    Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area has a css file in a folder called "Content" as well as a couple of image files in a folder called "images". To reference these in my aspx/ascx code, all I have to do is this:

        1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" />
        2: <img src="<%= Url.Resource("images.globe.png") %>" />

    This results in the following HTML markup:

        1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" />
        2: <img src="/quicklinks/resource/images.globe.png" />

    The Url.Resource() method is now included in MvcContrib as well. Make sure you import the "MvcContrib" namespace in your views.
    Next, I have the following HTML to render the quick links:

        1: <ul class="links">
        2:     <li><a href="http://www.google.com">Google</a></li>
        3:     <li><a href="http://www.bing.com">Bing</a></li>
        4:     <li><a href="http://www.yahoo.com">Yahoo</a></li>
        5: </ul>

    Notice the <ul> tag has a class called "links". This is defined inside my QuickLinks.css file and looks like this:

        1: ul.links li
        2: {
        3:     background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat;
        4:     padding-left: 20px;
        5:     margin-bottom: 4px;
        6: }

    On line 3 we're able to refer to the url for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes as well? This point was raised by John Nelson in this post. It has been taken into account as well. For example, suppose you want your physical url to look like this:

        1: <img src="/quicklinks/images/globe.png" />

    instead of the corresponding URL shown above (i.e., "/quicklinks/resource/images.globe.png"). You can do this easily by specifying another route for it which includes a "resourcePath" parameter that is prepended. Here is the complete code for the area registration, with the custom route for the images shown on lines 9-11:

        1: public class QuickLinksRegistration : PortableAreaRegistration
        2: {
        3:     public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
        4:     {
        5:         context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
        6:             new { controller = "EmbeddedResource", action = "Index" },
        7:             new string[] { "MvcContrib.PortableAreas" });
        8:
        9:         context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}",
        10:             new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
        11:             new string[] { "MvcContrib.PortableAreas" });
        12:
        13:         context.MapRoute("quicklink", "quicklinks/{controller}/{action}",
        14:             new { controller = "links", action = "index" });
        15:
        16:         this.RegisterAreaEmbeddedResources();
        17:     }
        18:
        19:     public override string AreaName
        20:     {
        21:         get
        22:         {
        23:             return "QuickLinks";
        24:         }
        25:     }
        26: }

    The Quick Links portable area results in requests in the formats shown above (including the custom route format). The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now. Portable Areas open up exciting new possibilities for MVC development!

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration. What’s new: Development Release 1: MySQL Cluster 7.3 with Foreign Keys Early Access “Labs” Preview: MySQL Cluster NoSQL API for Node.js Early Access “Labs” Preview: MySQL Cluster GUI-Based Auto-Installer In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date!: A live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600 UTC where we will discuss each of these enhancements in more detail. Registration will be open soon and published to the MySQL webinars page. MySQL Cluster 7.3: Development Release 1 The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc. Implementation The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from it – whether SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API - discussed later.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.  This represents a further enhancement to MySQL Cluster's support for online schema changes, i.e., adding and dropping indexes, adding columns, etc.  Read this blog for a demonstration of using Foreign Keys with MySQL Cluster; a short client-side sketch also appears at the end of this article.  Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab). MySQL Cluster NoSQL API for Node.js Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js JavaScript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js Alternatively, you can clone the project at the MySQL GitHub page.  Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive results sets directly from MySQL Cluster, without transformations to SQL. 
Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development. Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility.  Get started with the MySQL Cluster NoSQL API for Node.js tutorial. MySQL Cluster GUI-Based Auto-Installer Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources. Figure 2: Automated Tuning and Configuration of MySQL Cluster. Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The auto-installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster auto-installer here. Watch the YouTube video for a demonstration of using the MySQL Cluster auto-installer. Getting Started with MySQL Cluster If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews). Or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster (to learn more about its architecture, design and ideal use-cases). Post any questions to the MySQL Cluster forum where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section here or in the blogs referenced in this article. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases, by using the comments of this blog.
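To make the new Foreign Key support concrete, here is a minimal sketch of exercising it from a client application via MySQL Connector/NET (the connection string and table names are hypothetical, and any other MySQL client API would behave the same way, since the constraints are enforced inside the data nodes):

using System;
using MySql.Data.MySqlClient; // Connector/NET, assumed to be referenced

class ForeignKeyDemo
{
    static void Exec(MySqlConnection conn, string sql)
    {
        using (var cmd = new MySqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }

    static void Main()
    {
        // Hypothetical connection string pointing at a MySQL Cluster SQL node
        using (var conn = new MySqlConnection("server=sqlnode1;database=test;uid=demo;pwd=demo"))
        {
            conn.Open();
            Exec(conn, "CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDBCLUSTER");
            Exec(conn, @"CREATE TABLE child (
                           id INT PRIMARY KEY,
                           parent_id INT,
                           FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
                         ) ENGINE=NDBCLUSTER");
            Exec(conn, "INSERT INTO parent VALUES (1)");
            Exec(conn, "INSERT INTO child VALUES (10, 1)");

            // The delete cascades to the child row, enforced in the data nodes,
            // so any API (SQL or NoSQL) sees the same result.
            Exec(conn, "DELETE FROM parent WHERE id = 1");
            using (var count = new MySqlCommand("SELECT COUNT(*) FROM child", conn))
                Console.WriteLine(count.ExecuteScalar()); // prints 0
        }
    }
}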

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>” />. This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string. <httpRuntime maxQueryStringLength=”<number here>” />. This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”<list of characters you need included in ASP.NET’s validation checks>” />. By default the characters are “<,>,*,%,&,:,\,?”. However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character ‘!’ to be included in ASP.NET’s URL validation logic. So I set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!” />. A character could also be specified in its xml encoded form (‘&lt;’ would mean the ‘<’ sign). I could specify the ‘!’ in its xml encoded unicode format such as requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;” or in its unicode encoded form: requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,%u0021”. The following settings can be applied at Root Web.Config level, App Web.config level, Folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" /> </system.web> </location> If any of the above settings fail request validation, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax (a minimal sketch follows at the end of this post).   Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added with a default of false. <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url. 
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta, ASP.NET QA Team
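As a final illustration, here is a minimal sketch of catching the resulting 400 in Global.asax (the response body written here is just a placeholder – substitute whatever friendly error page your application uses):

// Global.asax.cs – minimal sketch of handling the 400 thrown by URL validation
protected void Application_Error(object sender, EventArgs e)
{
    var ex = Server.GetLastError() as HttpException;
    if (ex != null && ex.GetHttpCode() == 400)
    {
        // The request tripped one of the httpRuntime URL validation settings above
        Server.ClearError();
        Response.StatusCode = 400;
        Response.Write("Bad request."); // or Server.Transfer to a friendly page
        Response.End();
    }
}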

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake ‘events’ on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and with a little bit of extra work (that I didn’t know about) you can also set up ‘event’ bindings on non-DOM objects. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp" , foo: function() { alert('original foo function'); } }; and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this: $(item).bind("foo", function () { alert('foo hook called'); } ); Binding alone won’t actually cause the handler to be triggered, so when you call: item.foo(); you only get the ‘original’ message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function: $(item).trigger("foo"); Now if you do the following complete sequence: var item = { sku: "wwhelp" , foo: function() { alert('original foo function'); } }; $(item).bind("foo", function () { alert('foo hook called'); } ); $(item).trigger("foo"); you’ll see the ‘hook’ message first followed by the ‘original’ message fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to the way you can with DOM elements. The main difference is that the ‘event’ has to be explicitly triggered in order for this to happen rather than just calling the method directly. .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the ‘event’ is found it’s called prior to execution of the original function. This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery’s event methods including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with postfix string names (i.e., .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent). Watch out for an .unbind() Bug Note that there appears to be a bug with .unbind() in jQuery that doesn’t reliably unbind an event and results in an “elem.removeEventListener is not a function” error. The following code demonstrates: var item = { sku: "wwhelp", foo: function () { alert('original foo function'); } }; $(item).bind("foo.first", function () { alert('foo hook called'); }); $(item).bind("foo.second", function () { alert('foo hook2 called'); }); $(item).trigger("foo"); setTimeout(function () { $(item).unbind("foo"); // $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both with the plain “foo” value (whether assigned as “foo” or as “foo.first/second”) and when removing both of the postfixed event handlers explicitly. Oddly, the following, which removes only one of the two handlers, works: setTimeout(function () { //$(item).unbind("foo"); $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); this actually works, which is weird as the code in unbind tries to unbind using a DOM method that doesn’t exist. 
<shrug> A partial workaround for unbinding all ‘foo’ events is the following: setTimeout(function () { $.event.special.foo = { teardown: function () { alert('teardown'); return true; } }; $(item).unbind("foo"); $(item).trigger("foo"); }, 3000); which is a bit cryptic to say the least, but it seems to work more reliably. I can’t take credit for any of this – thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn’t find any good descriptions of the process so I thought it’d be good to write it down here. Hope some of you find this helpful. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery  

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments. For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function.   It is not the first time Tim has expressed these views and not the first time I have responded to his assertions.   Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken.   In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken. There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term 'rule processing' is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc. There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.  As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous.   Rule processing does not, for example, exist in splendid isolation to natural language processing.   On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms.   Rule processing does not exist in splendid isolation to CEP.   On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham).   Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic.   Rule processing does not even live in splendid isolation to neural networks. 
For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets. What about simplicity of form?   Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.)   However, it is a fundamental mistake to equate simplicity of form with simplicity of function.   It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity.   There are countless real-world examples which serve to disprove that notion.   Indeed, simplicity of form is often the key to handling complexity. Does rule processing offer a ‘one size fits all’? No, of course not.   No serious commentator suggests it does.   Does the design and management of large knowledge bases, expressed as rules, become difficult?   Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed.   The measure of complexity is not a function of rule set size or rule form.  It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different.   Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice. Sailing a Dreadnought through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good.   Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.
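To make that last point concrete with a toy sketch (no particular product implied – just an illustration of how trivially simple ‘If...Then’ forms can combine): each rule below is individually simple, yet the behaviour of the set emerges from repeatedly firing the rules over shared facts until a fixed point is reached.

using System;
using System.Collections.Generic;

// A toy forward-chaining evaluator: every rule has the simplest possible form,
// an If-predicate and a Then-fact, yet chaining them produces layered behaviour.
class ToyRuleEngine
{
    class Rule
    {
        public string Name;
        public Func<HashSet<string>, bool> If;
        public string Then;
    }

    static void Main()
    {
        var rules = new List<Rule>
        {
            new Rule { Name = "R1", If = f => f.Contains("order.total>100"), Then = "discount.eligible" },
            new Rule { Name = "R2", If = f => f.Contains("discount.eligible") && f.Contains("customer.gold"), Then = "discount.rate=10" },
            new Rule { Name = "R3", If = f => f.Contains("discount.rate=10"), Then = "notify.customer" }
        };

        var facts = new HashSet<string> { "order.total>100", "customer.gold" };

        // Fire rules until no new fact is produced (a fixed point).
        bool changed;
        do
        {
            changed = false;
            foreach (var r in rules)
            {
                if (r.If(facts) && facts.Add(r.Then))
                {
                    Console.WriteLine(r.Name + " fired -> " + r.Then);
                    changed = true;
                }
            }
        } while (changed);
    }
}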

    Read the article

  • HSDPA modem only working on certain USB ports

    - by nabulke
Depending on which USB port I use to connect my HSDPA modem, the network manager will connect to the internet or not. It used to work (i.e., establish an internet connection automatically) on all ports, but over time it simply stopped on some ports. lsusb output in all cases looks like this (the Device ID varies depending on USB port): Bus 001 Device 009: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Any ideas what could cause this behaviour? What can I do to fix this? ADDED One additional piece of information about the modem: if connected via USB it will be available as a harddrive AND as an HSDPA modem (kind of a duality...). In the error case, it will only be shown as a harddrive. ADDITIONAL INFO AS REQUESTED MODEM NOT WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 007: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 005: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Bus 001 Device 003: ID 413c:0058 Dell Computer Corp. Port Replicator Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub laptop:~$ dmesg | grep 'usb' [ 0.225371] usbcore: registered new interface driver usbfs [ 0.225387] usbcore: registered new interface driver hub [ 0.225418] usbcore: registered new device driver usb [ 0.504291] usb usb1: configuration #1 chosen from 1 choice [ 0.504767] usb usb2: configuration #1 chosen from 1 choice [ 0.505046] usb usb3: configuration #1 chosen from 1 choice [ 0.505601] usb usb4: configuration #1 chosen from 1 choice [ 1.061064] usb 1-6: new high speed USB device using ehci_hcd and address 3 [ 1.192636] usb 1-6: configuration #1 chosen from 1 choice [ 1.447006] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.634908] usb 2-2: configuration #1 chosen from 1 choice [ 1.708164] usb 1-6.1: new high speed USB device using ehci_hcd and address 4 [ 1.801668] usb 1-6.1: configuration #1 chosen from 1 choice [ 2.076279] usb 1-6.1.1: new low speed USB device using ehci_hcd and address 5 [ 2.174932] usb 1-6.1.1: configuration #1 chosen from 1 choice [ 6.580315] usb 1-6.1.2: new high speed USB device using ehci_hcd and address6 [ 6.683479] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 20.018671] usbcore: registered new interface driver btusb [ 20.131703] usbcore: registered new interface driver usb-storage [ 20.131988] usb-storage: device found at 6 [ 20.131991] usb-storage: waiting for device to settle before scanning [ 20.207981] usb 1-6.1.2: USB disconnect, address 6 [ 20.291499] usbcore: registered new interface driver hiddev [ 20.297052] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6.1/1-6.1.1/1-6.1.1:1.0/input/input6 [ 20.297465] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.7-6.1.1/input0 [ 20.297534] usbcore: registered new interface driver usbhid [ 20.297803] usbhid: v2.6:USB HID core driver [ 26.552360] usb 1-6.1.2: new high speed USB device using ehci_hcd and address 7 [ 26.663506] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 26.709628] usb-storage: device found at 7 [ 26.709631] usb-storage: waiting for device to settle before scanning [ 26.732387] usb-storage: device 
found at 7 [ 26.732390] usb-storage: waiting for device to settle before scanning [ 31.709568] usb-storage: device scan complete [ 31.733676] usb-storage: device scan complete MODEM WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub dmesg | grep 'usb' [ 0.134811] usbcore: registered new interface driver usbfs [ 0.134826] usbcore: registered new interface driver hub [ 0.134858] usbcore: registered new device driver usb [ 0.360327] usb usb1: configuration #1 chosen from 1 choice [ 0.360783] usb usb2: configuration #1 chosen from 1 choice [ 0.361061] usb usb3: configuration #1 chosen from 1 choice [ 0.361611] usb usb4: configuration #1 chosen from 1 choice [ 1.144122] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.346896] usb 2-2: configuration #1 chosen from 1 choice [ 1.588072] usb 3-1: new low speed USB device using uhci_hcd and address 2 [ 1.761204] usb 3-1: configuration #1 chosen from 1 choice [ 5.972042] usb 1-1: new high speed USB device using ehci_hcd and address 4 [ 6.115438] usb 1-1: configuration #1 chosen from 1 choice [ 19.990565] usbcore: registered new interface driver usbserial [ 19.991429] usb-storage: device found at 4 [ 19.991432] usb-storage: waiting for device to settle before scanning [ 20.017260] usbcore: registered new interface driver usb-storage [ 20.017305] usbcore: registered new interface driver usbserial_generic [ 20.017308] usbserial: USB Serial Driver core [ 20.017817] usb-storage: device found at 4 [ 20.017820] usb-storage: waiting for device to settle before scanning [ 20.070796] usbcore: registered new interface driver btusb [ 20.229525] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB0 [ 20.229776] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB1 [ 20.229843] usbcore: registered new interface driver option [ 20.230396] usbcore: registered new interface driver hiddev [ 20.246280] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input6 [ 20.246438] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.1-1/input0 [ 20.246479] usbcore: registered new interface driver usbhid [ 20.246483] usbhid: v2.6:USB HID core driver [ 25.436579] usb-storage: device scan complete [ 25.437674] usb-storage: device scan complete

    Read the article

  • ASP.NET MVC HandleError Attribute

    - by Ben Griswold
Last Wednesday, I took a whopping 15 minutes out of my day and added ELMAH (Error Logging Modules and Handlers) to my ASP.NET MVC application.  If you haven’t heard the news (I hadn’t until recently), ELMAH does a killer job of logging and reporting nearly all unhandled exceptions.  As for handled exceptions, I’ve been using NLog, but since I was already playing with the ELMAH bits I thought I’d see if I couldn’t replace it. Atif Aziz provided a quick solution in his answer to a Stack Overflow question.  I’ll let you consult his answer to see how one can subclass the HandleErrorAttribute and override the OnException method in order to get the bits working.  I pretty much rolled the recommended logic into my application and it worked like a charm.  Along the way, I did uncover a few HandleError facts to which I wasn’t already privy.  Most of my learning came from Steven Sanderson’s book, Pro ASP.NET MVC Framework.  I’ve flipped through a bunch of the book and spent time on specific sections.  It’s a really good read if you’re looking to pick up an ASP.NET MVC reference. Anyway, my notes are found as comments in the following code snippet; the helper methods it references are sketched after it.  I hope my notes clarify a few things for you too. public class LogAndHandleErrorAttribute : HandleErrorAttribute {     public override void OnException(ExceptionContext context)     {         // A word from our sponsors:         //      http://stackoverflow.com/questions/766610/how-to-get-elmah-to-work-with-asp-net-mvc-handleerror-attribute         //      and Pro ASP.NET MVC Framework by Steven Sanderson         //         // Invoke the base implementation first. This should mark context.ExceptionHandled = true         // which stops ASP.NET from producing a "yellow screen of death." This also sets the         // Http StatusCode to 500 (internal server error.)         //         // Assuming Custom Errors aren't off, the base implementation will trigger the application         // to ultimately render the "Error" view from one of the following locations:         //         //      1. ~/Views/Controller/Error.aspx         //      2. ~/Views/Controller/Error.ascx         //      3. ~/Views/Shared/Error.aspx         //      4. ~/Views/Shared/Error.ascx         //         // "Error" is the default view, however, a specific view may be provided as an Attribute property.         // A notable point is the Custom Errors defaultRedirect is not considered in the redirection plan.         base.OnException(context);           var e = context.Exception;                  // If the exception is unhandled, simply return and let Elmah handle the unhandled exception.         // Otherwise, try to use error signaling (which involves the fully configured pipeline:         // logging, mailing, filtering and what have you). Failing that, see if the error should be         // filtered. If not, simply log the exception.         if (!context.ExceptionHandled                || RaiseErrorSignal(e)                   || IsFiltered(context))                  return;           LogException(e); // FYI. Simple Elmah logging doesn't handle mail notifications.     } }
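For reference, here is a sketch of the three helper methods the snippet calls, adapted from Atif Aziz’s Stack Overflow answer referenced above (the Elmah and System.Web.Mvc namespaces are assumed to be imported; consult the answer itself for the authoritative version):

private static bool RaiseErrorSignal(Exception e)
{
    var context = HttpContext.Current;
    if (context == null)
        return false;
    var signal = ErrorSignal.FromContext(context);
    if (signal == null)
        return false;
    signal.Raise(e, context); // runs ELMAH's full pipeline: logging, mailing, filtering
    return true;
}

private static bool IsFiltered(ExceptionContext context)
{
    var config = context.HttpContext.GetSection("elmah/errorFilter") as ErrorFilterConfiguration;
    if (config == null)
        return false;
    var testContext = new ErrorFilterModule.AssertionHelperContext(context.Exception, HttpContext.Current);
    return config.Assertion.Test(testContext);
}

private static void LogException(Exception e)
{
    var context = HttpContext.Current;
    ErrorLog.GetDefault(context).Log(new Error(e, context));
}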

    Read the article

  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
With the updates to Windows Azure CDN announced this week [1], I wanted to help illustrate the capability with a working sample that serves up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability you can read the Overview of the Windows Azure CDN [2] content on MSDN. When you set up the ability to cache content from a hosted service, the requirement is to provide a path to your role’s DNS endpoint that ends in the path “/cdn”.  Additionally, you then map CDN to that service. What the Windows Azure CDN does is allow you to map requests through the CDN to your host.  The CDN will then make a request to your host on your client’s behalf. The requirement is still that your client, and any Urls that are to be serviced through the CDN and this capability, have to use the CDN DNS name and not your host – no different than what CDN does for Blob storage. The following 2 URLs are samples of how the client needs to issue the requests. Windows Azure hosted service URL: http://myHostedService.cloudapp.net/cdn/music.aspx   - for regular “dynamic” content Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/music.aspx   - for CDN “cacheable” content. The first URL paths the request directly to your host in the Azure datacenter.  The 2nd URL paths the request through the CDN infrastructure, where CDN will make the determination to request the content on behalf of the client from the Azure datacenter and your host on the /cdn path. The big advantage here is you can apply logic to your content creation.  What’s important is emitting the CDN-friendly headers that allow CDN to request and re-request only when you designate, based upon its rules of “staleness” as described in the overview page. With IIS 7.5 there is an underlying issue when the Managed Module “OutputCache” is enabled: in order to emit a good header for your content, you’ll need to remove it, as my sample does, to provide CDN-friendly headers.  You get IIS 7.5 when running under OS Family “2” in your service configuration. By default, when the OutputCache managed module is loaded, if you use the HttpResponse.CachePolicy to set the Http Headers for “max-age” when the HttpCacheability is “Public”, you will NOT get the “max-age” emitted as part of the “Cache-control:” header.  Instead, the OutputCache module will remove “max-age” and just emit “public”.  It works ok when Cacheability is set to “private”. To work around the issue and ensure your code as follows emits the full max-age along with the public option, you need to remove the module as follows: <system.webServer>   <modules runAllManagedModulesForAllRequests="true">     <remove name="OutputCache"/>   </modules> </system.webServer>   Response.Cache.SetCacheability(HttpCacheability.Public); Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));   In the attached solution, the way I approached it was to have a VirtualApplication under the root site that has its own web.config  - this VirtualApplication is the /cdn of the site and when deployed to Azure as a Web Role will surface as a distinct IIS Application – along with a separate AppDomain. The CDN Sample is a simple Web Forms site whose /default landing page contains 3 IFrames to host: 1. Content direct from the host @   http://xxxx.cloudapp.net/cdn 2. Content via the CDN @ http://azxxx.vo.msecnd.net  3. A simple list of recent requests – showing where each request came from.   
When you run the sample, the first time you hit the page both the Host and the CDN will cause 2 initial requests to hit the host.  You won’t see the first requests in the list because of timing – but if you refresh, you’ll see that the list shows that you have 2 requests initially: 1. sourced direct from the Browser to the HOST 2. sourced via the CDN The picture above shows the call-outs of each of those requests – green rows showing requests coming direct to the HOST, yellow showing the CDN request.  The IP addresses of the green items are direct from the client, where the CDN is from the CDN data center. As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid a “304 Not Modified”) you’ll see that the request to the HOST gets processed directly; but the request to the CDN endpoint is serviced directly from the CDN and doesn’t incur any additional request back to the HOST. The following are the headers from the CDN response: (Status-Line) HTTP/1.1 200 OK Age 13 Cache-Control public, max-age=300 Connection keep-alive Content-Length 6212 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:14 GMT Expires Fri, 11 Mar 2011 20:52:01 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET   The following are the headers from the HOST response: (Status-Line) HTTP/1.1 200 OK Cache-Control public, max-age=300 Content-Length 6189 Content-Type image/jpeg; charset=utf-8 Date Fri, 11 Mar 2011 20:47:15 GMT Last-Modified Fri, 11 Mar 2011 20:47:02 GMT Server Microsoft-IIS/7.5 X-AspNet-Version 4.0.30319 X-Powered-By ASP.NET   You can see that with the CDN request, the countdown (Age) starts for aging the content. The full sample is located here: CDNSampleSite.zip [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx

    Read the article

  • What’s New for Oracle Commerce? Executive QA with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done and in a way that drive business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant releases for the solution since the acquisitions of ATG and Endeca, we also continue to demonstrate rapid and significant progress on the unification of commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways: Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both the iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets / dimensions, and breadcrumbs through Experience Manager tools. Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web. Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture that allows business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. 
First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups. Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages with 11 new languages (including Danish, Arabic, Norwegian, Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages. Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotions rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature-rich to provide intuitive, easy-to-use (yet powerful) capabilities that allow business users to manage content and the shopping experience. Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application. The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below: • Hear more from John about the Oracle Commerce mission • Hear from Oracle Commerce customers • Documentation on the new releases • Listen to the Oracle ATG Commerce 10.2 Webcast • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1 Shaping the feed The Product feed we created earlier returns too much information about our products. Let’s change this so that only the following properties are returned – ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.  Splitting the Entity To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned. Go to the design surface for the Entity Model and make a copy of the Product entity. A “Product1” Entity gets created.   Rename Product1 to ProductDetail. Right click on the Product entity and select “Add Association”. Make a one-to-one association between Product and ProductDetail.   Keep only the properties we wish to expose on the Product entity and delete all other properties on it (see diagram below). You delete a property on an Entity by right clicking on the property and selecting “delete”. Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity. Your design surface should look like below:    Mapping Entity to Database Tables Right click on “ProductDetail” and go to “Table Mapping”.   Add a mapping to the “Products” table in the Mapping Details.   After mapping ProductDetail, you should see the following.   Add a referential constraint. Let’s add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with “Principal” set to “Product”. Let us review what we did so far. We made a copy of the Product entity and called it ProductDetail. We created a one-to-one association between these entities. Excluding the ProductID, we made sure properties were not duplicated between these entities.  We added a ProductDetail entity to Products table mapping (Entity to Database). We added a referential constraint between the entities. Let’s build our project. We get the following error: ”'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …" The reason for this error is that our Product Entity no longer has a “Discontinued” property. We “moved” it to the ProductDetail entity since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one-to-one association between the entities, we can easily rewrite our Query Interceptor like so: [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.ProductDetail.Discontinued == false; } Similarly, all “hidden” properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.   
To see the data in JSON format, you have to create a request with the following request header: Accept: application/json, text/javascript, */* (easy to do in jQuery; a small .NET sketch also appears at the end of this post). The result should look like this: { "d" : { "results": [ { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" }, "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags", "UnitPrice": "18.0000", "UnitsInStock": 39 }, { "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" }, "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles", "UnitPrice": "19.0000", "UnitsInStock": 17 }, { ... ... If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this.  We have successfully pre-filtered our data to expose only products that have not been discontinued and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this, like creating a QueryView, Stored Procedure or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it while hardly writing any code of your own. For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro – the Architect of “Astoria” (WCF Data Services). Watch his MIX 2010 presentation titled “OData: There's a Feed for That” here. Download Sample Project for VS 2010 RTM NortwindODataFeed.zip
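If you are not using jQuery, here is a minimal sketch of issuing that request from .NET instead (the service URL is the local one used throughout this post):

using System;
using System.IO;
using System.Net;

class ODataJsonClient
{
    static void Main()
    {
        // Request the Products feed, asking for JSON instead of the default Atom
        var request = (HttpWebRequest)WebRequest.Create(
            "http://localhost.:2576/DataService.svc/Products");
        request.Accept = "application/json, text/javascript, */*";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // the JSON payload shown above
        }
    }
}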

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF, then, no matter how much of an ADF expert you are, this book will make you happy as it provides you with the detail information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying, but, if you have some Java background, it accelerates your learning big, big, big times. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling.  Chapter 4 - you guessed it - is all about View objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Beside what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what the metadata files involved are. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. 
managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later, this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean it is not worth reading (I, for example, enjoyed the controller talk very much). Chapter 10 is advanced coverage of bounded task flows and talks about contextual events.  Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • SQL SERVER – SSMS: Backup and Restore Events Report

    - by Pinal Dave
A DBA wears multiple hats and in fact does more than what an eye can see. One of the core tasks of a DBA is to take backups. This looks so trivial that most developers shrug it off as the only activity a DBA might be doing. I have huge respect for DBAs all around the world, because even if they seem cool with all the scripting, automation and maintenance work running round the clock to keep the business working almost 365 days 24×7, their worth shows on the day the systems / HDD crash and you have an important delivery to make. That is when these backup tasks / maintenance jobs come in handy and are no longer as trivial as many consider them to be. So important questions like “When was the last backup taken?”, “How much time did the last backup take?”, “What type of backup was taken last?”, etc. are tricky, and this report lands answers to them in a jiffy. The SSMS report we are talking about can be used to find backup and restore operations done for the selected database. Whenever we perform any backup or restore operation, the information is stored in the msdb database. This report can utilize that information and provide details about the size, time taken and also the file location for those operations. Here is how this report can be launched.   Once we launch this report, we can see 4 major sections shown as listed below. Average Time Taken For Backup Operations Successful Backup Operations Backup Operation Errors Successful Restore Operations Let us look at each section next. Average Time Taken For Backup Operations Information shown in the “Average Time Taken For Backup Operations” section is taken from the backupset table in the msdb database. Here is the query and the expanded version of that particular section USE msdb; SELECT (ROW_NUMBER() OVER (ORDER BY t1.TYPE))%2 AS l1 ,       1 AS l2 ,       1 AS l3 ,       t1.TYPE AS [type] ,       (AVG(DATEDIFF(ss,backup_start_date, backup_finish_date)))/60.0 AS AverageBackupDuration FROM backupset t1 INNER JOIN sys.databases t3 ON ( t1.database_name = t3.name) WHERE t3.name = N'AdventureWorks2014' GROUP BY t1.TYPE ORDER BY t1.TYPE On my small database the time taken for the differential backup was less than a minute, hence a value of zero is displayed. This is an important piece of backup information which might help you in planning maintenance windows. Successful Backup Operations Here is the expanded version of this section.   This information is derived from various backup tracking tables in the msdb database.  Here is the simplified version of the query, which can be used separately as well. SELECT * FROM sys.databases t1 INNER JOIN backupset t3 ON (t3.database_name = t1.name) LEFT OUTER JOIN backupmediaset t5 ON ( t3.media_set_id = t5.media_set_id) LEFT OUTER JOIN backupmediafamily t6 ON ( t6.media_set_id = t5.media_set_id) WHERE (t1.name = N'AdventureWorks2014') ORDER BY backup_start_date DESC,t3.backup_set_id,t6.physical_device_name; The report does some calculations to show the data in a more readable format. For example, the backup size is shown in KB, MB or GB. I have expanded the first row by clicking on (+) in the “Device type” column. That has shown me the path of the physical backup file. Personally, looking at this section, the Backup Size, Device Type and Backup Name are critical and are worth a note. As mentioned in the previous section, this section also has the Duration embedded inside it. Backup Operation Errors This section of the report gets data from the default trace. You might wonder how. 
We can read the message below under this section, which confirms the logic above:

No backup operations errors occurred for (AdventureWorks2014) database in the recent past or default trace is not enabled.

Successful Restore Operations

This section may not be very useful on a production server (how often do you restore a database there?) but it can be useful in development and log shipping secondary environments, where we might be interested in the restore operations for a particular database. Here is the expanded version of the section. To fill this section of the report, I restored the same backups which were taken to populate the earlier sections. Here is a simplified version of the query used to produce this output:

USE msdb;
SELECT *
FROM restorehistory t1
LEFT OUTER JOIN restorefile t2 ON (t1.restore_history_id = t2.restore_history_id)
LEFT OUTER JOIN backupset t3 ON (t1.backup_set_id = t3.backup_set_id)
WHERE t1.destination_database_name = N'AdventureWorks2014'
ORDER BY restore_date DESC, t1.restore_history_id, t2.destination_phys_name

Have you ever looked at the backup strategy of your key databases? Is everything in sync, and is there scope for improvement? Then this is the report to analyze after a week or a month of maintenance plans running in your database. Do chime in with the strategies you are using in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
Tagged: SQL Reports

    Read the article

  • ASP.NET, HTTP 404 and SEO

    - by paxer
The other day our SEO Manager told me that he was not happy about the way our ASP.NET application returns HTTP response codes for Page Not Found (404) situations. I started researching and found a few interesting things which could help others in a similar situation.

1) By default an ASP.NET application handles a 404 error using web.config settings like these:

<customErrors defaultRedirect="GenericError.htm" mode="On">
  <error statusCode="404" redirect="404.html"/>
</customErrors>

However this approach has a problem, and this is actually what our SEO manager was talking about. Watching the HTTP traffic for a Page Not Found request shows that the server first returns an HTTP 302 Redirect and then an HTTP 200 - OK for the error page.

The problem: we need an HTTP 404 response code at the end of the exchange, for SEO purposes.

Solution 1

Let's change our web.config settings a bit to handle the 404 error with an .aspx page instead of a static html page:

<customErrors defaultRedirect="GenericError.htm" mode="On">
  <error statusCode="404" redirect="404.aspx"/>
</customErrors>

And now let's add these lines to the Page_Load event of the 404.aspx page:

protected void Page_Load(object sender, EventArgs e)
{
    Response.StatusCode = 404;
}

Now let's run our test again. It has got better - the last HTTP response code is now 404 - but my SEO manager was still not happy, because we still have the 302 before it, and as he said, that is bad for Google search optimization. So we need the 404 HTTP code alone.

Solution 2

Let's comment out our web.config settings:

<!--<customErrors defaultRedirect="GenericError.htm" mode="On">
  <error statusCode="404" redirect="404.html"/>
</customErrors>-->

Now let's open the Global.asax file - or, if it does not exist in your project, add it. Then we add logic which detects whether the server error code is 404 (Page Not Found) and handles it:

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex is HttpException && ((HttpException)ex).GetHttpCode() == 404)
    {
        Server.Transfer("~/404.html");
        return;
    }
    // Code that runs when any other unhandled error occurs
    Server.Transfer("~/GenericError.htm");
}

Cool, now let's start our test again... Yehaa, looks like now we have only the 404 HTTP response code; the SEO manager and Google are happy, and so am I :) Hope this helps!
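As a quick way to verify this behavior without a proxy like Fiddler, here is a small console sketch (my own addition, not from the original post) that requests a missing page and prints the raw status code; AllowAutoRedirect is disabled so an intermediate 302 is visible instead of being followed silently:

using System;
using System.Net;

class StatusCodeCheck
{
    static void Main()
    {
        // Point this at a page that does not exist in your application
        var request = (HttpWebRequest)WebRequest.Create("http://localhost/NoSuchPage.aspx");
        request.AllowAutoRedirect = false;   // surface the 302 if one is issued

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
                Console.WriteLine((int)response.StatusCode);   // 200 or 302 here is bad for SEO
        }
        catch (WebException ex)
        {
            // GetResponse throws for 4xx/5xx, but the response still carries the code
            var response = (HttpWebResponse)ex.Response;
            Console.WriteLine((int)response.StatusCode);       // 404 is what we want
        }
    }
}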

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
It is closing in on a year now since Oracle's acquisition of Datanomic and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces, providing integrated:

- Address verification and cleansing
- Individual matching
- Organization matching

The services are suitable for either Batch or Real-Time processing and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

Excerpt from EDQ's CDS Individual Name Standardization Dictionary

CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a "best of both worlds" situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate "Identity Resolution" and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

Here is a brief guide to the main services provided in the pack.

Address Verification and Standardization

EDQ's CDS Address Cleaning Process

The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. The Address Verification processor is wrapped in an EDQ process, which adds significant capabilities over calling the underlying Address Verification API directly, specifically:

- Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level returned by the API
- Optimization of address verification by pre-standardizing data where required
- Formatting of output addresses into the input address fields normally used by applications
- Adding descriptions of the address verification and geocoding return codes

The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
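To make the first of those wrapper capabilities concrete, here is a small hypothetical sketch of a country-specific acceptance rule; the class name, country codes and threshold values are all invented for illustration - the real rule lives inside the packaged EDQ process:

using System.Collections.Generic;

public class AddressAcceptanceRule
{
    // Example thresholds per country; the default applies elsewhere (all values invented)
    private readonly IDictionary<string, int> _thresholds = new Dictionary<string, int>
    {
        { "US", 80 }, { "GB", 85 }, { "JP", 90 }
    };
    private const int DefaultThreshold = 85;

    // confidence is the verification confidence level returned by the underlying API
    public bool AcceptVerifiedAddress(string countryCode, int confidence)
    {
        int threshold;
        if (!_thresholds.TryGetValue(countryCode, out threshold))
            threshold = DefaultThreshold;
        return confidence >= threshold;   // only then overwrite the input address
    }
}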
Duplicate Prevention

Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention, avoiding the issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called and returns a number of key values. These are used to select other records ("candidates") that may match in the application data (which has been pre-seeded with keys using the same service). The "driving record" (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record and scores them according to the strength of match.

In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while the application retains control of data integrity and transactional commits. The process is explained below:

EDQ Duplicate Prevention Architecture

Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs new key values generated before commit).

Batch Matching

To allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case a more precise (and automated) type of matching is normally required, in order to minimize the review work performed by Data Stewards.

In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's "Full Match" (match all records against each other) and "Incremental Match" (match a subset of records against all of their selected candidates) modes.
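As a rough illustration of the duplicate prevention call flow, here is a hedged sketch of how an application might wire the two stateless services together; the interfaces, record shape and threshold are hypothetical stand-ins for the real CDS web service contracts:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shapes for the two CDS services described above
public interface IClusterKeyService
{
    IList<string> GenerateKeys(IDictionary<string, string> record);
}

public interface IMatchService
{
    // Returns a match score per candidate against the driving record
    IList<double> Match(IDictionary<string, string> driving,
                        IList<IDictionary<string, string>> candidates);
}

public class DuplicatePreventionFlow
{
    private readonly IClusterKeyService _keys;
    private readonly IMatchService _match;

    public DuplicatePreventionFlow(IClusterKeyService keys, IMatchService match)
    {
        _keys = keys;
        _match = match;
    }

    // Called by the application when a record is added or updated;
    // selectCandidates queries the application's own (key-seeded) data
    public bool HasLikelyDuplicate(
        IDictionary<string, string> driving,
        Func<IList<string>, IList<IDictionary<string, string>>> selectCandidates,
        double threshold)
    {
        IList<string> keys = _keys.GenerateKeys(driving);           // 1. key generation
        var candidates = selectCandidates(keys);                    // 2. candidate selection
        IList<double> scores = _match.Match(driving, candidates);   // 3. scoring
        return scores.Any(s => s >= threshold);                     // app still owns the commit
    }
}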
The Future

The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

- An out-of-the-box Data Quality Dashboard
- Even more comprehensive international data handling
- Address search (suggesting multiple results)
- Integrated address matching

The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
I've been bitten by some nasty issues today in regards to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain, the cookie stays valid for all sub-domains.

I've been testing the app for quite a while and everything was working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or a machine name. When you do, Internet Explorer simply ignores the cookie.

In my last post I talked about some generic code I created to parse the base domain out of the current URL so a domain cookie would automatically be used, using this code:

private void IssueAuthTicket(UserState userState, bool rememberMe)
{
    FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
        DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString());

    string ticketString = FormsAuthentication.Encrypt(ticket);
    HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
    cookie.HttpOnly = true;

    if (rememberMe)
        cookie.Expires = DateTime.Now.AddDays(10);

    var domain = Request.Url.GetBaseDomain();
    if (domain != Request.Url.DnsSafeHost)
        cookie.Domain = domain;

    HttpContext.Response.Cookies.Add(cookie);
}

This code works fine on all browsers but Internet Explorer, both locally and on full domains. It also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above, trying to automatically parse the base domain, won't work, as local addresses end up failing.
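For context, since the snippets here lean on the GetBaseDomain() extension from that earlier post, below is a rough sketch of what such a helper might look like. This is my approximation, not the author's actual code, and the naive "keep the last two segments" approach shown here breaks for two-part TLDs like .co.uk:

using System;

public static class UriExtensions
{
    public static string GetBaseDomain(this Uri uri)
    {
        if (uri.HostNameType != UriHostNameType.Dns)
            return uri.DnsSafeHost;                  // IP addresses: leave as-is

        var tokens = uri.DnsSafeHost.Split('.');
        if (tokens.Length <= 2)
            return uri.DnsSafeHost;                  // localhost or domain.com

        // www.domain.com -> domain.com (naive; wrong for two-part TLDs)
        return string.Join(".", tokens, tokens.Length - 2, 2);
    }
}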
Configuration Setting

Given this screwed-up state of affairs, the best solution is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's the natural choice for storing the domain name:

<authentication mode="Forms">
  <forms loginUrl="~/Account/Login"
         name="gnc"
         domain="mydomain.com"
         slidingExpiration="true"
         timeout="30"
         xdt:Transform="Replace"/>
</authentication>

Although I'm not actually letting FormsAuth set my cookie directly, I can still access the domain name from the static FormsAuthentication.CookieDomain property by changing the domain assignment code to:

if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain))
    cookie.Domain = FormsAuthentication.CookieDomain;

The key is to only set the domain when actually running on a full authority, and to leave the domain key blank on the local machine to avoid the local address debacle. Note: if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens.

Logging Out

When specifying a domain key for a login, it's also vitally important that the same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). If you use an explicit Cookie to manage your logins or other persistent values, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value:

HttpCookie cookie = new HttpCookie("gne", "");
cookie.Expires = DateTime.Now.AddDays(-5);

// make sure we use the same logic to release the cookie
var domain = Request.Url.GetBaseDomain();
if (domain != Request.Url.DnsSafeHost)
    cookie.Domain = domain;

HttpContext.Response.Cookies.Add(cookie);

I managed to get my code to do what I needed, but man, I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET

    Read the article

  • ASP.NET MVC 3 Hosting :: ASP.NET MVC 3 First Look

    - by mbridge
MVC 3 View Enhancements

MVC 3 introduces two improvements to the MVC view engine:

- Ability to select the view engine to use. MVC 3 allows you to select any of your installed view engines from Visual Studio by selecting Add > View (including the newly introduced ASP.NET "Razor" engine).
- Support for the new ASP.NET "Razor" syntax. The newly previewed Razor syntax is a concise, lightweight syntax.

MVC 3 Controller Enhancements

- Global Filters: ASP.NET MVC 3 allows you to specify a filter which applies globally to all Controllers within an app by adding it to the GlobalFilters collection. The RegisterGlobalFilters() method is now included in the default Global.asax class template, which provides a convenient place to do this since it is called by the Application_Start() method:

void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleLoggingAttribute());
    filters.Add(new HandleErrorAttribute());
}

void Application_Start()
{
    RegisterGlobalFilters(GlobalFilters.Filters);
}

- Dynamic ViewModel Property: MVC 3 augments the ViewData API with a new "ViewModel" property on Controller which is of type "dynamic", and therefore enables you to use the new dynamic language support in C# and VB to pass ViewData items using a cleaner syntax than the current dictionary API:

public ActionResult Index()
{
    ViewModel.Message = "Hello World";
    return View();
}

- New ActionResult Types: MVC 3 includes three new ActionResult types and helper methods:

1. HttpNotFoundResult - indicates that a resource requested by the current URL was not found. HttpNotFoundResult returns a 404 HTTP status code to the calling client.
2. Permanent redirects - the HttpRedirectResult class contains a new Boolean "Permanent" property which is used to indicate that a permanent redirect should be done. Permanent redirects use an HTTP 301 status code. The Controller class includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), and RedirectToActionPermanent(). All of these methods return an instance of the HttpRedirectResult object with the Permanent property set to true.
3. HttpStatusCodeResult - used for setting an explicit response status code and its associated description.

A short sketch of the three new result types in use follows.
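This is a minimal illustrative controller (my own sketch, not from the original article; the product lookup is a hypothetical stand-in for real data access):

using System.Collections.Generic;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Hypothetical lookup table standing in for a real repository
    private static readonly Dictionary<int, string> Products =
        new Dictionary<int, string> { { 1, "Widget" }, { 2, "Gadget" } };

    public ActionResult Details(int id)
    {
        string name;
        if (!Products.TryGetValue(id, out name))
            return HttpNotFound();               // 404 via HttpNotFoundResult

        if (id == 2)                             // pretend this product moved
            return RedirectToActionPermanent("Details", new { id = 1 });   // HTTP 301

        return Content(name);
    }

    public ActionResult Gone()
    {
        // Explicit status code and description via HttpStatusCodeResult
        return new HttpStatusCodeResult(410, "This resource has been removed");
    }
}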
MVC 3 AJAX and JavaScript Enhancements

MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and model-bind it to action method parameters. For example, a jQuery client-side script could define a "save" event handler which is invoked when the save button is clicked on the client. The code in the event handler constructs a client-side JavaScript "product" object with fields whose values are retrieved from HTML input elements. Finally, it uses jQuery's .ajax() method to POST a JSON-based request containing the product to a /theStore/UpdateProduct URL on the server:

$('#save').click(function () {
    var product = {
        ProdName: $('#Name').val(),
        Price: $('#Price').val()
    };
    $.ajax({
        url: '/theStore/UpdateProduct',
        type: "POST",
        data: JSON.stringify(product),
        dataType: "json",
        contentType: "application/json; charset=utf-8",
        success: function () { $('#message').html('Saved').fadeIn(); },
        error: function () { $('#message').html('Error').fadeIn(); }
    });
    return false;
});

MVC allows you to implement the /theStore/UpdateProduct URL on the server using an action method like the one below. The UpdateProduct() action method accepts a strongly-typed Product object as a parameter. MVC 3 can now automatically bind the incoming JSON post value to the .NET Product type on the server without any custom binding code:

[HttpPost]
public ActionResult UpdateProduct(Product product)
{
    // save logic here
    return null;
}

MVC 3 Model Validation Enhancements

MVC 3 builds on the MVC 2 model validation improvements by adding support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0:

- Support for the new DataAnnotations metadata attributes like DisplayAttribute.
- Support for the improvements made to the ValidationAttribute class, which now supports a new IsValid overload that provides more information on the current validation context, such as what object is being validated.
- Support for the new IValidatableObject interface, which enables you to perform model-level validation and provide validation error messages specific to the state of the overall model.

MVC 3 Dependency Injection Enhancements

MVC 3 includes better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IoC containers. Currently MVC 3 Preview 1 has support for DI in the following places:

- Controllers (registering and injecting controller factories, injecting controllers)
- Views (registering and injecting view engines, injecting dependencies into view pages)
- Action Filters (locating and injecting filters)

And these are other important blogs about Microsoft .NET and technology:
- Windows 2008 Blog
- SharePoint 2010 Blog
- .NET 4 Blog

And you can visit here if you're looking for ASP.NET MVC 3 hosting.

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
This post builds upon the work in Mario Kosmiskas' and David C. Chou's prior postings:

http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx

As Mario points out in his post, when you need more control over the process that starts, it is generally better left to a RoleEntryPoint capability, which as of now requires a CLR-based assembly deployed as part of the package to Azure. What I especially liked about Mario's post was the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process from the extracted bits. The way Mario initialized the Java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition. This is a great, quick way to kick off processes or tasks prior to your role entry point. However, if you need access to service configuration values or role events, that's where RoleEntryPoint comes in.

For this PoC sample I moved the logic for retrieving the jre and jetty bits to the worker role's OnStart, in addition to moving the process kickoff to the OnStart method. The Run method at this point just loops and reports the status of the Java process. Beyond making things more parameterized, both Mario's and David's articles still form the essence of the approach.

The solution that accompanies this post provides the necessary .NET-based Visual Studio project. In addition, you'll need:

1. Jetty 7 runtime: http://www.eclipse.org/jetty/downloads.php
2. JRE: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Once you have these, the first step is to create archives (zips) of the distributions. For this PoC, the root of each archive holds the distribution itself: jre6.zip and jetty-<version>.zip (e.g. jetty-distribution-7.3.0.v20110203b.zip, matching the configuration below). Upload the contents to a storage container (block blob); for this example I used /archives as the location. The service configuration has several settings that provide these values through Azure's native configuration support in a worker role - which is the advantage of using RoleEntryPoint.

Storage Explorer

You can use development storage for testing this out - the zipped version of the solution is configured for development storage. When you're ready to deploy, you update the two settings: one for diagnostics and the other for the storage container where the /archives are stored.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*">
  <Role name="JettyWorker">
    <Instances count="1" />
    <ConfigurationSettings>
      <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />-->
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" />
      <Setting name="StartRole" value="true" />
      <Setting name="BlobContainer" value="archives" />
      <Setting name="JreArchive" value="jre6.zip" />
      <!--<Setting name="StorageCredentials"
               value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>-->
      <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />

For interacting with Storage you can use several tools - one that I like is from the Windows Azure CAT team, located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/ (and shown in the prior picture).

At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint. At that time the code does a dynamic pull of the 2 archives and extracts them - using SharpZipLib, as Mario demonstrated in his sample. The only difference here is the use of CLR code vs. PowerShell (which is really CLR, but that's another discussion). Once the 2 zips are extracted, the role's file system looks as follows:

Worker Role approot

From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path - and now you have Java!

Task Manager
Jetty Sample Page
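To show how the pieces fit, here is a simplified sketch of such a RoleEntryPoint. The setting names come from the service configuration above, but StorageHelper.DownloadAndUnzip is an assumed signature standing in for the download/unzip helper in the actual solution:

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class JettyWorker : RoleEntryPoint
{
    private Process _jettyProcess;

    public override bool OnStart()
    {
        string container = RoleEnvironment.GetConfigurationSettingValue("BlobContainer");
        string root = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";

        // Pull down and extract the two archives next to the role binaries
        // (StorageHelper is the helper class mentioned above; signature assumed)
        StorageHelper.DownloadAndUnzip(container,
            RoleEnvironment.GetConfigurationSettingValue("JreArchive"), root);
        StorageHelper.DownloadAndUnzip(container,
            RoleEnvironment.GetConfigurationSettingValue("JettyArchive"), root);

        // Kick off Jetty under the freshly extracted JRE
        _jettyProcess = Process.Start(new ProcessStartInfo
        {
            FileName = Path.Combine(root, @"jre6\bin\java.exe"),
            Arguments = "-jar start.jar",
            WorkingDirectory = Path.Combine(root, "jetty-distribution-7.3.0.v20110203b"),
            UseShellExecute = false
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // Just loop and report the status of the Java process
        while (true)
        {
            Trace.WriteLine(_jettyProcess.HasExited ? "Jetty has exited" : "Jetty is running");
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}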
A couple of things I'm working on to enhance this: extracting the jre and jetty bits not to the approot but to a local resource location defined as part of the service definition.

ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="JettyWorker">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
    <Endpoints>
      <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" />
    </Endpoints>
    <LocalResources>
      <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" />
    </LocalResources>

As the concept matures, being able to dynamically update the content or jar files of a running Java solution becomes possible through continued enhancement of this simple model.

The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip

    Read the article

  • .NET vs Windows 8

    - by Simon Cooper
So, day 1 of DevWeek. Lots and lots of Windows 8 and WinRT, as you would expect. The keynote had some actual content in it, fleshed out some of the details of how your apps link into the Metro infrastructure, and confirmed that there would indeed be an enterprise version of the app store available for Metro apps. However, that's not what I want to focus this post on. What I do want to focus on is this: Windows 8 does not make .NET developers obsolete. Phew!

.NET in the New Ecosystem

In all the hype around Windows 8 these past few months, a lot of developers have got the impression that .NET has been sidelined in Windows 8; C++ and COM are back in vogue, and HTML5 + JavaScript is the New Way of writing applications. You know .NET? It's yesterday's tech. Enter the 21st Century and write <div>! However, after speaking to people at the conference, and after a couple of talks by Dave Wheeler on the innards of WinRT and how .NET interacts with it, my views on the coming operating system have changed somewhat.

To summarize what I've picked up, in no particular order (none of this is official, just my sense of what's been said by various people):

- Metro apps do not replace desktop apps. Windows 8 fully supports .NET desktop applications written for every previous version of Windows, and will continue to do so for the foreseeable future. Some apps simply do not fit into Metro: they do not fit the touch-based paradigm, and never will. Traditional desktop support is not going away any time soon.
- The reason Silverlight has been hidden in all the Metro hype is that Metro is essentially based on Silverlight design principles. Silverlight developers will have a much easier time writing Metro apps than desktop developers, as they are already used to the principles of sandboxing and separation introduced with Silverlight. It's desktop developers who are going to have to adapt how they work.
- .NET + XAML is equal to HTML5 + JS in importance. Although the underlying WinRT system is built on C++ and COM, most application development will be done using either .NET or HTML5. Both systems have their own wrapper around the underlying WinRT infrastructure, hiding the implementation details.
- The CLR is unchanged; it's still the .NET 4 CLR, running IL in .NET assemblies. What changes between desktop and Metro is the class libraries, which have more in common with the Silverlight libraries than the desktop libraries. In Metro, although all the types look and behave the same to callers, some of the core BCL types are now wrappers around their WinRT equivalents. These wrappers are then enhanced using standard .NET types and code to produce the Metro .NET class libraries.
- You can't simply port a desktop app to Metro. The underlying file IO, network, timing and database access is either completely different or simply missing. Similarly, although the UI is programmed using XAML, the behaviour of Metro XAML differs from WPF or Silverlight XAML. Furthermore, the new design principles and touch-based interface of Metro applications demand a completely new UI. You will be able to re-use sections of your app encapsulating pure program logic, but everything else will need to be written from scratch.
- Microsoft has taken the opportunity to remove a whole raft of types and methods from the Metro framework that are obsolete (non-generic collections) or break the sandbox (synchronous APIs); if you use these, you will have to rewrite against the alternatives, if they exist at all, to move your apps to Metro.
- If you want to write public WinRT components in .NET, there are some quite strict rules you have to adhere to. But the compilers know about these rules; you can write components in C# or VB, and the compilers will tell you when you do something that isn't allowed and deal with the translation to WinRT metadata rather than .NET assemblies.
- It is possible to write a class library that can be used in Metro and desktop applications. However, you need to be very careful not to use types that are available in one but not the other. One can imagine developers writing their own abstraction around file IO and UIs (MVVM, anyone?) that is implemented differently on Metro and desktop but looks the same to the shared library - a sketch of the idea follows.
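Purely as an illustration of that last point (my sketch, not from the talks): put an interface like this in the shared library, and give each platform its own implementation.

using System.IO;

// Shared contract - the only thing the shared library ever sees
public interface ITextStore
{
    string Load(string name);
    void Save(string name, string content);
}

// Desktop implementation: free to use the full System.IO surface
public class DesktopTextStore : ITextStore
{
    public string Load(string name) { return File.ReadAllText(name); }
    public void Save(string name, string content) { File.WriteAllText(name, content); }
}

// A Metro implementation would live in a separate project and use the
// sandboxed Windows.Storage APIs instead - and since those are async-only,
// a real shared contract would likely need to expose async methods.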
So, if you're a .NET developer, you have a lot less to worry about. .NET is a viable platform on Metro, and traditional desktop apps are not going away. You don't have to learn HTML5 and JavaScript if you don't want to. Hurray!

    Read the article

< Previous Page | 169 170 171 172 173 174 175 176 177 178 179 180  | Next Page >