Search Results

Search found 6058 results on 243 pages for 'short film'.


  • Google+ Platform Office Hours for May 16th, 2012: Hangouts API v1.1

    This week we discussed the latest release of the Hangouts API, v1.1. JD Salazar and Richard Dunn from the Hangouts API engineering team joined us to help answer your questions. Discussion for this session on Google+: goo.gl You can learn more about our office hours here: goo.gl
    0:29 - Introductions
    2:50 - Richard gives us an overview of what's new in Hangouts API v1.1
    8:57 - What are the default scales for the static overlays?
    9:25 - Will the static overlay scale ratio change during the hangout?
    10:13 - What is the resolution of the feed? How do I ensure my overlays match the quality?
    12:49 - How do I know if an image resource has failed to load?
    16:33 - Can we have animated gifs as overlays?
    19:44 - Loaded overlays do not clear upon deletion. How many can I load before I encounter issues?
    21:48 - Are sound overlays played to all participants or only locally? What about sound cancellation?
    23:27 - How do you uninstall a Hangout app?
    25:41 - Can I make an app that uses drag and drop onto the film strip?
    26:55 - Can we embed participant thumbnails elsewhere on the screen?
    28:33 - How can I determine a consistent ordering for hangout participants?
    31:35 - Can I access Picasa photos uploaded by another user within a hangout? Gerwin demonstrates his solution.
    31:14 - How do I know when my hangout app has been unloaded for the purposes of doing cleanup?
    39:28 - Will face tracking ever support multiple faces?
    40:41 - Can I use WebGL in a hangouts app?
    42:09 - I'm having issues with ...
    From: GoogleDevelopers Views: 2032 18 ratings Time: 53:05 More in Science & Technology

    Read the article

  • The Most Ridiculous Computer Cameos of All Time

    - by Jason Fitzpatrick
    For the last half century computers have played all sorts of major and minor roles in movies; check out this collection to see some of the more quirky and out-of-place appearances. Wired magazine rounds up some of the more oddball appearances of computers in film. Like, for example, the scene shown above from Soylent Green: Spoiler alert: Soylent Green is people! But that’s not the only thing we’re gonna spoil. Soylent Green is set in 2022, and at one point, you’ll notice that a government facility is still using a remote calculator that plugs into the CDC 6600, a machine that was state-of-the-art in 1971. Come to think of it, we should scratch this from the list. This is pretty close to completely accurate. Hit up the link below to check out the full gallery, including a really interesting bit about how the U.S. Government’s largest computer project–once decommissioned and sold as surplus–ended up on the sets of dozens of movies and television shows. The Most Wonderfully Ridiculous Movie Computers of All Time [Wired]

    Read the article

  • Would anyone tell me how to fetch the media:thumbnail element's attribute from a JSON feed?

    - by ash
    I made a yahoo pipe that pulls in the Atom feed as JSON; however, I can fetch and display all the elements in my html page except for the media:thumbnail element's attribute. Would anyone tell me how to fetch the media:thumbnail element's attribute from a JSON feed? I am pasting the html page's code with javascript. If you save the html page and then view it in a browser, you will see that all the necessary elements are output to the html page except for media:thumbnail, as I cannot display the attribute of media:thumbnail when the feed is formatted as JSON. I am also pasting some portion of the JSON feed so that you can have an idea what I am talking about. Please tell me how to retrieve the attribute from the media:thumbnail element of a JSON feed by using plain javascript but no server side code or javascript library. Thank you.

    function getFeed(feed){
        var newScript = document.createElement('script');
        newScript.type = 'text/javascript';
        newScript.src = 'http://pipes.yahoo.com/pipes/pipe.run?_id=40616620df99780bceb3fe923cecd216&_render=json&_callback=piper';
        document.getElementsByTagName("head")[0].appendChild(newScript);
    }

    function piper(feed){
        var tmp = '';
        // Note: the HTML strings in the original post were stripped when the page
        // was rendered; the loop condition and the <p>/<br> markup below are
        // restored so the code runs.
        for (var i = 0; i < feed.value.items.length; i++) {
            tmp += '<p>';
            tmp += feed.value.items[i].title + '<br>';
            tmp += feed.value.items[i].author.name + '<br>';
            tmp += feed.value.items[i].published + '<br>';
            if (feed.value.items[i].description) {
                tmp += feed.value.items[i].description + '<br>';
            }
            tmp += '<hr>';
        }
        document.getElementById('rssLayer').innerHTML = tmp;
    }

    ..............................................................
    Some portion of the json feed that gets generated by yahoo pipe
    ..............................................................

    piper({"count":2,"value":{"title":"myPipe","description":"Pipes Output","link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=f7f4175d493cf1171aecbd3268fea5ee","pubDate":"Fri, 02 Apr 2010 17:59:22 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"piper", "items": [{ "rights":"Attribution - Noncommercial - No Derivative Works", "link":"http:\/\/vodo.net\/mixtape1", "y:id":{"value":null,"permalink":"true"}, "content":{"content":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.","type":"text"}, "author": {"name":"Various"}, "description":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. 
We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.", "media:thumbnail": { "url":"http:\/\/vodo.net\/\/thumbnails\/Mixtape1.jpg" }, "published":"2010-03-08-09:20:20 PM", "format": { "audio_bitrate":null, "width":"608", "xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat", "channels":"2", "samplerate":"44100.0", "duration":"3092.36", "height":"352", "size":"733925376.0", "framerate":"25.0", "audio_codec":"mp3", "video_bitrate":"1898.0", "video_codec":"XVID", "pixel_aspect_ratio":"16:9" }, "y:title":"Mixtape #1: VODO's favourite short films", "title":"Mixtape #1: VODO's favourite short films", "id":null, "pubDate":"2010-03-08-09:20:20 PM", "y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }}, {"rights":"Attribution - Noncommercial - No Derivative Works","link":"http:\/\/vodo.net\/gilbert","y:id":{"value":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","permalink":"true"},"content":{"content":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","type":"text"},"author":{"name":"Nathaniel Hansen"},"description":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. 
The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","media:thumbnail":{"url":"http:\/\/vodo.net\/\/thumbnails\/gilbert.jpeg"},"published":"2010-03-03-10:37:05 AM","format":{"audio_bitrate":null,"width":"624","xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat","channels":"2","samplerate":null,"duration":"373.673","height":"352","size":"123321266.0","framerate":null,"audio_codec":"mp3","video_bitrate":null,"video_codec":"XVID","pixel_aspect_ratio":"16:9"},"y:title":"Gilbert","title":"Gilbert","id":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","pubDate":"2010-03-03-10:37:05 AM","y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }} ] }})
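    A minimal sketch of how the thumbnail URL can be read from the sample feed above with plain JavaScript; bracket notation is needed because the property name contains a colon (the helper function name is just for illustration):

    // Pull the thumbnail URL out of one feed item, or return null if it is missing.
    function thumbnailUrl(item) {
        var thumb = item["media:thumbnail"];   // dot notation cannot be used here
        return (thumb && thumb.url) ? thumb.url : null;
    }

    // Usage inside the piper() callback shown above:
    //   var url = thumbnailUrl(feed.value.items[i]);
    //   if (url) { tmp += '<img src="' + url + '"><br>'; }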

    Read the article

  • What is correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access a page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description. 10.3.1 300 Multiple Choices The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise. 10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request. 10.3.3 302 Found The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. 
The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client. 10.3.4 303 See Other The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303. 10.3.5 304 Not Modified If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields: - Date, unless its omission is required by section 14.18.1 If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly. - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response. 10.3.6 305 Use Proxy The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences. 10.3.7 306 (Unused) The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved. 10.3.8 307 Temporary Redirect The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. 
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. I'm using 302 for now, until I find THE correct answer.
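    For illustration, a minimal sketch of the 302-to-login approach the poster settles on, here using Node's built-in http module (Node is not part of the original question, and the login check is a placeholder):

    var http = require('http');

    // Placeholder: a real application would check a session or auth cookie here.
    function isLoggedIn(req) {
        return false;
    }

    http.createServer(function (req, res) {
        if (!isLoggedIn(req)) {
            // 302 Found: temporary redirect to the login page, as discussed above.
            res.writeHead(302, { Location: '/login' });
            res.end();
            return;
        }
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Protected content');
    }).listen(3000);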

    Read the article

  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to develop a web application for currency traders. The application allows traders to enter or upload information about their trades and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database, so I am attempting to use couchdb. Those two problems are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature and 2) I would like to allow users to be able to define their own custom things to track about trades and generate results based off of what they enter. The schemaless nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such so I was just planning on sticking all the custom attributes in an array and then emitting the array in the view and further processing from there.) What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not using a reduce function currently to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from couch:

    {"total_rows":134,"offset":0,"rows":[
      {"id":"5b1dcd47221e160d8721feee4ccc64be",
       "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"],
       "value":null,
       "doc":{
         "_id":"5b1dcd47221e160d8721feee4ccc64be",
         "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b",
         "couchrest-type":"Trade",
         "system":"80e40ba2fa43589d57ec3f1d19db41e6",
         "pair":"EUR/USD",
         "direction":"Buy",
         "entry":12600,
         "exit":12700,
         "stop_loss":12500,
         "profit_target":12700,
         "status":"Closed",
         "slug":"101332132375",
         "custom_tracking": [{"name":"signal", "value":"Pin Bar"}],
         "updated_at":"2010/05/14 04:32:37 +0000",
         "created_at":"2010/05/14 04:32:37 +0000",
         "result":100}}
    ]}

    In my rails 3 controller I am basically just populating an array of trades such as the one above and then extracting out the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page where I want to display the stats and all the trades:

    def show
      @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now])
      @trades.each do |trade|
        if trade.result > 0
          @winning_trades << trade.result
        elsif trade.result < 0
          @losing_trades << trade.result
        else
          @breakeven_trades << trade.result
        end
        if trade.direction == "Buy"
          @long_trades << trade.result
        else
          @short_trades << trade.result
        end
        if trade["custom_tracking"] != nil
          @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]}
        end
      end
    end

    I am omitting some other stuff that is going on, but that is the gist of what I am doing. 
    Then I am calculating stuff in the view layer to produce some results:

    <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %>
    <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %>
    <ul>
      <li>Total Trades: <%= @trades.count %></li>
      <li>Winners: <%= @winning_trades.size %></li>
      <li>Biggest Winner (Pips): <%= @winning_trades.max %></li>
      <li>Average Win (Pips): <%= @winning_trades.sum/@winning_trades.size %></li>
      <li>Losers: <%= @losing_trades.size %></li>
      <li>Biggest Loser (Pips): <%= @losing_trades.min %></li>
      <li>Average Loss (Pips): <%= @losing_trades.sum/@losing_trades.size %></li>
      <li>Breakeven Trades: <%= @breakeven_trades.size %></li>
      <li>Long Trades: <%= @long_trades.size %></li>
      <li>Winning Long Trades: <%= winning_long_trades.size %></li>
      <li>Short Trades: <%= @short_trades.size %></li>
      <li>Winning Short Trades: <%= winning_short_trades.size %></li>
      <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li>
      <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li>
    </ul>

    This produces the following results, which aside from a few things is exactly what I want:
    Total Trades: 134
    Winners: 70
    Biggest Winner (Pips): 1488
    Average Win (Pips): 440
    Losers: 58
    Biggest Loser (Pips): -516
    Average Loss (Pips): -225
    Breakeven Trades: 6
    Long Trades: 125
    Winning Long Trades: 67
    Short Trades: 9
    Winning Short Trades: 3
    Total Pips: 17819
    Win Rate (%): 52.23880597014925

    What I Am Wondering - Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year. Probably no more than 5,000 per year. It seems to work ok now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to rails logs, with about 250ms of that spent in the view layer.) I will end up caching this page I am sure, but I still need to regenerate the page each time a trade is updated and I can't afford to have this be too slow. Sooo..... Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would possibly help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful. Could I use a list function if a reduce was not available due to reduce constraints? Are couchdb list functions suitable for this type of calculation? Anyone have any idea of whether or not list functions perform well? Any hints what one would look like for the type of calculations I am trying to achieve? I thought about other options such as running the calculations at the time each trade was saved or nightly if I had to and saving the results to a statistics doc that I could then query so that all the processing was done ahead of time. I would like this to be the last resort because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period using the startkey and endkey in couchdb if I can.) If I should continue running the calculations inside the rails app at the time of the page view, what can I do to improve my current implementation? I am new to rails, couch and programming in general. I am sure that I could be doing something better here. 
Do I need to create an array for each stat or is there a better way to do that? I guess I just would really like some advice on how to tackle this problem. I want to keep the page generation time minimal since I anticipate these being some of the highest trafficked pages. My gut is that I will need to offload the statistics calculation to either couch or run the stats in advance of when they are called, but I am not sure. Lastly: Like I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades there are for each named tracking attribute? If anyone can give me any hints to the possibility of doing this that would be great. Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on stack overflow or not.)
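    A minimal sketch of the kind of CouchDB map/reduce view being asked about; the field selection and stats bucketing are assumptions based on the sample document above, not the application's actual code:

    // Map: key by [system, created_at] so startkey/endkey can slice time ranges.
    function (doc) {
        if (doc['couchrest-type'] === 'Trade') {
            emit([doc.system, doc.created_at], { result: doc.result, direction: doc.direction });
        }
    }

    // Reduce: fold each batch of trades into a small fixed-size summary object,
    // which avoids the reduce overflow error that a growing array would cause.
    function (keys, values, rereduce) {
        var stats = { count: 0, wins: 0, losses: 0, breakeven: 0, pips: 0 };
        for (var i = 0; i < values.length; i++) {
            var v = values[i];
            if (rereduce) {
                // values are partial stats objects from earlier reduce passes
                stats.count += v.count; stats.wins += v.wins; stats.losses += v.losses;
                stats.breakeven += v.breakeven; stats.pips += v.pips;
            } else {
                stats.count += 1;
                stats.pips += v.result;
                if (v.result > 0) { stats.wins += 1; }
                else if (v.result < 0) { stats.losses += 1; }
                else { stats.breakeven += 1; }
            }
        }
        return stats;
    }

    Queried with reduce=true and a startkey/endkey range, a view like this would return one small summary object per time slice, which lines up with the time-slider idea mentioned in the question.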

    Read the article

  • What’s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier. WebForms The WebForms engine is the area that has received most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and of ViewState on WebForm pages. Take Control of Your ClientIDs Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page. <%@Page Language="C#"      MasterPageFile="~/Site.Master"     CodeBehind="WebForm2.aspx.cs"     Inherits="WebApplication1.WebForm2"  %> <asp:Content ID="content"  ContentPlaceHolderID="content"               runat="server"               ClientIDMode="Static" >       <asp:TextBox runat="server" ID="txtName" /> </asp:Content> The four available ClientIDMode values are: AutoID This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place. <input name="ctl00$content$txtName" type="text"        id="ctl00_content_txtName" /> This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks. Static This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids: <input name="ctl00$content$txtName"         type="text" id="txtName" /> Note that the name property which is used for postback variables to the server still is munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict. 
Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix. Predictable The previous two values are pretty self-explanatory. Predictable however, requires some explanation. To me at least it’s not in the least bit predictable. MSDN defines this value as follows: This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_). The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this: <input name="ctl00$content$txtName" type="text"         id="content_txtName" /> which gives an id that based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi unique name that’s guaranteed unique only for the naming container. If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName: <input name="ctl00$content$txtName" type="text"         id="ctl00_content_txtName" /> The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with the NamingContainers id rather than the whole ctl000_content munging. The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. 
Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example: <asp:GridView runat="server" ID="gvItems"              AutoGenerateColumns="false"             ClientIDMode="Static"              ClientIDRowSuffix="Id">     <Columns>     <asp:TemplateField>         <ItemTemplate>             <asp:Label runat="server" id="txtName"                        Text='<%# Eval("Name") %>'                   ClientIDMode="Predictable"/>         </ItemTemplate>     </asp:TemplateField>     <asp:TemplateField>         <ItemTemplate>         <asp:Label runat="server" id="txtId"                     Text='<%# Eval("Id") %>'                     ClientIDMode="Predictable" />         </ItemTemplate>     </asp:TemplateField>     </Columns>  </asp:GridView> generates client Ids inside of a column in the master page described earlier: <td>     <span id="txtName_0">Rick</span> </td> where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static either explicitly or via Naming Container inheritance to give these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with the Static ListView and Predictable child controls: <span id="ctrl0_txtName_0">Rick</span> I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls using <%= %>, ids for the ListView might not be a bad idea anyway. Inherit The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx. ClientID Enhancements Summary The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of Web.config (in the Pages section) which applies the setting down to the entire page which is my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls) - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls. ViewStateMode Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. 
It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates. Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality in non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids most ViewState problems in the first place. In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on for particular controls: <%@Page Language="C#" CodeBehind="WebForm2.aspx.cs" Inherits="Westwind.WebStore.WebForm2" ClientIDMode="Static" ViewStateMode="Disabled" EnableViewState="true" %> <!-- this control has viewstate --> <asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" /> <!-- this control has no viewstate - it inherits from parent container --> <asp:TextBox runat="server" ID="txtAddress" /> Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on. Inline HTML Encoding HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. 
The encoding expression syntax looks like this: <%: "<script type='text/javascript'>" +     "alert('Really?');</script>" %> which produces properly encoded HTML: &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt; Effectively this is a shortcut to: <%= HttpUtility.HtmlEncode( "<script type='text/javascript'>" + "alert('Really?');</script>") %> Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values: <%: Model.Address.Street %> This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature. ScriptManager Enhancements The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library. There’s also a new EnableCdn property that forces any script that has a new WebResource attribute CdnPath property set to a CDN supplied URL. If the script has this Attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath. [assembly: WebResource(    "Westwind.Web.Resources.ww.jquery.js",    "application/x-javascript",    CdnPath =  "http://mysite.com/scripts/ww.jquery.min.js")] Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config: <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <Scripts>         <asp:ScriptReference          Name="Westwind.Web.Resources.ww.jquery.js"         Assembly="Westwind.Web" />     </Scripts>        </asp:ScriptManager> The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. 
Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js): <asp:ScriptManager runat="server" id="Id"          EnableCdn="true"         AjaxFrameworkMode="disabled">     <CompositeScript          Path="~/scripts/allscripts.js">         <Scripts>             <asp:ScriptReference                    Path="~/scripts/jquery.js" />             <asp:ScriptReference                    Path="~/scripts/ww.jquery.js" />             <asp:ScriptReference            Name="Westwind.Web.Resources.editors.js"                 Assembly="Westwind.Web" />         </Scripts>     </CompositeScript> </asp:ScriptManager> When you render this into HTML, you’ll see a single script reference in the page: <script src="scripts/allscripts.debug.js"          type="text/javascript"></script> All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally. The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy. Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers. MetaDescription, MetaKeywords Page Properties There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. 
You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself: Page.MetaKeywords = "ASP.NET,4.0,New Features"; Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0"; Note, that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag: <%@Page Language="C#"      CodeBehind="WebForm2.aspx.cs"     Inherits="Westwind.WebStore.WebForm2"      ClientIDMode="Static"                MetaDescription="Article that discusses what's                      new in ASP.NET 4.0"     MetaKeywords="ASP.NET,4.0,New Features" %> Nothing earth shattering but quite convenient. Visual Studio 2010 Enhancements for Web Development For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general. Text Editors Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors. Multi-Targeting Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2, 3.0 and 3.5 applications which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions. Multi-Monitor Support One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment. IntelliSense Snippets for HTML and JavaScript Editors The HTML and JavaScript editors now finally support IntelliSense scripts to create macro-based template expansions that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values that can be embedded into the expanded text. 
The XML syntax for these snippets is straightforward and it’s pretty easy to create custom snippets manually. You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 User specific Snippet folders to this tool to see existing ones you’ve created. Code snippets are some of the biggest time savers and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns and find the repetitive code that you write and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets. jQuery Integration Is Now Native jQuery is a popular JavaScript library and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008 which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery. Summary ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
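    The console application that generated the pin matrix is not shown in the excerpt; the following is a minimal JavaScript sketch of the same nested-loop idea, emitting PolyRay object lines for two offset grids of pins. The counts, spacing and radii are illustrative guesses, not the original values:

    // Emits PolyRay "object { ... }" lines for two interleaved grids of pins,
    // mimicking the layout of a pin-board toy. Each pin is a sphere plus a
    // cylinder shaft, matching the pin model shown earlier.
    function pinMatrix(cols, rows, spacing) {
        var lines = [];
        for (var grid = 0; grid < 2; grid++) {            // two offset matrices
            var offset = grid * spacing / 2;
            for (var x = 0; x < cols; x++) {
                for (var y = 0; y < rows; y++) {
                    var px = (x * spacing + offset).toFixed(2);
                    var py = (y * spacing + offset).toFixed(2);
                    lines.push('object { sphere <' + px + ', ' + py + ', 0.00>, 0.10 chrome }');
                    lines.push('object { cylinder <' + px + ', ' + py + ', 0.00>, <' +
                               px + ', ' + py + ', -0.40>, 0.03 chrome }');
                }
            }
        }
        return lines.join('\n');
    }

    // 76 x 76 x 2 grids gives roughly 11,500 pins, in the same ballpark as the
    // 11,481 pins and ~23,000 object lines mentioned above.
    console.log(pinMatrix(76, 76, 0.14));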
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

· On-Premise
  o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
  o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue (a sketch of this job submission is shown below).
  o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
· Windows Azure
  o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
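The on-premise Animation Creator is not listed in the article. As a rough sketch, using the Windows Azure StorageClient library of that era, uploading the scene files and queuing the job messages might look something like this; the container and queue names match the worker role code shown later, while the class and method names are assumptions for illustration.

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class AnimationJobSubmitter
{
    // Uploads each PolyRay scene file to the "scenes" container and queues a render job for it.
    public static void SubmitJobs(string sceneFolder, string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);

        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer scenes = blobClient.GetContainerReference("scenes");
        scenes.CreateIfNotExist();

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue jobs = queueClient.GetQueueReference("renderjobs");
        jobs.CreateIfNotExist();

        foreach (string scenePath in Directory.GetFiles(sceneFolder, "*.pi"))
        {
            string sceneFile = Path.GetFileName(scenePath);

            // Upload the scene description, then add a job message containing just the file name.
            scenes.GetBlobReference(sceneFile).UploadFile(scenePath);
            jobs.AddMessage(new CloudQueueMessage(sceneFile));
        }
    }
}

This matches the worker role below, which expects each queue message to contain the name of a scene file in the scenes container.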
The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the file property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application to be determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
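These statistics come from the role instance lifecycle table described earlier. The Process Monitor application itself is not listed in the article; the following is a rough sketch of the kind of query it might run against that table. Only the RoleLifecycle entity and the "CloudRayWorker" partition key come from the code shown above; the table name, connection string and everything else here are assumptions.

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class ProcessMonitor
{
    // Reads the role lifecycle table and prints per-instance frame counts and overall CPU time.
    static void Main()
    {
        // Placeholder connection string; the real monitor would read it from configuration.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=...");
        TableServiceContext context = account.CreateCloudTableClient().GetDataServiceContext();

        var instances = context.CreateQuery<RoleLifecycle>("RoleLifecycle")
            .Where(r => r.PartitionKey == "CloudRayWorker")
            .AsTableServiceQuery()
            .ToList();

        int totalFrames = instances.Sum(r => r.Frames);
        long totalSeconds = instances.Sum(r => r.SecondsRunning);

        Console.WriteLine("Active instances: {0}", instances.Count);
        Console.WriteLine("Frames rendered:  {0}", totalFrames);
        Console.WriteLine("CPU time used:    {0}", TimeSpan.FromSeconds(totalSeconds));

        foreach (RoleLifecycle instance in instances)
        {
            Console.WriteLine("{0}: {1} frames, last active {2:u}",
                instance.RowKey, instance.Frames, instance.LastActiveTime);
        }
    }
}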
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.
Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles
The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
· Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
· If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • Error in python - don't understand

    - by Jasper
    Hi, I'm creating a game, and am quite new to Python generally. I created a function 'descriptionGenerator()' which generates a description for characters and objects either randomly or using variables passed to it. It seemed to be working, but every now and then it wouldn't work correctly. So i placed it in a loop, and it never seems to be able to complete the loop without one of the iterations having this problem. The code is as follows: #+------------------------------------------+ #| Name: bitsandpieces.py | #| A module for the 'Europa I' game | #| created for the Game Making Competition | #| | #| Date Created/Modified: | #| 3/4/10 | 3/4/10 | #+------------------------------------------+ # Import the required modules # Import system modules: import time import random # Import 3rd party modules: # Import game modules: # Define the 'descriptionGenerator()' function def descriptionGenerator(descriptionVariables): descriptionVariableSize = len(descriptionVariables) if descriptionVariables[0] == 'char': # If there is only one variable ('char'), create a random description if descriptionVariableSize == 1: # Define choices for descriptionVariables to be generated from gender_choices = ['male', 'female'] hair_choices = ['black', 'red', 'blonde', 'grey', 'brown', 'blue'] hair_choices2 = ['long', 'short', 'cropped', 'curly'] size_choices = ['tubby', 'thin', 'fat', 'almost twig-like'] demeanour_choices = ['glowering', 'bright', 'smiling', 'sombre', 'intelligent'] impression_choices = ['likeable', 'unlikeable', 'dangerous', 'annoying', 'afraid'] # Define description variables gender = random.choice(gender_choices) height = str(float('0.' + str(random.randint(1, 9))) + float(random.randint(1, 2))) if float(height) > 1.8: height_string = 'tall' if float(height) > 2: height_string = 'very tall' elif float(height) < 1.8 and float(height) > 1.5: height_string = 'average' elif float(height) < 1.5: height_string = 'short' if float(height) < 1.3: height_string = 'very short' hair = random.choice(hair_choices2) + ' ' + random.choice(hair_choices) size = random.choice(size_choices) demeanour = random.choice(demeanour_choices) impression = random.choice(impression_choices) # Collect description variables in list 'randomDescriptionVariables' randomDescriptionVariables = ['char', gender, height, height_string, hair, size, demeanour, impression] # Generate description using the 'descriptionGenerator' function descriptionGenerator(randomDescriptionVariables) # Generate the description of a character using the variables passed to the function elif descriptionVariableSize == 8: if descriptionVariables[1] == 'male': if descriptionVariables[7] != 'afraid': print """A %s man, about %s m tall. He has %s hair and is %s. He is %s and you get the impression that he is %s.""" %(descriptionVariables[3], descriptionVariables[2], descriptionVariables[4], descriptionVariables[5], descriptionVariables[6], descriptionVariables[7]) elif descriptionVariables[7] == 'afraid': print """A %s man, about %s m tall. He has %s hair and is %s. He is %s.\nYou feel that you should be %s of him.""" %(descriptionVariables[3], descriptionVariables[2], descriptionVariables[4], descriptionVariables[5], descriptionVariables[6], descriptionVariables[7]) elif descriptionVariables[1] == 'female': if descriptionVariables[7] != 'afraid': print """A %s woman, about %s m tall. She has %s hair and is %s. 
She is %s and you get the impression that she is %s.""" %(descriptionVariables[3], descriptionVariables[2], descriptionVariables[4], descriptionVariables[5], descriptionVariables[6], descriptionVariables[7]) elif descriptionVariables[7] == 'afraid': print """A %s woman, about %s m tall. She has %s hair and is %s. She is %s.\nYou feel that you should be %s of her.""" %(descriptionVariables[3], descriptionVariables[2], descriptionVariables[4], descriptionVariables[5], descriptionVariables[6], descriptionVariables[7]) else: pass elif descriptionVariables[0] == 'obj': # Insert code here 2 deal with object stuff pass print print myDescriptionVariables = ['char'] i = 0 while i < 30: print print print descriptionGenerator(myDescriptionVariables) i = i + 1 time.sleep(10) When it fails to properly execute it says this: Traceback (most recent call last): File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/Code/Code 2.0/bitsandpieces.py", line 79, in <module> descriptionGenerator(myDescriptionVariables) File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/Code/Code 2.0/bitsandpieces.py", line 50, in descriptionGenerator randomDescriptionVariables = ['char', gender, height, height_string, hair, size, demeanour, impression] UnboundLocalError: local variable 'height_string' referenced before assignment Thanks for any help with this

    Read the article

  • LibreOffice UNO Java API: how to open a document, execute a macro and close it?

    - by MarcoS
    I'm working on LibreOffice server-side: on the server I run soffice --accept=... Then I use Java LibreOffice client API's to apply a macro on a document (calc or writer). The java execution does not give any error, but I do not get the job done (macro code is executed, but it's effects are not in the output file). More, after macro script is invoked, the Basic debugger window appears, apparently stopped on the first line of my macro; F5 does not restart it... This is the relevant code I'm using: try { XComponentContext xLocalContext = Bootstrap.createInitialComponentContext(null); System.out.println("xLocalContext"); XMultiComponentFactory xLocalServiceManager = xLocalContext.getServiceManager(); System.out.println("xLocalServiceManager"); Object urlResolver = xLocalServiceManager.createInstanceWithContext( "com.sun.star.bridge.UnoUrlResolver", xLocalContext); System.out.println("urlResolver"); XUnoUrlResolver xUrlResolver = (XUnoUrlResolver) UnoRuntime.queryInterface(XUnoUrlResolver.class, urlResolver); System.out.println("xUrlResolve"); try { String uno = "uno:" + unoMode + ",host=" + unoHost + ",port=" + unoPort + ";" + unoProtocol + ";" + unoObjectName; Object rInitialObject = xUrlResolver.resolve(uno); System.out.println("rInitialObject"); if (null != rInitialObject) { XMultiComponentFactory xOfficeFactory = (XMultiComponentFactory) UnoRuntime.queryInterface( XMultiComponentFactory.class, rInitialObject); System.out.println("xOfficeFactory"); Object desktop = xOfficeFactory.createInstanceWithContext("com.sun.star.frame.Desktop", xLocalContext); System.out.println("desktop"); XComponentLoader xComponentLoader = (XComponentLoader)UnoRuntime.queryInterface( XComponentLoader.class, desktop); System.out.println("xComponentLoader"); PropertyValue[] loadProps = new PropertyValue[3]; loadProps[0] = new PropertyValue(); loadProps[0].Name = "Hidden"; loadProps[0].Value = Boolean.TRUE; loadProps[1] = new PropertyValue(); loadProps[1].Name = "ReadOnly"; loadProps[1].Value = Boolean.FALSE; loadProps[2] = new PropertyValue(); loadProps[2].Name = "MacroExecutionMode"; loadProps[2].Value = new Short(com.sun.star.document.MacroExecMode.ALWAYS_EXECUTE_NO_WARN); try { XComponent xComponent = xComponentLoader.loadComponentFromURL("file:///" + inputFile, "_blank", 0, loadProps); System.out.println("xComponent from " + inputFile); String macroName = "Standard.Module1.MYMACRONAME?language=Basic&location=application"; Object[] aParams = null; XScriptProviderSupplier xScriptPS = (XScriptProviderSupplier) UnoRuntime.queryInterface(XScriptProviderSupplier.class, xComponent); XScriptProvider xScriptProvider = xScriptPS.getScriptProvider(); XScript xScript = xScriptProvider.getScript("vnd.sun.star.script:"+macroName); short[][] aOutParamIndex = new short[1][1]; Object[][] aOutParam = new Object[1][1]; @SuppressWarnings("unused") Object result = xScript.invoke(aParams, aOutParamIndex, aOutParam); System.out.println("xScript invoke macro" + macroName); XStorable xStore = (XStorable)UnoRuntime.queryInterface(XStorable.class, xComponent); System.out.println("xStore"); if (outputFileType.equalsIgnoreCase("pdf")) { System.out.println("writer_pdf_Export"); loadProps[0].Name = "FilterName"; loadProps[0].Value = "writer_pdf_Export"; } xStore.storeToURL("file:///" + outputFile, loadProps); System.out.println("storeToURL to file " + outputFile); xComponent.dispose(); xComponentLoader = null; rInitialObject = null; System.out.println("done."); System.exit(0); } catch(IllegalArgumentException e) { 
System.err.println("Error: Can't load component from url " + inputFile); } } else { System.err.println("Error: Unknown initial object name at server side"); } } catch(NoConnectException e) { System.err.println("Error: Server Connection refused: check server is listening..."); } } catch(java.lang.Exception e) { System.err.println("Error: Java exception:"); e.printStackTrace(); }

    Read the article

  • Jaxb unmarshalls fixml object but all fields are null

    - by DUFF
    I have a small XML document in the FIXML format. I'm unmarshalling them using jaxb. The problem The process complete without errors but the objects which are created are completely null. Every field is empty. The fields which are lists (like the Qty) have the right number of object in them. But the fields of those objects are also null. Setup I've downloaded the FIXML schema from here and I've created the classes with xjc and the maven plugin. They are all in the package org.fixprotocol.fixml_5_0_sp2. I've got the sample xml in a file FIXML.XML <?xml version="1.0" encoding="ISO-8859-1"?> <FIXML> <Batch> <PosRpt> <Pty ID="GS" R="22"/> <Pty ID="01" R="5"/> <Pty ID="6U8" R="28"> <Sub ID="2" Typ="21"/> </Pty> <Pty ID="GS" R="22"/> <Pty ID="6U2" R="2"/> <Instrmt ID="GHPKRW" SecTyp="FWD" MMY="20121018" MatDt="2012-10-18" Mult="1" Exch="GS" PxQteCcy="KJS" FnlSettlCcy="GBP" Fctr="0.192233298" SettlMeth="G" ValMeth="FWDC2" UOM="Ccy" UOMCCy="USD"> <Evnt EventTyp="121" Dt="2013-10-17"/> <Evnt EventTyp="13" Dt="2013-10-17"/> </Instrmt> <Qty Long="0.000" Short="22000000.000" Typ="PNTN"/> <Qty Long="0.000" Short="22000000.000" Typ="FIN"/> <Qty Typ="DLV" Long="0.00" Short="0.00" Net="0.0"/> <Amt Typ="FMTM" Amt="32.332" Ccy="USD"/> <Amt Typ="CASH" Amt="1" Rsn="3" Ccy="USD"/> <Amt Typ="IMTM" Amt="329.19" Ccy="USD"/> <Amt Typ="DLV" Amt="0.00" Ccy="USD"/> <Amt Typ="BANK" Amt="432.23" Ccy="USD"/> </PosRpt> Then I'm calling the unmarshaller with custom event handler which just throws an exception on a parse error. The parsing complete so I know there are no errors being generated. I'm also handling the namespace as suggested here // sort out the file String xmlFile = "C:\\FIXML.XML.xml"; System.out.println("Loading XML File..." + xmlFile); InputStream input = new FileInputStream(xmlFile); InputSource is = new InputSource(input); // create jaxb context JAXBContext jc = JAXBContext.newInstance("org.fixprotocol.fixml_5_0_sp2"); Unmarshaller unmarshaller = jc.createUnmarshaller(); // add event handler so jacB will fail on an error CustomEventHandler validationEventHandler = new CustomEventHandler(); unmarshaller.setEventHandler(validationEventHandler); // set the namespace NamespaceFilter inFilter = new NamespaceFilter("http://www.fixprotocol.org/FIXML-5-0-SP2", true); inFilter.setParent(SAXParserFactory.newInstance().newSAXParser().getXMLReader()); SAXSource source = new SAXSource(inFilter, is); // GO! JAXBElement<FIXML> fixml = unmarshaller.unmarshal(source, FIXML.class); The fixml object is created. In the above sample the Amt array will have five element which matches the number of amts in the file. But all the fields like ccy are null. I've put breakpoints in the classes created by xjc and none of the setters are ever called. So it appears that jaxb is unmarshalling and creating all the correct objects, but it's never calling the setters?? I'm completely stumped on this. I've seen a few posts that suggrest making sure the package.info file that was generated by xjc is in the packags and I've made sure that it's there. There are no working in the IDE about the generated code. Any help much appreciated.

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan.

Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is.

It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this:

This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream).

As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance.

Let’s dive into an example: let’s say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.”

Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this did not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets and pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled.

Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips.

TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS packets sent from the client is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.

TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution.

Bytes sent from client – is the volume of the data sent to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a procedure name plus parameters, and this will minimize the network pressure.

Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.

Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client.

Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. A long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned.

The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client.

Here is another example: think about a similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture.

And finally, here are some guidelines for monitoring the network performance and improving it:

Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’.

Monitor your network counters in Perfmon: Network Interface: Output Queue Length, Redirector: Network Errors/sec, TCPv4: Segments Retransmitted/sec and so on.

Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on.

Find some time to read a bit about networking.

In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
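As a small, hedged illustration of capturing some of these client-side numbers from code rather than from Management Studio, the snippet below uses the ADO.NET per-connection statistics (SqlConnection.StatisticsEnabled / RetrieveStatistics) and reads one of the Perfmon counters mentioned above. The query, the network interface instance name and the connection string are placeholders, not anything from Feodor's environment.

using System;
using System.Collections;
using System.Data.SqlClient;
using System.Diagnostics;

class ClientSideNetworkStats
{
    static void Main()
    {
        // Placeholder connection string and query; note the Packet Size setting can also be tuned here.
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=tempdb;Integrated Security=SSPI;Packet Size=4096"))
        {
            connection.StatisticsEnabled = true; // start collecting per-connection statistics
            connection.Open();

            using (var command = new SqlCommand("SELECT TOP (1000) * FROM sys.objects", connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* consume the rows, as a real client would */ }
            }

            IDictionary stats = connection.RetrieveStatistics();
            Console.WriteLine("Server roundtrips: {0}", stats["ServerRoundtrips"]);
            Console.WriteLine("Bytes sent:        {0}", stats["BytesSent"]);
            Console.WriteLine("Bytes received:    {0}", stats["BytesReceived"]);
            Console.WriteLine("Execution time ms: {0}", stats["ExecutionTime"]);
        }

        // One of the Perfmon counters from the guidelines; the instance name depends on your network card.
        using (var queueLength = new PerformanceCounter("Network Interface", "Output Queue Length", "YOUR_NIC_INSTANCE_NAME"))
        {
            Console.WriteLine("Output queue length: {0}", queueLength.NextValue());
        }
    }
}

The numbers line up with the Client Statistics categories described above and make it easy to spot, for example, a request that sends a surprisingly large number of bytes to the server.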
As further reading I would highly recommend the Wait Stats series on this blog; I would also recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It’s the sad truth of my life that even though I’ve been fascinated by airplanes and flight in general since my childhood days, my body doesn’t like flying. Even the ridiculously short flights inside Germany take their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, and factor in a heavy dose of jet lag (especially on my way eastwards), and you get an idea why, after coming back home, I had this question on my mind: Was it really worth it to attend MIX10?

This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn location – meaningful applications please to [email protected])

To keep things short: my answer is yes.

Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time when actually being at a tech conference gave you a head-start in regard to learning about new technologies. Nowadays this is no longer true, when every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you’ll neither get exclusive information, nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows:

Focus: Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference.

Inspiration: With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it’s a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz (“this is gonna be sooo cool!”) or by the fear of falling behind (“man, we’ll have to work on this, or else…”).

People: At a conference it’s pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies.

So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the “soft” topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come. 
Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in both Silverlight and UI/UX/design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past, I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes):

Day 0 (Workshops on Sunday)

Design Fundamentals for Developers
Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company, so he not only has a strong interest in design (and the skillz!) but also the technical background.

Design Tools and Techniques
Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process).

Day 1 (Monday)

Designing Rich Experiences for Data-Centric Applications
I wonder whether there was ever a test-run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minute session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.

Great User Experiences: Seamlessly Blending Technology & Design
I switched to this session from the one above, but I guess I missed the interesting part – what I did catch was what looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.

The Art, Technology and Science of Reading
This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit of the helpful advice one could have hoped for.

10 Ways to Attack a Design Problem and Come Out Winning
Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” from Jens and me. If you watch only one session video, this should be it. Microsoft has to bring Robby back next year!

Day 2 (Tuesday)

Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences
Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!

Designing Bing: Heart and Science
How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations to finally get it right (up to the choice of a specific blue for the links). 
And yes, I want an eye-tracking device to play around with!

The Elephant in the Room
When Nishant Kothary presented a long list of what his session was not about, I said to myself (not having the description text present) “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff.

An Hour with Bill Buxton
Having seen Bill Buxton’s presentation in the keynote, I just had to see this man again – even though I didn’t know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn’t provide a wealth of immediately useful information. Nevertheless Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session sometime in the evening and not in parallel to at least two other interesting talks.

Day 3 (Wednesday)

Design the Ordinary, Like the Fixie
This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used fancy presentation software other than PowerPoint that sometimes got in the way of showing things.

Developing Natural User Interfaces
Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint-killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means.

Treat your Content Right
The talk by Tiffani Jones Brown wasn’t even on my planned schedule, but somehow I ended up in that session – and it was great. And even for people who don’t necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put texts with more than a single word into your UI.

Creating Effective Info Viz in Microsoft Silverlight
The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live-coding began, and while the result was interesting, too much time was spent on wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources.

Conclusion
It could be felt all across MIX10: UIs will take a huge leap forward; in fact, there are enough examples that already have. People who have both the technical know-how and at least a basic understanding of design (“literacy”, as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox show that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and put buttons on a form. There are extremely exciting times before us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back! In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I’ll get into the details of web performance & load tests, as well as why it’s important to follow a goal based pattern while performance testing your application.

Tools => Options => Test Tools

Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable/disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected/unselected from here. Ever wondered how you can change the default limit of 25 test results? This can again be changed from here. If you record a lot of Web Tests and wish for the web test recorder to start with “that” URL populated, well, this again can be specified from here. If you haven’t so far, I would urge you to spend 2 minutes in the test tools options.

Test Menu => Ready Steady Test Action!

The Test tools are under the Test menu in Visual Studio; apart from being able to create a new Test and Test List you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time. Again, this selection can be done from here. You can open the various test windows from under the Windows option in the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right clicking a test in the test list and choosing Properties from the context menu.

So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of Test Link elements nested within the Test List tags, along with the Run Configuration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see what tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in the team builds.

Web Performance Test – The Truth!

In Visual Studio 2010 “Web Tests” have been renamed to “Web Performance Tests”. Apart from renaming this test type there have been several improvements to it in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum and a frequent question from many users is “Do Web Tests support pages that run JavaScript?” I will start with a little bit of background before answering this question.

Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer. The tool adds to that misconception. 
After all, you record in IE, and when running a Web test you can select which browser to use, and then the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The Web test engine works at the HTTP layer, and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests. Does that mean I can’t test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight or Flash. The Web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to web performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the ‘browser control’. If you want to test the page behaviour as a result of the JavaScript or plugin, consider using Coded UI Tests. This page looks like it failed, when in fact it succeeded! Looking closely at the response, and subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control is showing this message is that JavaScript has been disabled in this control. So, to reiterate, the web performance test engine: - Sends and receives data at the HTTP layer. - Does NOT run a browser. - Does NOT run JavaScript. - Does NOT host ActiveX controls or plugins. There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio. Demo – Web Performance Test [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration   [Demo]–Visual Studio Ultimate 2010: Web Performance Test   In this short video I try and answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, add validations? Best practices for recording Web Performance Tests. I have a web performance test, what next? Creating the Web Performance Test was the first step towards load testing your application. Now that we have the base test we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site – can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration, under heavy load for a long duration, and to soak test the application – run a constant load for a week or two to record the effects of constant load over really long durations. This is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to occur roughly every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements. 
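To make the “convert to code” option mentioned above a little more concrete, here is a minimal, hedged sketch of what a coded web performance test roughly looks like (the URL, class name and form field are invented for illustration; a recorder-generated test contains considerably more plumbing):

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class HomeAndSearchCodedWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // Request 1: GET the home page and expect an HTTP 200 response.
            WebTestRequest home = new WebTestRequest("http://myapp.example.com/");
            home.ExpectedHttpStatusCode = 200;
            yield return home;

            // Request 2: POST a search form, illustrating how form parameters
            // are expressed at the HTTP layer (no browser, no JavaScript).
            WebTestRequest search = new WebTestRequest("http://myapp.example.com/Search");
            search.Method = "POST";
            FormPostHttpBody body = new FormPostHttpBody();
            body.FormPostParameters.Add("query", "widgets");
            search.Body = body;
            yield return search;
        }
    }

Because the engine works at the protocol layer, each yielded request is simply sent and its response recorded – which is exactly why AJAX calls triggered by JavaScript have to be captured as requests of their own.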
BUT there are some best practices! => Goal Based Load Testing Approach Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and test realistically. It is recommended that you follow the outline below. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload and the key metrics you want to report, record the web performance tests, simulate load and analyse the results. Is your application already deployed in Production? This is great! You can analyse the IIS logs to understand the user behaviour… But what are IIS logs? The IIS logs allow you to record events for each application and Web site on the Web server. You can create separate logs for each of your applications and Web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can use the IIS logs to identify any attempts to gain unauthorized access to your Web server. How to configure IIS logs? For those ninjas who already have IIS logs configured (by the way, it's on by default) and need a way to analyse them, there is the Windows IIS utility – Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the result of the queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries. Demo – Load Test [Demo]–Visual Studio Ultimate 2010: Load Testing   In this short video I try and answer the following questions: What are the types of performance testing? How do I perform goal driven load testing, analyse the test run result and generate a report? Recap A quick recap of what we have covered so far.     Thank you for taking the time out to read this blog post; in part III of this blog series I’ll be getting into the details of Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc. please leave a comment. See you in Part III   Share this post : CodeProject

    Read the article

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite some hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC) but it was absolutely fun. Following, I'm writing a brief summary about the topics we spoke about and the new impulses I got. "What a meetup... I was positively impressed. At the beginning I thought that noone would actually show up but then by the time the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!" Above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little bit on things happened and statements made before writing it here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees. Reactions from other craftsmen Let me quickly give you some links and quotes from others first... "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We’re indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (: "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions? Starting the day with a surprise Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association about any kind of mobile app development, mobile gadgets and latest smartphones on the market. Despite the Monday in their name they had scheduled their recent meeting on Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grap the bull by its horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there. 
Really looking forward to organising something together.... Arriving at the venue As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problem, as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take a repro-painting off the wall in order to have a decent area for the projector. All went very smoothly and my two little ones were of great help. Just in time, our first craftsman Avinash arrived on the spot. And then the waiting started... Luckily, not for too long. Bit by bit more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction about the MSCC in general, what we are (hm, maybe I am) trying to achieve, and that the current phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached some 'critical mass' of about ten people I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in and we were actually more networking than having a focus on our topics of the day. Quick updates on latest news and development around the MSCC Finally, Clean Code Developer No matter what the position is actually called, whether it is Software Engineer, Software Developer, Programmer, Architect, or Craftsman, anyone working in IT is facing almost the same obstacles. As for the process of writing software applications, there are recurring patterns and principles combined with common exercises and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob), the concept of the Clean Code Developer (CCD) was born some years ago. CCD is much like traditional martial arts, where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends indicating your level of knowledge and experience with coloured wrist bands - equivalent to the belt colours - for various reasons. Frankly speaking, I think that the biggest advantage here is provided by the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green grade, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow, the whole scenario comes down to Red grade level. Different story, better results... Similar to learning martial arts, we only covered two grades on this occasion - black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsmen and discussed them. Following are the Clean Code Developer Red Grade principles and practices, with some annotations from our meetup: CCD Red Grade - Principles Don't Repeat Yourself - DRY Keep It Simple, Stupid (and Short) - KISS Beware of Optimisations! Favour Composition over Inheritance - FCoI Interestingly, most of the attendees had already heard about those keywords but couldn't really classify or categorize them. 
It's very similar to a situation in which you do not know the particular name for a thing and have to describe it to others... until someone tells you the actual name and suddenly all is very simple. CCD Red Grade - Practices Follow the Boy Scouts Rule Root Cause Analysis - RCA Use a Version Control System Apply Simple Refactoring Pattern Reflect Daily Introduction to the principles and practices of Clean Code Developer - here: Red Grade As for the various ToDo's, we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for the root cause analysis, btw. We really had good stories with surprising endings and conclusions. A quick check about who is using a version control system brought more drive into the conversation. Not only did we have people who aren't using any VCS at all, we also saw the 'classic' approach of backup folders and naming conventions, as well as the VCS 'junkie' that has to use multiple systems at a time. Just for the record: Git and GitHub seem to be favoured by some of the attendees. Regarding the daily reflection at the end of the day we came up with an easy solution: Wrap it up as a blog entry! Certifications in IT This is kind of a controversy in IT in general. Is it interesting to go for certifications or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How would certificates stand compared to other educational tracks like Computer Science or Web Design? The ratio between craftsmen with certifications like MCP, MSTS, CCNA or LPI versus the ones without wasn't in favour of the first group, but there was a high interest in the topic itself and some were really surprised to hear that exam preparations are freely available online, including temporary voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options for forming so-called study groups on a specific certificate and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft exams 40-78x. Feedback and ideas for the MSCC The closing conversations and discussions about how the MSCC is currently doing, what the possibilities are and what's (hopefully) going to happen in the future were really fertile, and I made a couple of mental bullet points which I'm looking forward to tackling together with other craftsmen. Eventually, it might be a good option to elaborate on some issues during our weekly Code & Coffee sessions one Wednesday morning. Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc) and sharing experience Finally, we made it till the end of the planned time. Well, actually the talk was still on and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities. My resume of the day... It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as the Mauritius Software Craftsmanship Community we still have a lot of work to do to pass on the message to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science... 
but we have a lot more to offer, even or especially for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future. PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...
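To give the Favour Composition over Inheritance principle from the Red Grade list above a concrete shape, here is a minimal, hedged C# sketch (the Report/IExporter names are invented purely for illustration and are not part of the official CCD material):

    // A report that needs to be exported. Inheritance would force every report
    // type into a rigid hierarchy; composition lets us swap the exporter freely.
    public interface IExporter
    {
        void Export(string content);
    }

    public class CsvExporter : IExporter
    {
        public void Export(string content)
        {
            // Write the report content to a CSV file.
            System.IO.File.WriteAllText("report.csv", content);
        }
    }

    public class Report
    {
        private readonly IExporter _exporter;

        // The exporter is injected (composed) rather than inherited.
        public Report(IExporter exporter)
        {
            _exporter = exporter;
        }

        public void Publish()
        {
            _exporter.Export("col1,col2\nvalue1,value2");
        }
    }

    // Usage: new Report(new CsvExporter()).Publish();

The point is that Report is composed with an exporter instead of deriving from one, so a new output format only needs a new IExporter implementation rather than a change to the class hierarchy.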

    Read the article

  • CodePlex Daily Summary for Friday, March 12, 2010

    CodePlex Daily Summary for Friday, March 12, 2010New Projects.NET DEPENDENCY INJECTION: Abel Perez Enterprise FrameworkAutodocs - WCF REST Automatic API Documentation Generator: Autodocs is an automatic API documentation generator for .NET applications that use Windows Communication Foundation (WCF) to establish REST API's.BlockBlock: Block Block is a free game. You know Lumines and you will like BlockBlock.C4F XNA ASCII Post-Processing: This is the source code for the Coding4Fun article "XNA Effects – ASCII Art in 3D"ChequePrinter: this is ChequePrinterCompiladores MSIL usando Phoenix (PLP 2008.1 - CIn/UFPE): Este projeto foi feito com o intuito de explorar a plataforma Microsoft Phoenix para a construção de compiladores para MSIL de duas linguagens de E...CRM External View: CRM External View enables more robust control over exposing Microsoft CRM data (in a form of views) for external parties. The solution uses web ser...CS Project2: This is for the projectDotNetNuke IM Module of Facebook Like Messenger: Help you integrate 123 Web Messenger into DotNetNuke, and add a powerful 1-to-1 IM Software named "Facebook Messenger Style Web Chat Bar" at the bo...DotNetNuke® RadPanelBar: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. Licensing permits anyone to use the components (incl...DotNetNuke® Skin Blocks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Armand Datema of Schwingsoft. This skin uses a bit of jQu...Drilltrough and filtering on SSAS-cubes in SSRS: We will describe a technique to create Reporting services (SSRS) reports that use Analysis services (SSAS) cubes as data sources, have a very intu...Ecosystem Diagnosis & Treatment: The Ecosystem DIagnosis & Treatment community provides tools, analyses and applications of the medical model to natural resource problems. EDT sof...ExIf 35: A utility for use by film photographers for keeping track of critical facts about images taken on a roll of film, just as digital cameras do automa...FabricadeTI: Desenvolvimento do framework FabricadeTI.Find and Replace word in the sentences: This program used Java Development Kid 6.0 and i were using HighLighter class. It was completed code with source code and then everybody can use in...Flash Nut: Flash Nut is a flash card program. You can build and review decks of flash cards. The project is a vs2008 wpf application.Free DotNetNuke Chat Module (Popup Mode): With this free DotNetNuke Chat Module (Popup Mode), master will assist to integrate DotNetNuke with 123 Flash Chat seamlessly, and add a popup mode...Free DotNetNuke IM of 123 Web Messenger -- Web-based Friend List: With this FREE application, you could integrate DNN website Database with 123 Web Messenger seamlessly and embed a web-based Friends List into anyw...Free DotNetNuke Live Help Module: With DotNetNuke Live Help Module, integrate 123 Live Help into DotNetNuke website and add Live Chat Button anywhere you like. Let visitors to chat ...G52GRP Videowall: NottinghamHappy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: The Happy Turtle project creates plugins for the Build Version Increment Add-In for Visual Studio (BVI). The focus is to automatically version asse...Hasher: Hasher es capaz de generar el hash MD5 y SHA de textos de hasta 100.000 caracteres y ficheros. 
También te permitirá comprobar dos hash para verifi...Infragistics Silverlight Extended Controls: This project is a group of controls that extend or add functionality to the Infragistics Silverlight control suite. This control requires Infragis...Insert Video Jnr: This is a baby version of my Video plugin, it is intended for Hosted Wordpress blogs only and shouldn't be used with other blog providers.jccc .NET smart framework: jccc .NET smart framework allows the creation of fast connections to MSSQL or MYSQL databases, and the data manipulation by using of c# class's tha...LytScript: 函数式脚本语言Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): DDD NLayered App .NET 4.0 Example By Microsoft - Spain Domain Driven Design NLayered App .NET 4.0 Example Implementation Example of our local Arc...mimiKit: Lightweight ASP.NET MVC / Javascript Framework for creating mobile applications PHPWord: With PHPWord you can easily create a Word document with PHP. PHPWord creates docx Files that can include all major word functions like TextElements...Protocol Transition with BizTalk: An example solution the shows how todo Protocol Transition with BizTalk. This also shows you how to create a WCF extension to allow this to happen.Raid Runner: Raid Runner makes it easier to run and manage raid in World of Warcraft. It is a Silverlight application developed in c#SQL Server Authentication Troubleshooter: SQL Server Authentication Troubleshooter is a tool to help investigate a root cause of ‘Login Failed’ error in SQL Server. There could be number of...SuperviseObjects: SuperviseObjects consists of a collection which is derived from ObservableCollection<T>. This collection fires ItemPropertyChanging and ItemPropert...Viuto: Viuto.NET project aims to create a fully track and trace application. It is developed in: - Java & C: Firmware - C#: Parser - Asp.net: Tracki...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks is a collection that you cannot do without if you are serious about continous integration. Ever wish you could specify an...New ReleasesASP.NET: ASP.NET MVC 2 RTM: This release contains the source code for ASP.NET MVC 2 RTM as well as the ASP.NET MVC Futures project. The futures project contains features that ...C#Mail: Higuchi.Mail.dll (2010.3.11 ver): Higuchi.Mail.dll at 2010-3-11 version.C#Mail: Higuchi.MailServer.dll (2010.3.11 ver): Higuchi.MailServer.dll at 2010.3.11 version.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1 - Full Version: This is the full, complete example of the XNA ASCII FPS.C4F XNA ASCII Post-Processing: XNA ASCII FPS v1.0 - Base Project: This is the base project to be used by those who plan to follow along the Coding4Fun article.CRM External View: 1.0: Release 1.0DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3c: Alpha 3c upgrades custom/virtual uris (devpacks), temp uris, and zip packages. This is believed to be the first fully functional/performant release.DotNetNuke® RadPanelBar: DNNRadPanelBar 1.0.0: DNNRadPanelBar makes it easy to add telerik RadPanelBar functionality to your module or skin. 
Licensing permits anyone to use the components (inclu...Drilltrough and filtering on SSAS-cubes in SSRS: Release 1: Release 1ExIf 35: ExIf 35: Daily build of ExIf 35Family Tree Analyzer: Version 1.0.3.0: Version 1.0.3.0 Added options to check for updates on load and on help menu Disable use of US census for now until dealt with years being differen...Family Tree Analyzer: Version 1.0.4.0: Version 1.0.4.0 Added support for display of Ahnenfatel numbers Added filter to hide individuals from Lost Cousins report that have been flagged a...Flash Nut: Flash Nut 1.0 Setup: Flash Nut SetupFluent Validation for .NET: 1.2 RC: This is the release candidate for FluentValidation 1.2. If no bugs are found within the next couple of weeks, then this will become the 1.2 Final b...Free DotNetNuke Chat Module (Popup Mode): Download DNN Chat Module (Popup Mode)+Source Code: Feel free to download DotNetNuke Chat Module (Popup Mode), integrating DotNetNuke with 123 Flash Chat Software, and add a free popup mode flash cha...Free DotNetNuke Live Help Module: Download DNN Live Support Module and Source Code: In Readme file, there are detailed Installation and Integration Manual for you. This module is compatible with DotNetNuke v5.x.Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.44927: This is the first release of the SVN based version incrementor. How To InstallMake sure that Build Version Increment v2.2.10065.1524 or newer is i...Hasher: 1.0: Versión inicial de la aplicación: Obtención de hash MD5 y SHA. Codificación en tiempo real de textos de hasta 100.000 caracteres. Codificación ...Jamolina: PhotosynthDemo: PhotosynthDemoMapWindow GIS: MapWindow 6.0 msi (March 11): This fixes an PixelToProj problem for the Extended Buffer case, as well as adding fixes to the WKBFeatureReader to fix an X,Y reversal and some ext...Math.NET Numerics: 2010.3.11.291 Build: Latest alpha buildMicrosoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.5 - N-Layer DDD Sample App: Required Software (Microsoft Base Software needed for Development environment) Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...MiniTwitter: 1.09.2: MiniTwitter 1.09.2 更新内容 修正 タイムラインを削除すると落ちるバグを修正 稀にタイムラインのスクロールが出来ないバグを修正Nestoria.NET: Nestoria.NET 0.8: Provides access to the Nestoria API. Documentation contains a basic getting started guide. Please visit Darren Edge's blog for ongoing developmen...Pod Thrower: Version 1.0: Here is version 1.0. It has all the features I was looking to do in it. Please let me know if you use this and if you would like any changes.SharePoint Ad Rotator: SPAdRotator 2.0 Beta: This new release of the Ad Rotator contains many new features. One major new feature is that jQuery has been added to do image rotation without hav...SharePoint Objects: Democode Ton Stegeman: These download contains sample code for some SharePoint 2007 blog posts: TST.Themes_Build20100311.zip contains a feature receiver that registers Sh...SharePoint Taxonomy Extensions: SharePoint Taxonomy Extensions 1.2: Make Taxonomy Extensions useable in every list type. Not only in document libraries.SharePoint Video Player Web Part & SharePoint Video Library: Version 3.0.0: Absolutely killer feature - installing multiple players on a page without any loss of performance.SilverLight Interface for Mapserver: SLMapViewer v. 1.0: SLMapviewer sample application version 1.0. 
This new release includes the following enhancements: Silverlight 3.0 native Added a new init parame...Spark View Engine: Spark v1.1: Changes since RC1Built against ASP.NET MVC 2 RTMSPSS .NET interop library: 2.0: This new version supports SPSS 15, and includes spssio32.dll and other native .dll dependencies so that it works out of the box without SPSS being ...stefvanhooijdonk.com: SharePoint2010.ProfilePicturesLoader: So, with the help of Reflector, I wrote a small tool that would import all our profile pictures and update the user profiles. http://wp.me/pMnlQ-6G SuperviseObjects: SuperviseObjects 1.0: First releaseTortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.5: Feature: Visual Studio/svn action synchronization on Item in Solution explorer like add, move, delete and rename. Note: Move action does not rememb...VCC: Latest build, v2.1.30311.0: Automatic drop of latest buildVivoSocial: VivoSocial 7.0.4: Business Management ■This release fixes a Could not load type error on the main view of the module. Groups ■Group requests were failing in some i...WikiPlex – a Regex Wiki Engine: WikiPlex 1.3: Info: Official Version: 1.3.0.215 | Full Release Notes Documentation - This new documentation includes Full Markup Guide with Examples Articles ...Zealand IT MSBuild Tasks: Zealand IT MSBuild Tasks: Initial beta release of Zealand IT MSBuild Tasks. Contains the following tasks: RunAs - Same as Exec task, but provides parameters for impersonat...ZoomBarPlus: V1 (Beta): This is the initial release. It should be considered a beta test version as it has not been tested for very long on my device.Most Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NET Ajax LibraryASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrN2 CMSBlogEngine.NETFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryFarseer Physics EngineCaliburn: An Application Framework for WPF and SilverlightSharePoint Team-Mailer

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Web.Http;

    namespace NewProject_Mvc4BetaWebApi.Controllers
    {
        public class ValuesController : ApiController
        {
            // GET /api/values
            public IEnumerable<string> Get()
            {
                return new string[] { "value1", "value2" };
            }

            // GET /api/values/5
            public string Get(int id)
            {
                return "value";
            }

            // POST /api/values
            public void Post(string value)
            {
            }

            // PUT /api/values/5
            public void Put(int id, string value)
            {
            }

            // DELETE /api/values/5
            public void Delete(int id)
            {
            }
        }
    }

Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used the Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API and HTML-returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an API controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
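For context on why /api/values and /api/values/5 map to those controller methods, the convention comes from the default Web API route registered by the project template. The exact file differs by template version (Global.asax.cs in the MVC 4 Beta shown here, WebApiConfig.cs in later releases), but the registration looks roughly like this hedged sketch (the ApiRouteConfig class name is made up for illustration):

    using System.Web.Http;
    using System.Web.Routing;

    public static class ApiRouteConfig
    {
        public static void Register(RouteCollection routes)
        {
            // "api/{controller}/{id}" is why a GET for /api/values hits ValuesController.Get()
            // and a GET for /api/values/5 hits ValuesController.Get(int id);
            // the HTTP verb selects the method, so no {action} segment is needed.
            routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }

The {id} segment is marked optional, which is what lets both /api/values and /api/values/5 resolve to the same controller with different overloads.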

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/ MVC/ Azure Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year: Yet Another Podcast #76–Glenn Block on Node.js & Technology in China Yet Another Podcast  #73 - Adam Kinney on developing for Windows 8 with HTML5 Yet Another Podcast #64 - John Papa & Javascript Yet Another Podcast #60 - Steve Sanderson and John Papa on Knockout.js Yet Another Podcast #54–Damian Edwards on ASP.NET Yet Another Podcast #53–Scott Hanselman on Blogging Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond And some more on the way that haven't been released yet. Some of these I'm pretty quiet, on others I get wacky and hassle the guests because, hey, not my podcast so not my problem. Show notes from the ASP.NET / MVC / Azure show: What was just released Visual Studio 2012 Web Developer features ASP.NET 4.5 Web Forms Strongly Typed data controls Data access via command methods Similar Binding syntax to ASP.NET MVC Some context: Damian Edwards and WebFormsMVP Two questions from Jesse: Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option History of SqlDataSource, ObjectDataSource Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications, doesn't require a rewrite Allows sharing models, validation, other code between Web Forms and MVC ASP.NET MVC Adaptive Rendering (oh, also, this is in Web Forms 4.5 as well) Display Modes Mobile project template using jQuery Mobile OAuth login to allow Twitter, Google, Facebook, etc. login Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4 Windows 8 development Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C# Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development. Compare / contrast ASP.NET MVC and Windows 8 development Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development? Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management Jon says it's nice that web development is really interactive compared to application development Q: Can you learn MVC by reading a book? Or is it a lot bigger than that? What is Azure, and why would I use it? 
Jon describes the traditional Azure platform model and how Azure Web Sites fits in Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites File hosting options Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want A: Other services are available if or when you want them

    Read the article

  • SQLAuthority News – Ahmedabad Tech Ed On Road June 11, 2011 – An Event to Remember – A Grand Success of Community Tech Days

    - by pinaldave
I am very excited to announce the huge success of the Microsoft Community TechDays at Ahmedabad, on 11 June 2011.  The turn-out for this seminar was huge, and there was a great response from the audience.  In fact, the AMA where the conference was held can seat 275 people – but there were over 50 people standing, the event coordinators had to find 150 more chairs, and we even had to turn away 30 people at the door because there was just no more room.  This means that there were over 500 attendees! The event started right on time, at 10 am, with my introduction and welcome to the audience.  My presentation followed, on my favorite subject of “SQL Server Performance Troubleshooting Using Waits and Queues.”  Because of the number of speakers, I had to cut my presentation short by 10 minutes, so I only had 50 minutes to explain how to use waits and queues to fine tune performance.  There was a good response to my talk from the audience. I feel the best presentation, though, was “HTML5 – Future of the Web” by Harish Vaidyanathan.  He explained how HTML5 is going to change the internet, taught everyone a lot about how to best use Internet Explorer 9, and discussed the CSS3, SVG and DOM specifications.  Many people in the audience came specifically for this session – many had to take a half day leave off work just to travel there. At this point we all took a break for lunch, but there was no one taking a nap with a full stomach because we had a presentation of the new Windows Mango phone from Dhananjay Kumar.  New technology like this always wakes everyone up! After this came “TSQL Worst Practices” by Jacob Sebastian.  He too had to cut his talk short by 10 minutes in order to accommodate everyone, but his discussion of which SQL queries to avoid was still excellent. He is a magnificent presenter and Ahmedabad loves him. The final presentation was “ASP.NET Tips and Tricks” by Tejas Shah.  This was a good overview of ASP.NET fundamentals, and how to use them to improve application performance.  However, the day was not over here!  We kept the audience entertained with prizes and give-aways.  Names were drawn for prizes and there was a quiz session with great gifts for the winners. Overall, the day was a huge success.  There was a good mix of SQL and non-SQL subjects, and many audience members commented on how much they learned.  We had a much bigger turn-out than expected – all the chairs were filled 45 minutes before we even started!  For our next conference we need to find a space that will hold everyone, especially since we are hoping to have 600-800 people attending.  We definitely feel we can reach this goal.  We are already looking forward to the next Ahmedabad Microsoft Community TechDays. Download presentations: HTML5 Beauty of Web -By Harish Vaidyanathan TSQL Worst Practices- By Jacob Sebastian SQL SERVER Performance troubleshooting using Waits and Queues -By Pinal Dave ASP.NET Tips and Tracks -By Tejas Shah Other reports: Tech-Ed on Road 2011- Ahmedabad–A great event- By Jalpesh Tech-Ed 2011 on the Road in Ahmedabad – by Ritesh Shah Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL 2005 Transaction Rollback Hung–unresolved deadlock

    - by steveh99999
Encountered an interesting issue recently with a SQL 2005 SP3 Enterprise Edition system. Every weekend, a full database reindex was being run on this system – normally this took around one and a half hours. Then, one weekend, the job ran for over 17 hours – and had yet to complete... At this point, the DBA cancelled the job. The job status was now cancelled – issue over…   However, cancelling the job had not killed the reindex transaction – DBCC OPENTRAN was still showing the transaction as open. The oldest open transaction in the database was now over 17 hours old.  Consequently, the transaction log % used was growing dramatically and locks were still being held in the database... Further attempts to kill the transaction did nothing, i.e. we had a transaction which could not be killed. In sysprocesses, it was apparent the SPID was in rollback status, but the SPID was not accumulating CPU or IO. Was the SPID stuck? On examination of the SQL errorlog – shortly after the reindex had started, a whole bunch of deadlock output had been produced by trace flag 1222. Then this:
spid5s      ***Stack Dump being sent to   xxxxxxx\SQLDump0042.txt
spid5s      * *******************************************************************************
spid5s      *
spid5s      * BEGIN STACK DUMP:
spid5s      *   12/05/10 01:04:47 spid 5
spid5s      *
spid5s      * Unresolved deadlock
spid5s      *
spid5s      *
spid5s      * *******************************************************************************
spid5s      * -------------------------------------------------------------------------------
spid5s      * Short Stack Dump
spid5s      Stack Signature for the dump is 0x000001D7
spid5s      External dump process return code 0x20000001.
Unresolved deadlock – I don't think I've ever seen one of these before…. A quick call to Microsoft support confirmed the following bug had been hit: http://support.microsoft.com/kb/961479 So, the only option to get rid of the hung SPID was to restart SQL Server… Fortunately SQL Server restarted without any issues, and I was pleasantly surprised to see that recovery on this particular database was fast. However, restarting SQL Server to fix an issue is not something I would normally rush to do... Short term fix – the reindex was changed to use a MAXDOP of 1. The longer term fix will be to apply the correct CU, or wait for SQL 2005 SP4?? That should be released any day soon, I hope.

    Read the article

  • jtreg update, March 2012

    - by jjg
There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [ For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run. ] -- jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/. Scratch directories On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new empty scratch directory. Successive directories use the suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series. Locking support Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg. "autovm mode" By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly. Info in test result files The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file. Build The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4. Other jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards. Bug fixes jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option. Acknowledgements Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification. To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document: touchstart and mousedown, in that order. Only on my Android phone does it appear to be the case that only the touchstart event fires when I touch the document. I did a Google search and ended up on two interesting pages. First, I found the page on CanIUse for touch events: http://caniuse.com/#feat=touch Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat the touch in a completely different way. It boils down to this: IE: simulated mouse click Firefox with touch disabled: simulated mouse click Firefox with touch enabled: touch event Chrome: touch event and simulated mouse click Android: touch event What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just like CanIUse described; no proper touch support; just simulated mouse clicks. The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever match the above-described functionality, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?) What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future, the browsers might start handling touch similarly, but how can I tell how they might be handled in the future short of writing code to handle the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?

    Read the article

  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from Google e-mail alerts that I’ve set up to keep track of key industry trends and competitive news.  In the past few weeks, I’ve been getting a number of news alerts about SAP.  Below are a few recent examples: Warm weather cuts short US maple sugaring season – by Toby Talbot, AP MILWAUKEE – Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half-empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP." Home & Garden: Too-Early Spring Means Sugaring Woes  - by Georgeanne Davis for The Free Press Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen. SAP season comes to a sugary Sunday finale – Kennebec Journal, March 26th, 2012 Rebecca Manthey stood out in the rain at the entrance of Old Fort Western keeping watch over a cast iron kettle of boiling SAP hooked to a tripod over a wood fire.  Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association.  She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of Maple syrup.  "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it to fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations. I hope you enjoy these updates on SAP – Happy April Fool’s Day!

    Read the article

  • Dark Sun Dispatch 001

    - by Chris Williams
If you aren't into tabletop (aka pen & paper) RPGs, you might as well click to the next post now... Still here? Awesome. I've recently started running a new D&D 4.0 Dark Sun campaign. If you don't know anything about Dark Sun, here's a quick intro: The campaign takes place on the world of Athas, formerly a lush green world that is now a desert wasteland. Forests are rare in the extreme, as are water and metal. Coins are made of ceramic and weapons are often made of hardened wood, bone or obsidian. The green age of Athas was centuries ago and the current state was brought about through the reckless use of sorcerous magic. (In this world, you can augment spells by drawing on the life force of the world & people around you. This is called defiling. Preserving magic draws upon the caster's life force and does not damage the surrounding world, but it isn't as powerful.) Humans are pretty much unchanged, but the traditional fantasy races have changed quite a bit. Elves don't live in the forest; they are shifty and untrustworthy desert traders known for their ability to run long distances through the wastes. Halflings are not short, fat, pleasant little riverside people. Instead they are bloodthirsty feral cannibals that roam the few remaining forests and ride reptilian beasts akin to raptors. Gnomes are extinct, as are orcs. Dwarves are mostly farmers and gladiators, and live out in the sun instead of staying under the mountains. Goliaths are half-giants, not known for their intellect. Muls are a Dwarf & Human crossbreed that displays the best traits of both races (human height and dwarven stoutness.) Thri-Kreen are sentient mantis people that are extremely fast. Most of the same character classes are available, with a few new twists. There are no divine characters (such as Priests, Paladins, etc) because the gods are gone. Nobody alive today can remember a time when they were still around. Instead, some folks worship the elemental forces (although they don't give out spells.) The cities are all ruled by Sorcerer King tyrants (except one city: Tyr) who are hundreds of years old and still practice defiling magic whenever they please. Serving the Sorcerer Kings are the Templars, who are also defilers and psionicists. Crossing them is as bad, in many cases, as crossing the Kings themselves. Between the cities you have small towns and trading outposts, and mostly barren desert with sometimes 4-5 days on foot between towns and the nearest oasis. Being caught out in the desert without adequate supplies and protection from the elements is pretty much a death sentence for even the toughest heroes. When you add in the natural (and unnatural) predators that roam the wastes, often in packs, most people don't last long alone. In this campaign, the adventure begins in the (small) trading fortress of Altaruk, a couple of weeks' walking distance from the newly freed city of Tyr. A caravan carrying trade goods from Altaruk has not made it to Tyr and the local merchant house has dispatched the heroes to find out what happened and to retrieve the goods (and drivers) if possible. The unlikely heroes consist of a human shaman, a thri-kreen monk, a human wizard, a kenku assassin and a (void aspect) genasi swordmage. Gathering up supplies and a little liquid courage, they set out into the desert and manage to find the northbound tracks of the wagon. Shortly after finding the tracks, they are ambushed by a pack of silt-runners (small lizard people with very large teeth and poisoned pointy spears.) 
    The party makes short work of the creatures, taking a few minor wounds in the process. Proceeding onward without resting, they find the remains of the wagon and manage to sneak up on a pack of kruthiks picking through the rubble and spilled goods. Unfortunately, they fail to take advantage of the opportunity and have a hard fight ahead of them. The party defeats the kruthiks, but takes heavy damage (and almost loses a couple of their own) in the process. Once the kruthiks are dispatched, they follow a set of tracks further north to a ruined tower...

    Read the article

  • GWB | May 2010 Newsletter

    - by Staff of Geeks
    Geekswithblogs.net - May 2010 Newsletter

    Was your newsletter messed up? If the first newsletter we sent was all white with links all over the place, I apologize. We haven't sent a newsletter in quite some time and I forgot the rules of theming for email clients.

    TechEd 2010 New Orleans - Who is coming? John and Jeff will be at TechEd 2010 New Orleans for a short time to pass out shirts to our bloggers. These are the standard GWB shirts you see some of our other bloggers wearing, and one could be yours if you let us know you are coming and what your size is. We will announce where we will be once we arrive, so you can wear your Geekswithblogs.net shirt and let everyone know where you blog. These shirts will be without URLs; if you want one of those, you will have to join the contest mentioned later in the newsletter. The deadline for responses is next Wednesday. Email [email protected] to let us know you are coming!

    wblo.gs Shortened URL: Everyone has URL shorteners these days, so we needed our own. Geekswithblogs.net has implemented http://wblo.gs as a shortened URL for your blog. Instead of pushing people to geekswithblogs.net/your_url, you can now use wblo.gs/your_url. We hope to publish a version of the shortener that will give each post a short URL and a button on your blog to publish to Twitter. This release should be available in the early summer time period.

    Always wanted one of our awesome Geekswithblogs.net T-shirts? From May 15th until July 13th, we will be giving our Geekswithblogs.net t-shirts away to those members who create 30 technical posts. That is 30 posts in 60 days! This should not be a difficult task with all the new releases from Microsoft: Visual Studio 2010, Team Foundation Server 2010, SharePoint 2010, Silverlight 4.0, Office 2010, SQL Server 2008 R2, and so many others to pick from. We will be keeping an eye on the results and publishing the names of those bloggers who successfully meet the requirements by that date. A few rules to remember: the post must be technical in nature, the post must be original content created by the blogger, and the post must originate during the contest (no pulling content from your old blog). Each shirt will be customized with your URL on the back, and you can let us know your size and address when you successfully complete the contest.

    What features do you want to see? We would love to hear your feedback. It has been a while since we have published a release of Geekswithblogs.net and we want to get your feature requests into it. Do you like a particular skin we don't have? Do you want to see comments change? Do you want to see more Twitter integration? Whatever it is, we want to know. Submit your thoughts to [email protected] and we will start a dialog with you shortly after we receive them.

    Follow Us on Twitter: We want to hear from you and make sure you know what is going on with Geekswithblogs.net. Follow the staff on Twitter (@StaffOfGeeks) and feel free to let us know what you think or what questions you have.

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

    root@sun:~# ldm ls-io
    NAME                         TYPE   BUS    DOMAIN   STATUS
    ----                         ----   ---    ------   ------
    pci_0                        BUS    pci_0  primary
    pci_1                        BUS    pci_1  primary
    pci_2                        BUS    pci_2  primary
    pci_3                        BUS    pci_3  primary
    /SYS/MB/PCIE1                PCIE   pci_0  primary  EMP
    /SYS/MB/SASHBA0              PCIE   pci_0  primary  OCC
    /SYS/MB/NET0                 PCIE   pci_0  primary  OCC
    /SYS/MB/PCIE5                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE6                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE7                PCIE   pci_1  primary  EMP
    /SYS/MB/PCIE2                PCIE   pci_2  primary  EMP
    /SYS/MB/PCIE3                PCIE   pci_2  primary  OCC
    /SYS/MB/PCIE4                PCIE   pci_2  primary  EMP
    /SYS/MB/PCIE8                PCIE   pci_3  primary  EMP
    /SYS/MB/SASHBA1              PCIE   pci_3  primary  OCC
    /SYS/MB/NET2                 PCIE   pci_3  primary  OCC
    /SYS/MB/NET0/IOVNET.PF0      PF     pci_0  primary
    /SYS/MB/NET0/IOVNET.PF1      PF     pci_0  primary
    /SYS/MB/NET2/IOVNET.PF0      PF     pci_3  primary
    /SYS/MB/NET2/IOVNET.PF1      PF     pci_3  primary

    All of the "PCIE" type devices are available for DIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature; the documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide.

    There are several important things to note and consider here:

    - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    - Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
    - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide chapter "Configuring Domain Dependencies".

    Please consult the Admin Guide section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!

    As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
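    To make this more concrete, here is a minimal sketch of the typical command sequence for moving a single slot, using the empty slot /SYS/MB/PCIE2 from the listing above and the guest domain mars as examples. Treat it as an illustration only: whether a delayed reconfiguration is required and the exact state the guest must be in depend on your LDoms version, so follow the Admin Guide procedure for your release.

    # On the control domain: start a delayed reconfiguration on the root domain,
    # remove the slot from it, then reboot the root domain to apply the change.
    root@sun:~# ldm start-reconf primary
    root@sun:~# ldm remove-io /SYS/MB/PCIE2 primary
    root@sun:~# init 6

    # Once the root domain is back up, assign the slot to the guest
    # (mars must not be running at this point) and start the guest.
    root@sun:~# ldm add-io /SYS/MB/PCIE2 mars
    root@sun:~# ldm start-domain mars

    # Make the implicit dependency on the root domain explicit, so that mars
    # is reset whenever its master domain (the owner of the PCIe infrastructure) fails.
    root@sun:~# ldm set-domain master=primary mars
    root@sun:~# ldm set-domain failure-policy=reset primary

    Afterwards, ldm ls-io should list /SYS/MB/PCIE2 with mars in the DOMAIN column, and the device in that slot shows up under /devices inside mars behind its virtual root complex.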
    Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawback of the dependency on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next article of this blog series.

    Read the article

< Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >