Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • Help Desk Database Design

    - by user237244
    The company I work at has very specific and unique needs for a help desk system, so none of the open source systems will work for us. That being the case, I created a custom system using PHP and MySQL. It's far from perfect, but it's infinitely better than the last system they were using; trust me! It meets most of our needs quite nicely, but I have a question about the way I have the database set up. Here are the main tables: ClosedTickets ClosedTicketSolutions Locations OpenTickets OpenTicketSolutions Statuses Technicians When a user submits a help request, it goes in the "OpenTickets" table. As the technicians work on the problem, they submit entries with a description of what they've done. These entries go in the "OpenTicketSolutions" table. When the problem has been resolved, the last technician to work on the problem closes the ticket and it gets moved to the "ClosedTickets" table. All of the solution entries get moved to the "ClosedTicketSolutions" table as well. The other tables (Locations, Statuses, and Technicians) exist as a means of normalization (each location, status, and technician has an ID which is referenced). The problem I'm having now is this: When I want to view a list of all the open tickets, the SQL statement is somewhat complicated because I have to left join the "Locations", "Statuses", and "Technicians" tables. Fields from various tables need to be searchable as well. Check out how complicated the SQL statement is to search closed tickets for tickets submitted by anybody with a first name containing "John": SELECT ClosedTickets.*, date_format(ClosedTickets.EntryDate, '%c/%e/%y %l:%i %p') AS Formatted_Date, date_format(ClosedDate, '%c/%e/%y %l:%i %p') AS Formatted_ClosedDate, Concat(Technicians.LastName, ', ', Technicians.FirstName) AS TechFullName, Locations.LocationName, date_format(ClosedTicketSolutions.EntryDate, '%c/%e/%y') AS Formatted_Solution_EntryDate, ClosedTicketSolutions.HoursSpent AS SolutionHoursSpent, ClosedTicketSolutions.Tech_ID AS SolutionTech_ID, ClosedTicketSolutions.EntryText FROM ClosedTickets LEFT JOIN Technicians ON ClosedTickets.Tech_ID = Technicians.Tech_ID LEFT JOIN Locations ON ClosedTickets.Location_ID = Locations.Location_ID LEFT JOIN ClosedTicketSolutions ON ClosedTickets.TicketNum = ClosedTicketSolutions.TicketNum WHERE (ClosedTickets.FirstName LIKE '%John%') ORDER BY ClosedDate Desc, ClosedTicketSolutions.EntryDate, ClosedTicketSolutions.Entry_ID One thing that I'm not able to do right now is search both open and closed tickets at the same time. I don't think a union would work in my case. So I'm wondering if I should store the open and closed tickets in the same table and just have a field indicating whether or not the ticket is closed. The only problem I can forsee is that we have so many closed tickets already (nearly 30,000) so the whole system might perform slowly. Would it be a bad idea to combine the open and closed tickets?

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code: $("#btnStockQuote").click(function () { var symbol = $("#txtSymbol").val(); ajaxCallMethod("SampleService.ashx", "GetStockQuote", [symbol], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); // display a stock chart $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2"); },onPageError); }); The resulting output then looks like this: The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the 2 year stock data as part of the StockServer class which you can find in the sample download. The ability to return arbitrary data from a service is useful as you can see - in this case the chart is clearly associated with the service and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc. which is becoming increasingly more common to be returned from REST endpoints and other applications. Why reinvent? Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need to work through in most applications which is simple: return data, send back data and potentially retrieve data in various formats. While there are other solutions when it comes down to making AJAX callbacks and servicing REST like requests, I like the flexibility my home grown solution provides. Simply put it's still the easiest solution that I've found that addresses my common use cases: AJAX JSON RPC style callbacks Url based access XML and JSON Output from single method endpoint XML and JSON POST support, querystring input, routing parameter mapping UrlEncoded POST data support on callbacks Ability to return stream/raw string data Essentially ability to return ANYTHING from Service and pass anything All these features are available in various solutions but not together in one place. I've been using this code base for over 4 years now in a number of projects both for myself and commercial work and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need to create. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler and add a method or multiple methods and you have your generic endpoints. It's a quick and easy way to add small code pieces that are pretty efficient as they're running through a pretty small handler implementation. I can have this up and running in a couple of minutes literally without any setup and returning just about any kind of data. Resources Download the Sample NuGet: Westwind Web and AJAX Utilities (Westwind.Web) ajaxCallMethod() Documentation Using the AjaxMethodCallback WebForms Control West Wind Web Toolkit Home Page West Wind Web Toolkit Source Code © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Professional WordPress Business Themes

    - by Matt
    Every now and then JustSkins.com receives quote requests for WordPress design for business websites. Most companies now keep up to date with a blog on their corporate website, that showcases their day to day activities & progresses.  Getting such professional wordpress driven website designed from the scratch costs you a lot. If you have decided to make WordPress the CMS for your business website, there are some Professional WordPress themes you can take a look at. We have created this list to help you save some time to do all the trying and the testing. Optimize by WooThemes Last year one of the most popular Business theme by WooThemes was the Coffee Break theme, Optimize is further adaptation of the same. It is simple, sleek design with great functionality. The customizable front page lets you showcase your work or product etc. Demo | Price: $70, Developer Price: $150 | DOWNLOAD WooThemes is also offering their whole Business theme pack for a very very reasonable fee, If you like multiple designs from them you can get this big deal for only $125 Onyx , Impacto by Simple Themes Simple Themes has been making very crisp & beautiful WordPress Themes & are also very reasonably priced. If their themes solve your purpose $39 membership for 3 months is a good deal.  If you are looking to create quick website, landing page or micro site their templates are best. Demo | Price: $39 for 3 Months Membership Rejuvenate by Templatic One of the most beautiful Premium WordPress Theme, Available in 4 elegant color schemes. This theme can be used for your Beauty, Spa and Studio Business. Demo | Price: $65  | DOWNLOAD Templatic has created great professional business templates, such as Gourmet, Real Estate, Job Board, Automobile & lots More. You can also get a Best Value Offer in $299 for all of Templatic Themes. TheProfessional by ElegantThemes Elegant Themes is known to provide very beautiful & straightforward designs. The professional wordpress theme is a simple, crisp & concise Theme you can use to create a business website. The 3 short blurbs on the homepage are simple, which can be used to point them to your major offerings and the prominent slider indicates a clear call to action. There are 52 themes to choose from & Elegant Themes is giving a great offer at such a small yearly fee. Demo | Price: $39 Yearly Membership  | DOWNLOAD Elegant Themes has a cluster of 52 magnificent themes, and all you have to do is pay $39 to win access to all of them. Join today! Some of the Professional designs that I like for a business website are SimplePress and Corporation. Extatic by Chimera Themes The theme includes plenty of great features including custom feature tour pages, portfolio sections, static feature areas, pricing table page, 20+ shortcodes, multiple page/post options, unlimited custom sidebars which can be assigned to posts/pages, advanced theme style editor and options page and much more. Its a must buy Demo | Price: $37 | DOWNLOAD Corporate by Clover Themes Simple Theme for a small business. Corporate is an clean, powerful and feature-rich corporate theme with dynamic and energy design. Demo | Price: $69.95 | DOWNLOAD Bizco by Themify Bizco is a very professional template for wordpress targeted at corporate and product based businesses. This theme is simple yet highly functional and is suitable for showcasing features of your service or product. With the custom page template you can change the display of your pages and posts easily with our visual custom panel. 
Demo | Price: $70  |DOWNLOAD Devision by Themetrust Devision is a small business wordpress theme that can be used to make a business website within a few minutes. It makes it very easy to showcase and highlight your services or product on the homepage. Demo | Price: Euro 39 | DOWNLOAD BizPress by WPZoom A professional business WordPress theme from WPZoom suitable for companies, organizations, product showcases or other business websites. The theme comes with 4 colour options, featured products / services slider on the homepage, drop down menus, theme options page etc. Demo | Price: $ 69 | DOWNLOAD Clean Classy Corporate by ThemeFuse A very impressive WordPress business theme, that can be used in multiple ways. It is suitable for many kinds, like web products, services, hosting etc etc. Clean Classy Corporate WordPress Theme has a clean crisp look and is professional in appeal. Demo | Price: $49  | DOWNLOAD Insdustry by ThemeJam A powerful Business WordPress Template along with lots of options, colors, and customizable features. This is one for almost any kind of blogger, corporate, or organization. Lots of features, gives it the kind of scalability you might need to create any kind of website. Demo | Price: $ 59 | DOWNLOAD AppPress by ChimeraThemes This professional business WordPress theme includes 5 different colour schemes, advanced theme options page, multiple homepage sliders, custom widgets and page templates. The theme also includes a range of other unique features such as custom title, live style editor to modify colours, font styles, sizes etc, and 20+ shortcodes for creating pricing tables, content columns, boxes, buttons and others. Demo | Price: $ 37 | DOWNLOAD Why WordPress Professional Template? You can modify them, these usually come with a lot of fancy features that enable you to create the website as per your usability & choice. In some cases the  Premium WordPress business themes can be accessed through a subscription service. Premium Vs Free WordPress Themes There are very good Free WordPress themes out there that you can use to modify and code further or create what you want, but this possible when you are technically able. On the contrary Premium WordPress business themes offers great features & can save you a lot of time and money. It varies from business to business, some like to keep their website simple while most want to keep cool nifty features and abilities to scale it differently for various sections, products or categories. All this & more is possible with a Professional Business theme that is suitable/close to your needs.

    Read the article

  • [Windows 8] Update TextBox’s binding on TextChanged

    - by Benjamin Roux
Since UpdateSourceTrigger is not available in WinRT we cannot update the text’s binding of a TextBox at will (or at least not easily), especially when using MVVM (I surely don’t want to write code-behind to do that in each of my apps!). Since this kind of demand is frequent (for example to disable a button if the TextBox is empty) I decided to create some attached properties to simulate this missing behavior. namespace Indeed.Controls { public static class TextBoxEx { public static string GetRealTimeText(TextBox obj) { return (string)obj.GetValue(RealTimeTextProperty); } public static void SetRealTimeText(TextBox obj, string value) { obj.SetValue(RealTimeTextProperty, value); } public static readonly DependencyProperty RealTimeTextProperty = DependencyProperty.RegisterAttached("RealTimeText", typeof(string), typeof(TextBoxEx), null); public static bool GetIsAutoUpdate(TextBox obj) { return (bool)obj.GetValue(IsAutoUpdateProperty); } public static void SetIsAutoUpdate(TextBox obj, bool value) { obj.SetValue(IsAutoUpdateProperty, value); } public static readonly DependencyProperty IsAutoUpdateProperty = DependencyProperty.RegisterAttached("IsAutoUpdate", typeof(bool), typeof(TextBoxEx), new PropertyMetadata(false, OnIsAutoUpdateChanged)); private static void OnIsAutoUpdateChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e) { var value = (bool)e.NewValue; var textbox = (TextBox)sender; if (value) { Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>( o => textbox.TextChanged += o, o => textbox.TextChanged -= o) .Do(_ => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text)) .Subscribe(); } } } } The code is composed of two attached properties. The first one, “RealTimeText”, reflects the text in real time (updated after each TextChanged event). The second one is only used to enable the functionality. To subscribe to the TextChanged event I used Reactive Extensions (the Rx-Metro package on NuGet).
If you’re not familiar with this framework just replace that code with a simple: textbox.TextChanged += (s, args) => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text); To use these attached properties, it’s fairly simple: <TextBox Text="{Binding Path=MyProperty, Mode=TwoWay}" ic:TextBoxEx.IsAutoUpdate="True" ic:TextBoxEx.RealTimeText="{Binding Path=MyProperty, Mode=TwoWay}" /> Just make sure to create a binding (in TwoWay mode) for both Text and RealTimeText. Hope this helps!
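As a usage sketch (not from the original post — the class and property names are illustrative), the view model behind that TwoWay binding could look roughly like this, with MyProperty now refreshed on every keystroke:

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class SearchViewModel : INotifyPropertyChanged
{
    private string myProperty;

    // Bound to both Text and TextBoxEx.RealTimeText in TwoWay mode, so the
    // setter now runs on every keystroke instead of only when focus is lost.
    public string MyProperty
    {
        get { return myProperty; }
        set
        {
            if (myProperty == value) return;
            myProperty = value;
            OnPropertyChanged();
            // e.g. re-evaluate a command's CanExecute here to disable a
            // button while the TextBox is empty.
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}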

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>”. This number should be an integer value (defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string. <httpRuntime maxQueryStringLength=”<number here>”. This number should be an integer value (defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”List of characters you need included in ASP.NET’s validation checks”. By default the characters are “<,>,*,%,&,:,\,?”. However one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, I want the character ‘!’ to be included in ASP.NET’s URL validation logic. So I set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!”. A character could also be specified in its xml encoded form (‘&lt;’ would mean the ‘<’ sign). I could specify the ‘!’ in its xml encoded unicode format such as requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;” or in its unicode encoded form, i.e. the “<,>,*,%,&,:,\,?,%u0021” format. The following settings can be applied at Root Web.Config level, App Web.config level, Folder level or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidChars="" /> </system.web> </location> If any of the above settings fail request validation, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax. Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added with a default of false. <httpRuntime … relaxedUrlToFileSystemMapping="true|false" /> When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url.
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta ASP.NET QA Team
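As a quick illustration of the Application_Error handling mentioned above, here is a minimal, hypothetical sketch for Global.asax.cs; the friendly error page it redirects to is an assumption, not part of the original post:

protected void Application_Error(object sender, EventArgs e)
{
    // Requests rejected by the URL/query string validation settings above
    // surface as an HttpException with status code 400 ("Bad Request").
    var httpException = Server.GetLastError() as HttpException;
    if (httpException != null && httpException.GetHttpCode() == 400)
    {
        Server.ClearError();
        // Assumption: send the user to a friendly error page of your choosing.
        Response.Redirect("~/BadRequest.html");
    }
}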

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in-memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests for example), the second one enables more complete integration testing scenarios that could include some more components in the stack such as model binders or message handlers for example. The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors. public class HttpClient : HttpMessageInvoker { public HttpClient(); public HttpClient(HttpMessageHandler handler); public HttpClient(HttpMessageHandler handler, bool disposeHandler); } For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can use in your unit test. The only requirement is that you somehow inject an HttpClient with this custom handler into the client code. public class FakeHttpMessageHandler : HttpMessageHandler { HttpResponseMessage response; public FakeHttpMessageHandler(HttpResponseMessage response) { this.response = response; } protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken) { var tcs = new TaskCompletionSource<HttpResponseMessage>(); tcs.SetResult(response); return tcs.Task; } } In a unit test, you can do something like this.
var fakeResponse = new HttpResponseMessage(); var fakeHandler = new FakeHttpMessageHandler(fakeResponse); var httpClient = new HttpClient(fakeHandler); var customerService = new CustomerService(httpClient); // Do something // Asserts CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario, integration tests, there is an in-memory host, “System.Web.Http.HttpServer”, that also derives from HttpMessageHandler and that you can use with an HttpClient instance in your test. This has been discussed already in these two great posts from Pedro and Filip.
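As a rough sketch of that second scenario (assuming the ASP.NET Web API assemblies are referenced and an MSTest-style assert; the route template and controller name are placeholders), the in-memory HttpServer plugs into HttpClient exactly like the fake handler does:

// using System.Net; using System.Net.Http; using System.Web.Http;
var config = new HttpConfiguration();
config.Routes.MapHttpRoute("Default", "api/{controller}/{id}",
    new { id = RouteParameter.Optional });

// HttpServer is itself an HttpMessageHandler, so the request runs through the
// full Web API pipeline (routing, model binders, message handlers) in memory,
// without opening a socket.
var server = new HttpServer(config);
var client = new HttpClient(server);

var response = client.GetAsync("http://localhost/api/customers/1").Result;
Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);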

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
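For reference, the WorkerRole.cs generated by the Worker Role template looks roughly like the sketch below (paraphrased rather than the exact generated file); as noted above, it needs no changes for our purposes:

using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // The template just idles; the real work on this VM (running the
        // Test Agent) is set up separately later in the walkthrough.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }

    public override bool OnStart()
    {
        // Default limit for concurrent outbound connections.
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }
}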
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I'll be showing you how to install Connect on the on-premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure Worker Role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the 'service manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so all my cloud instances joined to my domain show up here; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

Now once you have filled all this information in, the configuration file should look something like below,

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Next we will be enabling the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or allow a beautiful wizard to help us make them; I prefer the second option. So right click on the Windows Azure project and choose Publish.

Once you get the publish wizard, if you haven't already, you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop up asking you for the details of the user that you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click that to get access to the certificate section. See the screen shot below.
From the drop down select the option to create a new certificate. In the pop up window enter the friendly name for your certificate; in my case I entered 'WAC – Test Rig' and clicked OK. This will create a new certificate for you. Click on the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don't want to publish the role just yet, and Yes because we want to save all the changes in the config file.

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder modules have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of "Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%" settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4aeADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWSAKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccgvzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaMCp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let's look at them one at a time.

Enabled – Yes, we would like to enable Remote Access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that; the certificate is used to encrypt the password you specified for the user account. Remember earlier I said either use the instructions or wait and I'll be showing you encryption. Now, the user account I am using for RDP has the same password as my domain password, so I can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.

AccountExpiration – This is the expiration you specified in the wizard earlier; make sure your account does not expire today.

Remote Forwarder – Check out the documentation; below is how I understand it:
-- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances.
-- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard; the on-premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on-premise machine.

As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any start up tasks to install the test agent or register the test agent with the TFS Server. I'll be showing you all this cool stuff in the next blog post. That's because it's important to understand the manual side of it; it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller
Glad you made it so far; now to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent we need to enable communication.
We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this.

Log on to VM – 2 running the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the below screen shot; if not, you will be asked to complete the subscription first.

Click on Install Local Endpoints from the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier? In theory the token you get here should match the token you added to the Test Agent config file.

Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.

Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal
But before that you need to import the certificate you created in Step I into the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates as shown in the screen shot below.

Browse to the location where you saved the certificate earlier (remember… refer to Step I in case you forgot). Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

IV. Publish Windows Azure Worker Role aka Test Agent
Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.

Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure. You will be able to log in with the user name and password you specified in the config file for the keys RemoteAccess.AccountUsername and RemoteAccess.AccountEncryptedPassword (just enter the password unencrypted), then fix it or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is successfully running. If yes, give yourself a pat on the back; you are 80% mission accomplished!

Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you enable communication between the Test Controller and the Test Agents, and between the Test Agents themselves. Click Save.

Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller
Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN, 'en_visual_studio_agents_2010_x86_x64_dvd_509679.iso', extract the iso and navigate to where you have extracted it. In my case, I have extracted the iso to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC blocking something otherwise).

In the run options you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller.

Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
I have various blog posts on Performance Testing with Visual Studio Ultimate; you can follow the links and videos below.
Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate
Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig: create a new test settings file, change the Test Execution method to 'Remote Execution' and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and What's next?
A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent. Most importantly, test network latency (network latency and speed of connection are two different things – network latency is usually very hard to test): by placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.

Next Steps
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to make the role deployment, unattended install of the test agent software and registration of the test agent with the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
In yesterday’s blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – Hadoop.

What is Hadoop?
Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and it processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google's MapReduce and Google File System (GFS) papers. The major advantage of the Hadoop framework is that it provides reliability and high availability.

What are the core components of Hadoop?
There are two major components of the Hadoop framework and each of them performs an important task for it. Hadoop MapReduce is the method to split a larger data problem into smaller chunks and distribute them to many different commodity servers. Each server has its own set of resources and processes the data locally. Once a commodity server has processed the data, it sends it back collectively to the main server. This is effectively a process where we process large data effectively and efficiently. (We will understand this in tomorrow's blog post.) Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between any other file system and Hadoop. When we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance or high availability. (We will understand this in the day after tomorrow's blog post.) Besides the above two core components, the Hadoop project also contains the following modules: Hadoop Common: common utilities for the other Hadoop modules. Hadoop Yarn: a framework for job scheduling and cluster resource management. There are a few other projects (like Pig, Hive) related to Hadoop as well, which we will gradually explore in later blog posts.

A Multi-node Hadoop Cluster Architecture
Now let us quickly see the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers. One of the layers is the MapReduce layer and the other is the HDFS layer. Each of these layers has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible that a slave or worker node is only a data or compute node; as a matter of fact, that is a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop. In a future blog post of this 31 day series we will explore the various components of the Hadoop architecture in detail.

Why Use Hadoop?
There are many advantages of using Hadoop. Let me quickly list them over here: Robust and Scalable – We can add new nodes as needed as well as modify them. Affordable and Cost Effective – We do not need any special hardware for running Hadoop. We can just use commodity servers. Adaptive and Flexible – Hadoop is built keeping in mind that it will handle structured and unstructured data. Highly Available and Fault Tolerant – When a node fails, the Hadoop framework automatically fails over to another node.

Why is Hadoop named Hadoop?
In 2005, Hadoop was created by Doug Cutting and Mike Cafarella while they were working at Yahoo. Doug Cutting named Hadoop after his son's toy elephant. Tomorrow In tomorrow's blog post we will discuss the Buzz Word – MapReduce. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Record Screen Activity with CamStudio

    - by Asian Angel
    Sometimes a visual demonstration works much better than a list of instructions. If you need to make a demo video for family and/or friends then you might want to have a look at CamStudio. Using CamStudio To get properly set up you will need to install two different files (the main program followed by the codec). Once that is done you are ready to get started. When you start the program you will see a surprisingly small window. Notice the highlighted Record to text…it serves as a visual indicator for the video type selected for recording. Before you start creating a video it would be a good idea to look through some of the settings. The first one to look at is the region or area that you want to record. Next you will want to look through the video options since these will affect the quality and final size of your video files. The default setting for quality is 70…adjust that to the level that best suits your needs. Note: For our example we maxed out the various video settings for best quality. On our system Microsoft Video 1 was listed as the default compressor but as you can see there were other options available. You can configure the settings for the compressor you want to use if desired. Keep in mind that each compressor will have unique settings of their own, so if you change it, be certain to go back and check. We decided to use the CamStudio Lossless Codec for our example (it gave the best results while trying the software). Going back to the main window you can toggle back and forth between .avi and .swf output using the last button. Once you are satisfied with the settings click on the red record button to start. If you need to pause while recording or stop recording click on the system tray icon and select the appropriate command. When you are finished recording, you will be presented with the save file window. Browse for the desired save location and name your new file. Once you have saved the file the movie player window will automatically open so that you view your new video. Our sample video shown here is at 50% of original size so may look slightly “gritty”. The detail was much better at 100%. If you decide to record and save as .swf the process will be identical to recording in .avi format until the movie player window opens. At that time the conversion process from .avi to .swf will begin. When complete you will have a new flash video and html file that goes with it. Depending on which browser you have set as default, you may run into a small problem when the preview for your new .swf file tries to open. There is a small bug in the generated html file. You can use this work-around or… Just open the .swf file directly in your favorite browser. Conclusion CamStudio may not produce the highest quality videos, but it’s free and does a very nice job nonetheless. If you are working on a tight budget or only need to make an occasional video then CamStudio is a very sensible choice. Links Download CamStudio Stable Version & CamStudio Codec *Download links are approximately half-way down the page. Download CamStudio Stable Version & CamStudio Codec at SourceForge *Beta version also available here. 

    Read the article

  • Populate a WCF syndication podcast using MP3 ID3 metadata tags

    - by brian_ritchie
In the last post, I showed how to create a podcast using WCF syndication.  A podcast is an RSS feed containing a list of audio files to which users can subscribe.  The podcast not only contains links to the audio files, but also metadata about each episode.  A cool approach to building the feed is reading this metadata from the ID3 tags on the MP3 files used for the podcast. One library to do this is TagLib-Sharp.  Here is some sample code:

var taggedFile = TagLib.File.Create(f);
var fileInfo = new FileInfo(f);
var item = new iTunesPodcastItem()
{
    title = taggedFile.Tag.Title,
    size = fileInfo.Length,
    url = feed.baseUrl + fileInfo.Name,
    duration = taggedFile.Properties.Duration,
    mediaType = feed.mediaType,
    summary = taggedFile.Tag.Comment,
    subTitle = taggedFile.Tag.FirstAlbumArtist,
    id = fileInfo.Name
};
if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
    item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);

This reads the ID3 tags into an object for later use in creating the syndication feed.  When the MP3 is created, these tags are set...or they can be set after the fact using the Properties dialog in Windows Explorer.  The only "hack" is that there isn't an easily accessible tag for "subtitle" or "published date" so I used other tags in this example. Feel free to change this to meet your purposes.  You could remove the subtitle & use the file modified date, for example. That takes care of the episodes; for the feed-level settings we'll load those from an XML file:

<?xml version="1.0" encoding="utf-8" ?>
<iTunesPodcastFeed
    baseUrl=""
    title=""
    subTitle=""
    description=""
    copyright=""
    category=""
    ownerName=""
    ownerEmail=""
    mediaType="audio/mp3"
    mediaFiles="*.mp3"
    imageUrl=""
    link=""
/>

Here is the full code put together.
Read the feed XML file and deserialize it into an iTunesPodcastFeed class.
Loop over the files in a directory, reading the ID3 tags from the audio files.

public static iTunesPodcastFeed CreateFeedFromFiles(string podcastDirectory, string podcastFeedFile)
{
    XmlSerializer serializer = new XmlSerializer(typeof(iTunesPodcastFeed));
    iTunesPodcastFeed feed;
    using (var fs = File.OpenRead(Path.Combine(podcastDirectory, podcastFeedFile)))
    {
        feed = (iTunesPodcastFeed)serializer.Deserialize(fs);
    }
    foreach (var f in Directory.GetFiles(podcastDirectory, feed.mediaFiles))
    {
        try
        {
            var taggedFile = TagLib.File.Create(f);
            var fileInfo = new FileInfo(f);
            var item = new iTunesPodcastItem()
            {
                title = taggedFile.Tag.Title,
                size = fileInfo.Length,
                url = feed.baseUrl + fileInfo.Name,
                duration = taggedFile.Properties.Duration,
                mediaType = feed.mediaType,
                summary = taggedFile.Tag.Comment,
                subTitle = taggedFile.Tag.FirstAlbumArtist,
                id = fileInfo.Name
            };
            if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
                item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);
            feed.Items.Add(item);
        }
        catch
        {
            // ignore files that can't be accessed successfully
        }
    }
    return feed;
}

Usually putting a "try...catch" like this is bad, but in this case I'm just skipping over files that are locked while they are being uploaded to the web site. Here is the code from the last couple of posts.
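As a quick, hedged illustration of how the method above might be called (PodcastBuilder and the folder path below are just placeholder names for this sketch, not part of the original posts):

// Hypothetical usage of CreateFeedFromFiles
var feed = PodcastBuilder.CreateFeedFromFiles(@"C:\Podcasts", "feed.xml");
foreach (var episode in feed.Items)
{
    // title, duration and url come straight from the ID3 tags read above
    Console.WriteLine("{0} ({1}) - {2}", episode.title, episode.duration, episode.url);
}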

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model from or to a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or the ReadFromStreamAsync method for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below.

public abstract class MediaTypeFormatter
{
    public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext);
    public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger);
}

However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task using the method Task.Factory.StartNew for doing all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, that might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code.

If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, which uses a TaskCompletionSource class.
public Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
{
    var tsc = new TaskCompletionSource<AsyncVoid>();
    tsc.SetResult(default(AsyncVoid));

    // Do all the serialization work here synchronously

    return tsc.Task;
}

/// <summary>
/// Used as the T in a "conversion" of a Task into a Task{T}
/// </summary>
private struct AsyncVoid
{
}

They are basically doing all the serialization work synchronously and using a TaskCompletionSource for returning a task already done. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model. Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter for writing sync formatters: BufferedMediaTypeFormatter http://t.co/FxOfeI5x
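If you go down the BufferedMediaTypeFormatter route, you override synchronous methods and the base class takes care of wrapping them in a completed Task for you. The sketch below is a hypothetical plain-text formatter I put together to illustrate the idea; it is not taken from the framework source, so treat it as a starting point and verify the overloads against your Web API version.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Text;

public class PlainTextFormatter : BufferedMediaTypeFormatter
{
    public PlainTextFormatter()
    {
        // Advertise the media type this formatter handles.
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
    }

    public override bool CanReadType(Type type)  { return type == typeof(string); }
    public override bool CanWriteType(Type type) { return type == typeof(string); }

    // Synchronous write - the base class exposes this instead of WriteToStreamAsync.
    public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
    {
        using (var writer = new StreamWriter(writeStream, Encoding.UTF8))
        {
            writer.Write((string)value);
        }
    }

    // Synchronous read.
    public override object ReadFromStream(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
    {
        using (var reader = new StreamReader(readStream, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }
}

You would then register it in your HttpConfiguration's Formatters collection like any other formatter.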

    Read the article

  • Top ten things that don't make sense in The Walking Dead

    - by iamjames
For those of you that don't know, The Walking Dead is a popular American TV show on AMC about a group of people trying to survive in a zombie-filled world. Here's the top ten eleven things that don't make sense on the show (and have never been explained). 1)  They never visit stores.  No Walmarts, Kmarts, Targets, shopping malls, pawn shops, gas stations, etc.  You'd think that would be the first place you'd visit for supplies, but they never have.  Not once.  There was a tiny corner store they visited in a small town, and while many products were already gone they did find several useful items.  2)  They never raid houses.  Why not?  One would imagine that they would want to search houses for useful items, but they don't. 3)  They don't use 2-way radios.  Modern 2-way radios have a 36-mile range.  That's probably the best possible range, but even if the range is only 10% of that, 3.6 miles, that's still more than enough for most situations, for the occasional "hey zombies attacking can you give me a hand?" or "there's zombies walking by stay inside until they leave" or "remember to pick up milk at the store love mom".  And yes they would need batteries or recharging, but they have been using gas-powered generators on the show and I'm sure a car charger would work. 4)  They use gas-guzzling vehicles.  Every vehicle they have is from the 80s or 90s except for the new Kia SUV there for product placement.  Why?  They should all be driving new small SUVs or hybrids.  Visit a dealership and steal more fuel-efficient vehicles, because while the Walmarts might be empty from people raiding them for supplies, I'm sure most people weren't thinking "Gee, I should go car shopping" when the infection hit. 5)  They drive a motorcycle.  Seriously?  Let's find the least protective vehicle and drive that.  And while motorcycles get reasonable gas mileage, 5 people in an SUV get better gas mileage per person than 5 people all driving motorcycles, so it doesn't make economical sense either. 6)  They drive loud vehicles.  The motorcycle used is commonly referred to as a chopper and is about as loud as a motorcycle can get.  The zombies are attracted to loud noise, so wouldn't it make more sense to drive vehicles that make less sound?  Because as soon as you stop the bike and get off you're surrounded by zombies that heard you coming.  And it's not just the bike; the ~1980s Chevy SUV in the show is also very loud. 7)  They never run out of food.  Seems like that would be an almost daily struggle, keeping enough food available for about a dozen people, yet I've never seen them visit a grocery store or local convenience store to stock up. 8)  They don't carry swords, machetes, clubs, etc.  Let's face it, biting is not a very effective means of attack.  It's good for animals because they have fangs and little else, but humans have been finding better ways of killing each other since forever.  So why doesn't everyone on the show carry a sword or machete or at least a baseball bat?  Anything is better than wasting valuable bullets all the time.  Sure, a dozen zombies approaching?  Shoot them.  One zombie approaching?  Save the bullet, cut off its head.  9)  They do not wear protective clothing.  Human teeth are not exactly the sharpest teeth in the animal kingdom.  The leather shoes your dog ripped to shreds within minutes would probably take you days to bite through.  So why do they walk around half-naked?  
Yes I know it's hot in Atlanta, but you'd think they'd at least have some tough leather coats or something for protection.  Maybe put a few small vent holes in the fabric if it's really hot.  Or better: make your own chainmail.  Chainmail was used for thousands of years for protection from swords and is still used by scuba divers for protection from sharks.  If swords and sharks can't puncture it, human teeth don't stand a chance.  10)  They don't build barricades or dig trenches around properties.  In Season 2 they stayed at a farm in the middle of nowhere.  While being far away from people is a great way to stay far away from zombies, it would still make sense to build some sort of defenses.  Hordes of zombies would knock down almost any fence, but what about a trench or moat?  Maybe something not too wide, so it can be jumped over easily but a zombie would fall into it, because I haven't seen too many jumping zombies on the show.  11)  They don't live in a mall or tall office building.  A mall would be perfect.  They have large security gates designed to keep even hundreds of people from breaking in and offer lots of supplies and food.  They're usually hundreds of thousands of square feet and fully enclosed; one could probably live their entire life happily in a mall.  A tall office building with an on-site cafeteria would be another good choice.  They also usually offer good security, office furniture could be pushed out of the windows to crush approaching zombies, and the cafeteria is usually stocked to provide food for hundreds or thousands of office workers, so food wouldn't be a problem for a long time. So there you have it, eleven things that don't make sense in The Walking Dead.  Have any of your own you'd like to add, or was one of these things covered in the show?  Let me know in the comments.

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine tuning a CSS file for the finished look and feel. Among many other new features, in the past six months, JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features:
http://openjdk.java.net/projects/jdk8
http://openjdk.java.net/projects/jdk8/spec
http://jdk8.java.net
From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity, and HTML 5 functionality (WebSocket, JSON, and HTML 5 forms). 
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete the software is deployed to the relevant customer. The team only last year moved from Visual Source Safe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Myself coming from a pharma and medical devices software background, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem as the company is starting to grow all of a sudden with some big clients coming on board and more small ones. I have been asked to look at source control options in order to eliminate deploying of buggy or unfinished code but not sacrifice the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously most teams I have worked with used a two or three branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either Cruise Control or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made...and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. 
Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much with a pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd to not have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this? A second, final question: I am somewhat confused about the exact definition of distributed source control: some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode) and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
I'd like to give you a bit of background first… so please bear with me! In 1996, whilst studying for the final year of my degree, I applied for a job as a C++ Developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn't get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good, so in the absence of anything else I took it. Here began my career in the world of software testing! Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second-class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn't find myself exposed to any of the changes that were happening in the industry. It wasn't until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…

In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details for the "Agile Testing Days" conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I'd shied away from attending conferences in the past, one of the main ones being that I didn't see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.

Tutorial Day – "Software Testing Reloaded"
We chose to do the tutorials on the 19th. I chose the one titled "Software Testing Reloaded – So you wanna actually DO something? We've got just the workshop for you. Now with even less powerpoint!". With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables and chairs all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I'll be using at work as a training exercise! (I won't explain the game here cause I don't want to let the cat out of the bag…)

The tutorial consisted of several games, exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice. 
The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!! Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, challenge requirements, and have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction into the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…  

    Read the article

  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly...but there was one little "gotcha" that took more time than the few seconds it really warranted. The Problem Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this: @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ") @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1) Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss. Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate. The Solution Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "Primary Key Perfect Storm": Fire up NetBeans. Create JSF project. Create Entity Classes from Database. Create JSF Pages from Entity Classes. Test run. Try to create record and hit error. It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen. First, you need to ensure the following annotation is in place in your Java entity classes: @GeneratedValue(strategy = GenerationType.IDENTITY) All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like this, minus the comment line and the setId() call in bold red type:     public String create() {         try {             // Assign 0 to ID for MySQL to properly auto_increment the primary key.             current.setId(0);             getFacade().create(current);             JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));             return prepareCreate();         } catch (Exception e) {             JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));             return null;         }     } Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple…but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before. Hope this helps! All the best, Mark

    Read the article

  • Computer Networks UNISA - Chap 8 &ndash; Wireless Networking

    - by MarkPearl
    After reading this section you should be able to Explain how nodes exchange wireless signals Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection Understand WLAN architecture Specify the characteristics of popular WLAN transmission methods including 802.11 a/b/g/n Install and configure wireless access points and their clients Describe wireless MAN and WAN technologies, including 802.16 and satellite communications The Wireless Spectrum All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz. Characteristics of Wireless Transmission Antennas Each type of wireless service requires an antenna specifically designed for that service. The service’s specifications determine the antenna’s power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range. Signal Propagation LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may… pass through the object, be absorbed by the object, or be subject to reflection, diffraction or scattering. Reflection – waves encounter an object and bounce off it. Diffraction – a signal splits into secondary waves when it encounters an obstruction. Scattering – the diffusion, or reflection in multiple different directions, of a signal. Signal Degradation Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise – electromagnetic interference. Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal. Frequency Ranges Older wireless devices used the 2.4 GHz band to send and receive signals. This band has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels. Narrowband, Broadband, and Spread Spectrum Signals Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology. In other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum). Another type is known as DSSS (direct sequence spread spectrum). Fixed vs. 
Mobile Each type of wireless communication falls into one of two categories: Fixed – the locations of the transmitter and receiver do not move (this saves energy because weaker signal strength is possible with directional antennas); Mobile – the location can change. WLAN (Wireless LAN) Architecture There are two main types of arrangements: Ad hoc – data is sent directly between devices – good for small local networks; Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it. 802.11 WLANs The most popular wireless standards used on contemporary LANs are those developed by IEEE’s 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best known standards are also referred to as Wi-Fi. They are… 802.11b 802.11a 802.11g 802.11n These four standards share many characteristics, i.e. all four use half-duplex signalling and follow the same access method. Access Method 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again. Association Two types of scanning… Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response. Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point – the beacon frame contains information necessary to connect to the point. Re-association occurs when a mobile user moves out of one access point’s range and into the range of another. Frames Read pages 378 – 381 about frames and specific 802.11 protocols. Bluetooth Networks Ericsson originally invented the Bluetooth technology in the 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc. Wireless WANs and Internet Access Refer to pages 396 – 402 of the textbook for details.
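To make the CSMA/CA steps above a little more concrete, here is a minimal, illustrative C# sketch of the listen–wait–send–ACK loop. The IWirelessChannel interface, timings and retry count are assumptions made purely for illustration; real 802.11 medium access lives in the adapter's hardware and firmware, not in application code.

using System;
using System.Threading;

// Hypothetical channel abstraction used only to illustrate the CSMA/CA steps.
public interface IWirelessChannel
{
    bool TransmissionDetected();          // "carrier sense": is anyone else transmitting?
    void Transmit(byte[] frame);
    bool WaitForAck(TimeSpan timeout);    // destination verifies the frame and returns an ACK
}

public static class CsmaCaSender
{
    private static readonly Random Backoff = new Random();

    // Returns true once the frame is acknowledged, false after too many failed attempts.
    public static bool Send(IWirelessChannel channel, byte[] frame, int maxAttempts = 7)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            // 1. Check for existing wireless transmissions.
            if (channel.TransmissionDetected())
            {
                // 2. Activity detected: wait a brief (random) period, then check again.
                Thread.Sleep(Backoff.Next(1, 20));
                continue;
            }

            // 3. No activity: wait a brief period, then send the transmission.
            Thread.Sleep(1);
            channel.Transmit(frame);

            // 4. An ACK means success; no ACK means the transmission is assumed failed
            //    and is sent again on the next loop iteration.
            if (channel.WaitForAck(TimeSpan.FromMilliseconds(10)))
                return true;
        }
        return false;
    }
}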

    Read the article

  • Is IE9 really good?

    - by anirudha
    The IE team has started a campaign to kill off IE6, because they know IE6 is a big obstacle to promoting version 9 of IE. Next time they will campaign to kill IE7, 8 or 9, whenever an old version becomes a problem for promoting the next one. Why not build an update system that upgrades the browser automatically and simply asks the user to restart? IE should learn from the others here: Chrome and Firefox both have well-designed auto-update systems that never leave the user stuck on an outdated browser – they update themselves and just ask for a restart to enjoy the new version. A big problem with IE6 is exactly this updating. Nobody is sure a new version will install without hassle, so users think “why should I update IE when I am not sure the update will go smoothly, and I have no problem right now?” – and they do nothing, because the mainstream applications they use still work even in IE6. The IE6 countdown website offers a banner designed to warn or push users into upgrading to the next version of IE. I see no good reason to put that banner on a website, for a few reasons: Windows 7 comes with IE8 pre-installed and Vista can be upgraded beyond IE6, so the banner mostly targets users on Windows XP – and if they want to upgrade IE, they can only get IE8, not IE9, because IE9 is designed for Windows 7 or Vista Service Pack 2. So what is the point of forcing an upgrade when the user still ends up on an outdated version? IE8 is already old and has no HTML5 capability, so pushing users through the banner makes little sense. I don’t know why the sites listed there put the banner on their own pages. It would be better to offer users what they actually want – a list of browsers they can try to improve their browsing experience – instead of handing them another outdated version of IE. IE9 is built on WPF, and it feels like more time was spent on using WPF in IE than on making the browser a good user experience. Many things are designed poorly. The first is the tabs: Chrome’s tabs are bigger and easy to move around, and Firefox’s are similar even if its tab dragging is less smooth. IE has tabs like Chrome’s, but they are too small; if you try to move one, it sometimes tears off into a separate IE window. Chrome has big buttons, tabs and menus to improve the browsing experience, and Firefox has a nice feature that lets you make them bigger or smaller and put add-on icons on the toolbar for easy access. IE offers no customization at all, so there is nothing to think about there. Chrome provides lots of extensions and a web store for browser applications, and the same goes for Firefox, but there is almost nothing for IE. Look at the IE add-ons website: no plugins listed for web development, not even under the categories or tags. In response, many blogs point to the new IE9 developer tools – a blogger described the three new tabs they add. When I tried them I found a few things, but I am still unable to edit the CSS from the HTML tab, and I found no plugins to improve web development in IE9. 
There is plenty the other browsers provide that IE9 never gives me – personas, customization, browser extensions and other small touches. IE9 also still has a problem with JavaScript and cookies: when I log out in Firefox or Chrome my cookie is deleted, but in IE it is not. It shows that where IE9 differs from the others, it is not always for the better. When I tried to read an article written in Hindi using a Unicode font, many characters appeared misspelled – there are three “sha” characters in Hindi and they all render wrongly in IE. The misprint is not a problem with the article’s text; it is a problem with the browser’s font rendering. Firefox and Chrome do not give me this problem, and even Opera, which renders the font in an italic style at a smaller size, gets all the characters right. At Pwn2Own, both Apple’s Safari and IE9 were hacked. That is interesting news for those who think open source loses on security and closed source is highly secure. It is simply not a good measure for judging software; what matters is how much an application is tested and used, because more testing and more use make an application better. I appreciate that the IE team shipped version 9 and I wish them luck – it is just that, personally, I found nothing in it for me.

    Read the article

  • Retrieving recent tweets using LINQ

    - by brian_ritchie
    There are a few different APIs for accessing Twitter from .NET. In this example, I'll use linq2twitter. Other APIs can be found on Twitter's development site. First off, we'll use the LINQ provider to pull in the recent tweets.

public static Status[] GetLatestTweets(string screenName, int numTweets)
{
    try
    {
        var twitterCtx = new LinqToTwitter.TwitterContext();
        var list = from tweet in twitterCtx.Status
                   where tweet.Type == StatusType.User &&
                         tweet.ScreenName == screenName
                   orderby tweet.CreatedAt descending
                   select tweet;
        // using Take() on array because it was failing against the provider
        var recentTweets = list.ToArray().Take(numTweets).ToArray();
        return recentTweets;
    }
    catch
    {
        return new Status[0];
    }
}

Once they have been retrieved, they would be placed inside an MVC model. Next, the tweets need to be formatted for display. I've defined an extension method to aid with date formatting:

public static class DateTimeExtension
{
    public static string ToAgo(this DateTime date2)
    {
        DateTime date1 = DateTime.Now;
        if (DateTime.Compare(date1, date2) >= 0)
        {
            TimeSpan ts = date1.Subtract(date2);
            if (ts.TotalDays >= 1)
                return string.Format("{0} days", (int)ts.TotalDays);
            else if (ts.Hours > 2)
                return string.Format("{0} hours", ts.Hours);
            else if (ts.Hours > 0)
                return string.Format("{0} hours, {1} minutes", ts.Hours, ts.Minutes);
            else if (ts.Minutes > 5)
                return string.Format("{0} minutes", ts.Minutes);
            else if (ts.Minutes > 0)
                return string.Format("{0} minutes, {1} seconds", ts.Minutes, ts.Seconds);
            else
                return string.Format("{0} seconds", ts.Seconds);
        }
        else
            return "Not valid";
    }
}

Finally, here is the piece of the view used to render the tweets. 
<ul class="tweets">
<%
foreach (var tweet in Model.Tweets)
{
%>
    <li class="tweets">
        <span class="tweetTime"><%=tweet.CreatedAt.ToAgo() %> ago</span>:
        <%=tweet.Text%>
    </li>
<%} %>
</ul>
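To round the example off, here is a minimal sketch of how these pieces might be wired together in an ASP.NET MVC controller. The TweetsModel and HomeController names, and the TwitterHelper class assumed to hold GetLatestTweets(), are illustrative assumptions rather than code from the original post.

using System.Web.Mvc;
using LinqToTwitter;

// Hypothetical view model holding the tweets rendered by the view above.
public class TweetsModel
{
    public Status[] Tweets { get; set; }
}

// Hypothetical controller; "TwitterHelper" stands in for whatever static class
// contains the GetLatestTweets() method shown earlier.
public class HomeController : Controller
{
    public ActionResult Index()
    {
        var model = new TweetsModel
        {
            // Grab the five most recent tweets for the account and pass them to the view,
            // which formats each timestamp with the ToAgo() extension method.
            Tweets = TwitterHelper.GetLatestTweets("yourscreenname", 5)
        };
        return View(model);
    }
}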

    Read the article

  • South Florida Code Camp 2010 &ndash; VI &ndash; 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event which makes everything else really easy! Statistics from 2010 South Florida Code Camp: 848 registered (we use Microsoft Group Events) ~ 600 attended (516 took name badges) 64 speakers (including speaker idol) 72 sessions 12 parallel tracks Food 400 waters 600 sodas 900 cups of coffee (it was cold!) 200 pounds of ice 200 pizza's 10 large salad trays 900 mouse pads Photos on facebook Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361 Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950 Will Strohl:http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484 Florida Speaker Idol One of the sessions at code camp was the South Florida Regional speaker idol competition. After user group level competitions there are five competitors. I acted as MC and score keeper while Ed Hill, Bob O’Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland and the winner, Jeff Truman from Naples will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were: Alex Koval Jeff Truman Jared Nielsen Chris Catto Venkat Narayanasamy They all did a great job and I’m working with each to make sure they don’t stop there and start speaking at meetings. Thanks to everyone involved! Volunteers As always events like this don’t happen without a lot of help! The key people were: Ed Hill, Bob O’Connell – DeVry For the months leading up to the event, Ed collects all of the swag, books, etc and stores them. He holds meeting with various DeVry departments to coordinate the day, he works with the students in the days  before code camp to stuff bags, print signs, arrange tables and visit BJ’s for our supplies (I go and pay but have a small car!). And of course the day of the event he is there at 5:30 am!! We took two SUV’s to BJ’s, i was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event. Rainer Haberman – Speakers and Volunteer of the Year Rainer has helped over the past couple of years but this time he took full control of arranging the tracks. I did some preliminary work solicitation speakers but he took over all communications after that. We have tried various organizations around speakers, chair per track, central team but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudo’s from the speakers too saying they felt it was more organized than they have experienced in the past from any code camp. Thanks Rainer! Ray Alamonte – Book Swap We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153 which I rounded up and donated to the Red Cross. 
There is plenty going on in Haiti and Chile! I don’t think we really got a count of how many books came in. I many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books which we donated to the DeVry library. A great success we will definitely do again! Jace Weiss / Ratchelen Hut – Coffee and Snacks Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous and always ended up running out of coffee. In the past we have tried buying Dunkin Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased 2 – 100 cup Westbend commercial brewers plus a couple of small urns (30 and 60 cup we used for decaf). We got them both started early (although i forgot to push the on button on one!) and primed it with 10 boxes of Joe from Dunkin. then Jace and Rachelen took over.. once a batch was brewed they would refill the boxes, keep the area clean and at one point were filling cups. We never ran out of coffee and served a few hundred more than last  year. We did look but next year I’ll get a large insulated (like gatorade) dispensing container. It all went very smoothly and having help focused on that one area was a big win. Thanks Jace and Rachelen! Ken & Shirley Golding / Roberta Barbosa – Registration Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered which was great because it is much easier to remember someone’s name when they are labeled! In any case it went the smoothest it has ever gone. All three were actively pulling people through the registration, answering questions, directing them to bags and information very quickly. I did not see that there was too big a line at any time. Thanks!! Scott Katarincic / Vishal Shukla – Website For the 3rd?? year in a row, Scott was in charge of the website starting in August or September when I start on code camp. He handles all the requests, makes changes to the site and admin. I think two years ago he wrote all the backend administration and tunes it and the website a bit but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old ajax page. We will continue to enhance this but it is definitely a good step forward! Thanks! Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables and this year the mouse pads. He is also a key person to help promote the event as well not to mention the after after party which I did not attend and don’t want to know much about! Students There were a number of student volunteers but don’t have all of their names. But thanks to them, they stuffed bags, patrolled pizza and helped with moving things around. Sponsors We had a bunch of great sponsors which allowed us to feed people and give a way a lot of great swag. Our major sponsors of DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick are very much appreciated. 
The other sponsors Applied Innovations (also supply code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts),Boxes and Arrows (a local SW consulting company) and Robert Half. One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pad’s I’ve determined that the 1/8” fabric top is the best and that is what we got!   So now I get a break for a few months before starting again!

    Read the article

  • Building Great-Looking, Usable Apps: A two-day workshop applying Oracle’s best UX practices in ADF

    - by mvaughan
    By Misha Vaughan, Oracle Applications User ExperienceI have been with Oracle for more than 12 years. It is a company that has granted me extraordinary creative freedom to help deliver compelling experiences for customers.I am beyond proud to talk about one of the experiences we just took for a test drive. Recently, we delivered a first-of-its-kind, three-team collaboration, train-the-trainer event in Reading, U.K., on building great-looking, usable apps based on Oracle Fusion Applications -- using the ADF tool kit. A new kind of workshopKevin Li, Platform Product Director, asked the Oracle Applications User Experience VP, Jeremy Ashley, if the team had anything to help partners and customers build applications that looked like Fusion. He was receiving this request from European partners and customers.Some quick conversations ensued, and the idea for the workshop was born: We would conduct an experiment.  We would work with feedback from the key Platform Technology Solutions (PTS) trainers under Andre Pavanello, Director, Platform Technology Solutions, in Europe, Middle East, and Africa. We would partner with the ADF team lead by Grant Ronald, Director of Product Management, title> and leverage the Applications UX expertise in Ashley’s team.The goal: Create a pilot workshop that in two days would explain to an ADF developer how to leverage the next-generation user experience best-practices developed for Fusion Apps. Why? Customers who need integrations with Oracle Fusion Applications, who are looking for custom applications that need to co-exist with Fusion, or who quite simply want a next-generation design for a custom app, need their solutions to reflect the next-generation research and design.Building an event for an ADF developerThe biggest hurdle was figuring out where to start.  How far into user experience country do you take an ADF developer? How far into ADF do you need to go if you are a UX professional?After some time in the UX kitchen, the workshop recipe looked like this: Mix equal parts: Fusion user experience design principles and functional design patterns The art and science behind UX How to wireframe designs that you can build in Fusion How to translate those designs into an ADF application Ultan O’Broin, Director of Global User Experience, explaining the trouble ticket wireframe design exerciseLynn Munsinger, Senior Group Product Manager, explaining the follow-on trouble ticket ADF coding exercise For spice, add:•    Debra Lilley, Fujitsu and ACE director, showcasing some of the latest ADF design work in the new face of Fusion Applications •    Partner show-and-tell of example apps they have built with FMW and ADF that are dynamic, beautiful, and interactive.Debra Lilley, Oracle ACE Director and Fujitsu Fusion Champion on the new face of Fusion built with ADF and Fusion extensibility with composers as a window into “the possible”?The taste testThis first go-round of the workshop was aimed squarely at ADF developers and partners.  We were privileged to have participation and feedback from:•    Sten Vesterli, Scott/Tiger S. A., Denmark•    John Sim, Fishbowl Solutions, UK•    Josef Huber, Primus Delphi Group, Munich•    Thaddaus Weindl, Primus Delphi, Group , Munich•    Praveen Pillalamarri, EiS Technologies, Bangalore•    Balaji Kamepalli, EiS Technologies, Bangalore•    Plinio Arbizu, Services & Processes Solutions S. 
A., Mexico•    Yannick Ongena, infoMENTUM, UK•    Jakub Ciszek, infoMENTUM, UK•    Mauro Flores, infoMENTUM, UK•    Matteo Formica, infoMENTUM, UKRichard Bingham, Oracle, Mauro Flores and Matteo Formica, infoMENTUMWhy is this so exciting?  Oracle has invested heavily in the research and development of the Oracle Fusion Applications user experience. This investment has been and continues to be applied across the product lines. Now, we finally get to teach customers and partners how to take advantage of this investment for custom solutions.This event was a pilot to test-drive the content, as well as a train-the-trainer event that our EMEA colleagues will be using with partners who want to build with Fusion Apps design patterns.What did attendees think?"I liked most the science stuff, like eye-tracking, design patterns and best-practice (color, contrast),” Josef Huber said. “It was a very good introduction to UI design, and most developers and project managers are very bad in that.  So this course would be good for all developers and even project managers." Team Anonymous: John Sim, Fishbowl Solutions, Flavius Sana, Oracle, Josef Huber, infoMENTUM, Mireille Duroussaud, Oracle. Winners of the wireframing design exercise.  Sten Vesterli, of Scott/Tiger, said he attended to learn techniques he could use in his own projects. He wants to ensure that his applications better meet the needs of his users, and he said sessions during the workshop on user interface design and wireframing were most useful to him.  “Go to this event to learn the art and science of good user interfaces from people who really know how to do it,” he said.Sten Vesterli, Scott/Tiger, Angelo Santagata, Oracle Plinio Arbizu said the workshop fulfilled his goals, thanks to the recommendations given in how to design user interfaces to facilitate the adoption of applications among the final users. “The workshop combined these recommendations with an exercise that improved the technical comprehension, permitting the usage of JDeveloper to set forth our solutions,” he said. He added: “The first session that I really enjoyed was the five Fusion design principles. It was incredible to discover how these simple principles were included in an inherit manner in Fusion Applications, and I had been using many of them applying only ADF components.  Another topic that I enjoyed a lot was the eight recommendations about the visual design of UIs. The issues that were raised in that lesson are unknown to the developers and of great value to achieve an attractive presentation layer to the end users.  Participate in this workshop, and include these usability features in your projects and in this manner not only to facilitate and improve the user productivity, but also to distinguish you as a professional who takes advantage fully of the functionalities offered by Oracle technology. Praveen Pillalamarri came to the workshop to learn about the difficulties faced in UI and UX development, and how this can be resolved with the help of ADF.  He also appreciated the opportunity to talk with other individuals who came to the workshop. Pillalmarri said, “The way we looked at things in terms of work and projects were sharpened.  UI and UX design knowledge shared by you was quite interesting, especially the minute things which we ignored in the UI or UX design.” Plinio Arbizu, Services & Processes Solutions S. 
A., Richard Bingham, Oracle, Balaji Kamepalli, & Praveen Pillalamarri, EiS TechnologiesReady to spread the wordIn EMEA, Oracle customers and partners have access to three world-class trainers via Platform Technology Solutions: Mireille Duroussaud, Flavius Sana, and Angelo Santagata. Contact Andre Pavanello if you like to experience this workshop firsthand, or you have customers or partners who would benefit from the training.We are looking to bring the event to the U.S. in spring 2013. If you have interest in this kind of a workshop, leave a comment below. For those who want to follow the action, join the ADF Enterprise Methodology Group run by Oracle’s Chris Muir. Ask questions and continue with the conversation in this forum, or check blogs.oracle.com/usableapps for topics emerging from the workshop.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there has been several reports on TFS databases growing too fast and growing too big.  Notable this has been observed when one has started to use more features of the Testing system.  Also, the TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence of this there has been released some tools to remove unneeded data in the database, and also some fixes to correct for bugs which has been found and corrected during this process.  Further some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among these are: Anu’s very important blog post here describes both the problem and solutions to handle it.  She describes both the Test Attachment Cleaner tool, and also some QFE/CU releases to fix some underlying bugs which prevented the tool from being fully effective. Brian Harry’s blog post here describes the problem too This forum thread describes the problem with some solution hints. Ravi Shanker’s blog post here describes best practices on solving this (TBP) Grant Holidays blogpost here describes strategies to use the Test Attachment Cleaner both to detect space problems and how to rectify them.   The problem can be divided into the following areas: Publishing of test results from builds Publishing of manual test results and their attachments in particular Publishing of deployment binaries for use during a test run Bugs in SQL server preventing total cleanup of data (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run.  Some of this data can grow rather large, like IntelliTrace logs and video recordings.   Also the pushing of binaries which happen for automated test runs, including tests run during a build using code coverage which will include all the files in the deployment folder, contributes a lot to the size of the attached data.   In order to handle this systematically, I have set up a 3-stage process: Find out if you have a database space issue Set up your TFS server to minimize potential database issues If you have the “problem”, clean up the database and otherwise keep it clean   Analyze the data Are your database( s) growing ?  Are unused test results growing out of proportion ? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some  more detailed information. If you don’t have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or, you can use a set of queries . I find queries often faster to use because I can tweak them the way I want them.  But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems: (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure have been changed.  I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS Databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.  
use master select DB_NAME(database_id) AS DBName, (size/128) SizeInMB FROM sys.master_files where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration' order by size desc Doing this on one of our SQL servers gives the following results: It is pretty easy to see on which collection to start the work   Find out which tables are possibly too large Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: From Grant’s blog we learnt that the tbl_Content is in the Version Control category, so the major only big issue we have here is the tbl_AttachmentContent.   Find out which team projects have possibly too large attachments In order to use the TAC to find and eventually delete attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replace the collection database name with whatever applies in your case:   use Tfs_DefaultCollection select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by p.projectname order by sum(a.compressedlength) desc In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years: As can be seen here it is pretty obvious the “Byggtjeneste – Projects” are the main team project to take care of, with the ones on lines 2-4 as the next ones.  Check which attachment types takes up the most space It can be nice to know which attachment types takes up the space, so run the following query: use Tfs_DefaultCollection select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB from dbo.tbl_Attachment as a inner join tbl_testrun as tr on a.testrunid=tr.testrunid inner join tbl_project as p on p.projectid=tr.projectid group by a.attachmenttype order by sum(a.compressedlength) desc We then got this result: From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog. Check which file types, by their extension, takes up the most space Run the following query use Tfs_DefaultCollection select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)as Extension, sum(compressedlength)/1024 as SizeInKB from tbl_Attachment group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) order by sum(compressedlength) desc This gives a result like this:   Now you should have collected enough information to tell you what to do – if you got to do something, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.    Get your TFS server and environment properly set up Even if you have got the problem or if have yet not got the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized.  To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level. Install the QFE for KB2608743 – which also contains detailed instructions on its use, download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs. 
Binaries will still be uploaded if: Code coverage is enabled in the test settings. You change the UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user which haven't installed this QFE. The hotfix should be installed to The build servers (the build agents) The machine hosting the Test Controller Local development computers (Visual Studio) Local test computers (MTM) It is not required to install it to the TFS Server, test agents or the build controller – it has no effect on these programs. If you use the SQL Server 2008 R2 you should also install the CU 10 (or later).  This CU fixes a potential problem of hanging “ghost” files.  This seems to happen only in certain trigger situations, but to ensure it doesn’t bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2 Work around:  If you suspect hanging ghost files, they can be – with some mental effort, deduced from the ghost counters using the following SQL query: use master SELECT DB_NAME(database_id) as 'database',OBJECT_NAME(object_id) as 'objectname', index_type_desc,ghost_record_count,version_ghost_record_count,record_count,avg_record_size_in_bytes FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL , 'DETAILED') The problem is a stalled ghost cleanup process.  Restarting the SQL server after having stopped all components that depends on it, like the TFS Server and SPS services – that is all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again.  (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1.  The R2 pre-SP1 and R2 SP1 have separate maintenance cycles, and are maintained individually. Each have its own set of CU’s. When it comes I will add the link here to that CU. The "hanging ghost file” issue came up after one have run the TAC, and deleted enourmes amount of data.  The SQL Server can get into this hanging state (without the QFE) in certain cases due to this. And of course, install and set up the Test Attachment Cleaner command line power tool.  This should be done following some guidelines from Ravi Shanker: “When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed. “ This rule minimizes the risk of the ghosted hang problem to occur, and further makes it easier for the SQL server ghosting process to work smoothly. “Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system” This is the last step in a 3 step process of removing SQL server data. First they are logically deleted. Then they are cleaned out by the ghosting process, and finally removed using the shrinkdb command. Cleaning out the attachments The TAC is run from the command line using a set of parameters and controlled by a settingsfile.  The parameters point out a server uri including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly one has to set up a script to run the TAC multiple times, once for each team project.  
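As a rough sketch of what such a script could look like, the C# console program below shells out to the TAC once per team project, using the tcmpt command-line format shown later in this post. The collection URL, settings file path and project names are placeholders to substitute with your own values, and tcmpt is assumed to be on the PATH (or reachable via its full install path).

using System;
using System.Diagnostics;

class RunTacNightly
{
    static void Main()
    {
        // Placeholders - substitute your own collection URL, settings file and team projects.
        string collection   = "http://your-tfs-server:8080/tfs/DefaultCollection";
        string settingsFile = @"C:\TAC\nightly-settings.xml";
        string[] teamProjects = { "TeamProjectA", "TeamProjectB" };

        foreach (string project in teamProjects)
        {
            string logFile = string.Format(@"{0}\{1}.tcmpt.log",
                Environment.GetEnvironmentVariable("TEMP"), project);

            // Same command-line shape as documented further down in this post.
            string args = string.Format(
                "attachmentcleanup /collection:{0} /teamproject:\"{1}\" /settingsfile:\"{2}\" /outputfile:\"{3}\" /mode:delete",
                collection, project, settingsFile, logFile);

            using (Process tac = Process.Start(new ProcessStartInfo("tcmpt", args) { UseShellExecute = false }))
            {
                tac.WaitForExit();
                Console.WriteLine("TAC finished for {0} (exit code {1})", project, tac.ExitCode);
            }
        }
    }
}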
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means much more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days , as long as there are no active workitems connected to them. This setting can be useful to clean out all items, both in a clean-up once operation, and in a general There are two scenarios we need to consider: Cleaning up an existing overgrown database Maintaining a server to avoid an overgrown database using scheduled TAC   1. Cleaning up a database which has grown too big due to these attachments. This job is a “Once” job.  We do this once and then move on to make sure it won’t happen again, by taking the actions in 2) below.  In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and don’t bother about  the smaller stuff. That can be left a scheduled TAC cleanup ( 2 below). Here you can use a very general settings file, and just remove the large attachments, or you can choose to remove any old items.  Grant’s settings file is an example of the last one.  A settings file to remove only large attachments could look like this: <!-- Scenario : Remove large files --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> </Attachment> </DeletionCriteria> Or like this: If you want only to remove dll’s and pdb’s about that size, add an Extensions-section.  Without that section, all extensions will be deleted. <!-- Scenario : Remove large files of type dll's and pdb's --> <DeletionCriteria> <TestRun /> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="dll" /> <Include value="pdb" /> </Extensions> </Attachment> </DeletionCriteria> Before you start up your scheduled maintenance, you should clear out all older items. 2. Scheduled maintenance using the TAC If you run a schedule every night, and remove old items, and also remove them in small batches.  It is important to run this often, like every night, in order to keep the number of deleted items low. That way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let’s say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs.  Doing this every night ensures that only small amounts of data is deleted. <!-- Scenario : Remove old items except if they have active or resolved bugs --> <DeletionCriteria> <TestRun> <AgeInDays OlderThan="180" /> </TestRun> <Attachment /> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved"/> </LinkedBugs> </DeletionCriteria> In my experience there are projects which are left with active or resolved workitems, akthough no further work is done.  It can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. A approach which could work better here is to do a two step approach, use the schedule above to with no LinkedBugs as a sweeper cleaning task taking away all data older than you could care about.  Then have another scheduled TAC task to take out more specifically attachments that you are not likely to use. 
This task could be much more specific, and based on your analysis clean out what you know is troublesome data. <!-- Scenario : Remove specific files early --> <DeletionCriteria> <TestRun > <AgeInDays OlderThan="30" /> </TestRun> <Attachment> <SizeInMB GreaterThan="10" /> <Extensions> <Include value="iTrace"/> <Include value="dll"/> <Include value="pdb"/> <Include value="wmv"/> </Extensions> </Attachment> <LinkedBugs> <Exclude state="Active" /> <Exclude state="Resolved" /> </LinkedBugs> </DeletionCriteria> The readme document for the TAC says that it recognizes “internal” extensions, but it does recognize any extension. To run the tool do the following command: tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete   Shrinking the database You could run a shrink database command after the TAC has run in cases where there are a lot of data being deleted.  In this case you SHOULD do it, to free up all that space.  But, after the shrink operation you should do a rebuild indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes, reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it need to add more records.  In fact, it is regarded as a bad practice to shrink the database regularly.  So on a daily maintenance schedule you should NOT shrink the database. To shrink the database you do a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards.  I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task and just execute it when you need to do this.

    Read the article

  • The Inkremental Architect’s Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time an Entry Point is a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That's where user stories meet code.

In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this:

The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several:

1. Get input (email address, password)
2. Load user for email address
2.1 If user not found, report error
3. Check password
3.1 Hash password
3.2 Compare hash to hash stored in user
4. Show next dialog

Viewed from 10,000 feet it's all done by the Entry Point function. And of course that's technically possible. It's just a bunch of logic and calling a couple of API functions. However, I suggest taking these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features.

Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g.:

First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4.
Then hard code a single user and check the email address. Features 2 and 2.1.
Then check the password without hashing it (or use a very simple hash like the length of the password). Features 3 and 3.2.
Replace the hard-coded user with a persistent user directory, but a very simple one, e.g. a CSV file. Refinement of feature 2.
Calculate the real hash for the password. Feature 3.1.
Switch to the final user directory technology.

Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you're in doubt whether you can implement the whole entry point function by tomorrow night, then just go for a couple of features or even just one.

That's also why I think you should strive for wrapping feature logic into a function of its own. It's a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it's clear where their logic is located. And finally, of course, it lets you re-use features in different contexts (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, and to let the product owner check them for acceptance individually.

Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which maps to a hierarchy of functions - with entry points at the top. I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility, to move forward in increments, is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible, but functions are. 
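What such feature functions could look like is sketched below in a rough, illustrative way. The class, method, and test-data names are my assumptions, not the article's, and the sketch deliberately implements only the early increments: a single hard-coded user and the password length as a stand-in hash.

using System;

// Sketch only: the Login() Entry Point composed of one function per feature.
// Feature 1 (get input) happens in the dialog before Login() is called.
class LoginHandler
{
    // Entry Point: triggered by clicking the login button.
    public void Login(string email, string password)
    {
        var user = LoadUser(email);                                            // Feature 2
        if (user == null) { ReportError("Unknown email address."); return; }   // Feature 2.1
        if (!CheckPassword(user, password)) { ReportError("Wrong password."); return; } // Feature 3
        ShowNextDialog();                                                      // Feature 4
    }

    // Feature 2: load user for email address (hard coded in this early increment).
    User LoadUser(string email) =>
        email == "jane@example.com" ? new User { Email = email, PasswordHash = 6 } : null;

    // Feature 3: check password = hash it (3.1) and compare to the stored hash (3.2).
    bool CheckPassword(User user, string password) => HashPassword(password) == user.PasswordHash;

    // Feature 3.1: a deliberately trivial "hash" for an early increment.
    int HashPassword(string password) => password.Length;

    // Output on the screen: next dialog or error message.
    void ShowNextDialog() => Console.WriteLine("Welcome!");
    void ReportError(string message) => Console.WriteLine(message);
}

class User { public string Email; public int PasswordHash; }

Later increments would replace LoadUser with a CSV-backed user directory and HashPassword with a real hash, while the Login() Entry Point itself stays untouched.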
Software production moves forward through requirements one increment at a time, and one function at a time.

In closing

Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible; they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that's functions. Yes, functions, not classes nor components nor microservices. We're talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That's the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners, functions are the appropriate glue. Functions which represent increments.

Can such a small increment always be found to deliver by tomorrow evening? I boldly say yes. Yes, it's always possible. But maybe you have to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a piece of software. Product owners need to become comfortable using test beds for certain features.

If it's hard to slice requirements thin enough for Spinning, the reason is too little knowledge of something. Maybe you don't yet understand the problem domain well enough? Maybe you don't yet feel comfortable with some tool or technology? Then it's time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman, officially become a researcher. Research and check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment.

[2] Sometimes even thin requirement slices will cover several Entry Points, like "Add validation of email addresses to all relevant dialogs." Validation then will be put into a dozen functions. Still, though, it's important to determine exactly which Entry Points get affected. That's much easier if you strive for keeping the number of Entry Points per increment to one.

[3] If you like, call Entry Point functions event handlers, because that's what they are. They all handle events of some kind, whether that's palpable in your code or not. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for the event "program started".

    Read the article

  • DualLayout for SharePoint 2010 WCM Quick Start

    - by svdoever
    DualLayout for SharePoint 2010 WCM is a solution that provides you with complete HTML freedom in your SharePoint Server 2010 publishing pages. In this post I provide a quick start guide to get you up and running quickly so you can try it out for yourself. This quick start creates a simple HTML5 site with a page to showcase the basics and the power of DualLayout. We will create the site in its own web application. Normally there are many things you have to do to create a clean starting point for your SharePoint 2010 WCM site. All those steps will be provided in later posts. For now we want to give you the minimal set of steps to take to get DualLayout working on your machine.

Create an authenticated web application with host header cms.html5demo.local on port 80 for the cms side of the site.

Click the Create Site Collection link on the Application Created dialog box and create a Site Collection based on the Publishing Portal site template.

Before we can click the site link in the Top-Level Site Successfully Created dialog we need to add the new host header cms.html5demo.local to the hosts file. Add the following line to the hosts file:

127.0.0.1        cms.html5demo.local

Navigate to the site at http://cms.html5demo.local to see the out-of-the-box example Adventure Works publishing site.

Download and add the DualLayout solution package designfactory.duallayout.sps2010.trial.1.2.0.0.wsp to the farm's solution store: On the Start menu, click All Programs, click Microsoft SharePoint 2010 Products, and then click SharePoint 2010 Management Shell. At the Windows PowerShell command prompt, type the following command:

Add-SPSolution -LiteralPath designfactory.duallayout.sps2010.trial.1.2.0.0.wsp

In SharePoint 2010 Central Administration deploy the solution to the web application http://cms.html5demo.local.

Navigate to the site at http://cms.html5demo.local, and in the Site Settings screen select Site Collection Administration > Site collection features and activate the following feature:

Open the site http://cms.html5demo.local in SharePoint Designer 2010. 
Create a view-mode master page html5simple.master with the following code:

html5simple.master

<%@ Master language="C#" %>
<%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register TagPrefix="sdl" Namespace="DesignFactory.DualLayout" Assembly="DesignFactory.DualLayout, Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>

<!DOCTYPE html>
<html class="no-js">

    <head>
        <meta charset="utf-8" />
        <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
        <title><SharePointWebControls:FieldValue FieldName="Title" runat="server"/></title>

        <script type="text/javascript">
            document.createElement('header');
            document.createElement('nav');
            document.createElement('article');
            document.createElement('hgroup');
            document.createElement('aside');
            document.createElement('section');
            document.createElement('footer');
            document.createElement('figure');
            document.createElement('time');
        </script>

        <asp:ContentPlaceHolder id="PlaceHolderAdditionalPageHead" runat="server"/>
    </head>

    <body>

        <header>
            <div class="logo">Logo</div>
            <h1>SiteTitle</h1>
            <nav>
                <a href="#">SiteMenu 1</a>
                <a href="#">SiteMenu 2</a>
                <a href="#">SiteMenu 3</a>
                <a href="#">SiteMenu 4</a>
                <a href="#">SiteMenu 5</a>
                <sdl:SwitchToWcmModeLinkButton runat="server" Text="…"/>
            </nav>
            <div class="tagline">Tagline</div>
            <form>
                <label>Zoek</label>
                <input type="text" placeholder="Voer een zoekterm in...">
                <button>Zoek</button>
            </form>
        </header>

        <div class="content">
            <div class="pageContent">
                <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" />
            </div>
        </div>

        <footer>
            <nav>
                <ul>
                    <li><a href="#">FooterMenu 1</a></li>
                    <li><a href="#">FooterMenu 2</a></li>
                    <li><a href="#">FooterMenu 3</a></li>
                    <li><a href="#">FooterMenu 4</a></li>
                    <li><a href="#">FooterMenu 5</a></li>
                </ul>
            </nav>
            <small>Copyright &copy; 2011 Macaw</small>
        </footer>
    </body>
</html>

Note that if no specific WCM-mode master page is specified (html5simple-wcm.master), the default v4.master master page will be used in WCM-mode. 
Create a WCM-mode page layout html5simplePage-wcm.aspx with the following code:

html5simplePage-wcm.aspx

<%@ Page language="C#"
    Inherits="DesignFactory.DualLayout.WcmModeLayoutPage, DesignFactory.DualLayout, Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>
<%@ Register Tagprefix="SharePointWebControls"
             Namespace="Microsoft.SharePoint.WebControls"
             Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="WebPartPages"
             Namespace="Microsoft.SharePoint.WebPartPages"
             Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="PublishingWebControls"
             Namespace="Microsoft.SharePoint.Publishing.WebControls"
             Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation"
             Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<asp:Content ContentPlaceholderID="PlaceHolderPageTitle" runat="server">
    <SharePointWebControls:FieldValue id="PageTitle" FieldName="Title" runat="server"/>
</asp:Content>
<asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
</asp:Content>

Notice the Inherits at line two. Instead of inheriting from Microsoft.SharePoint.Publishing.PublishingLayoutPage we need to inherit from DesignFactory.DualLayout.WcmModeLayoutPage.

Create a view-mode page layout html5simplePage.aspx with the following code:

html5simplePage.aspx

<%@ Page language="C#"
         Inherits="DesignFactory.DualLayout.ViewModeLayoutPage, DesignFactory.DualLayout,
                   Version=1.2.0.0, Culture=neutral, PublicKeyToken=077f92bbf864a536" %>
<%@ Register Tagprefix="SharePointWebControls"
             Namespace="Microsoft.SharePoint.WebControls"
             Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="WebPartPages"
             Namespace="Microsoft.SharePoint.WebPartPages"
             Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="PublishingWebControls"
             Namespace="Microsoft.SharePoint.Publishing.WebControls"
             Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation"
             Assembly="Microsoft.SharePoint.Publishing, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<asp:Content ContentPlaceholderID="PlaceHolderAdditionalPageHead" runat="server" />
<asp:Content ContentPlaceholderID="PlaceHolderMain" runat="server">
    The title of the page is: <SharePointWebControls:FieldValue id="PageTitleInContent" FieldName="Title" runat="server"/>
</asp:Content>

Notice the Inherits at line two. Instead of inheriting from Microsoft.SharePoint.Publishing.PublishingLayoutPage we need to inherit from DesignFactory.DualLayout.ViewModeLayoutPage. 
Set the html5simple.master master page as the Site Master Page.

Set the allowed page layouts to the Html5 Simple Page page layout and set the New Page Default Settings also to Html5 Simple Page so newly created pages are also of this page layout. Note that the Html5 Simple Page page layout is initially not selectable for New Page Default Settings. Save this configuration page first after selecting the allowed page layouts, then open it again and select the default new page.

Under Site Actions select the New Page action. Create a page Home.aspx of the default page layout type Html5 Simple Page. Set the newly created Home.aspx page as Welcome Page.

Navigate to the site http://cms.html5demo.local and see the home page in the WCM display and edit mode. Select Switch to View Mode under Site Actions to see the resulting page in view-mode. Select the three dots (…) at the right side of the menu to switch back to WCM-mode.

Have a look at the source view of the resulting web page and admire the clean HTML. No SharePoint specific markup or CSS files!

Clean HTML in page

<!DOCTYPE html>
<html class="no-js">
    <head>
        <meta charset="utf-8" />
        <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
        <title>Home</title>
        <script type="text/javascript">
            document.createElement('header');
            document.createElement('nav');
            document.createElement('article');
            document.createElement('hgroup');
            document.createElement('aside');
            document.createElement('section');
            document.createElement('footer');
            document.createElement('figure');
            document.createElement('time');
        </script>
    </head>

    <body>

        <header>
            <div class="logo">Logo</div>
            <h1>SiteTitle</h1>
            <nav>
                <a href="#">SiteMenu 1</a>
                <a href="#">SiteMenu 2</a>
                <a href="#">SiteMenu 3</a>
                <a href="#">SiteMenu 4</a>
                <a href="#">SiteMenu 5</a>
                <a href="/Pages/Home.aspx?DualLayout_ShowInWcmMode=true">…</a>
            </nav>
            <div class="tagline">Tagline</div>
            <form>
                <label>Zoek</label>
                <input type="text" placeholder="Voer een zoekterm in...">
                <button>Zoek</button>
            </form>
        </header>

        <div class="content">
            <div class="pageContent">
                The title of the page is: Home
            </div>
        </div>

        <footer>
            <nav>
                <ul>
                    <li><a href="#">FooterMenu 1</a></li>
                    <li><a href="#">FooterMenu 2</a></li>
                    <li><a href="#">FooterMenu 3</a></li>
                    <li><a href="#">FooterMenu 4</a></li>
                    <li><a href="#">FooterMenu 5</a></li>
                </ul>
            </nav>
            <small>Copyright &copy; 2011 Macaw</small>
        </footer>
    </body>
</html>
<!-- Macaw DesignFactory DualLayout for SharePoint 2010 Trial version -->

Note the last link in the <nav> element (the one pointing to /Pages/Home.aspx?DualLayout_ShowInWcmMode=true): this link is only rendered for authenticated users and is our way to switch back to WCM-mode. This concludes our quick start to get DualLayout up and running in a matter of minutes. And what is the result? You have the full SharePoint 2010 WCM publishing page editing experience to manage the content in your pages. 
You don’t have to delve into large SharePoint-specific master pages and page layouts with a lot of knowledge of the dos and don’ts with respect to SharePoint controls, scripts and stylesheets. The end user gets a clean and light HTML page. Get your fully functional, non-timebombed trial copy of DualLayout and start creating!

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >