Search Results

Search found 19970 results on 799 pages for 'jason down'.


  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    Your roving reporter roved out to another one of Socialmedia.org's fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they're incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions.

    GE @generalelectric - Jon Lombardo, Leader of Social Media COE
    How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthyimagination to create better health for people. If you and a friend are trying to get healthy together, you'll do better. Health is inherently social. Get health challenges via Facebook and share with friends to achieve goals together. They're creating an emotional connection around the health context. You don't influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they're already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user-generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign.

    The Dow Chemical Company @DowChemical - Abby Klanecky, Director of Digital & Social Media
    Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms: blog, Twitter, LinkedIn. Dow Chemical's social flow is Core Digital Team - CMs - ambassadors - advocates. The scientists were trained in social etiquette via practice scenarios. It's not just about sales. It's about growing influence and the business. Dow Chemical trained about 100 scientists; 55 are active and there's a waiting list for the next sessions. In-person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision.

    Social Media Ethics Briefing: Staying Out of Trouble - Andy Sernovitz, CEO @SocialMediaOrg
    How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing. No one will get mad if you tell them up front you're a paid spokesperson for a company. It's a legal requirement by the FTC, it's the law, to disclose if you're being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don't lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything. As soon as you pay, it's not social media, it's advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear: "I work for ___ and this is my personal opinion." Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency or employee or contractor does on your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit: Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so are Google and Facebook.

    Visa @VisaNews - Lucas Mast, Senior Business Leader, Global Corporate Social Media
    Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard-style real-time coverage of Visa's Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a variety and a fair amount of content. They amassed 1000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real-world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don't make exec interview videos dull and corporate. Keep answers short, shoot it in an interesting place, do takes until they're comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can't win if you don't play. Promoting content is as important as creating it.

    McGraw-Hill Companies @McGrawHillCos - Patrick Durando, Senior Director of Global New Media
    McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and break down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threads become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you're going to make an intranet, it has to look great, work great, and employees are going to have to want to go there. You can't order them to like it.

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    InfoQ: Developer-Driven Threat Modeling - Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams.

    Managing the Virtual World | Philip J. Gill - "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC).

    Solaris X86 AESNI OpenSSL Engine | Dan Anderson - "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor," says Anderson.

    WebLogic Access Management | René van Wijk - "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk.

    OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen - OTN Developer Days head for the land of the midnight sun.

    Podcast: Information Integration Part 2/3 - In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewski, and Marc Hebert offer examples of some of the most daunting information integration challenges.

    Measuring the Human Task activity in Oracle BPM | Leon Smiers - Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with business processes.

    Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14 - Spend the day with your peers learning from experts in Cloud computing, engineered systems, and Oracle Fusion Middleware.

    The Heroes of Java: Michael Hüttermann | Markus Eisele - Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

  • ASP.NET MVC 3 Hosting :: MVC 2 Strongly Typed HTML Helper and Enhanced Validation Sample

    - by mbridge
    With the official release of ASP.NET MVC 2 RTM, I decided I would put together a quick sample of the enhanced HTML helpers and validation controls. I am going to use my sample event site, where I will have a form so a user can search for information about certain events. So when the Search page loads, the Search action is fired, returning my strongly typed model to the view.

        [HttpGet]
        public ViewResult Search()
        {
            IList<EventsModel> result = _eventsService.GetEventList();
            var viewModel = new EventSearchModel
                                {
                                    EventList = new SelectList(result, "EventCode", "EventName", "Select Event")
                                };
            return View(viewModel);
        }

    Nothing special here, although I did want to show how to load up a strongly typed drop down list because that hung me up for a little bit. So to do that, I am going to pass back a SelectList to the view, and my HTML helper should know how to load this. So let's take a look at the markup for the view.

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<EventsSample.Models.EventSearchModel>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Search
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">

            <h2>Search for Events</h2>

            <% using (Html.BeginForm("Search", "Events")) { %>
                <%= Html.ValidationSummary(true) %>

                <fieldset>
                    <legend>Fields</legend>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventNumber) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.EventNumber) %>
                        <%= Html.ValidationMessageFor(model => model.EventNumber) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.GuestLastName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.GuestLastName) %>
                        <%= Html.ValidationMessageFor(model => model.GuestLastName) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.DropDownListFor(model => model.EventName, Model.EventList, "Select Event") %>
                        <%= Html.ValidationMessageFor(model => model.EventName) %>
                    </div>

                    <p>
                        <input type="submit" value="Save" />
                    </p>
                </fieldset>

            <% } %>

            <div>
                <%= Html.ActionLink("Back to List", "Index") %>
            </div>

        </asp:Content>

    A nice feature is the scaffolding that MVC has to generate code. I simply right-clicked inside my Search() action in the EventsController, selected "Add View", then selected the strongly typed object that I wanted to pass to the view and also selected "Edit" as the view content type.
    With that, the aspx page was completely generated, although I did have to go back in and change the textbox for the Event Names to a drop down list of the names to select from. The new feature with MVC 2 is the strongly typed HTML helpers. So now my textboxes, drop down list, and validation helpers are all strongly typed to my model. This feature gives you the benefit of IntelliSense and also makes it easier to debug. "The Gu" has a great post about the feature in case you want more details. The DropDownListFor function to generate the drop down list was a little tricky for me. You first need to use a lambda expression to pass in the property you want the selected value assigned to in your model, and then you need to pass in the list directly from the model.

    Validations

    To validate the form, you can use the strongly typed validation HTML helpers, which will inspect your model and return errors if the validation fails. The definitions of these rules are set directly on the model itself, so let's take a look.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        namespace EventsSample.Models
        {
            public class EventSearchModel
            {
                [Required(ErrorMessage = "Please enter the event number.")]
                [RegularExpression(@"\w{6}",
                    ErrorMessage = "The Event Number must be 6 letters and/or numbers.")]
                public string EventNumber { get; set; }

                [Required(ErrorMessage = "Please enter the guest's last name.")]
                [RegularExpression(@"^[A-Za-zÀ-ÖØ-öø-ÿ1-9 '\-\.]{1,22}$",
                    ErrorMessage = "The guest's last name must be 1 to 22 characters.")]
                public string GuestLastName { get; set; }

                public string EventName { get; set; }
                public SelectList EventList { get; set; }
            }
        }

    Pretty cool! Okay, the only thing left to do is perform the validation in the POST action.

        [HttpPost]
        public ViewResult Search(EventSearchModel eventSearchModel)
        {
            if (ModelState.IsValid) return View("SearchResults");
            else
            {
                IList<EventsModel> result = _eventsService.GetEventList();
                eventSearchModel.EventList = new SelectList(result, "EventCode", "EventName");

                return View(eventSearchModel);
            }
        }

    If the form entries are valid, here I am simply displaying the SearchResults view, but in a real world sample I would also go out and get the results first. You get the idea though. In my case, when the form is not valid, I also had to reload my SelectList with the event names before I loaded the page again. Remember, this is MVC, no ViewState here :) So that's it. Now my form is validating the data, and when it fails it looks like this.
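    Going back to the "real world sample" point above: here is a minimal sketch of what the POST action might look like if it actually queried for matching events before rendering the results. The _eventsService.Search method and its parameters are hypothetical, not part of the original sample; the idea is just to show fetching the data before returning the SearchResults view.

        [HttpPost]
        public ViewResult Search(EventSearchModel eventSearchModel)
        {
            if (!ModelState.IsValid)
            {
                // Reload the drop-down list before redisplaying the form with validation errors.
                IList<EventsModel> events = _eventsService.GetEventList();
                eventSearchModel.EventList = new SelectList(events, "EventCode", "EventName");
                return View(eventSearchModel);
            }

            // Hypothetical service call: fetch the matching events before showing the results view.
            IList<EventsModel> matches = _eventsService.Search(
                eventSearchModel.EventNumber,
                eventSearchModel.GuestLastName,
                eventSearchModel.EventName);

            return View("SearchResults", matches);
        }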

    Read the article

  • How to Use the Signature Editor in Outlook 2013

    - by Lori Kaufman
    The Signature Editor in Outlook 2013 allows you to create a custom signature from text, graphics, or business cards. We will show you how to use the various features of the Signature Editor to customize your signatures. To open the Signature Editor, click the File tab and select Options on the left side of the Account Information screen. Then, click Mail on the left side of the Options dialog box and click the Signatures button. For more details, refer to one of the articles mentioned above. Changing the font for your signature is pretty self-explanatory. Select the text for which you want to change the font and select the desired font from the drop-down list. You can also set the justification (left, center, right) for each line of text separately. The drop-down list that reads Automatic by default allows you to change the color of the selected text. Click OK to accept your changes and close the Signatures and Stationery dialog box. To see your signature in an email, click Mail on the Navigation Bar. Click New Email on the Home tab. The Message window displays and your default signature is inserted into the body of the email. NOTE: You shouldn’t use fonts that are not common in your signatures. In order for the recipient to see your signature as you intended, the font you choose also needs to be installed on the recipient’s computer. If the font is not installed, the recipient would see a different font, the wrong characters, or even placeholder characters, which are empty square boxes. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can save it as a draft if you want, but it’s not necessary. If you decide to use a font that is not common, a better way to do so would be to create a signature as an image, or logo. Create your image or logo in an image editing program making it the exact size you want to use in your signature. Save the image in a file size as small as possible. The .jpg format works well for pictures, the .png format works well for detailed graphics, and the .gif format works well for simple graphics. The .gif format generally produces the smallest files. To insert an image in your signature, open the Signatures and Stationery dialog box again. Either delete the text currently in the editor, if any, or create a new signature. Then, click the image button on the editor’s toolbar. On the Insert Picture dialog box, navigate to the location of your image, select the file, and click Insert. If you want to insert an image from the web, you must enter the full URL for the image in the File name edit box (instead of the local image filename). For example, http://www.somedomain.com/images/signaturepic.gif. If you want to link to the image at the specified URL, you must also select Link to File from the Insert drop-down list to maintain the URL reference. The image is inserted into the Edit signature box. Click OK to accept your changes and close the Signatures and Stationery dialog box. Create a new email message again. You’ll notice the image you inserted into the signature displays in the body of the message. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You may want to put a link to a webpage or an email link in your signature. To do this, open the Signatures and Stationery dialog box again. Enter the text to display for the link, highlight the text, and click the Hyperlink button on the editor’s toolbar. 
On the Insert Hyperlink dialog box, select the type of link from the list on the left and enter the webpage, email, or other type of address in the Address edit box. You can change the text that will display in the signature for the link in the Text to display edit box. Click OK to accept your changes and close the dialog box. The link displays in the editor with the default blue, underlined text. Click OK to accept your changes and close the Signatures and Stationery dialog box. Here’s an example of an email message with a link in the signature. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your contact information into your signature as a Business Card. To do so, click Business Card on the editor’s toolbar. On the Insert Business Card dialog box, select the contact you want to insert as a Business Card. Select a size for the Business Card image from the Size drop-down list. Click OK. The Business Card image displays in the Signature Editor. Click OK to accept your changes and close the Signatures and Stationery dialog box. When you insert a Business Card into your signature, the Business Card image displays in the body of the email message and a .vcf file containing your contact information is attached to the email. This .vcf file can be imported into programs like Outlook that support this format. Close the Message window using the File tab or the X button in the upper, right corner of the Message window. You can also insert your Business Card into your signature without the image or without the .vcf file attached. If you want to provide recipients your contact info in a .vcf file, but don’t want to attach it to every email, you can upload the .vcf file to a location on the internet and add a link to the file, such as “Get my vCard,” in your signature. NOTE: If you want to edit your business card, such as applying a different template to it, you must select a different View other than People for your Contacts folder so you can open the full contact editing window.     

    Read the article

  • Pet Peeves with the Windows Phone 7 Marketplace

    - by Bil Simser
    Have you ever noticed how some things just gnaw at your very being? This is the case with the WP7 marketplace, the Zune software, and the things that drive me batshit crazy with a side of fries. To go. I wanted to share.

    XBox Live is Not the Centre of the Universe
    Okay, it's fine that the Zune software has an XBox Live tag for games so you can see them clearly, but do we really need to have it shoved down our throats. On every click? Click on Games in the marketplace: the first thing that it defaults to on the filters on the right is XBox Live. Okay. Fine. However, if you change it (say to Paid) and then click onto a title, when you come back from that title is the filter still set to Paid? No. It's back to XBox Live again. Really? Give us a break. If you change to any filter on any other genre and then click on the selected title, it doesn't revert back to anything. It stays on the selection you picked. Let's be fair here. The Games genre should behave just like every other one. If I pick Paid, then when I come back to the list please remember that.

    Double Dipping
    On the subject of XBox Live titles, Microsoft (and developers who have an agreement with Microsoft to produce Live titles, which generally rules out indie game developers) is double dipping with regards to exposure of their titles. Here's the Puzzle and Trivia Game section on the Marketplace for XBox Live titles: And here's the same category filtered on Paid titles: See the problem? Two indie titles while the rest are XBox Live ones. So while XBL has its filter, they also get to showcase their wares in the Paid and Free filters as well. If you're going to have an XBox Live filter then use it, and stop pushing down indie titles until they're off the screen (on some genres this is already the case). Free and Paid titles should be just that and not include XBox Live ones. If you're really concerned that people can't find the Free XBox Live titles vs. the paid ones, then create a Free XBox Live filter and a Paid XBox Live filter. I don't think we would mind much.

    Whose Trial is it Anyways?
    You might notice apps in the marketplace with titles like "My Fart App Professional Lite" or "Silicon Lamb Spleen Builder Free". When you submit an app to the marketplace it can either be free or paid. If it's a paid app you also have the option to submit it with Trial capabilities. It's up to you to decide what you offer in the trial version, but trial versions can be purchased from within the app, so after someone tries out your app (for free) and wants to unlock the Super Secret Obama Spy Ring Level, they can just go to the marketplace from your app (if you built that functionality in) and upgrade to the paid version. However, it creates a rift of sorts when it comes to visibility. Some developers go the route of the paid app with a trial version; others decide to submit *two* apps instead of one. One app is the "Free" or "Lite" version and the other is the paid version. Why go to the hassle of submitting two apps when you can just create a trial version in the same app? Again, visibility. There's no way to tell apart Paid apps with Trial versions from ones without (it's an option, you don't have to provide trial versions, although I think it's a good idea). However, there is a way to see the Free apps apart from the Paid ones, so some submit the two apps and have the Free version link to buy the paid one (again through the Marketplace tasks in the API; a small sketch of that upgrade call appears at the end of this post). What we as developers need for visibility is a new filter. Trial. That's it.
    It would simply filter on Paid apps that have trial capabilities and surface up those apps just like the free ones. If Microsoft added this filter to the marketplace, it would eliminate the need for people to submit their "Free" and "Lite" versions and make it easier for the developer not to have to maintain two systems. I mean, is it really that hard? Can't be any more difficult than the XBox Live filter that's already there.

    Location is Everything
    The last thing on my list is about location. When I launch Zune I'm running in my native location setting, Canada. What's great is that I navigate to the Travel Tools section where I have one of my apps and behold the splendour that I see: there are my apps in the number 1 and number 4 slots for top selling in that category. I show it to my wife to make up for the sleepless nights writing this stuff and we dance around and celebrate. Then I change my location on my operating system to United States and re-launch Zune. WTF? My flight app has slipped to the 10th spot (I'm only showing 4 across here out of the 7 in Zune) and my border check app that was #1 is now in the 32nd spot! End of celebration. Not only is relevance being looked at here; I value the comments people make on my apps, as do most developers. I want to respond to them and show them that I'm listening. The next version of my border app will provide multiple camera angles. However, when I'm running in my native Canada location, I only see two reviews. Changing over to United States I see fourteen! While there are tools out there to provide you with a unified view, I shouldn't have to rely on them. My own Zune desktop software should allow me to see everything. I realize that some developers will submit an app and only target it for some locations, and that's their choice. However, I shouldn't have to jump through hoops to see what apps are ahead of mine, or see people's comments and ratings. Another proposal: either unify the marketplace (i.e. when I'm looking at it, show me everything combined) or let me choose a filter. I think the first option might be difficult as you're trying to average out top selling apps across all markets and have to deal with some apps that have been omitted from some markets. Although I think you could come up with a set of use cases that would handle that, maybe that's too much work. At the very least, let us developers view the markets in a drop down or something from within the Zune desktop. Having to shut down Zune, change our location, and re-launch Zune to see other perspectives is just too onerous.

    A Call to Action
    These are just one man's opinions. Do you agree? Disagree? Feel hungry for a bacon sandwich? Let everyone know via the comments below. Perhaps someone from Microsoft will be reading and take some of these ideas under advisement. Maybe not, but at least let's get the word out that we really want to see some change. Egypt can do it, why not WP7 developers!
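    As an aside on the trial mechanism mentioned above, here is a minimal sketch of the "upgrade from within the app" flow, assuming the standard WP7 LicenseInformation and MarketplaceDetailTask APIs; the helper class and how you wire it up are illustrative, not taken from the original post.

        using Microsoft.Phone.Marketplace;
        using Microsoft.Phone.Tasks;

        public static class TrialUpgrade
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            // True while the app is running under a Marketplace trial license.
            public static bool IsTrial()
            {
                return License.IsTrial();
            }

            // Opens this app's own Marketplace detail page so the user can buy the full version.
            public static void ShowUpgradePage()
            {
                var detailTask = new MarketplaceDetailTask
                {
                    // With no ContentIdentifier set, the task shows the calling app's own listing.
                    ContentType = MarketplaceContentType.Applications
                };
                detailTask.Show();
            }
        }

    A trial-aware app can then check TrialUpgrade.IsTrial() at startup to decide which features to unlock, and call TrialUpgrade.ShowUpgradePage() from an "Upgrade" button.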

    Read the article

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to and read to keep informed. So I thought I would post an update. First - there is an entirely new network being launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital and now the iPad. If you want to keep ahead (and really get motivated) - I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful. I also like listening to the Android show so that I can see how it's progressing. Because while I love my iPhone/iPad - it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if it becomes as nice an experience - but so far - my take on Android devices is: 10 years ago, I would have jumped all over them because of their hackability. But now, I'm in a phase where I just want these devices to work, and most of my creation is in non-programming areas - I find the i* experience better. Second - in terms of general entertaining tech news - I'm a big fan of This Week in Tech. Finally - for a non-geek but very informative show - The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with a wide variety of well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • Reflector Pro has now been released!

    - by CliveT
    After moving into the .NET division in May, and having a great time working on Reflector, I'm pleased to say that the results of that work are now available. Reflector Pro has now been released! The old Reflector as you know and love it is still available free of charge, and as part of this project we've fixed a number of bugs in the de-compilation that have been around for a long time. The Pro version comes as an add-in for Visual Studio - this offers dynamic de-compilation and generation of pdb files which allow you to step into the de-compiled code. Alex has some good pictures of this functionality on his beta post from around a month ago. Thanks to the other guys who've worked on this for taking me along for the ride - Alex, Andrew, Bart and Jason. Stephen did some great usability work, Chris Alford did some great technical authoring and Laila handled the launch publicity. Like all projects, there's always more I'd like to have done, but what we have looks like a pretty powerful addition to the developer's set of tools to me. Please try it and give us feedback on the forum.

    Read the article

  • Testing Workflows &ndash; Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite.  This won't be yet another post on why you should be doing testing; there are plenty of those types of posts already out there.  Assuming you know you should be testing, then comes the problem of how do I actually fit that into my day job.  When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program.  When writing a program you can do it from a top-down approach where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach where the low level functionality is fully implemented before it is quickly wired together at the end.  Both approaches are perfectly valid under certain contexts.  Each approach you are skilled at applying is another tool in your tool belt.  The more vectors of attack you have on a problem – the better.  So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and level of quality in general.  Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into 2 categories: test first or test after.  Test first is the best approach.  However, this post isn't about the one and only best approach.  I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests.  In this post I'll enumerate some test-after workflows.  In my next post I'll cover test-first.

    Bug Reporting
    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system.  This can be problematic.  Often reproduction plans, when written down, might skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug.  The test project guarantees that all aspects of the environment are set up properly and no steps are missing.  The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended for Enhancement Requests as well as Bug Reporting.

    Exploratory Testing
    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario.  The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it.  By definition the system behaviour is "undefined". So write a new unit test to define that behaviour.  Add assertions to the tests to confirm your assumptions.  The new test becomes part of the living system specification that is kept up to date with the test suite.
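    A minimal sketch of the exploratory-testing workflow just described, using NUnit (assumed here; any test framework works) against a framework API whose midpoint behaviour often surprises people.  The assertions pin the "undefined" behaviour down and become part of the living specification:

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class MidpointRoundingExploration
        {
            // Pins down behaviour we were unsure of: Math.Round uses banker's rounding
            // by default, so midpoint values go to the nearest even number.
            [Test]
            public void MathRound_UsesBankersRounding_ForMidpointValues()
            {
                Assert.That(Math.Round(2.5), Is.EqualTo(2));
                Assert.That(Math.Round(3.5), Is.EqualTo(4));

                // Opting out of banker's rounding is explicit.
                Assert.That(Math.Round(2.5, MidpointRounding.AwayFromZero), Is.EqualTo(3));
            }
        }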
    Examples
    This workflow is especially good when developing APIs.  When you are finally done your production API then comes the job of writing documentation on how to consume the API.  Good documentation will also include code examples.  Don't let these code examples merely exist in some accompanying manual; implement them in a test suite. Example tests and documentation do not have to be created after the production API is complete.  It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests
    Every system has a typical use case.  This represents the basic, core functionality of the system.  If this fails after an upgrade the end users will be hosed and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests".  It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis
    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite.  In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%.  Coverage analysis shouldn't be used strictly for numbers reporting.  Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage.  These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool will break down the coverage by the various classes and methods in projects.  Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage.  If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number.  The first reaction type is a recognition that there is low hanging fruit to be picked.  There may be some classes or methods that aren't being tested, which easily could be.  The other reaction type is "OMG".  This is where you find a critical piece of code that isn't under test.  In both cases, go and add the missing tests.

    Test Refactoring
    The general theme of this post up to this point has been how to add more and more tests to a test suite.  I'll step back from that a bit and remind you that every line of code is a liability.  Each line of code has to be read and maintained, which costs money.  This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read so that people can easily determine the specifications of the system.  Make sure that adding more and more tests doesn't interfere with this primary goal. Perform code reviews on the test suite as often as on production code.  Hold the test code up to the same high readability standards as the production code.  If the tests are hard to read then change them.  Look to remove duplication: duplicate setup code between two or more test methods can be moved to a shared function (see the sketch below).  Entire test methods can be removed if it is found that the scenario they test is covered by other tests.  It's OK to delete a test that isn't pulling its own weight anymore. Remember to only start refactoring when all the tests are green.  Don't refactor the tests and the production code at the same time.  An automated test suite can be thought of as a double entry book keeping system.  The unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
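    A small sketch of the duplicate-setup refactoring mentioned above, assuming NUnit; the shopping-cart data is purely illustrative.  The shared arrange code moves into a [SetUp] method so each test states only what is specific to it:

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class ShoppingCartTests
        {
            private List<decimal> _cart;

            // Shared setup extracted from the individual tests:
            // the duplicated arrange code now lives in one place.
            [SetUp]
            public void CreateCartWithTwoItems()
            {
                _cart = new List<decimal> { 19.99m, 5.01m };
            }

            [Test]
            public void Cart_StartsWithTwoItems()
            {
                Assert.That(_cart.Count, Is.EqualTo(2));
            }

            [Test]
            public void CartTotal_IsSumOfItemPrices()
            {
                decimal total = 0m;
                foreach (decimal price in _cart) total += price;
                Assert.That(total, Is.EqualTo(25.00m));
            }
        }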
    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done.  Fit this into the standard red-green-refactor cycle.  The refactor step not only applies to the production code but also to the tests, just not at the same time.  Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of.  In my next post I'll get into test-first workflows.

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as in orthogonal instruction set.

    Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system.

    C. Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa.

    Now there is a hypothesis published in the ACM Queue by Tim Bray - which some have called the Bánffy-Bray Type System Criteria - which he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability.

    Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing.

    My question is: What is the evidence that an API has exceeded its orthogonality in the context of types?

    Clarification
    Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - because the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, which are all 'different' to the web server API that he was working on previously. Because you're not just dealing with Strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).
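    To make the radio analogy concrete, a tiny illustrative sketch (not from the original question): the first API is orthogonal because setting one property cannot affect the other, while the second hides a coupling between them.

        // Orthogonal: Station and Volume are independent; changing one never changes the other.
        public class Radio
        {
            public double StationMhz { get; set; }
            public int Volume { get; set; }
        }

        // Non-orthogonal: switching the station silently resets the volume,
        // so the two controls can no longer be reasoned about in isolation.
        public class QuirkyRadio
        {
            private double _stationMhz;

            public int Volume { get; set; }

            public double StationMhz
            {
                get { return _stationMhz; }
                set { _stationMhz = value; Volume = 10; } // hidden coupling
            }
        }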

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of:

    Install
    First up, install Preview 2 and try out the demos I was showing at the conference. Remember that IE9 Preview installs side by side with IE8/7 etc. It is not a beta nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-)

    Learn
    And then check out the following webcasts, which were recorded in March this year at MIX:

    In-Depth Look At Internet Explorer 9 - Presenters: Ted Johnson & John Hrvatin - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL28 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    High Performance Best Practices For Web Sites - Presenter: Jason Weber - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL29 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    HTML5: Cross Browser Best Practices - Presenter: Tony Ross - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/CL27 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    Internet Explorer Developer Tools - Presenter: Jon Seitel - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/FT51 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    SVG: The Past, Present And Future of Vector Graphics For The Web - Presenters: Patrick Dengler, Doug Schepers - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/EX30 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    Day 2 Keynote containing IE9 - Presenter: Dean Hachamovitch - VisitMIX URL: http://live.visitmix.com/MIX10/Sessions/KEY02 - Slides: Download - Videos: MP4 | Small WMV | Large WMV

    Read the article

  • ArchBeat Link-o-Rama for 11/18/2011

    - by Bob Rhubart
    IT executives taking lead role with both private and public cloud projects: survey | Joe McKendrick - "The survey, conducted among members of the Independent Oracle Users Group, found that both private and public cloud adoption are up—30% of respondents report having limited-to-large-scale private clouds, up from 24% only a year ago. Another 25% are either piloting or considering private cloud projects. Public cloud services are also being adopted for their enterprises by more than one out of five respondents." - Joe McKendrick

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles - This week on the Architect Home Page on OTN.

    OIM 11g OID (LDAP) Groups Request-Based Provisioning with custom approval – Part I | Alex Lopez - In part one of a two-part blog post, Alex Lopez illustrates "an implementation of a Custom Approval process and a Custom UI based on ADF to request entitlements for users which in turn will be converted to Group memberships in OID."

    ArchBeat Podcast Information Integration - Part 3/3 - "Oracle Information Integration, Migration, and Consolidation" author Jason Williamson, co-author Tom Laszewski, and book contributor Marc Hebert talk about upcoming projects and about what they've learned in writing their book.

    InfoQ: Enterprise Shared Services and the Cloud | Ganesh Prasad - As an industry, we have converged onto a standard three-layered service model (IaaS, PaaS, SaaS) to describe cloud computing, with each layer defined in terms of the operational control capabilities it offers. This is unlike enterprise shared services, which have unique characteristics around ownership, funding and operations, and they span SaaS and PaaS layers. Ganesh Prasad explores the differences.

    Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level - Oracle ACE Director Andrejus Baranovskis describes "how jbo.doconnectionpooling = true and jbo.txn.disconnect_level = 1 properties affect ADF application performance."

    Exploring TCP throughput with DTrace | Alan Maguire - "According to the theory," says Maguire, "when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput."

    Read the article

  • Visual Studio Continued Excitement

    - by Daniel Moth
    As you know Visual Studio 2012 RTM’d and then became available in August (Soma’s blog posts told you that here and here), and the VS2012 launch was earlier this month (Soma also told you that here). Every time a release of Visual Studio takes place I am very excited, since this has been my development tool of choice for almost my entire career (from many years before I joined Microsoft). I am doubly excited with this release since it is the second version of Visual Studio that I have worked on and contributed major features to, now that I’ve been in the developer division (DevDiv) for over 4 years. Additionally, I am very excited about the new era that VS2012 starts with VSUpdate for continued customer value: instead of waiting for the next major version of VS to get new features, there is new infrastructure to enable friction-free updates. The first update will ship before the end of this year, and you can read more about it at Brian’s blog post. I also noticed that a CTP of the first quarterly update is available to download here. In the last two months, the VS2012 family of products we all worked on in DevDiv shipped, coinciding with the end of the Microsoft financial/review year, and naturally followed by a couple of organizational changes (e.g. see Jason’s blog post)… On a personal level, this meant that I was very lucky to have an opportunity present itself to me that I simply could not turn down, so I grabbed it! I’ll still be working on Visual Studio, but not in the Parallel Computing part of the C++ team; instead I will be PM-leading the VS Diagnostics team… Stay tuned for details of what is coming in that space, in the VS updates and in the next major VS release, as I am able to share them… Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666
    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do? In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561
    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also cover some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging; bring your toughest Dimensional Modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849
    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is towards learning the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferrable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somewhere unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should choose to learn shader-based OpenGL since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section, the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." yet at the same time also makes the case that fixed-functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: Do I spend a lot of time learning (and then later unlearning) the ways of fixed-functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR == As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed-functionality or modern shader-based programming?

    Read the article

  • JavaOne 2012 Conference Preview

    - by Janice J. Heiss
    A new article, by noted freelancer Steve Meloan, now up on otn/java, titled "JavaOne 2012 Conference Preview," looks ahead to the fast approaching JavaOne 2012 Conference, scheduled for September 30-October 4 in San Francisco. The Conference will celebrate and highlight one of the world's leading technologies. As Meloan states, "With 9 million Java developers worldwide, 5 billion Java cards in use, 3 billion mobile phones running Java, 1 billion Java downloads each year, and 100 percent of Blu-ray disk players and 97 percent of enterprise desktops running Java, Java is a technology that literally permeates our world."

    The 2012 JavaOne is organized under seven technical tracks:
    * Core Java Platform
    * Development Tools and Techniques
    * Emerging Languages on the JVM
    * Enterprise Service Architectures and the Cloud
    * Java EE Web Profile and Platform Technologies
    * Java ME, Java Card, Embedded, and Devices
    * JavaFX and Rich User Experiences

    Conference keynotes will lay out the Java roadmap. For the Sunday keynote, such Oracle luminaries as Cameron Purdy, Vice President of Development; Nandini Ramani, Vice President of Engineering, Java Client and Mobile Platforms; Richard Bair, Chief Architect, Client Java Platform; and Mark Reinhold, Chief Architect, Java Platform will be presenting. For the Thursday IBM keynote, Jason McGee, Distinguished Engineer and Chief Architect for IBM PureApplication System, and John Duimovich, Java CTO and IBM Distinguished Engineer, will explore Java and IBM's cloud-based initiatives.

    All in all, the JavaOne 2012 Conference should be as exciting as ever. Link to the article here. Originally published on blogs.oracle.com/javaone.

    Read the article


  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great!

    [Photo: LUGM - user group meeting on 15.06.2013 in L'Escalier]

    Quick overview of Linux & the LUGM
    With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity over the island some years ago but unfortunately it had been quiet during recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially taking into consideration that I've been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune.

    OpenELEC Mediacenter
    Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, WiFi card, Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and a general multimedia centre it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
LUGM - our next generation of Linux users (15.06.2013). Project Evil Genius (PEG): Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSUSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experiences he has had thanks to other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times lower than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting. 'After-party' at the beach of L'Escalier: Phew, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Résumé & outlook: It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen: enough space for each and everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • The best way to learn how to extend Orchard

    - by Bertrand Le Roy
    We do have tutorials on the Orchard site, but we can't cover all topics, and recently I've found myself more and more often responding to forum questions by pointing people to an existing module that solves a problem similar to the one the question was about. I really like this way of learning by example and from the expertise of others. This is one of the reasons why we decided that modules would by default come in source code form that we compile dynamically. It makes them easy to understand and easier to modify for your own purposes. Hackability FTW! But how do you crack open a module and look at what's inside? You can do it in two different ways. First, you can just install the module from the gallery, directly from your Orchard instance's admin panel. Once you've done that, you can just look into your Modules directory under the web site. There is now a subfolder with the name of the new module that contains a csproj that you can open in Visual Studio or add to your Orchard solution. Second, you can simply download the package (it's NuGet) and rename it to a .zip extension. NuGet being based on Zip, this will open just fine in Windows Explorer. What you want to dig into is the Content/Modules/[NameOfTheModule] folder, which is where the actual code is. Thanks to Jason Gaylord for the idea for this post.
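    If you would rather do the unpacking from code than by hand, remember that a NuGet package is just a Zip archive, so a few lines of .NET are enough. Here is a minimal sketch (the package path and target folder below are made-up examples, not part of the original post):

    // Sketch: unpack an Orchard module package (a .nupkg is a Zip archive)
    // so you can browse its Content/Modules/[NameOfTheModule] folder.
    // The paths used here are hypothetical examples.
    using System;
    using System.IO;
    using System.IO.Compression; // add a reference to System.IO.Compression.FileSystem

    class UnpackModulePackage
    {
        static void Main()
        {
            var package = @"C:\Temp\SomeOrchardModule.1.0.nupkg"; // downloaded package
            var target = @"C:\Temp\SomeOrchardModule";            // folder to extract into

            // No need to rename to .zip when extracting programmatically.
            ZipFile.ExtractToDirectory(package, target);

            // The module's source code sits under Content/Modules/[NameOfTheModule].
            var modules = Path.Combine(target, "Content", "Modules");
            foreach (var dir in Directory.GetDirectories(modules))
                Console.WriteLine(dir);
        }
    }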

    Read the article

  • Choosing open source vs. proprietary CMS

    - by jkneip
    Hi - I've been tasked with redesigning a website for a small academic library. While I've only been in charge of the site for 6 months, we've been maintaining static HTML pages edited in Dreamweaver for years. Last count of our total pages is around 400. Our university is going with an enterprise-level solution called Sitefinity, although we maintain our own domain and are responsible for maintaining our own presence. Some background: my library has a couple of Microsoft IIS servers on which this static HTML site has been running. I'm advocating for the implementation of a CMS while doing this redesign. The problem is I'm basically the lone webmaster, so I have no one to agree or disagree with my choice. There are also only 1-2 content editors right now for the site, but a CMS could change that factor. I would like to use the functionality of having servers that run .NET and MS SQL, but I am more experienced at setting up and maintaining open source software like WordPress or Drupal on web hosts. My main concern is choosing a CMS that will be easy to update, maintain, and upgrade (i.e., support) in case I'm not there in the future. So I'm wondering how to factor in the open source CMS vs. relatively inexpensive commercial CMS decision, and whether choosing PHP/MySQL vs. the ASP.NET framework as the development environment will play into my decision. Thanks for any input that can be offered based on the details I've given. Thanks, Jason

    Read the article

  • Google+ Platform Office Hours for April 25, 2012: Q&A with the Hangouts API Team

    Google+ Platform Office Hours for April 25, 2012: Q&A with the Hangouts API Team This week we were joined by Richard Dunn of the Hangouts API team who answered questions about the Hangouts API. Discuss this video on Google+: goo.gl 1:09 - What's going on with the Hangouts API? 3:43 - Jason shares information about his current projects 5:40 - Can I prevent a Hangout app from running within a Hangout On Air? 8:05 - Can we have APIs to control On Air features? 10:05 - Could a Silverlight / JavaScript bridge be created so we can use them in Hangout Apps? 12:01 - Is there a way to obfuscate the code for a Hangouts app? 15:24 - Are there plans to consolidate the various comment and chat channels for Hangouts On Air? 18:53 - When will Hangouts On Air come to Android? 20:48 - How can I access the OAuth token from the API? - developers.google.com 22:39 - When will we have Hangout apps on the mobile devices? 24:57 - Is it possible to search for 2 or more hash tags via the search REST API? 25:45 - Will we see a PHP REST API demo today? 26:20 - How can I restrict usage of a Hangout app? 30:07 - How do you hold a hangout that is simulcast on YouTube? 31:07 - Why do users show up as empty objects before they've authorized the app? 32:52 - What are the best practice for storing user specific configuration? 38:06 - Is anyone doing in application payment? 39:22 - Has anyone written any books about Hangout apps? From: GoogleDevelopers Views: 1619 19 ratings Time: 42:04 More in Science & Technology

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first 2 enclosures logical ID is partially masked out where the 3rd one (which has a correct unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing and there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the ID based on the logical ID and since our first 2 enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default which comes from LSI as an ID. The next pointer that helps confirm this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, there is a tool to manage the WWN and logical ID of the devices from Supermicro: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration) and reprogram the logical ID and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the last several times we contacted Supermicro, they would have never directed us to this article :\

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that the Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post. While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didn't take into account that all browsers, for performance reasons, keep their images uncompressed in memory and with the color depth of the current desktop. A simple calculation later I discovered that the appointment sprite itself takes 25MB of memory when loaded. Add 5 resources to the fray and you have 150MB of memory down with a single blow. As it turns out, a sprite image is not a panacea; if it gets too big, don't be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster. First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to a mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image, plus we have a gain - border colors and backgrounds can be changed on the fly. I started with five rendering elements at first, then tried with four and finally I settled on only three elements. Behold the new appointment rendering (quite simple really): On the left you can see that the first container has only top and bottom borders and a background. In fact, the background isn't even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and a slightly lighter background to create the illusion of a second lighter border beside the other two. At last, on top of all others is placed the text container that also holds the top and bottom borders and the gradient background. On the right you can see the final result - I'm quite happy with it and I hope you will be too. After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting appointments of any color with rounded corners and gradients. You can see some examples below: We plan on adding BorderColor and BackColor properties to the ResourceStyles definitions for Q1 SP1. However, with the new rendering in Q1 2010 we do support BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors: I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? 
We already have over 80 articles in several categories including Silverlight. Take a look: here.
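    As an aside on the memory numbers mentioned above: a browser keeps a decoded image at the desktop color depth, so at 32 bits that is 4 bytes per pixel regardless of how well the PNG compresses. A quick back-of-the-envelope check (the sprite dimensions below are hypothetical, chosen only to show the order of magnitude):

    // Rough estimate of the memory a decoded sprite occupies in the browser.
    // 32-bit color depth => 4 bytes per pixel; the dimensions are made-up examples.
    using System;

    class SpriteMemoryEstimate
    {
        static void Main()
        {
            int width = 2500, height = 2600;             // a large appointment/resource sprite
            long bytes = (long)width * height * 4;        // uncompressed size in memory
            Console.WriteLine("{0:F1} MB", bytes / (1024.0 * 1024.0)); // ~24.8 MB
        }
    }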

    Read the article

  • The Incremental Architect&rsquo;s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
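    As with the de-duplication example, such an ordered list of steps translates one-to-one into a skeleton of function calls. A minimal sketch of that translation (the step names and stub bodies below are generic placeholders, not code from the original article):

    // Sketch: each ordered process step becomes one function,
    // wired together in Main(). The stubs only illustrate the shape.
    using System;

    class ProcessSkeleton
    {
        static void Main(string[] args)
        {
            var raw = Get_data_from_user(args);
            var parsed = Parse_data(raw);
            var result = Transform_data(parsed);
            var output = Map_result_for_output(result);
            Output_result_to_user(output);
        }

        static string Get_data_from_user(string[] args) { return args.Length > 0 ? args[0] : ""; }
        static string[] Parse_data(string raw) { return raw.Split(','); }
        static string[] Transform_data(string[] items) { return items; } // domain logic would go here
        static string Map_result_for_output(string[] items) { return string.Join(", ", items); }
        static void Output_result_to_user(string line) { Console.WriteLine(line); }
    }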
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It get´s transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from function units is: no functional units knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They now nothing about the overall flow they are place in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscel cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let a lone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious. 
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from neither cell cares about. Million years of evolution have led to this kind of division of labor. And million years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements needs to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from it´s environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occassionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That why Functional Programming likes them so much. It makes them composable. Which is the reason, nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case, where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the astrisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronically. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event dependens on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road. Another example of 1D functional design Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than a the maximum line length it can be split in multiple parts each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The the first three bullet points turned into functional units with explicit data added. As the signature requires a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail. Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem. Implementation The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see. Flow Design - Increment 2 So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat know of their environment neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step. Implementation - Increment 2 A trivial implementation checking the assumption if this works does not do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be build in, quickly sees how long words are dealt with. Compare this to Robert C. Martin´s solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. 
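    The finished flow for the word wrap kata is only shown as a screenshot in the original post, so here is a rough sketch of how the first increment might look when translated with the rules above - one function per functional unit, wired up in the Entry Point (the function names and bodies are my own guesses, not the author's or Robert C. Martin's actual code):

    // Sketch of the first increment: text -> words -> lines -> text.
    // Long words are not split yet; that step would be "phased in" later.
    using System;
    using System.Collections.Generic;

    class WordWrapKata
    {
        // Entry Point: just a sequence of calls mirroring the flow design.
        static string WordWrap(string text, int maxLineLength)
        {
            var words = Extract_words(text);
            var lines = Reformat(words, maxLineLength);
            return Assemble_text(lines);
        }

        static string[] Extract_words(string text)
        {
            return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        }

        static string[] Reformat(string[] words, int maxLineLength)
        {
            var lines = new List<string>();
            var line = "";
            foreach (var word in words)
            {
                var candidate = line.Length == 0 ? word : line + " " + word;
                if (candidate.Length <= maxLineLength || line.Length == 0)
                    line = candidate;
                else { lines.Add(line); line = word; }
            }
            if (line.Length > 0) lines.Add(line);
            return lines.ToArray();
        }

        static string Assemble_text(string[] lines)
        {
            return string.Join("\n", lines);
        }

        static void Main()
        {
            Console.WriteLine(WordWrap("the quick brown fox jumps over the lazy dog", 10));
        }
    }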
Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality; whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step. Since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation to check whether this works does not do anything to split long words yet; the input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin's solution.[4] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... is a difference in LOC that important as long as it's in the same ballpark? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself. What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen it all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows, because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps teams become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership, because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architekt's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type, matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".


  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04, even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods such as Startup Disk Creator, with no difference). On the Server installer, it gets to "Detecting hardware" (the second one, about disks and all other hardware, not network hardware) and then either hangs at 0% (I waited 24 hours) or reboots after a minute or two.

    My system (late 2007):
    - ASUS P5NSLI motherboard
    - Intel Core 2 Duo E6600 2.4GHz
    - 2 x 1GB Corsair 667MHz RAM
    - nVidia GeForce 6600

    I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM installed (I tried each one, to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa, in all combinations) to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.)

    Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots:

        Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected
        Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5
        Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode
        Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64
        Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv
        Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv
        Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10
        Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10
        Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode
        Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64
        Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv
        Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv
        Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11
        Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11
        Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1
        Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64
        Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd
        Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd
        Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14
        Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15
        Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
        Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo
        Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300)
        Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133
        Jan 2 01:04:04 kernel: [ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
        Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133
        Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5
        Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0
        Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off
        Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2
        Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk
        Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300)
        Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300)

    Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

