Search Results


  • Communities - The importance of exchange and discussion

    Communication with your environment is an essential part of everyone's life. And it doesn't matter whether you are living in a rural area in the middle of nowhere, in the pulsating heart of a big city, or, in my case, on a wonderful island in the Indian Ocean. The ability to exchange your thoughts, your experience and your worries with another person helps you to get different points of view and new ideas on how to resolve an issue you might be confronted with.

    Benefits of community work

    What happens to be common sense in your daily life also applies to your work environment. Working in IT, or ICT as it is called in Mauritius, requires a lot of reading and learning: not only during your lectures at university but also with your colleagues on a project assignment and, hopefully, with 'unknown' pals in the universe of online communities. At least I can say that I learned quite a lot from other developers' code, their responses in various forums, their numerous blog articles, and while attending local user group meetings.

    When I started to work as a professional software developer (or engineer, some may say) years ago, I immediately checked for communities covering the programming language, the database technology and other vital aspects of software development in general. Luckily, they weren't too difficult to find. My employer had a subscription to the monthly magazines and newsletters of a national organisation which also ran the biggest forum in that area. Getting in touch with other developers and reading about their common problems, but also their solutions, was a huge benefit to my growth.

    Image courtesy of Michael Kappel (CC BY-NC 2.0)

    Active participation and regular contribution to this community gave me some nice advantages, too. Within three years I was listed as a conference speaker at the annual developers' conference and presented several sessions on different topics during consecutive years. Back in 2004, I took over the responsibility and management of the monthly meetings of a regional user group, and organised it for more than two years. Furthermore, I was invited to the newly founded community programme of Microsoft Germany (Community Leader/Insider Program - CLIP). My website on Active FoxPro Pages was nominated in the second batch of online communities. Thanks to my community work and the advice I provided to others, I had the honour of being awarded Microsoft Most Valuable Professional (MVP) - Visual Developer for Visual FoxPro in the years 2006 and 2007. It was a great experience to meet other like-minded people, and I'm really grateful for that. Just in case, more details are listed in my Curriculum Vitae. But this all changed when I moved to Mauritius...

    Cyber island Mauritius?

    During the first months in Mauritius I was way too busy to think about community activities at all. First of all, there was the new company that had to be set up and the new staff to be trained, and of course the communication workflows with the project managers back in Germany had to be sorted out, too. Second, I had to get a grip on private matters like getting the basics for my new household and exploring the neighbourhood, and last but not least I needed a break from the hectic and intensive work prior to my departure. As soon as the sea literally calmed down, I started to have conversations with my colleagues about communities and user groups. Sadly, it turned out that there were none, or at least no one was aware of any at that time. Oh oh, what did I do?
    Anyway, having this kind of background and very positive experience with offline and online activities, I decided for myself that some day I was going to found a community in Mauritius for all kinds of IT/ICT-related fields. The main focus might be on software development, but not on a particular technology or methodology. It was clear to me that it should be an open infrastructure: anyone is welcome to join, to experience, to share and to contribute if they would like to. That was the idea at the time...

    OK, fast-forward to recent events. At the end of October 2012 I was invited to an event called Open Days, organised by Microsoft Indian Ocean Islands together with other local partners and resellers. There I got in touch with local Technical Evangelist Arnaud Meslier, and we had a good conversation on communities during the breaks. Eventually I left a good impression on him, as we have been chatting on Facebook and Skype from time to time since then. Well, seeing that my personal and professional surroundings had settled down and were running smoothly, having that great exchange and contact with Microsoft IOI (again), and being really eager to revive my intentions from 2007, I recently founded a new community:

    Mauritius Software Craftsmanship Community - #MSCC

    It took me a while to settle on the name, but it was obvious that the community should not be attached to one single technology, like, say, a .NET user group, Oracle developers, or Joomla friends (these are fictitious names). There are several other reasons why I came up with 'craftsmanship' as the core topic of this community. The expression 'engineering' didn't feel right for the fields covered. Software development in all kinds of facets is a craft, and therefore demands a lot of practice but also guidance from more experienced developers. It also includes the process of designing, modelling and drafting ideas; it has to deal with various types of tests and test methodologies; and of course it should be focused on flexible and agile ways of working, in order to meet and exceed a customer's request for a solution.

    Next, I was looking for an easy way to handle the organisation of events and meeting appointments. Having used all kinds of social media platforms like Google+, LinkedIn, Facebook, Xing, etc., I was never really confident about their event-handling features. More by chance I stumbled upon Meetup.com, and in combination with the other entities (G+ Communities, FB Pages or Groups) I am looking forward to advertising and managing all future activities there: Mauritius Software Craftsmanship Community

    This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed about the latest updates. And last but not least, there will be a Trello workspace to collect and share ideas and provide downloads of slides, etc.

    Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius.
    Following is an overview of the current networks:

    Twitter - Latest updates and quickies
    Google+ - Community channel
    Facebook - Community Page
    LinkedIn - Community Group
    Trello - Collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

    Your future looks bright

    Running and participating in a user group or any kind of community usually provides quite a number of advantages for everyone. On the one hand, it is very enjoyable for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or of recent problems they had to tackle; on the other hand, there are lots of companies that have various support programmes or sponsorships tailored especially for user groups. At the moment I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and, as said, companies provide all kinds of goodies, books free of charge, and sometimes even licences for communities. Meeting other software developers or IT guys also broadens your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees.

    Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here on the blog or join the conversations in the social networks mentioned above. Let's get this community up and running, my fellow Mauritians!

    Read the article

  • Upgrading Agent Controllers in Oracle Enterprise Manager Ops Center 12c

    - by S Stelting
    Oracle Enterprise Manager Ops Center 12c recently released an upgrade for Solaris Agent Controllers. In this week's blog post, we'll show you how to upgrade agent controllers. Detailed instructions about upgrading Agent Controllers are available in the product documentation here. This blog post uses an Enterprise Controller which is configured for connected mode operation. If you'd like to apply the agent update in a disconnected installation, additional instructions are available here.

    Step 1: Download Agent Controller Updates

    With a connected mode Ops Center installation, you can check for product updates at any time by selecting the Enterprise Controller from the left-hand Administration navigation tab. Select the right-hand Action link “Ops Center Downloads” to open a pop-up dialog displaying any new product updates. In this example, the Enterprise Controller has already been upgraded to the latest version (Update 1, also shown as build version 2076), so only the Agent Controller updates appear. There are three updates available: one for Solaris 10 X86, one for Solaris 8-10 SPARC, and one for all versions of Solaris 11. Note that the last update in the screen shot is the Solaris 11 update; for details on any of the downloads, place your mouse over the information icon under the Details column for a pop-up text region. Select the software to download and click the Next button to display the Ops Center license agreement. Review and click the check box to accept the license agreement, then click the Next button to begin downloading the software. The status screen shows the current download status. If desired, you can perform the downloads as a background job: simply click the check box, then click the Next button to proceed to the summary screen. The summary screen shows the updates to be downloaded as well as the current status. Clicking the Finish button will close the dialog and return to the browser UI. The download job will continue to run in Ops Center, and progress can still be viewed from the jobs menu at the bottom of the browser window.

    Step 2: Check the Version of Existing Agent Controllers

    After the download job completes, you can check the availability of agent updates as well as the current versions of your Agent Controllers from the left-hand Assets navigation tab. Select “Operating Systems” from the pull-down list to display only OS assets. Next, select “Solaris” in the left-hand tab to display the Solaris assets. Finally, select the Summary tab in the center display panel to show which versions of agent controllers are installed in your data center. Notice that a few of the OS assets are not displayed in the Agent Controllers tab. Ops Center will not display OS instances which do not have an Agent Controller installation. This includes Enterprise Controllers and Proxy Controllers (unless the agent has been activated on the OS instance) and OS instances using agentless management. For Agent Controllers which support an update, the version of the new agent software (in this example, 2083) appears to the right of the currently installed version.

    Step 3: Upgrade Your Agent Controllers

    If desired, you can upgrade agent controllers from the previous screen by selecting the desired systems and clicking the Upgrade button. Alternatively, you can click the link “Upgrade All Agent Controllers” in the right-hand Actions menu. In either case, a pop-up dialog lets you start the upgrade process.
    The first screen in the dialog lets you choose the upgrade method. Ops Center provides three ways to upgrade agent controllers:

    Automatic Upgrade: If Agent Controllers are running on all assets, Ops Center can automatically upgrade the software to the latest version without requiring any login credentials to the system.

    SSH using a single set of credentials: If all assets use the same login credentials, you can apply a single set to all assets for the upgrade process. The login credentials are the same ones used for asset discovery and management, which are stored in the Plan Management navigation tab under Credentials.

    SSH using individual credentials: If assets use different login credentials, you can select a different set for each asset.

    After selecting the upgrade method, click the Next button to proceed to the summary screen. Click the Finish button to close the pop-up dialog and start the upgrade job for the agent controllers. The upgrade job runs a series of tasks in parallel and will upgrade all agents which have been selected. Once the job completes, the OS instances in your data center will be upgraded and running the latest version of the Agent Controller software.

    Read the article

  • Mocking property sets

    - by mehfuzh
    In this post, I will show how you can mock property sets with expected values, or even actions, using JustMock. To begin, we have a sample interface:

        public interface IFoo
        {
            int Value { get; set; }
        }

    Now we can create a mock that will throw on any call other than the one expected; generally this is a strict mock, and we can do it like this:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Here, the method for recording our expectation for a set is Mock.ArrangeSet, where we can directly set our expectations or even use matchers, like this:

        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);

        Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));

        foo.Value = 4;
        foo.Value = 5;

        Assert.Throws<MockException>(() => foo.Value = 3);

    In the example, any set of Value not satisfying the matcher expression will throw a MockException, as this is a strict mock. But what about loose mocks, where we also have to assert the set? Here, let's take an interface with an indexed property. Indexers are treated the same way as properties, since basic indexers let you access your class as if it were an array.

        public interface IFooIndexed
        {
            string this[int key] { get; set; }
        }

    We want to set up a value for a particular index, then pass that mock to some implementer where it will actually be called. Once done, we want to assert that it has been invoked properly.

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = "ping");

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = "ping");

    In the example above, both values are user-defined. It might happen that we want to make it more dynamic; in this example, I arrange the set for any value and finally check whether it was set with the one I am looking for.

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0));

    This is more or less the basics of mocking property sets, but we can further have a particular set throw an exception or perform our own task, like:

        Mock.ArrangeSet(() => foo.MyProperty = 10).Throws(new ArgumentException());

    Or:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Or call the original setter; in this example it will throw a NotImplementedException:

        var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal();
        Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });

    Finally, try all these, find issues, post them to the forum, and make it work for you :-). Hope that helps,
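    As a postscript, here is a minimal combined sketch that puts the pieces above together. It uses only the JustMock APIs shown in this post (Mock.Create, Mock.ArrangeSet, Throws, Mock.AssertSet, Arg) plus the IFooIndexed interface defined earlier; the index values and the exception message are illustrative, not prescribed by the library.

        // Strict mock: index 0 accepts any value, index 1 is arranged to throw.
        var foo = Mock.Create<IFooIndexed>(BehaviorMode.Strict);

        Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());
        Mock.ArrangeSet(() => foo[1] = Arg.Any<string>()).Throws(new ArgumentException("read-only slot"));

        foo[0] = "ping";                                          // satisfies the first arrangement
        Assert.Throws<ArgumentException>(() => foo[1] = "pong");  // throws as arranged

        // Verify that the set on index 0 really happened.
        Mock.AssertSet(() => foo[0] = "ping");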

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
    The Past

    A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we evaluated a few of the options and settled on Tortoise SVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said, we are Windows programmers… command prompt bad!!) and it was free. Up to now we have been pretty happy with SVN, as it removed many of the worries I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times we have been unhappy are when we have had SVN hell days, which pretty much occur when you are doing something out of the norm and suddenly SVN just won't resolve conflicts or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN, but a problem augmented by SVN. When you have SVN hell days you want to curse SVN!

    With that in mind, I have recently been taking another look at our source code control. I have explored using GIT and was very impressed by it, and have also looked at TFS. From a source code control perspective I don't want to get into a heated discussion on which one is better, but I do want to mention that I wear two hats in my organization, software developer and manager, and with the manager hat on I tend to sway towards the TFS route. So when I was given a coupon to test the DiscountASP.NET Team Foundation Server service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also make the first step towards having an integrated development management system. Some of the things that appeal to me about DiscountASP's offering are the following:

    Basic management/planning facilities like to-do lists inside Visual Studio

    Daily backup of data on the server; we are developers, not IT managers, so the more of this I can outsource the better

    Distributed solution; all of us work remotely, so this was a big one as well

    Registering and Setting Up with DiscountASP.NET

    The whole registration process was simple and intuitive. The web interface is not the most visually impressive one, but it is functional, and a few seconds after I clicked the last submit button an email was sitting in my inbox giving me my control panel username and suggesting that I read the "Getting Started" article. The getting started article was easy to read and understand, so no complaints there either. Next, to set my dev environment to work. With a few references to the getting started article I had completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service that I signed up for, I have access for 5 users, which is sufficient for my internal needs. So from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and the URL.

    My Concerns Resolved

    1) Security

    So, a few concerns I had about the service. First and foremost: is it secure? I would hate for someone to get access to our code, and the whole idea of putting it up on the internet is a concern for me. Turning to the Knowledge Base on the DiscountASP website, this is one of the first questions I can see answered. According to them it is secure. I have extracted their comment below regarding this:

    Our TFS hosting service is secure. We only accept HTTPS connections ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details.

    2) Web Portal Access

    The other big concern I have is regarding web portal access. In the ideal world I would like to be able to give my end users access to a web portal for reporting bugs etc. When I initially read through the FAQ on the site it mentioned that there was web portal access, but from what I can see this is just for "users". Since I am limited to 5 users for the account, it would not be practical to set up external users that we could get feedback from on bugs etc. I would be interested to know if this is possible, and if so, if someone could post it in the comments it would be much appreciated. If this isn't possible, it is a slight let-down, as we rely heavily on end user feedback and it would have been ideal to have gotten this within the service. Other than those two items, I didn't have any real concerns that were unresolved.

    So where do I go from here?

    Time passed by from the initial writing of this post, and as work whirred in and out of my inbox I still had not had a proper opportunity to give the service a test run. Recently, though, things began to slow down, and then, surprise surprise, I had another SVN hell day. With that experience I had a new-found resolve to get our team on TFS, and so today we are going to start to use the service as a team. I am hoping that I do not have TFS hell days, but if I do, I will be sure to write about them. In short, the verdict is still out on whether this service is going to be invaluable to my business or whether it will create more headaches than it is worth, BUT I am hoping it will be an invaluable service. I will only really be able to determine that in a few months… till then!

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

    CodePlex Daily Summary for Monday, June 02, 2014

    Popular Releases

    Portable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as if they were regular SQL functions). Impact: Xamarin iOS

    Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines - Added all the parameters available from the Timeline endpoints in Tweetinvi. This is available for HomeTimeline, UserTimeline and MentionsTimeline.
        // Simple query
        var tweets = Timeline.GetHomeTimeline();
        // Create a parameter for queries with specific parameters
        var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter();
        timelineParameter.ExcludeReplies = true;
        timelineParameter.TrimUser = true;
        var tweets = Timeline.GetHomeTimeline(timelineParameter);
    Tweets - Add mis...

    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...

    Tooltip Web Preview: ToolTip Web Preview: Version 1.0

    Database Helper: Release 1.0.0.0: First release of Database Helper

    CoMaSy: CoMaSy 1.0.2: Contact Management System

    Image View Slider: Image View Slider: This is a .NET component. We created this using VB.NET. Here you can use an Image Viewer with several properties in your application form. We wish somebody to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau. PBK GENAP 2014 - TI UKDW

    Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains the missing features in the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new? The following examples: Insert Picture in Word Document; Insert Comments; Set Page Borders; Mail Merge from XML Data Source; Moving the Cursor. Feedback and suggestions: many more examples are yet to come here, so keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.

    SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report that displays all in-game resources in a concise report. Added temp directory cleaner to keep excess files from building up. Fixed use of colors on the windows to work better with desktop schemes. Adding base support for multilingual resources; this will allow loading of the Space Engineers resources to show localized names, and display localized date a...

    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.

    Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0

    Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. The .dll component is inside.

    Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. This component was made with .NET Framework 4 and the programming language is C#.

    SQLSetupHelper: 1.0.0.0: First stable version of SQL Setup

    Composite Iconote: Composite Iconote: This composite has been made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.

    Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: Int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI, so switch to the new Q16-HDRI build if you need HDRI.

    fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop-down was different when generating the command line in some cases, as reported in https://findandreplace.codeplex.com/workitem/34. Fix for "Backslash inserted before dot in replacement text", reported here: https://findandreplace.codeplex.com/discussions/541024. Fix for finding replacing...

    VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: Changes. NEW: Added support for 'GokoImage.com' links; 'ViperII.com' links; 'PixxxView.com' links; 'ImgRex.com' links; 'PixLiv.com' links; 'imgsee.me' links; 'ImgS.it' links

    Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix connecting to a connection with custom authentication without saved password. Tools improvement. New tool! Solution Components Mover (v1.2014.5.22): transfer solution components from one solution to another one. Import/Export NN relationships (v1.2014.3.7): allows you to import and export many-to-many relationships. Tools updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.

    New Projects

    [ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D is a game that revisits the classic Pong concept in a 3D environment, using C# and the Unity3D engine.

    Bootstrap for MVC: Bootstrap for MVC.

    F. A. Q. - Najczesciej zadawane pytania: FAQ

    ForuMvc: Technifutur short project

    homework456: no.

    iStoody: Studies organizer solution, available through apps for Windows and Windows Phone.

    liaoliao

    Price Tracker: Allows a user to track prices based on parsed emails

    RoslynEval: Roslyn

    Rx Hub: Rx Hub provides server-side computation initiated by subscriber requests

    Sharepoint Online AppCache Reset: We are an IT resource company providing virtual IT services and custom and open-source programs for everyday needs.

    UnitConversionLib : Smart Unit Conversion Library in C#: Conversion of units, arithmetic operations and parsing quantities with their units at run time. Smart unit converter and conversion lib for physical quantities,

    Read the article

  • obj-c classes and sub classes (Cocos2d) conversion

    - by Lewis
    Hi, I'm using this version of cocos2d: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers

    It supports UIGestureRecognizers within a CCLayer in a cocos2d scene, like so:

    @interface HelloWorldLayer : CCLayer <UIGestureRecognizerDelegate>
    {
    }

    Now I want to make this custom gesture work within the scene, attaching it to a sprite in cocos2d:

    #import <Foundation/Foundation.h>
    #import <UIKit/UIGestureRecognizerSubclass.h>

    @protocol OneFingerRotationGestureRecognizerDelegate <NSObject>
    @optional
    - (void)rotation:(CGFloat)angle;
    - (void)finalAngle:(CGFloat)angle;
    @end

    @interface OneFingerRotationGestureRecognizer : UIGestureRecognizer
    {
        CGPoint midPoint;
        CGFloat innerRadius;
        CGFloat outerRadius;
        CGFloat cumulatedAngle;
        id <OneFingerRotationGestureRecognizerDelegate> target;
    }

    - (id)initWithMidPoint:(CGPoint)midPoint
               innerRadius:(CGFloat)innerRadius
               outerRadius:(CGFloat)outerRadius
                    target:(id)target;
    - (void)reset;
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
    @end

    #include <math.h>
    #import "OneFingerRotationGestureRecognizer.h"

    @implementation OneFingerRotationGestureRecognizer

    // private helper functions
    CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2);
    CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                       CGPoint beginLineB, CGPoint endLineB);

    - (id)initWithMidPoint:(CGPoint)_midPoint
               innerRadius:(CGFloat)_innerRadius
               outerRadius:(CGFloat)_outerRadius
                    target:(id <OneFingerRotationGestureRecognizerDelegate>)_target
    {
        if ((self = [super initWithTarget:_target action:nil]))
        {
            midPoint = _midPoint;
            innerRadius = _innerRadius;
            outerRadius = _outerRadius;
            target = _target;
        }
        return self;
    }

    /** Calculates the distance between point1 and point2. */
    CGFloat distanceBetweenPoints(CGPoint point1, CGPoint point2)
    {
        CGFloat dx = point1.x - point2.x;
        CGFloat dy = point1.y - point2.y;
        return sqrt(dx*dx + dy*dy);
    }

    CGFloat angleBetweenLinesInDegrees(CGPoint beginLineA, CGPoint endLineA,
                                       CGPoint beginLineB, CGPoint endLineB)
    {
        CGFloat a = endLineA.x - beginLineA.x;
        CGFloat b = endLineA.y - beginLineA.y;
        CGFloat c = endLineB.x - beginLineB.x;
        CGFloat d = endLineB.y - beginLineB.y;
        CGFloat atanA = atan2(a, b);
        CGFloat atanB = atan2(c, d);
        // convert radians to degrees
        return (atanA - atanB) * 180 / M_PI;
    }

    #pragma mark - UIGestureRecognizer implementation

    - (void)reset
    {
        [super reset];
        cumulatedAngle = 0;
    }

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        [super touchesBegan:touches withEvent:event];
        if ([touches count] != 1)
        {
            self.state = UIGestureRecognizerStateFailed;
            return;
        }
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        [super touchesMoved:touches withEvent:event];
        if (self.state == UIGestureRecognizerStateFailed) return;
        CGPoint nowPoint = [[touches anyObject] locationInView:self.view];
        CGPoint prevPoint = [[touches anyObject] previousLocationInView:self.view];
        // make sure the new point is within the area
        CGFloat distance = distanceBetweenPoints(midPoint, nowPoint);
        if (innerRadius <= distance && distance <= outerRadius)
        {
            // calculate rotation angle between two points
            CGFloat angle = angleBetweenLinesInDegrees(midPoint, prevPoint, midPoint, nowPoint);
            // fix value, if the 12 o'clock position is between prevPoint and nowPoint
            if (angle > 180)
            {
                angle -= 360;
            }
            else if (angle < -180)
            {
                angle += 360;
            }
            // sum up single steps
            cumulatedAngle += angle;
            // call delegate
            if ([target respondsToSelector:@selector(rotation:)])
            {
                [target rotation:angle];
            }
        }
        else
        {
            // finger moved outside the area
            self.state = UIGestureRecognizerStateFailed;
        }
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
    {
        [super touchesEnded:touches withEvent:event];
        if (self.state == UIGestureRecognizerStatePossible)
        {
            self.state = UIGestureRecognizerStateRecognized;
            if ([target respondsToSelector:@selector(finalAngle:)])
            {
                [target finalAngle:cumulatedAngle];
            }
        }
        else
        {
            self.state = UIGestureRecognizerStateFailed;
        }
        cumulatedAngle = 0;
    }

    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
    {
        [super touchesCancelled:touches withEvent:event];
        self.state = UIGestureRecognizerStateFailed;
        cumulatedAngle = 0;
    }

    @end

    Header file for the view controller:

    #import "OneFingerRotationGestureRecognizer.h"

    @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>

    @property (nonatomic, strong) IBOutlet UIImageView *image;
    @property (nonatomic, strong) IBOutlet UITextField *textDisplay;

    @end

    Then this is in the .m file:

    gestureRecognizer = [[OneFingerRotationGestureRecognizer alloc] initWithMidPoint:midPoint
                                                                         innerRadius:outRadius / 3
                                                                         outerRadius:outRadius
                                                                              target:self];
    [self.view addGestureRecognizer:gestureRecognizer];

    Now my question is: is it possible to add this custom gesture to the cocos2d project found on that GitHub page, and if so, what do I need to change in the OneFingerRotationGestureRecognizerDelegate to get it to work within cocos2d? At the minute it is set up in a standard iOS project rather than a cocos2d project, and I do not know enough about UIViews and classing/subclassing in Obj-C to get this to work. Also, it seems to inherit from a UIView, whereas cocos2d uses CCLayer.

    Kind regards, Lewis.

    I also realise I may not have included enough code from the custom gesture project for readers to interpret it fully, so the full project can be found here: https://github.com/melle/OneFingerRotationGestureDemo

    Read the article

  • Get your TFS 2012 task board demo ready in under 1 minute

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas

    In this blog post, I'll show you how to use the 'TfsDemoSetup' application to configure and set up the TFS 2012 task board for a demo in well under 1 minute.

    Step 1 – Note what you get with a newly created Team Project

    1. Create a new Team Project on TFS Preview
    2. Click Create Project
    3. The project creation has completed
    4. Open the team web access and have a look at the home page

    Note – Since I created the project, I am the only Team Member. A default Team by the name AdventureWorks Team has been created. A few sprints have been assigned to the default team, but no dates for sprint start and end have been specified. A default Area Path for the team is missing.

    Step 2 – Download the TFS Demo Setup Console application from Codeplex

    1. Navigate to the TFS Demo Setup project on codeplex https://tfsdemosetup.codeplex.com/
    2. Download Instructions and TFSDemo_<version>
    3. Follow the steps in the Instructions.txt file
    4. Unzip TFSDemo_<version> and open the target folder. There are two important files in this folder:
       DemoDictionary.xml – This file contains the settings using which the demo environment will be set up
       SetupTfsDemo.exe – This will run the TFS demo environment setup application

    Step 3 – Configure the setup (i.e. team name, members, sprint dates, etc.)

    1. Open up DemoDictionary.xml
    2. Walk through DemoDictionary.xml

    a. Basic Team Details
       <Name> – Specify the name of the team
       <Description> – Specify a description to go with the team
       <SetAsDefaultTeam> – This accepts a value "true/false"; when set to true, the newly created team will be set as the default team in the project
       <BacklogIterationPath> – Specify a backlog iteration path for the team

    b. Iterations – The iterations you specify here will be set as the team's iterations
       <Iterations> – Accepts multiple <Iteration> nodes
       <Iteration> – This is the most granular level of an iteration
       <Path> – The path to the sprint; sample values: Release 1\Sprint 1 or Release 2\Sprint 2
       <StartDate> – The sprint start date; this accepts the format yyyy-MM-dd
       <FinishDate> – The sprint finish date; this accepts the format yyyy-MM-dd

    c. Team Members – Team members that need to be added to the newly created team will be added under this section
       <TeamMembers> – Accepts multiple <TeamMember> nodes
       <TeamMember> – This is the most granular level of a team member
       <User> – This accepts the username. If you are running this against TfsPreview, then the Live ID of the user will need to be passed; if you are running this against TFS Server, then the user id, i.e. Domain\UserName, will need to be passed
       <Team> – Specify the name of the team that you want the user to be assigned to

    d. WorkItems – This section will allow you to add work items (product backlog items and linked tasks) to the current sprint of the team
       <WorkItems> – Accepts multiple <WorkItem> nodes
       <WorkItem> – Accepts one <ProductBacklogItem> and multiple <Task> nodes
       <ProductBacklogItem> – Used to create a Product Backlog Item type work item
       <Title> – The title of the Product Backlog Item
       <Description> – The description of the Product Backlog type work item
       <AssignedTo> – Used to assign the work item to a team member.
    The team member name or email address can be passed
       <Effort> – The total effort required to complete the Product Backlog Item
       <Task> – Used to create a task linked to the Product Backlog type work item
       <Title> – The title of the task type work item
       <Description> – The description of the task type work item
       <AssignedTo> – Used to assign the work item to a team member. The team member name or email address can be passed
       <RemainingWork> – The remaining effort to complete the task type work item

    Step 4 – Set up the demo environment against the newly created Team Project

    1. Run SetupTfsDemo.exe
    2. Enter Y or y at the prompt to continue setting up the TFS demo environment
    3. Select the newly created Team Project. For this blog post I had created the Team Project – AdventureWorks, so that is what I'll select in the Connect to TFS Server pop-up
    4. Click Connect and follow the messages that are written to the console application

    Step 5 – Validate that the demo environment is set up as per the configuration

    1. The team web access is all lit up: you have a sprint, a burn-down chart, team members...
    2. The team Demo has been added and has been set up as the default team
    3. The Sprint Backlog Iteration path, sprints and sprint start and finish dates have been set
    4. The default area path has been set up
    5. Taskboard – Backlog items view
    6. Taskboard – Team members view

    Step 6 – Exception handling!

    1. This solution has been tested against TFS 2012 Service/Server for the Scrum 2.1 process template.
    2. You are likely to run into an exception if you mess up the config file.
    3. If the team already exists and you run the console app to set up the team (with the same name), you will run into exceptions.

    Please remember this is just an alpha release; if you have any feedback please leave a comment! Didn't I say that it would just take 1 minute? Enjoy!
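    For the curious, the work item portion of the dictionary maps quite naturally onto the TFS client object model. Below is a rough, illustrative sketch of how a console app like SetupTfsDemo.exe might create a <ProductBacklogItem> with a linked <Task> using the standard Microsoft.TeamFoundation client assemblies. This is my own sketch, not the actual TfsDemoSetup source; the collection URL, project name and field values are placeholders.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class DemoSeeder
        {
            static void Main()
            {
                // Hypothetical collection URL and project name - substitute your own.
                var tpc = new TfsTeamProjectCollection(
                    new Uri("https://youraccount.tfspreview.com/DefaultCollection"));
                tpc.EnsureAuthenticated();

                var store = tpc.GetService<WorkItemStore>();
                Project project = store.Projects["AdventureWorks"];

                // Mirrors a <ProductBacklogItem> node from DemoDictionary.xml.
                WorkItemType pbiType = project.WorkItemTypes["Product Backlog Item"];
                var pbi = new WorkItem(pbiType) { Title = "Sample backlog item" };
                pbi.Description = "Created from DemoDictionary.xml";
                pbi.Fields["Assigned To"].Value = "Tarun Arora";
                pbi.Fields["Effort"].Value = 5;
                pbi.Save();

                // Mirrors a <Task> node, linked back to the backlog item.
                WorkItemType taskType = project.WorkItemTypes["Task"];
                var task = new WorkItem(taskType) { Title = "Sample task" };
                task.Fields["Remaining Work"].Value = 3;
                task.Links.Add(new RelatedLink(pbi.Id));
                task.Save();

                Console.WriteLine("Created PBI {0} and task {1}", pbi.Id, task.Id);
            }
        }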

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

    CREATE TABLE `sbtest` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `k` int(10) unsigned NOT NULL DEFAULT '0',
      `c` char(120) NOT NULL DEFAULT '',
      `pad` char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements:

    UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

    SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:

    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for worker counts from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

    binlog-format=STATEMENT
    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
    - current (i.e. single-threaded) sequential mode
    - 1 x IO thread and 1 x SQL thread
    - SQL thread both reads and executes the events

    1 worker thread:
    - sequential mode
    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
    - coordinator reads the event and hands it to the worker, who executes it

    2+ worker threads:
    - parallel execution
    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
    - coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
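    As an aside, to make the timing loop from the methodology concrete, here is a small illustrative sketch in C# using the MySql.Data connector (the benchmark itself was presumably scripted differently). It follows the steps described above: set the worker count, start the SQL thread, block on MASTER_POS_WAIT, and divide the query count by the elapsed time. The connection string and binlog coordinates are placeholders, and the data-directory reset between runs is omitted.

        using System;
        using System.Diagnostics;
        using MySql.Data.MySqlClient;

        class SlaveBenchmark
        {
            static void Exec(MySqlConnection conn, string sql)
            {
                using (var cmd = new MySqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }

            // Times how long the slave SQL thread takes to apply the pending
            // updates and returns queries per second. The binlog file/position
            // mark the end of the 100k update statements on the master.
            static double MeasureQps(string connStr, int workers,
                                     string binlogFile, long binlogPos, int totalQueries)
            {
                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();
                    Exec(conn, "SET GLOBAL slave_parallel_workers=" + workers);
                    Exec(conn, "START SLAVE IO_THREAD");
                    // (In the real test, wait here until the relay log is fully populated.)

                    var clock = Stopwatch.StartNew();
                    Exec(conn, "START SLAVE SQL_THREAD");

                    // MASTER_POS_WAIT blocks until the SQL thread reaches the position.
                    using (var wait = new MySqlCommand(
                        string.Format("SELECT MASTER_POS_WAIT('{0}', {1})", binlogFile, binlogPos), conn))
                        wait.ExecuteScalar();

                    clock.Stop();
                    return totalQueries / clock.Elapsed.TotalSeconds;
                }
            }

            static void Main()
            {
                // Placeholder connection string and binlog coordinates.
                double qps = MeasureQps("server=slave-host;uid=bench;pwd=secret",
                                        10, "mysql-bin.000002", 123456789, 100000);
                Console.WriteLine("10 workers: {0:F0} QPS", qps);
                // Repeat for other worker counts after restoring the slave data
                // directory and repositioning replication, as described above.
            }
        }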
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

    relay-log-info-repository=TABLE
    master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about “Highlighting GridView Rows on MouseOver”. I've noticed that many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it as a reference for others who might need it. In this demo I'm going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.NET way, or this one: GridView Custom Paging with LINQ.

    To get started, let's implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
        <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
        <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
        <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
        <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
            <Columns>
                <asp:BoundField DataField="Company" HeaderText="Company" />
                <asp:BoundField DataField="Name" HeaderText="Name" />
                <asp:BoundField DataField="Title" HeaderText="Title" />
                <asp:BoundField DataField="Address" HeaderText="Address" />
            </Columns>
        </asp:GridView>
    </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings, we will use some HiddenField controls to store the data, so that on postback we can reference their values.

    Now here are the JavaScript functions:

    <asp:Content ID="Content1" runat="server" ContentPlaceHolderID="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">

            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);

                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }

        </script>
    </asp:Content>

    ChangeRowColor() is the function that sets the background color of the selected row. It is also where we store the parent container and row index values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready event. This event will be fired once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function is where we read the currently selected values stored in the HiddenFields and pass them to the ChangeRowColor() function to retain the highlighted row.

    Finally, here's the code-behind part:

    protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType == DataControlRowType.DataRow)
        {
            e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
        }
    }

    The code above is responsible for attaching the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output below:

    That's it! I hope someone finds this post useful!

    Technorati Tags: jQuery,GridView,JavaScript,TipTricks
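    Since the selection is persisted in the two HiddenFields, the server can read it back on postback as well. As a small illustrative extension (this Button1_Click handler is my own addition, not part of the original demo), you could do something like:

    protected void Button1_Click(object sender, EventArgs e)
    {
        // The client script stores the zero-based row index in hfCurrentRowIndex,
        // so it survives the postback and can be read here.
        int rowIndex;
        if (int.TryParse(hfCurrentRowIndex.Value, out rowIndex) &&
            rowIndex >= 0 && rowIndex < grdCustomer.Rows.Count)
        {
            // For example, pull the Company cell of the selected row.
            string company = grdCustomer.Rows[rowIndex].Cells[0].Text;
            Label1.Text = string.Format("{0} ({1})", rowIndex + 1, company);
        }
    }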

    Read the article

  • How to set Grub to automatically load Xen kernel

    - by Cerin
    How do you configure Grub to automatically use the Xen kernel under Ubuntu 11.10? No matter what I do, it loads the first menuentry. The only way I can get it to load Xen is to manually select the kernel, which I can't do if I have to reboot the server remotely, or there's a power failure and the machine automatically boots up when power's restored, etc. It's driving me nuts. In my /boot/grub/grub.cfg, the Xen kernel is at index 4 (i.e. it's the 5th menuentry). So I've tried:

        - Setting GRUB_DEFAULT=4, and running sudo update-grub
        - Setting GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true, and running sudo update-grub
        - Setting GRUB_DEFAULT="Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server", and running sudo update-grub

    None of these work. It continues to load the first menuentry, which is "Ubuntu, with Linux 3.0.0-16-server". Below is my current /boot/grub/grub.cfg. What am I doing wrong?

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          set have_grubenv=true
          load_env
        fi
        set default="Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
        }
        function load_video {
          insmod vbe
          insmod vga
          insmod video_bochs
          insmod video_cirrus
        }
        insmod raid
        insmod mdraid1x
        insmod part_msdos
        insmod part_msdos
        insmod ext2
        set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
        search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=auto
          load_video
          insmod gfxterm
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          set locale_dir=($root)/boot/grub/locale
          set lang=en_US
          insmod gettext
        fi
        terminal_output gfxterm
        if [ "${recordfail}" = 1 ]; then
          set timeout=-1
        else
          set timeout=2
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        if background_color 44,0,30; then
          clear
        fi
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        if [ ${recordfail} != 1 ]; then
          if [ -e ${prefix}/gfxblacklist.txt ]; then
            if hwmatch ${prefix}/gfxblacklist.txt 3; then
              if [ ${match} = 0 ]; then
                set linux_gfx_mode=keep
              else
                set linux_gfx_mode=text
              fi
            else
              set linux_gfx_mode=text
            fi
          else
            set linux_gfx_mode=keep
          fi
        else
          set linux_gfx_mode=text
        fi
        export linux_gfx_mode
        if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
        menuentry 'Ubuntu, with Linux 3.0.0-16-server' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          set gfxpayload=$linux_gfx_mode
          insmod gzio
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          linux /boot/vmlinuz-3.0.0-16-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
          initrd /boot/initrd.img-3.0.0-16-server
        }
        menuentry 'Ubuntu, with Linux 3.0.0-16-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod gzio
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          echo 'Loading Linux 3.0.0-16-server ...'
          linux /boot/vmlinuz-3.0.0-16-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro recovery nomodeset
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-3.0.0-16-server
        }
        submenu "Previous Linux versions" {
          menuentry 'Ubuntu, with Linux 3.0.0-12-server' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            set gfxpayload=$linux_gfx_mode
            insmod gzio
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            linux /boot/vmlinuz-3.0.0-12-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
            initrd /boot/initrd.img-3.0.0-12-server
          }
          menuentry 'Ubuntu, with Linux 3.0.0-12-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod gzio
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            echo 'Loading Linux 3.0.0-12-server ...'
            linux /boot/vmlinuz-3.0.0-12-server root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro recovery nomodeset
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.0.0-12-server
          }
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        submenu "Xen 4.1-amd64" {
          menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /boot/xen-4.1-amd64.gz placeholder
            echo 'Loading Linux 3.0.0-16-server ...'
            module /boot/vmlinuz-3.0.0-16-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
            echo 'Loading initial ramdisk ...'
            module /boot/initrd.img-3.0.0-16-server
          }
          menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /boot/xen-4.1-amd64.gz placeholder
            echo 'Loading Linux 3.0.0-16-server ...'
            module /boot/vmlinuz-3.0.0-16-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro single
            echo 'Loading initial ramdisk ...'
            module /boot/initrd.img-3.0.0-16-server
          }
          menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-12-server' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /boot/xen-4.1-amd64.gz placeholder
            echo 'Loading Linux 3.0.0-12-server ...'
            module /boot/vmlinuz-3.0.0-12-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro
            echo 'Loading initial ramdisk ...'
            module /boot/initrd.img-3.0.0-12-server
          }
          menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-12-server (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod raid
            insmod mdraid1x
            insmod part_msdos
            insmod part_msdos
            insmod ext2
            set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
            search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /boot/xen-4.1-amd64.gz placeholder
            echo 'Loading Linux 3.0.0-12-server ...'
            module /boot/vmlinuz-3.0.0-12-server placeholder root=UUID=d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac ro single
            echo 'Loading initial ramdisk ...'
            module /boot/initrd.img-3.0.0-12-server
          }
        }
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          insmod raid
          insmod mdraid1x
          insmod part_msdos
          insmod part_msdos
          insmod ext2
          set root='(mduuid/be73165bc31d6f5cd00d05036c7b964f)'
          search --no-floppy --fs-uuid --set=root d72bad3f-9ed7-44b9-b3d1-d7af9f62a8ac
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f $prefix/custom.cfg ]; then
          source $prefix/custom.cfg;
        fi
        ### END /etc/grub.d/41_custom ###
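    One structural detail in the file above is easy to miss: the Xen entries are not top-level menuentry items at all - they sit inside the submenu "Xen 4.1-amd64" block generated by 20_linux_xen. GRUB 2 does not match a nested entry by its bare title or by a flat index, which would explain why all three attempts fall through to entry 0. A plausible fix, sketched against this exact grub.cfg (untested on the poster's machine), is to qualify GRUB_DEFAULT with the submenu title using GRUB 2's > separator:

        # /etc/default/grub
        # "submenu title>menuentry title" addresses an entry nested inside a submenu;
        # the numeric form "3>0" (submenu index>entry index) should work as well.
        GRUB_DEFAULT="Xen 4.1-amd64>Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.0.0-16-server"

        # then regenerate grub.cfg:
        sudo update-grub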

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-like operations such as sum, average and so forth. Oracle SQL also supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these. In database systems an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target; this made life easy for some cases, but it wasn't a very obvious gesture, plus it had other drawbacks with ordering of transformations (aggregate before join/lookup, after set and so forth) and with supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator. The aggregate component has a few interesting aspects.

    1. First and foremost, it defines the attributes projected from it - ODI will perform the grouping automatically; all you do is define the aggregation expressions for the aggregated columns. In 12c you can control this automatic grouping behavior so that you get the code you desire, so you can indicate that an attribute should not be included in the group by - that's what I did in the analytic SQL example using the aggregate component.

    2. The component has a few other properties of interest; it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).

    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by, from an example in the Oracle Database data warehousing guide where the SUM aggregate function is used along with the CUBE function in the group by clause.
    The SQL I am trying to generate looks like the following, from the data warehousing guide:

        SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
               TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
        FROM sales, customers, times, channels, countries
        WHERE sales.time_id=times.time_id
          AND sales.cust_id=customers.cust_id
          AND sales.channel_id=channels.channel_id
          AND customers.country_id = countries.country_id
          AND channels.channel_desc IN ('Direct Sales', 'Internet')
          AND times.calendar_month_desc IN ('2000-09', '2000-10')
          AND countries.country_iso_code IN ('GB', 'US')
        GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters and joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by as follows. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides for a familiar design experience. The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table with only those having the maximum commission projected. Hopefully this has given you a taster for some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. I'd be interested to know whether a deep dive into each component would be useful for folks. Any thoughts?
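    As a side note on point 2 above, here is a minimal, hypothetical illustration (using the EMP/DEPTNO names from the analytic example earlier, not code generated by ODI) of why HAVING needed to be separated from the row filter - its predicate operates on aggregated groups, which a plain filter cannot express:

        SELECT deptno, AVG(sal) AS avg_sal
        FROM emp
        GROUP BY deptno
        HAVING AVG(sal) > 2000;  -- evaluated after the GROUP BY, against each group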

    Read the article

  • Normal map applied as diffuse textures looks wrong

    - by KaiserJohaan
    Diffuse textures work fine, but I am having problems with normal maps, so I thought I'd try applying the normal map as the diffuse map in my fragment shader so I could see that everything is OK. I commented out my normal map code and just set the diffuse map to the normal map, and I get this: http://postimg.org/image/j9gudjl7r/ Looks like a smurf! This is the actual normal map of the main body: http://postimg.org/image/sbkyr6fg9/ Here is my fragment shader; notice I commented out the normal map code so I could debug the normal map as a diffuse texture:

        "#version 330 \n \
        \n \
        layout(std140) uniform; \n \
        \n \
        const int MAX_LIGHTS = 8; \n \
        \n \
        struct Light \n \
        { \n \
            vec4 mLightColor; \n \
            vec4 mLightPosition; \n \
            vec4 mLightDirection; \n \
        \n \
            int mLightType; \n \
            float mLightIntensity; \n \
            float mLightRadius; \n \
            float mMaxDistance; \n \
        }; \n \
        \n \
        uniform UnifLighting \n \
        { \n \
            vec4 mGamma; \n \
            vec3 mViewDirection; \n \
            int mNumLights; \n \
        \n \
            Light mLights[MAX_LIGHTS]; \n \
        } Lighting; \n \
        \n \
        uniform UnifMaterial \n \
        { \n \
            vec4 mDiffuseColor; \n \
            vec4 mAmbientColor; \n \
            vec4 mSpecularColor; \n \
            vec4 mEmissiveColor; \n \
        \n \
            bool mHasDiffuseTexture; \n \
            bool mHasNormalTexture; \n \
            bool mLightingEnabled; \n \
            float mSpecularShininess; \n \
        } Material; \n \
        \n \
        uniform sampler2D unifDiffuseTexture; \n \
        uniform sampler2D unifNormalTexture; \n \
        \n \
        in vec3 frag_position; \n \
        in vec3 frag_normal; \n \
        in vec2 frag_texcoord; \n \
        in vec3 frag_tangent; \n \
        in vec3 frag_bitangent; \n \
        \n \
        out vec4 finalColor; "
        " \n \
        \n \
        void CalcGaussianSpecular(in vec3 dirToLight, in vec3 normal, out float gaussianTerm) \n \
        { \n \
            vec3 viewDirection = normalize(Lighting.mViewDirection); \n \
            vec3 halfAngle = normalize(dirToLight + viewDirection); \n \
        \n \
            float angleNormalHalf = acos(dot(halfAngle, normalize(normal))); \n \
            float exponent = angleNormalHalf / Material.mSpecularShininess; \n \
            exponent = -(exponent * exponent); \n \
        \n \
            gaussianTerm = exp(exponent); \n \
        } \n \
        \n \
        vec4 CalculateLighting(in Light light, in vec4 diffuseTexture, in vec3 normal) \n \
        { \n \
            if (light.mLightType == 1) // point light \n \
            { \n \
                vec3 positionDiff = light.mLightPosition.xyz - frag_position; \n \
                float dist = max(length(positionDiff) - light.mLightRadius, 0); \n \
        \n \
                float attenuation = 1 / ((dist/light.mLightRadius + 1) * (dist/light.mLightRadius + 1)); \n \
                attenuation = max((attenuation - light.mMaxDistance) / (1 - light.mMaxDistance), 0); \n \
        \n \
                vec3 dirToLight = normalize(positionDiff); \n \
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
        \n \
                float gaussianTerm = 0.0; \n \
                if (angleNormal > 0.0) \n \
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
        \n \
                return diffuseTexture * (attenuation * angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                       (attenuation * gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
            } \n \
            else if (light.mLightType == 2) // directional light \n \
            { \n \
                vec3 dirToLight = normalize(light.mLightDirection.xyz); \n \
                float angleNormal = clamp(dot(normalize(normal), dirToLight), 0, 1); \n \
        \n \
                float gaussianTerm = 0.0; \n \
                if (angleNormal > 0.0) \n \
                    CalcGaussianSpecular(dirToLight, normal, gaussianTerm); \n \
        \n \
                return diffuseTexture * (angleNormal * Material.mDiffuseColor * light.mLightIntensity * light.mLightColor) + \n \
                       (gaussianTerm * Material.mSpecularColor * light.mLightIntensity * light.mLightColor); \n \
            } \n \
            else if (light.mLightType == 4) // ambient light \n \
                return diffuseTexture * Material.mAmbientColor * light.mLightIntensity * light.mLightColor; \n \
            else \n \
                return vec4(0.0); \n \
        } \n \
        \n \
        void main() \n \
        { \n \
            vec4 diffuseTexture = vec4(1.0); \n \
            if (Material.mHasDiffuseTexture) \n \
                diffuseTexture = texture(unifDiffuseTexture, frag_texcoord); \n \
        \n \
            vec3 normal = frag_normal; \n \
            if (Material.mHasNormalTexture) \n \
            { \n \
                diffuseTexture = vec4(normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0), 1.0); \n \
                // vec3 normalTangentSpace = normalize(texture(unifNormalTexture, frag_texcoord).xyz * 2.0 - 1.0); \n \
                //mat3 tangentToWorldSpace = mat3(normalize(frag_tangent), normalize(frag_bitangent), normalize(frag_normal)); \n \
        \n \
                // normal = tangentToWorldSpace * normalTangentSpace; \n \
            } \n \
        \n \
            if (Material.mLightingEnabled) \n \
            { \n \
                vec4 accumLighting = vec4(0.0); \n \
        \n \
                for (int lightIndex = 0; lightIndex < Lighting.mNumLights; lightIndex++) \n \
                    accumLighting += Material.mEmissiveColor * diffuseTexture + \n \
                                     CalculateLighting(Lighting.mLights[lightIndex], diffuseTexture, normal); \n \
        \n \
                finalColor = pow(accumLighting, Lighting.mGamma); \n \
            } \n \
            else { \n \
                finalColor = pow(diffuseTexture, Lighting.mGamma); \n \
            } \n \
        } \n";

    Here is my wrapper around a texture:

        OpenGLTexture::OpenGLTexture(const std::vector<uint8_t>& textureData, uint32_t textureWidth, uint32_t textureHeight,
                                     TextureFormat textureFormat, TextureType textureType, Logger& logger) :
            mLogger(logger), mTextureID(gNextTextureID++), mTextureType(textureType)
        {
            glGenTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, mTexture);
            CHECK_GL_ERROR(mLogger);
            GLint glTextureFormat = (textureFormat == TextureFormat::TEXTURE_FORMAT_RGB ? GL_RGB :
                                     textureFormat == TextureFormat::TEXTURE_FORMAT_RGBA ? GL_RGBA : GL_RED);
            glTexImage2D(GL_TEXTURE_2D, 0, glTextureFormat, textureWidth, textureHeight, 0, glTextureFormat, GL_UNSIGNED_BYTE, &textureData[0]);
            CHECK_GL_ERROR(mLogger);
            glGenerateMipmap(GL_TEXTURE_2D);
            CHECK_GL_ERROR(mLogger);
            glBindTexture(GL_TEXTURE_2D, 0);
            CHECK_GL_ERROR(mLogger);
        }

        OpenGLTexture::~OpenGLTexture()
        {
            glDeleteBuffers(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
        }

    And here is the sampler I create, which is shared between diffuse and normal textures:

        // texture sampler setup
        glGenSamplers(1, &mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameteri(mTextureSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);
        CHECK_GL_ERROR(mLogger);
        glSamplerParameterf(mTextureSampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, mCurrentAnisotropy);
        CHECK_GL_ERROR(mLogger);
        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifDiffuseTexture"), OpenGLTexture::TEXTURE_UNIT_DIFFUSE);
        CHECK_GL_ERROR(mLogger);
        glUniform1i(glGetUniformLocation(mDefaultProgram.GetHandle(), "unifNormalTexture"), OpenGLTexture::TEXTURE_UNIT_NORMAL);
        CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_DIFFUSE, mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        glBindSampler(OpenGLTexture::TEXTURE_UNIT_NORMAL, mTextureSampler);
        CHECK_GL_ERROR(mLogger);
        SetAnisotropicFiltering(mCurrentAnisotropy);

    The diffuse textures look like they should, but the normal map looks so weird. Why is this?
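    Two observations, offered as hunches rather than a confirmed diagnosis. First, a tangent-space normal map encodes the "straight up" normal (0, 0, 1) as RGB (128, 128, 255), so a correct normal map rendered as a diffuse texture should look predominantly blue - the smurf look may simply mean the texture is loading fine. Second, there is an unrelated bug in the wrapper worth fixing while in there: the destructor releases the texture with glDeleteBuffers, which operates on buffer objects, not texture objects. A corrected sketch:

        OpenGLTexture::~OpenGLTexture()
        {
            // mTexture came from glGenTextures, so it must be released with
            // glDeleteTextures; glDeleteBuffers silently ignores texture names.
            glDeleteTextures(1, &mTexture);
            CHECK_GL_ERROR(mLogger);
        }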

    Read the article

  • Refactoring FizzBuzz

    - by MarkPearl
    A few years ago I blogged about FizzBuzz; at the time the post was prompted by Scott Hanselman, who had podcasted about how surprised he was that some programmers could not even solve the FizzBuzz problem within a reasonable period of time during a job interview. At the time I thought I would give the problem a go in F#, and sure enough the solution was fairly simple – I then also did a basic solution in C# but never posted it. Since then I have learned that being able to solve a problem and how you solve the problem are two totally different things. Today I decided to give the problem another try and see if I had learnt anything new in the last year or so. Here is how my solution looked after refactoring…

    Solution 1 – Cheap and Nasty

        public class FizzBuzzCalculator
        {
            public string NumberFormat(int number)
            {
                var numDivisibleBy3 = (number % 3) == 0;
                var numDivisibleBy5 = (number % 5) == 0;
                if (numDivisibleBy3 && numDivisibleBy5)
                    return String.Format("{0} FizzBuzz", number);
                else if (numDivisibleBy3)
                    return String.Format("{0} Fizz", number);
                else if (numDivisibleBy5)
                    return String.Format("{0} Buzz", number);
                return number.ToString();
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var fizzBuzz = new FizzBuzzCalculator();
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
            }
        }

    In my first attempt I just looked at solving the problem – it works, and could be an acceptable solution, but tonight I thought I would see how far I could refactor it. The section I decided to focus on was the mass of if..else code in the NumberFormat method.

    Solution 2 – Replacing If…Else with a Dictionary

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings)
            {
                _mappings = mappings;
            }

            public string NumberFormat(int number)
            {
                var numDivisibleBy3 = (number % 3) == 0;
                var numDivisibleBy5 = (number % 5) == 0;
                var mappedKey = new Tuple<bool, bool>(numDivisibleBy3, numDivisibleBy5);
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz"},
                    { new Tuple<bool, bool>(true, false), "- Fizz"},
                    { new Tuple<bool, bool>(false, true), "- Buzz"},
                    { new Tuple<bool, bool>(false, false), ""}
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    In my second attempt I looked at removing the if..else in the NumberFormat method. A dictionary proved to be useful for this – I added a constructor to the class and injected the dictionary mapping. One could argue that this is totally overkill, but if I were going to use this code in a large system, an approach like this makes it easy to put this data in a configuration file, which would improve its adherence to the OCP (Open/Closed Principle: open for extension, closed for modification). I could of course take the OCP even further – the check for divisibility by 3 and 5 is tightly coupled to this class. If I wanted to make it 4 instead of 3, I would need to adjust this class. This introduces my third refactoring.
    Solution 3 – Introducing Delegates and Injecting them into the Class

        public delegate bool FizzBuzzComparison(int number);

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;
            private readonly FizzBuzzComparison _comparison1;
            private readonly FizzBuzzComparison _comparison2;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings, FizzBuzzComparison comparison1, FizzBuzzComparison comparison2)
            {
                _mappings = mappings;
                _comparison1 = comparison1;
                _comparison2 = comparison2;
            }

            public string NumberFormat(int number)
            {
                var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number));
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            private static bool DivisibleByNum(int number, int divisor)
            {
                return number % divisor == 0;
            }

            public static bool Divisibleby3(int number)
            {
                return number % 3 == 0;
            }

            public static bool Divisibleby5(int number)
            {
                return number % 5 == 0;
            }

            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz"},
                    { new Tuple<bool, bool>(true, false), "- Fizz"},
                    { new Tuple<bool, bool>(false, true), "- Buzz"},
                    { new Tuple<bool, bool>(false, false), ""}
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings, Divisibleby3, Divisibleby5);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    I have taken this one step further and introduced delegates that are injected into the FizzBuzzCalculator class. From an OCP perspective it is probably more compliant than Solution 2, but there seems to be a lot of noise. Anonymous delegates increase the readability level, which is what I have done in Solution 4.

    Solution 4 – Anonymous Delegates

        public delegate bool FizzBuzzComparison(int number);

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;
            private readonly FizzBuzzComparison _comparison1;
            private readonly FizzBuzzComparison _comparison2;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings, FizzBuzzComparison comparison1, FizzBuzzComparison comparison2)
            {
                _mappings = mappings;
                _comparison1 = comparison1;
                _comparison2 = comparison2;
            }

            public string NumberFormat(int number)
            {
                var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number));
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz"},
                    { new Tuple<bool, bool>(true, false), "- Fizz"},
                    { new Tuple<bool, bool>(false, true), "- Buzz"},
                    { new Tuple<bool, bool>(false, false), ""}
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings, (n) => n % 3 == 0, (n) => n % 5 == 0);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    Using the anonymous delegates, I think the noise level has now been reduced. This is where I am going to end this post; I have gone through four iterations of the code, from the initial solution using if..else to delegates and dictionaries. I think each approach has its pros and cons, and the intention of where the code would be used would be a large determining factor. If you can think of an alternative way to do FizzBuzz, add a comment!
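    Taking up the closing invitation, one further variation (not part of the original post) that trades the class and dictionary for a single LINQ pipeline - a minimal sketch showing how compact the problem becomes once the divisibility checks are inlined:

        using System;
        using System.Linq;

        class Program
        {
            static void Main(string[] args)
            {
                // n % 15 == 0 covers numbers divisible by both 3 and 5.
                var lines = Enumerable.Range(1, 100)
                    .Select(n => n % 15 == 0 ? "FizzBuzz"
                               : n % 3 == 0 ? "Fizz"
                               : n % 5 == 0 ? "Buzz"
                               : n.ToString());
                foreach (var line in lines)
                {
                    Console.WriteLine(line);
                }
            }
        }

    It is arguably the least extensible of the lot - the OCP argument from Solution 2 applies in reverse - but for an interview whiteboard it is hard to beat.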

    Read the article

  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow related sessions at OpenWorld. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. No better way to start off than hearing where Oracle RightNow is going! Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Interested in the Cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com: CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM.   And of course there are a number of sessions that drill down into more specific areas. Here are just a few: Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels. Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience’s Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost. Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. 
Today’s organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution. Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies. Unifying the Agent’s Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences. Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle’s integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses. Greater Insights from Customer Engagements (CON9773) Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements. Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!

    Read the article

  • Atheros Wireless card shows up as two different models?

    - by geermc4
    Hi, I've been fighting these wireless drivers for a few days, and just recently I noticed that the model the wireless controller appears as in lspci is different sometimes. This is the data I have after installing Ubuntu Server 64-bit:

        ~# lspci -k
        ....
        04:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
                Subsystem: AzureWave Device 1d89
                Kernel driver in use: ath9k
                Kernel modules: ath9k

    I ran some updates and restarted, and all was good, although it did say that linux-headers-server, linux-image-server and linux-server were being kept back. After that I installed ubuntu-desktop (aptitude install ubuntu-desktop --without-recommends) and restarted; not only is the wireless not working anymore, but the hardware is listed as a different card:

        ~# lspci -k
        ....
        04:00.0 Ethernet controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)

    It has no available drivers for it; still, I tried to modprobe ath9k. The modules show up in lsmod as loaded, but iw list still shows nothing. This is what it looked like before the ubuntu-desktop installation:

        Wiphy phy0
            Band 1:
                Capabilities: 0x11ce
                    HT20/HT40
                    SM Power Save disabled
                    RX HT40 SGI
                    TX STBC
                    RX STBC 1-stream
                    Max AMSDU length: 3839 bytes
                    DSSS/CCK HT40
                Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
                Minimum RX AMPDU time spacing: 8 usec (0x06)
                HT TX/RX MCS rate indexes supported: 0-7
                Frequencies:
                    * 2412 MHz [1] (14.0 dBm)
                    * 2417 MHz [2] (15.0 dBm)
                    * 2422 MHz [3] (15.0 dBm)
                    * 2427 MHz [4] (15.0 dBm)
                    * 2432 MHz [5] (15.0 dBm)
                    * 2437 MHz [6] (15.0 dBm)
                    * 2442 MHz [7] (15.0 dBm)
                    * 2447 MHz [8] (15.0 dBm)
                    * 2452 MHz [9] (15.0 dBm)
                    * 2457 MHz [10] (15.0 dBm)
                    * 2462 MHz [11] (15.0 dBm)
                    * 2467 MHz [12] (15.0 dBm) (passive scanning)
                    * 2472 MHz [13] (14.0 dBm) (passive scanning)
                    * 2484 MHz [14] (17.0 dBm) (passive scanning)
                Bitrates (non-HT):
                    * 1.0 Mbps
                    * 2.0 Mbps (short preamble supported)
                    * 5.5 Mbps (short preamble supported)
                    * 11.0 Mbps (short preamble supported)
                    * 6.0 Mbps
                    * 9.0 Mbps
                    * 12.0 Mbps
                    * 18.0 Mbps
                    * 24.0 Mbps
                    * 36.0 Mbps
                    * 48.0 Mbps
                    * 54.0 Mbps
            max # scan SSIDs: 4
            max scan IEs length: 2257 bytes
            Coverage class: 0 (up to 0m)
            Supported Ciphers:
                * WEP40 (00-0f-ac:1)
                * WEP104 (00-0f-ac:5)
                * TKIP (00-0f-ac:2)
                * CCMP (00-0f-ac:4)
                * CMAC (00-0f-ac:6)
            Available Antennas: TX 0x1 RX 0x3
            Configured Antennas: TX 0x1 RX 0x3
            Supported interface modes:
                * IBSS
                * managed
                * AP
                * AP/VLAN
                * WDS
                * monitor
                * mesh point
                * P2P-client
                * P2P-GO
            software interface modes (can always be added):
                * AP/VLAN
                * monitor
            interface combinations are not supported
            Supported commands:
                * new_interface
                * set_interface
                * new_key
                * new_beacon
                * new_station
                * new_mpath
                * set_mesh_params
                * set_bss
                * authenticate
                * associate
                * deauthenticate
                * disassociate
                * join_ibss
                * join_mesh
                * remain_on_channel
                * set_tx_bitrate_mask
                * action
                * frame_wait_cancel
                * set_wiphy_netns
                * set_channel
                * set_wds_peer
                * connect
                * disconnect
            Supported TX frame types:
                * IBSS: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * managed: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * AP: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * AP/VLAN: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * mesh point: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * P2P-client: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
                * P2P-GO: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
            Supported RX frame types:
                * IBSS: 0x00d0
                * managed: 0x0040 0x00d0
                * AP: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0
                * AP/VLAN: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0
                * mesh point: 0x00b0 0x00c0 0x00d0
                * P2P-client: 0x0040 0x00d0
                * P2P-GO: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0
            Device supports RSN-IBSS.

    What's with the hardware change? If there are two, how can I make the AR9285 always load and disable the AR5008 - or is it the same card, just showing up differently? Oh, and I've tried this on Ubuntu 10.04 server, Xubuntu 12.04, and Ubuntu 12.04 desktop and server. Thanks in advance.

    -- Here's some more info. I have it set up on two hard drives; one works and the other one I'm using to figure this out. The one that works:

        # lshw -class network
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 06
             serial: 54:04:a6:a3:3b:96
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.147 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
             resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff
        *-network
             description: Wireless interface
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: wlan0
             version: 01
             serial: 74:2f:68:4a:26:73
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.2.0-18-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:18 memory:fea00000-fea0ffff

    Here's where it doesn't:

        # lshw -class network
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 06
             serial: 54:04:a6:a3:3b:96
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.160 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
             resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff
        *-network UNCLAIMED
             description: Ethernet controller
             product: AR5008 Wireless Network Adapter
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:04:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:fea00000-fea0ffff

    Update: I've noticed that if I blacklist the ath9k and ath9k_common modules, lspci gives me the AR9285, but then I need to modprobe ath9k for it to work. Does this make any sense? If so, why?
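    For reference, a minimal sketch of the blacklisting experiment described in the update, assuming a stock modprobe setup (the file name is illustrative):

        # /etc/modprobe.d/ath9k-blacklist.conf   (illustrative file name)
        blacklist ath9k
        blacklist ath9k_common

        # after a reboot, load the driver manually:
        sudo modprobe ath9k

    Note that blacklisting only stops a module from loading automatically at boot; it does not prevent a manual modprobe, which matches the behavior described in the update.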

    Read the article

  • Gamification at OOW

    - by erikanollwebb
    Last week was Oracle OpenWorld, and for those of you not in tech or downtown San Francisco, that might not mean a whole lot.  However, if you are familiar with it, Oracle OpenWorld is our premier customer event.  This year, more than 50,000 people attended.  It's not a good week to visit San Francisco on vacation because Oracle customers take over all the hotels in town!  It was crazy, but a lot of fun and it's a great opportunity for the Apps UX group to do customer research with a range of customers.  This year, more than 100+ customers and partners took the time to team up with our UX experts and provide feedback on new designs and ideas. Over three days,  UX teams conducted 8  one-on-one user feedback sessions, 4 focus groups and 7 surveys. In addition, we conducted a voice capture activity and were able to collect close to 70 speech samples at the lab and DEMOgrounds. This was a great opportunity for us to do some testing on some specific gamification concepts with a set of business analysts.  We pulled in 8 folks for a focus group on gamification concepts and whether they thought those would work for their teams. To get ready for this, my designer extraordinaire, Andrea Cantú, flew into town and we spent almost a week locked in a room together brainstorming design ideas.  We killed a few trees trying to get all of our concepts and other examples together in the process, but in the end, we put together a whole series of examples of how you might gamify an Oracle app (in this case, CRM).  Andrea is a genius for this kind of thing and the comps she created looked great.  Here's a picture of her hard at work!  We also had the good fortune to have my boss, Laurie Pattison and my usability contractor, Shobana Subramanian there to note take and observe as well.  Here's a few shots of us, hard at work preparing for the day (or checking out something on Laurie's iPhone...) To start things off, we gave an overview of gamification and I talked about what it's used for.  Then we gave the participants a scenario about our sales person and what we were trying to get her to do. It was a great opportunity to highlight what our business goals might be and why we might want to add game mechanics.  It was also a good way to get them thinking about how that might work for them in their environments and workplaces. There were some surprises for the day.  We asked how many of them were already familiar with the concept of gamification--only two people had heard of it and only one was using game mechanics in his work.  That's in contrast to a survey we just ran internally with folks in a dev org where almost 50% of about 450 respondents had heard of gamification.  As we discussed the ways game mechanics could be used, it became clear that many of the folks had seen some game mechanics in action but didn't know that's what they were.  We also noticed that the folks in this group felt that if they were trying to sell the concept in their orgs, they wouldn't call it gamification.  That's not a huge surprise to me--they said what we've heard in the past, that gamification does not seem like a serious term for enterprise software.  They said they'd sell it with the goals--as a means to increase behaviors by rewarding users for activities.  It's a funny problem.  The word puts some folks off, but at the same time, I haven't seen another one word description that quite captures the range of things that "gamification" can cover.  
    My guess is that the more mainstream the term becomes, the more desensitized we'll become to the idea that it's trivializing enterprise software in some way. Still, it was interesting to note that this group felt they would not take this concept to their bosses or teams and call it "gamification". They focused on the goals, and how we could incentivize desired behaviors with game mechanics. As I have already stated in other posts, I feel like my org is more receptive to discussing how this is just a more transparent type of usability and user experience method than talking about gamification. That's the argument they said they would use. All in all, it was a good session. I love getting to talk to customers, present ideas and concepts, and get their feedback and input. It's the type of thing that really helps drive our designs and keeps us grounded in what our customers need/want. We're already planning where to get more feedback opportunities in the coming months.

    Read the article

  • Run Oracle E-Business Suite Period Close Diagnostic

    - by Get Proactive Customer Adoption Team
    Be Proactive & Save Time—Use the Period Close Diagnostic During the Month

    Have you ever closed your books at the end of the month and, due to problems with your Oracle E-Business Suite Period Close, found yourself working all night or all weekend to resolve the issues? You can avoid issues by running the Oracle E-Business Suite Period Close Diagnostics throughout the month, prior to closing Oracle Financial Assets, General Ledger, Payables, and/or Receivables. You can identify issues that would interfere with your period close early, preventing last-minute fire drills. Correct your errors or, if you need Oracle Support's assistance, attach the output to a service request for faster resolution by the support engineer. Oracle E-Business Suite Diagnostics are included in your Oracle Premier Support agreement at no extra charge. They are proactive, easy-to-use tools provided by Oracle Support to ease the gathering and analyzing of information from your E-Business Suite, specific to an existing issue or setup. Formatted output displays the information gathered, the findings of the analysis, and the appropriate actions to take if necessary. These tools are designed for both the functional and technical user, providing no EBS administration features, so you can safely assign this responsibility to users who are not administrators. A good place to start with the Support Diagnostics is the install patch Note 167000.1. Everything you need is in this patch and you install it on top of your E-Business Suite. If you are on EBS 12.0.6 or below, Oracle delivers the diagnostic tests in a standard Oracle patch and you apply it using the adpatch utility. If you are on EBS release 12.1.1 or above, your diagnostics are already there.

    Oracle E-Business Suite Diagnostics help you:

        - Prevent issues: resolving configuration and data issues that would cause processes to fail
        - Identify issues quickly: resolving problems without the need to contact Oracle Support
        - Reduce resolution time: minimizing the time spent to resolve an issue by increasing support engineer efficiency

    In the example below, you will see how to run the EBS Period Close Diagnostic step by step using an SQLGL Period Closing Activity Test. This allows you to check throughout the month to identify and resolve any issue that might prevent closing the period in the General Ledger on schedule.

        1. Click the Select Application button, then select your application. In this example, we will use the Period Close test.
        2. Scroll down to Period Close.
        3. Place a check mark in the Period Closing box in the Select column.
        4. Click the Execute button at the bottom of the page.
        5. Input the parameters and click the Submit button.
        6. Click the Refresh button until the Status of the test changes from "In Progress" to "Completed".
        7. Click the icon under View Report to view the test results.

    The report will either complete successfully with no errors or show that it completed with errors. The report shows where the error is located, what the error is, and what action(s) to take for resolution. Remember, if you need to work with Oracle Support to resolve your issue, attach the report to your Service Request so the engineer can start working the issue. If you have questions, please ask in the E-Business Suite Category's Diagnostic Tools Community. You may find the answer waiting for you in a prior community discussion or in one of the resources posted by an Oracle Support moderator.
    Oracle's Period Close Diagnostic, and the other E-Business Suite Diagnostics, save you time and help keep you on schedule. If you run the Period Close Diagnostic throughout the month, you can identify issues to resolve and get help if needed. When opening a Service Request, attaching the diagnostic report's output speeds resolution. With the issues resolved ahead of time, your Period Close should complete without errors. Avoiding the unexpected helps you close your books on time, without late nights or working through your weekend.

    Recommended Reads: E-Business Suite Diagnostics Period / Year End Close [ID 402237.1] lists all of the Closing Period Diagnostic Tests; I highly recommend that customers execute these tests prior to closing a period, as they help you identify known issues that would prevent a successful close. To learn about all the available EBS Diagnostics, please review the E-Business Suite Diagnostics Overview [ID 342459.1].
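    For the pre-12.1 path mentioned above, the apply step is a standard EBS patch application; a rough sketch (the zip and directory names are illustrative - take the actual names from Note 167000.1):

        # as the applications (applmgr) user, with the environment sourced:
        $ unzip p167000_<release>_GENERIC.zip   # illustrative patch file name
        $ cd <patch directory>                  # the directory the zip extracts to
        $ adpatch                               # EBS patching utility; prompts interactively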

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP.  Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application.  While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC. Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development.  These patterns have, in fact, been around since the early days of building applications with graphical interfaces.  The reason that these patterns emerged is simple – the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test.  By removing domain logic from the View, we ensure that the View has a single responsibility of drawing the screen which, in turn, makes our application easier to understand and maintain. I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in the view.  I looked at the .CSHTML file and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the necessary calculations to populate these variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all).  This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart: Member variables – in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model.  Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding. Arithmetic – as with member variables, the presence of arithmetic operators within View code are an indication that the Model servicing the View is insufficient for its needs.  For example, if the Model represents a line item in a sales order, it might seem perfectly natural to “normalize” the Model by storing the quantity and unit price in the Model and multiply these within the View to show the line total.  While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View.  Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths. In addition to the two characteristics of a “Smart View” that I’ve discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart.  
    That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model. Consider the following code and consider how logic that does not work directly with properties of the Model is just another form of the "member variable" heuristic covered earlier:

        @if(DateTime.Now.Hour < 12) {
            <div>Good Morning!</div>
        } else {
            <div>Greetings</div>
        }

    This code performs business logic to determine whether it is morning. A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following:

        <div>@Model.Greeting</div>

    Now let's look at some complex logic around multiple Model properties:

        @if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount
                || (Model.PageCount != Model.CurrentPage
                    && !Model.DisplayValues.Contains(Model.PageCount))) {
            <div>There's more to see!</div>
        }

    In this scenario, not only is the View code difficult to read (you shouldn't have to play "human compiler" to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View. Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model. Moving the logic above outside of the View and exposing a new Model property would simplify the View code to:

        @if(Model.HasMoreToSee) {
            <div>There's more to see!</div>
        }

    In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application. You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.
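    To make the earlier sales-order arithmetic example concrete, here is a minimal sketch (the class and property names are illustrative, not from the original post) of moving the quantity-times-price calculation, and its rounding rule, out of the View and into the Model where it can be unit tested:

        public class OrderLineModel
        {
            public int Quantity { get; set; }
            public decimal UnitPrice { get; set; }

            // The rounding rule is a business rule; keeping it in the Model means
            // it can be verified by a unit test instead of by executing the View.
            public decimal LineTotal
            {
                get { return Math.Round(Quantity * UnitPrice, 2); }
            }
        }

    The View then simply renders <div>@Model.LineTotal</div>, keeping its single responsibility of drawing the screen.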

    Read the article

  • FAQ – Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about "Highlighting GridView Row on MouseOver". I've noticed many members in the forums (http://forums.asp.net) asking how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I'm going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let's implement highlighting a GridView row on click and retaining the selected row on postback. For simplicity I set up the page like this:

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
            <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
            <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
            <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
            <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
                <Columns>
                    <asp:BoundField DataField="Company" HeaderText="Company" />
                    <asp:BoundField DataField="Name" HeaderText="Name" />
                    <asp:BoundField DataField="Title" HeaderText="Title" />
                    <asp:BoundField DataField="Address" HeaderText="Address" />
                </Columns>
            </asp:GridView>
        </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings we will use HiddenField controls to store the data, so that on postback we can reference the values from there.
    Now here are the JavaScript functions:

        <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);

                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }
        </script>
        </asp:content>

    The ChangeRowColor() function sets the background color of the selected row. It is also where we store the previous row and rowIndex values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready function; it fires once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function is where we read the currently selected values stored in the HiddenFields and pass them to the ChangeRowColor() function to retain the highlighted row. Finally, here's the code-behind part:

        protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
            }
        }

    The code above is responsible for attaching the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output: That's it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks
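    As a footnote on the design choice: the sketch below (not from the original post) does the same highlighting by toggling a CSS class instead of writing inline background colors, which keeps the styling in the stylesheet:

        /* stylesheet: .selectedRow { background-color: #FFFFD6; } */
        function ChangeRowColor(row, rowIndex) {
            var table = $('#' + row);
            // clear the previous highlight, then mark the clicked row
            // (+1 skips the header row, as in the original function)
            table.find('tr').removeClass('selectedRow');
            table.find('tr').eq(parseInt(rowIndex, 10) + 1).addClass('selectedRow');
        }

    Because the class name is the single source of truth for the highlight style, the hidden-field persistence logic above works unchanged.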

    Read the article

  • Visual Studio 2010 Productivity Tips and Tricks – Part 1: Extensions

    - by ToStringTheory
    I don’t know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible.  It may surprise you to know that I have tried ReSharper and did not like it, for the reason that I stated above: in my opinion, it had too much clutter.  Don’t get me wrong, there were a couple of features that I did like about it (inversion of if blocks, code feedback), but for the most part I actually felt that it was slowing me down.

Introduction

Another large factor, besides intrusiveness/speed, in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions.  I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time.  I figured that I would share some of my tips/findings regarding Visual Studio productivity here and see what you had to say.

The first section of things I would like to cover is Visual Studio Extensions.  In case you have been living under a rock for the past several years, Extensions are available under the Tools menu in Visual Studio. The Extension Manager gives you integrated access to the Microsoft Visual Studio Gallery online, with access to a few thousand different extensions.  I have tried many extensions, but have uninstalled almost all of them because they lacked reliability, usability, or features.  However, I have come across several that I find I cannot do without anymore:

- NuGet Package Manager (Microsoft)
- Perspectives (Adam Driscoll)
- Productivity Power Tools (Microsoft)
- Web Essentials (Mads Kristensen)

Extensions

NuGet Package Manager

To be honest, I debated whether or not to put this in here.  Most people seem to have it; however, there was a time when I didn’t, and I was always confused when blogs/posts would say to right-click and “Add Package Reference…”, which with one of the latest updates is now “Manage NuGet Packages”.  So, if you haven’t downloaded the NuGet Package Manager yet, or don’t know what it is, I would highly suggest downloading it now!

Features

Simply put, the NuGet Package Manager gives you a GUI and a command line for accessing the different libraries that have been uploaded to NuGet. Some of its features include:

- Ability to search NuGet for packages via the GUI, with information in the detail bar on the right.
- Quick access to see what packages are in a solution and which have updates available, with easy one-click updating.
- If you download a package that depends on other NuGet packages, they will be downloaded and referenced automatically.

Productivity Tip

If you use any type of source control in Visual Studio as well as NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore" (a sketch of what this adds appears at the end of this post). What this does is add a NuGet package to the solution so that it will be checked in alongside your solution, as well as automatically grab packages from NuGet on build if needed. This is an extremely simple way to manage your package references, instead of having to manually go into TFS and add the Packages folder.

Perspectives

I can't stand developing with just one monitor, especially when it comes to debugging. The great thing about Visual Studio 2010 is that all of the panels and windows are floatable and can dock to other screens. The only problem is that I don't use the same toolset for everything I am doing. By this I mean that I don't use the same windows for debugging a web application as I do for coding a WPF application.
The only thing is, Visual Studio doesn't save the screen positions of all the undocked windows. So I got curious one day and decided to check whether there was an extension to help out. This is where I found Perspectives.

Features

- Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile.
- Perspectives offers a panel to manage different presets/favorites, and a toolbar to add to the toolbars at the top of Visual Studio.
- Ability to 'Favorite' a profile to add it to the Perspectives toolbar.

Productivity Tip

Take the time to set up profiles for each of your scenarios: debugging web/winforms/xaml, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week you may find that your productivity was never better.

Productivity Power Tools

Ah, the Productivity Power Tools... Quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements, ranging from key shortcuts and interface tweaks to completely new features for Visual Studio 2010.

Features

I don't want to bore you with all of the features here, so here are my favorites:

- Quick Find: an unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially within a file.
- Solution Navigator: the 'Solution Explorer' on steroids. Easy to search for files and to see defined members/properties/methods in files; my favorite feature is the 'set as root' option.
- Updated 'Add Reference...' Dialog: this is probably my favorite enhancement, period. The 'Add Reference...' dialog is redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references.
- "Ctrl-Click" for Definition: I am still getting used to this, as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click to see their definitions. Great for travelling down a rabbit hole in an application to research problems.

While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness.

Web Essentials

If you do any type of web development in ASP.NET, ASP.NET MVC, or even plain HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and it greatly decreases my development time on new features.

Features

Some of its best features include:

- CSS Previews: I say 'previews' because of the multiple kinds of previews you get in CSS: font-family, color, and background/background-image previews. Great for tweaking the UI slightly in different ways and seeing how the changes look in the CSS window at a glance.
- Live Preview: one word: awesome! This goes well with my multi-monitor setup. I put the site on one monitor in a Live Preview panel, and as I make changes to CSS/cshtml/aspx/html, the preview window updates with each save/build automatically. For CSS you can even turn on live update, so as you tweak CSS the style changes in real time. Great for tweaking colors or font sizes.
- Outlining: small, but I like to be able to collapse regions/declarations that are in the way of new work or are just distracting.
- Commenting Shortcuts: I don't know why this wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well.
Productivity Tip

When working on a site, hit Ctrl-Alt-Enter to launch the Live Preview window and dock it to another monitor. When you make changes to the document/CSS, just save and glance at the other monitor. No need to Alt-Tab back and forth while editing.

Conclusion

These extensions are only the most useful and least intrusive: the ones that I use every day. The great thing about Visual Studio 2010 is the extensibility it offers developers. Have an extension that you use that isn't intrusive but isn't listed here? Please feel free to comment; I love trying new things, and I am always looking for new additions to my toolset. Finally, please keep an eye out for Part 2, on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!
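As promised above, here is a rough sketch of what "Enable NuGet Package Restore" adds; treat the exact file names and fragment below as illustrative (they have varied across NuGet releases) rather than authoritative. A .nuget folder (containing NuGet.exe and NuGet.targets) is committed next to the solution, and each project file gets an import along these lines:

<!-- Added to each .csproj by Enable NuGet Package Restore (approximate form) -->
<PropertyGroup>
  <RestorePackages>true</RestorePackages>
</PropertyGroup>
<Import Project="$(SolutionDir)\.nuget\NuGet.targets"
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />

On build, the imported targets file invokes NuGet.exe to download any packages listed in packages.config that are missing from the Packages folder.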

    Read the article

  • Can grub handle same release (3.6) but new rc (rc5)?

    - by hhoyt
    Can GRUB handle a newer kernel rc? I am running 3.6.0-rc4 fine. update-grub definitely recognizes all the required files for rc5, but after running update-grub the grub.cfg only shows rc4. It doesn't matter whether I build kernel 3.6.0-rc5 myself or whether I install the .deb files.

Generating grub.cfg ...
using custom appearance settings
Found background image: /usr/share/peppermint/wallpapers/Peppermint.jpg
Found linux image: /boot/vmlinuz-3.6.0-030600rc5-generic
Found linux image: /boot/vmlinuz-3.6.0-030600rc4-generic
Found initrd image: /boot/initrd.img-3.6.0-030600rc4-generic
Found linux image: /boot/vmlinuz-3.6.0-rc5
Found initrd image: /boot/initrd.img-3.6.0-rc5
Found linux image: /boot/vmlinuz-3.6.0-rc5.old
Found initrd image: /boot/initrd.img-3.6.0-rc5
Found linux image: /boot/vmlinuz-3.5.3
Found initrd image: /boot/initrd.img-3.5.3
Found linux image: /boot/vmlinuz-3.5.3.old
Found initrd image: /boot/initrd.img-3.5.3
Found linux image: /boot/vmlinuz-3.5.0-13-generic
Found initrd image: /boot/initrd.img-3.5.0-13-generic
Found Ubuntu 10.04.1 LTS (10.04) on /dev/sda1
Found Ubuntu 10.04.4 LTS (10.04) on /dev/sda10
Found Peppermint Two (2) on /dev/sda15
Found Ubuntu 10.10 (10.10) on /dev/sda16
Found Windows 7 (loader) on /dev/sda3
Found Ubuntu 11.04 (11.04) on /dev/sda5
Found Ubuntu 12.04.1 LTS (12.04) on /dev/sda6
Found Linux Mint 12 LXDE (12) on /dev/sda8
Found MS-DOS 5.x/6.x/Win3.1 on /dev/sdc1

If I press 'e' on the rc4 entry at boot startup, manually change it to rc5 and press Ctrl-X, it comes up fine. I just cannot get grub.cfg to update so that rc5 is included. Thanks, Howard

# DO NOT EDIT THIS FILE
# It is automatically generated by grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub

# BEGIN /etc/grub.d/00_header
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
set default="Windows 7 (loader) (on /dev/sda3)"
if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi
function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function recordfail {
  set recordfail=1
  if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
  insmod vbe
  insmod vga
  insmod video_bochs
  insmod video_cirrus
}
insmod part_msdos
insmod ext2
set root='(hd1,msdos1)'
search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66
if loadfont /usr/share/grub/unicode.pf2 ; then
  set gfxmode=640x480
  load_video
  insmod gfxterm
  insmod part_msdos
  insmod ext2
  set root='(hd1,msdos1)'
  search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66
  set locale_dir=($root)/boot/grub/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ]; then
  set timeout=-1
else
  set timeout=10
fi
# END /etc/grub.d/00_header

# BEGIN /etc/grub.d/05_debian_theme
insmod part_msdos
insmod ext2
set root='(hd1,msdos1)'
search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66
insmod jpeg
if background_image /usr/share/peppermint/wallpapers/Peppermint.jpg; then
  set color_normal=light-gray/black
  set color_highlight=magenta/black
else
  set menu_color_normal=white/black
  set menu_color_highlight=black/light-gray
fi
# END /etc/grub.d/05_debian_theme

# BEGIN /etc/grub.d/10_linux_proxy
menuentry "Peppermint, with Linux 3.6.0-030600rc4-generic" --class peppermint --class gnu-linux --class gnu --class os {
  recordfail
  insmod gzio
  insmod part_msdos
  insmod ext2
  set root='(hd1,msdos1)'
  search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66
  linux /boot/vmlinuz-3.6.0-030600rc4-generic root=UUID=218e9f6f-c21e-4c50-90a5-5a40be639b66 ro
  initrd /boot/initrd.img-3.6.0-030600rc4-generic
}
# END /etc/grub.d/10_linux_proxy

# BEGIN /etc/grub.d/30_os-prober_proxy
menuentry "Peppermint, with Linux 3.6.0-030600rc4-generic (on /dev/sda15)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos15)'
  search --no-floppy --fs-uuid --set=root 21a3d91a-ae43-4f51-b8d6-7f3dc80967d7
  linux /boot/vmlinuz-3.6.0-030600rc4-generic root=UUID=21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 ro splash quiet splash vt.handoff=7
  initrd /boot/initrd.img-3.6.0-030600rc4-generic
}
menuentry "Ubuntu, with Linux 3.0.0-24-generic (on /dev/sda10)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos10)'
  search --no-floppy --fs-uuid --set=root 6c9a0149-3045-4335-83fa-a2513ca3a250
  linux /boot/vmlinuz-3.0.0-24-generic root=UUID=6c9a0149-3045-4335-83fa-a2513ca3a250 ro crashkernel=384M-2G:64M,2G-:128M splash
  initrd /boot/initrd.img-3.0.0-24-generic
}
menuentry "Ubuntu, with Linux 3.5.0-030500rc7-generic (on /dev/sda10)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos10)'
  search --no-floppy --fs-uuid --set=root 6c9a0149-3045-4335-83fa-a2513ca3a250
  linux /boot/vmlinuz-3.5.0-030500rc7-generic root=UUID=6c9a0149-3045-4335-83fa-a2513ca3a250 ro crashkernel=384M-2G:64M,2G-:128M splash
  initrd /boot/initrd.img-3.5.0-030500rc7-generic
}
menuentry "Peppermint, with Linux 3.3.0-030300rc2-generic (on /dev/sda15)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos15)'
  search --no-floppy --fs-uuid --set=root 21a3d91a-ae43-4f51-b8d6-7f3dc80967d7
  linux /boot/vmlinuz-3.3.0-030300rc2-generic root=UUID=21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 ro splash quiet splash vt.handoff=7
  initrd /boot/initrd.img-3.3.0-030300rc2-generic
}
menuentry "Ubuntu, with Linux 2.6.39-rc5-candela (on /dev/sda16)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos16)'
  search --no-floppy --fs-uuid --set=root 48fcb5ec-b51b-4afd-b0e5-a2aace66f6e1
  linux /boot/vmlinuz-2.6.39-rc5-candela root=/dev/sda7 ro splash
  initrd /boot/initrd.img-2.6.39-rc5-candela
}
menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os {
  insmod part_msdos
  insmod ntfs
  set root='(hd0,msdos3)'
  search --no-floppy --fs-uuid --set=root EA3EFABB3EFA7FBD
  chainloader +1
}
menuentry "Ubuntu, with Linux 2.6.38-13-generic (on /dev/sda5)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos5)'
  search --no-floppy --fs-uuid --set=root bcfe855e-a449-429d-b204-c667e129a4bd
  linux /boot/vmlinuz-2.6.38-13-generic root=UUID=bcfe855e-a449-429d-b204-c667e129a4bd ro quiet splash vt.handoff=7
  initrd /boot/initrd.img-2.6.38-13-generic
}
menuentry "Ubuntu, with Linux 3.2.0-29-generic-pae (on /dev/sda6)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 369605ad-1a92-4b7d-abb5-ce75cbdfc9c1
  linux /boot/vmlinuz-3.2.0-29-generic-pae root=UUID=369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 ro quiet splash $vt_handoff
  initrd /boot/initrd.img-3.2.0-29-generic-pae
}
menuentry "Ubuntu, with Linux 3.2.0-23-generic-pae (on /dev/sda6)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos6)'
  search --no-floppy --fs-uuid --set=root 369605ad-1a92-4b7d-abb5-ce75cbdfc9c1
  linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 ro quiet splash $vt_handoff
  initrd /boot/initrd.img-3.2.0-23-generic-pae
}
menuentry "Linux Mint 12 LXDE, 3.0.0-12-generic (/dev/sda8) (on /dev/sda8)" --class gnu-linux --class gnu --class os {
  insmod part_msdos
  insmod ext2
  set root='(hd0,msdos8)'
  search --no-floppy --fs-uuid --set=root ccdc67ed-e81c-4f85-9b75-fe0c24c65bb8
  linux /boot/vmlinuz-3.0.0-12-generic root=UUID=ccdc67ed-e81c-4f85-9b75-fe0c24c65bb8 ro quiet splash vt.handoff=7
  initrd /boot/initrd.img-3.0.0-12-generic
}
menuentry "MS-DOS 5.x/6.x/Win3.1 (on /dev/sdc1)" --class windows --class os {
  insmod part_msdos
  insmod ntfs
  set root='(hd2,msdos1)'
  search --no-floppy --fs-uuid --set=root A8F0DE02F0DDD6A2
  drivemap -s (hd0) ${root}
  chainloader +1
}
# END /etc/grub.d/30_os-prober_proxy

# BEGIN /etc/grub.d/40_custom
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
# END /etc/grub.d/40_custom

# BEGIN /etc/grub.d/41_custom
if [ -f $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
# END /etc/grub.d/41_custom

    Read the article

  • Project of Projects with Team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times, and this is the current iteration of layout and considerations. If, like me, you work with many customers, you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers who need to work with it. There are five main places where you need to have the Product or Project name in prominence over any other value:

- Area
- Iteration
- Source Code
- Work Item Queries
- Build

Once you decide how you are doing this in each of these places, you need to keep to it religiously. Even if you have only one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system from degenerating into a nasty mess.

Areas

Areas are traditionally used to separate out parts of your product / project so that you can see how much effort has gone into each.

Figure: The top-level areas are for reporting and work item separation

There are massive advantages to using this method. You can:

- move work from one project to another
- rename a project / product

It is far more likely that a project or product gets renamed than a department.

Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which.

Figure: Always keep things that are the same at the same level

Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop-down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

Iterations

Iterations are usually tied to some sort of time-based consideration. Here I am splitting into iterations with periodic releases.

Figure: Each product needs to be able to have its own cadence

The ability to have each project run at its own pace, with its own release schedule, is often of paramount importance; you don't want to force your 100+ projects to all be released on the same date.

Source Code

Having a good structure for your source, even if you are not branching or keeping multiple products under the same structure, is always a good idea.

Figure: Separate out your products' source

You need to think about your branches as well as the structure of your source. All your code should be under "Source", and everything you need to build your solution, including build scripts and third-party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need.

Note: You usually will not be able to install custom software on your build server. Store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder; a sketch of the resulting layout follows.
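To make that concrete, here is a plausible sketch of the folder layout described above; the product and branch names are placeholders, not a prescription:

$/TeamProject
    Product1
        Main                  <-- branch root: everything needed to build
            Build             <-- build scripts
            Source            <-- the actual solution and projects
            Tools
                Tool1         <-- checked-in *.dll's / *.exe's for the build server
        Releases
            Release1          <-- branched from Main per release
    Product2
        ...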
Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching.

Figure: Adding a category may be a necessary evil

Even if you have to have a couple of categories called "Default", it is better than not knowing the difference between a folder, Product and Branch.

Work Item Queries

Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you also want to separate queries out by Product / Project, to make them easier to find.

Figure: Again you have the same first-level structure

Having folders in Work Item Tracking as well, we do the same thing. We put all the queries under a folder named for the Product / Project and change each query to filter on "AreaPath=[TeamProject]\[ProductX]" instead of the standard "Project=@Project" (a query sketch follows at the end of this post).

Tip: Don't have a folder of new queries for each iteration. Instead have a single "Current" folder containing queries that point to the current iteration. Just change the queries as you move from one iteration to another.

Tip: You can Ctrl+drag the "Product1" folder to create your "Product2" folder.

Builds

You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose.

Figure: There are no folders yet for the builds, so you need a good naming convention

It's a pity that there are no folders under builds; some way to categorise them would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want.

Conclusion

It is really easy both to achieve this format and to stick to it if you take the time to do it. Unless you have 1000+ builds or 100+ products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues, and I have described some of them above. Let me know if you can think of any other things to make this easier.
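As promised in the Work Item Queries section, here is a minimal sketch of an area-scoped query using the TFS 2010 client object model; the server URL, project and product names are placeholders:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class AreaScopedQuery
{
    static void Main()
    {
        // Connect to the collection (placeholder URL).
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // Scope by AreaPath (the product) rather than by Team Project,
        // mirroring the AreaPath=[TeamProject]\[ProductX] change described above.
        WorkItemCollection results = store.Query(
            @"SELECT [System.Id], [System.Title], [System.State]
              FROM WorkItems
              WHERE [System.AreaPath] UNDER 'TeamProject\Product1'
              ORDER BY [System.Id]");

        foreach (WorkItem workItem in results)
            Console.WriteLine("{0}: {1}", workItem.Id, workItem.Title);
    }
}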

    Read the article

  • Who could ask for more with LESS CSS? (Part 3 of 3 – Clrizr)

    - by ToString(theory);
    Welcome back!  In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project set up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage.

Introduction

In the first post, I mentioned two subjects that I will be using in this example: constants, and color functions.  I’ve always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color.  Using these tools and requesting a complementary scheme, you can get a couple of shades of your primary color and a couple of shades of a complementary/accent color to display. Because there is no way in regular CSS to do color operations or store variables, there was no way to define a primary color and have a site theme cascade off of it.  However, with tools such as LESS, that impossibility becomes a reality!  So, if you haven’t guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project: plug in one color, and have your primary theme cascade from it.  I only went through the trouble of creating a module for Complementary colors.  However, it wouldn’t be too much trouble to go through other options, such as Triad or Monochromatic, to build modules for those as well.

Step 1 – Analysis

I decided to mimic Adobe Kuler’s Complementary theme algorithm, as I liked its simplicity and aesthetics.  Color Scheme Designer is great, but I believe it can give you too many color options, which can lead to chaos and overload.  The first thing I had to check was whether the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren’t.  Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users.  So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10 degrees.

Long story short, I completed the same calculations for Hue, Saturation, and Lightness.  For Hue, I only had to record the complementary hue values; for saturation and lightness, however, I had to record the values for ALL of the shades.  Since the functions were too complicated to put into LESS (they aren’t constant/linear, but rather interval functions), I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1.  For example, the hue extraction produced a fitted trendline function for each interval:

Table: Fitted hue trendline functions for the intervals 0-60, 60-140, 140-270 and 270-360 degrees

Saturation and Lightness were much worse, but in the end I finally had functions for all of the intervals, and then went the route of just grabbing each shade’s value in intervals of 1.

Step 2 – Mapping

I declared variable names for each of these sections as something that shouldn’t ever conflict with a variable someone would define in their own file.
After I had each of the values, I extracted them and put them into files of their own for hue variables, saturation variables, and lightness variables…  Example:

/* HUE CONVERSIONS */
@clrizr-hue-source-0deg: 133.43;
@clrizr-hue-source-1deg: 135.601;
@clrizr-hue-source-2deg: 137.772;
@clrizr-hue-source-3deg: 139.943;
@clrizr-hue-source-4deg: 142.114;
...

/* SATURATION CONVERSIONS */
@clrizr-saturation-s2SV0px: 0;
@clrizr-saturation-s2SV1px: 0;
@clrizr-saturation-s2SV2px: 0;
@clrizr-saturation-s2SV3px: 0;
@clrizr-saturation-s2SV4px: 0;
...

/* LIGHTNESS CONVERSIONS */
@clrizr-lightness-s2LV0px: 30;
@clrizr-lightness-s2LV1px: 31;
@clrizr-lightness-s2LV2px: 32;
@clrizr-lightness-s2LV3px: 33;
@clrizr-lightness-s2LV4px: 34;
...

In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades and two complementary shades. The last bit of the work was the file that composes each of the shades from these mappings.

Step 3 – Clrizr Mapper

The final step was the hardest to get right, as I was still trying to understand LESS to its fullest extent.

Imports

As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those into the Clrizr plugin:

@import url("hue.less");
@import url("saturation.less");
@import url("lightness.less");

Extract Component Values For Each Shade

Next, I extracted the necessary information for each shade's HSL components before shade composition:

@clrizr-input-saturation: 1px + floor(saturation(@clrizr-input)) - 1;
@clrizr-input-lightness: 1px + floor(lightness(@clrizr-input)) - 1;

@clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input)));

@clrizr-primary-2-saturation: formatstring("clrizr-saturation-s2SV{0}", @clrizr-input-saturation);
@clrizr-primary-1-saturation: formatstring("clrizr-saturation-s1SV{0}", @clrizr-input-saturation);
@clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}", @clrizr-input-saturation);

@clrizr-primary-2-lightness: formatstring("clrizr-lightness-s2LV{0}", @clrizr-input-lightness);
@clrizr-primary-1-lightness: formatstring("clrizr-lightness-s1LV{0}", @clrizr-input-lightness);
@clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}", @clrizr-input-lightness);

Here you can see a couple of odd things…  On the first line, I am using operations to add units to the saturation and lightness.  This is due to some limitations in the operations: they would give me saturation or lightness in %, which can’t be in a variable name.  So I first add 1px, which casts the result of the following functions as px instead of %, and then at the end I remove that pixel.  You can also see the formatstring method, which is exactly what it sounds like: something like String.Format(string str, params object[] obj).

Get Primary & Complementary Shades

Now that I have components for each of the different shades, I can compose them into each of their pieces.
For this, I use the @@ operator, which looks for a variable with the name specified in a string and then evaluates that variable:

@clrizr-primary-2: hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);
@clrizr-primary-1: hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);
@clrizr-primary: @clrizr-input;
@clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);
@clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input));

That’s it, for the most part.  These variables now hold the theme for the one input color – @clrizr-input.  However, I have one last addition…

Perceptive Luminance

Well, after I got the colors, I decided I also wanted the best font color to go on top of each of them: black or white, depending on whether the color is light or dark.  I couldn't just check the lightness, as that is only half the story.  You see, the human eye doesn't see ALL colors equally well; rather, it has more cells for interpreting green light than blue or red.  So, using that ratio, we can calculate the perceptive luminance of each of the shades and get the font color that best matches it!

@clrizr-perceptive-luminance-ps2: round(1 - ((0.299 * red(@clrizr-primary-2)) + (0.587 * green(@clrizr-primary-2)) + (0.114 * blue(@clrizr-primary-2))) / 255) * 255;
@clrizr-perceptive-luminance-ps1: round(1 - ((0.299 * red(@clrizr-primary-1)) + (0.587 * green(@clrizr-primary-1)) + (0.114 * blue(@clrizr-primary-1))) / 255) * 255;
@clrizr-perceptive-luminance-ps: round(1 - ((0.299 * red(@clrizr-primary)) + (0.587 * green(@clrizr-primary)) + (0.114 * blue(@clrizr-primary))) / 255) * 255;
@clrizr-perceptive-luminance-pc1: round(1 - ((0.299 * red(@clrizr-complementary-1)) + (0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1))) / 255) * 255;
@clrizr-perceptive-luminance-pc2: round(1 - ((0.299 * red(@clrizr-complementary-2)) + (0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2))) / 255) * 255;

@clrizr-col-font-on-primary-2: rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);
@clrizr-col-font-on-primary-1: rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);
@clrizr-col-font-on-primary: rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);
@clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);
@clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2);

Conclusion

That’s it!  I have posted a project for this on clrizr.codePlex.com, including a testing page so you can see how it works.  Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment or on the contact page!
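To see what the perceptive-luminance trick computes, here is the same test expressed outside LESS; this is just an illustrative sketch in C# (the 0.299/0.587/0.114 weights come straight from the formula above):

using System;
using System.Drawing;

static class PerceptiveLuminance
{
    // Returns black for light backgrounds and white for dark ones,
    // weighting green highest because the eye is most sensitive to it.
    public static Color FontColorFor(Color background)
    {
        double luminance = (0.299 * background.R
                          + 0.587 * background.G
                          + 0.114 * background.B) / 255.0;
        return luminance > 0.5 ? Color.Black : Color.White;
    }

    static void Main()
    {
        // A dark navy background should get white text.
        Console.WriteLine(FontColorFor(Color.FromArgb(20, 30, 90)));
    }
}

Note that the LESS version computes round(1 - luminance) * 255, which yields 0 (black) or 255 (white) for each channel; the C# sketch above makes the same light/dark decision explicitly.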

    Read the article

  • <%: %>, HtmlEncode, IHtmlString and MvcHtmlString

    - by Shaun
    One of my colleagues and friends, Robin, has been playing with (and struggling with) ASP.NET MVC 2 on a project these days, while I’m struggling with an annoying client. Since it’s his first time using ASP.NET MVC, he has been meeting a lot of problems, and I was very happy to share my experience with him. Yesterday he told me that when he attempted to insert a <br /> element into his page, it was shown as part of the string rather than creating a new line, which is bad. After checking his code a bit, I found that it was because he utilized a new ASP.NET markup supported in .NET 4.0 – “<%: %>”.

If you have been using ASP.NET MVC 1, or the .NET 3.5 world in general, it would be very common to use <%= %> to show something on the page from the backend code. But when you do that, you must ensure that the string to be displayed is HTML-safe, which means all the HTML markup must be encoded; otherwise it might cause an XSS (cross-site scripting) problem. So you'd better encode the output, for example with <%= Html.Encode(...) %>, when displaying anything on the page.

In .NET 4.0 Microsoft introduced a new markup to solve this problem: <%: %>. It encodes the content automatically, so you no longer need to check and verify your code manually for the XSS issue mentioned above. But this also means that it encodes everything, including the HTML elements you want to be rendered. So I changed his code to use <%= %> there, and it worked well.

After helping him solve this problem (and finishing a spreadsheet for my boring project), I thought a bit more about <%: %>. Since it encodes everything, why does it render correctly when we use “<%: Html.TextBox("name") %>” to show a text box? As you know, Html.TextBox renders an “<input name="name" id="name" type="text"/>” element on the page. If <%: %> encoded everything, it should not display a text box. So I dug into the source code of MVC and found some comments in the class MvcHtmlString:

// In ASP.NET 4, a new syntax <%: %> is being introduced in WebForms pages, where <%: expression %> is equivalent to
// <%= HttpUtility.HtmlEncode(expression) %>. The intent of this is to reduce common causes of XSS vulnerabilities
// in WebForms pages (WebForms views in the case of MVC). This involves the addition of an interface
// System.Web.IHtmlString and a static method overload System.Web.HttpUtility::HtmlEncode(object). The interface
// definition is roughly:
// public interface IHtmlString {
//     string ToHtmlString();
// }
// And the HtmlEncode(object) logic is roughly:
// - If the input argument is an IHtmlString, return argument.ToHtmlString(),
// - Otherwise, return HtmlEncode(Convert.ToString(argument)).
//
// Unfortunately this has the effect that calling <%: Html.SomeHelper() %> in an MVC application running on .NET 4
// will end up encoding output that is already HTML-safe. As a result, we're changing our HTML helpers to return
// MvcHtmlString where appropriate. <%= Html.SomeHelper() %> will continue to work in both .NET 3.5 and .NET 4, but
// changing the return types to MvcHtmlString has the added benefit that <%: Html.SomeHelper() %> will also work
// properly in .NET 4 rather than resulting in a double-encoded output. MVC developers in .NET 4 will then be able
// to use the <%: %> syntax almost everywhere instead of having to remember where to use <%= %> and where to use
// <%: %>. This should help developers craft more secure web applications by default.
//
// To create an MvcHtmlString, use the static Create() method instead of calling the protected constructor.

The comment says the encoding rule of <%: %> is: if the type of the content is IHtmlString, it will NOT be encoded, since IHtmlString indicates that the content is HTML-safe; otherwise, HtmlEncode is used to encode the content. If we check the return type of the Html.TextBox method, we find it's MvcHtmlString, which implements the IHtmlString interface (dynamically, for .NET 4 compatibility). That is the reason why the “<input name="name" id="name" type="text"/>” was not encoded by <%: %>.

So, if we want to tell ASP.NET MVC, or rather the ASP.NET runtime, that some content is HTML-safe and should not be encoded, we can convert the content into an IHtmlString. We can also create an extension method for a better developing experience:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace ShaunXu.Blogs.IHtmlStringIssue
{
    public static class Helpers
    {
        public static MvcHtmlString IsHtmlSafe(this string content)
        {
            return MvcHtmlString.Create(content);
        }
    }
}

With that in place, the view renders correctly.

Summary

In this post I explained a bit about the new markup in .NET 4.0, <%: %>, and its usage. I also explained a bit about how to control whether page content should be encoded or not. We can see that ASP.NET MVC gives us more points of control over our web pages.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
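For completeness, here is a usage sketch of the helper above; the comment variable and its value are hypothetical, made up purely for illustration:

// A trusted, already-sanitized fragment (hypothetical example value):
string comment = "Line one<br />Line two";

// In a view:
//   <%: comment %>                 renders Line one&lt;br /&gt;Line two (encoded)
//   <%: comment.IsHtmlSafe() %>    renders the <br /> as real markup
MvcHtmlString safe = comment.IsHtmlSafe();   // MvcHtmlString implements IHtmlString

Remember that IsHtmlSafe() is only appropriate for content you trust; applying it to raw user input reintroduces exactly the XSS risk that <%: %> was designed to prevent.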

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model, however; they are procedural tasks modeled as stored procedures. In this article we show how you can call the stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

Step 1: Open Visual Studio and create a new Integration Services Project.

Step 2: Add a new Data Flow Task to the Control Flow window.

Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.

Step 4: Double-click the Script Component to open the editor.

Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we'll add the column JobID of type String.

Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

//Configure the connection string with your credentials
String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";
using (SalesforceConnection conn = new SalesforceConnection(connectionString))
{
    //Create the command to call the stored procedure CreateJob
    SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
    cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

    //Execute CreateJob
    //CreateBatch requires JobID as input, so we store this value for later
    SalesforceDataReader rdr = cmd.ExecuteReader();
    String JobID = "";
    while (rdr.Read())
    {
        JobID = (String)rdr["JobID"];
    }

    //Create the command for CreateBatch; for this example we are adding two new rows
    SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
    batCmd.CommandType = CommandType.StoredProcedure;
    batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
    batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
        "<Contact><Row><FirstName>Bill</FirstName>" +
        "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

    //Execute CreateBatch
    SalesforceDataReader batRdr = batCmd.ExecuteReader();
}

Step 7b: If you had specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we set up an output column called JobID of type String, so now we can set a value for it. We modify the DataReader loop that consumes the output of CreateJob like so:
while (rdr.Read())
{
    Output0Buffer.AddRow();
    JobID = (String)rdr["JobID"];
    Output0Buffer.JobID = JobID;
}

Step 8: Note: You will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and add the following using statements to the top of the class:

using System.Data;
using System.Data.RSSBus.Salesforce;

Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.

Step 11: Your project should be ready to run.
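Putting Steps 7 and 7b together, the script component's generated class would contain something along these lines; this is only an illustrative sketch (the Output0Buffer member and the CreateOutputRows() override come from the SSIS-generated boilerplate, and the credentials are placeholders):

public override void CreateOutputRows()
{
    // Placeholder credentials; see Step 8.
    String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

    using (SalesforceConnection conn = new SalesforceConnection(connectionString))
    {
        // Step 7: execute CreateJob and capture the JobID it returns.
        SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
        cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

        SalesforceDataReader rdr = cmd.ExecuteReader();
        String JobID = "";

        // Step 7b: push each JobID row into the data flow as well.
        while (rdr.Read())
        {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }

        // Step 7 (continued): feed the JobID into CreateBatch.
        SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
        batCmd.CommandType = CommandType.StoredProcedure;
        batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
        batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
            "<Contact><Row><FirstName>Bill</FirstName><LastName>White</LastName></Row>" +
            "<Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));
        batCmd.ExecuteReader();
    }
}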

    Read the article

< Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >