Search Results

Search found 518 results on 21 pages for 'young phillip'.


  • CQRS - Should a Command try to create a "complex" master-detail entity?

    - by Simon Crabtree
    I've been reading Greg Young and Udi Dahan's thoughts on Command Query Responsibility Segregation and a lot of what I read strikes a chord with me. My domain (we track vehicles which are doing deliveries) has the concept of a Route which contains one or more Stops. I need my customers to be able to set these up in our system by calling a webservice, and then be able to retrieve information about a Route and how the vehicle is progressing. In the past I would have "cut-down" DTO classes which closely resemble my domain classes, and the customer would create a RouteDto with an array of StopDtos, and call our CreateRoute webmethod, passing in the RouteDto. When they query our system by calling the GetRouteDetails method, I would return exactly the same objects to them. One of the appealing aspects of CQRS is that the RouteDto might have all manner of properties that the customer wants to query, but which they have no business setting when they create a Route. So I create a separate CreateRouteRequest class which is passed in when calling the CreateRoute "command", and a Route DTO class which gets returned as a query result. class Route{ string Reference; List<Stop> Stops; } But I need my customer to provide me with Route AND Stop details when they create a route. As I see it I could either... Give my CreateRouteRequest class a Stops property which is an array of "something" representing the data they need to provide about each stop - but what do I call this class? It's not a Stop as that's what I'm calling the DTOs inside my Route DTO, but I don't like "CreateStopRequest". I also wonder if I'm stuck in a CRUD mindset here, thinking in terms of master-detail information and asking the customer to think like that too. class CreateRouteRequest{ string Reference; ... List<CreateStopRequest> Stops; } Or: they call CreateRoute, and then make a number of calls to an AddStopToRoute method. This feels a bit more "behavioural", but I'm going to lose the ability to treat creating a route including its stops as a single atomic command. If they create a Route and then try to add a Stop which fails due to some validation problem, they're going to have a partially correct Route. The fact that I can't come up with a good name for the list of "StopCreationData" objects I'd be working with in option 1 makes me wonder if there's something I'm missing.
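    As a rough sketch of option 1 (the class names CreateStopData and RouteService are invented here for illustration; they are not from the question), the stop details simply ride along inside the command so that the route and its stops are created as one atomic operation:

    using System.Collections.Generic;

    public class CreateStopData
    {
        public string Location { get; set; }
        public int Sequence { get; set; }
    }

    public class CreateRouteRequest
    {
        public string Reference { get; set; }
        public List<CreateStopData> Stops { get; set; } = new List<CreateStopData>();
    }

    public class Route
    {
        private readonly List<string> stops = new List<string>();
        public string Reference { get; private set; }

        public Route(string reference) { Reference = reference; }

        public void AddStop(string location, int sequence)
        {
            // Domain validation lives in the aggregate, not in the DTO.
            if (string.IsNullOrWhiteSpace(location))
                throw new System.ArgumentException("A stop needs a location.");
            stops.Add(sequence + ": " + location);
        }
    }

    public class RouteService
    {
        // The whole request is handled as one unit of work: either the route and
        // all of its stops come into existence together, or the command fails.
        public Route CreateRoute(CreateRouteRequest request)
        {
            var route = new Route(request.Reference);
            foreach (var stop in request.Stops)
                route.AddStop(stop.Location, stop.Sequence);
            return route; // a repository would persist this in a single transaction
        }
    }

    Whatever the nested class ends up being called, the point is that the command stays one message and one transaction.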

    Read the article

  • DVCS + hosting for a startup commercial multiplatform phone app

    - by AG
    I'm in lean startup mode, working on a simple phone app that will be published initially as an iThingy app and an Android app with, possibly, Blackberry and Symbian versions to follow. I'm about to go from no repository to needing a central repository that up to 4 very part-time resources will be sharing. Two of us have no version control background, one has used Subversion, and I've used most of the major centralized VCS systems. I'm not going to be pushing the technical limitations of any VCS for a long time; I'm sure that any of the major systems would work fine. And the hosting accounts I've looked at seem reasonable. So I'm really focussed on minimizing the downside risks. That is, I'd like to find a stable setup that is easy to learn in general, easy to use from Windows/Eclipse, and won't paint me into any obvious corners for the next 12 months or so. A quick search of the web has led me to consider the following pairs of DVCS and hosting service, with what I think I'm hearing as their strengths and weaknesses (for my purposes): Bazaar/Launchpad -- My initial choice since I need to get more familiar with this pair for the Google Summer of Code mentoring I'm doing. But, whatever the technical merits, a non-starter for me because they are purely open source, with no private repository plans to purchase that I can see. Git/GitHub -- Git: fast, light, ultimately flexible, but relatively less Windows-friendly; Eclipse plugin (eGit) available but relatively young. GitHub: widely used, pricing is fine. Mercurial/BitBucket -- Mercurial: a little less flexible, a little more Windows-friendly, Eclipse plugin seems a bit more mature. BitBucket: widely used, pricing is fine, includes a wiki and an issue tracker that we might be able to use instead of something like BaseCamp, at least for a while. Mercurial/BitBucket seem like the winning pair so far for my particular situation; at least two of us are definitely going to be working mostly from Eclipse on Windows and reducing my own learning curve is a priority. ;-) But I have two specific questions: 1) Am I wrong about Bazaar/Launchpad and is there a viable, secure way to use them for proprietary code? 2) Any reason to think that the Mercurial/BitBucket pair will end up being a headache for my Mac developer, soon, or for Blackberry or Symbian developers a little later? ag

    Read the article

  • What would you suggest as a high school first language?

    - by ldigas
    Edit by OA: After reading some answers I'll just update the question a little. At first I put it a little bluntly, but some of those answers gave me good arguments which have to be taken into consideration while making a stand on this one. (These are mostly picked up from comments and answers below.) A few things to take into account: to many pupils this is a first programming language - at this stage most of them have trouble grasping the difference between data types, variable passing, ... and whatnot, let alone pointers and similar 'low level stuff' :) they will all have to pass this to get into the next grade (well, the big majority of them anyway) not all of them have computers at home, and not all of them are willing to learn this, let alone interested in it - so the concepts have to be taught on a finite time scale in school hours (as well as practice on computers) free literature is a bonus - the teacher will make some scripts and handouts, but still ... I wouldn't like to burden the parents with buying expensive literature (also, English is not the native language here ... and although they are all learning it, their ability to read it fluently is somewhat questionable) somebody gave an argument - "a language which does not get in the way of ideas" - good one accessibility on different platforms is not especially important at this point - although most of the suggested ones are available on Windows as well as Linux - not many Macs in this part of Europe (their prices are sky high for anything but specialised usage) I will check what the licensing issues are on MS Express editions about using them massively in high schools for purposes like this - if someone has any info about this, please, do not be shy with it :) A friend of mine, an informatics teacher - in the EU it comes out as something like a junior CS teacher - at a local high school asked me what I thought the first language pupils are taught should be. It is a technical school (a little more oriented towards mathematics than the gymnasium, but not totally computer oriented). So I'm asking you - what do you think should be the first language pupils are exposed to in high school? They have been teaching Pascal so far, but she's not sure that's a good course. She thought about switching to C (which I resisted, considering that not all pupils have an interest in programming to start with, and should be taught something higher level since they are just grasping the idea of a loop and such ... for a start); I suggested Python or Ruby (preferably Python since it handles all paradigms). What is your opinion on this one? I looked, but didn't find a similar question on SO, so if there is one, please just point me towards it. Edit: The assumption is that none of the pupils have been exposed to any programming in junior school. See also: What is the best way to teach young kids some basic programming concepts? Best ways to teach a beginner to program How and when do you teach a kid to code What is the easiest language to start with? High School Programming

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can, I fight bitterly against this practice and I do a lot of 'extra' programming to make password resets and administrative assistance possible without storing their actual password. When I can't fight it (or can't win) then I always encode the password in some way so that it at least isn't stored as plaintext in the database—though I am aware that if my DB gets hacked it won't take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don't want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood, even if they are treating it with much less respect. I am certain that there are many avenues to approach this and arguments to be made for salting hashes and different encoding options, but is there a single 'best practice' when you have to store them? In almost all cases I am using PHP and MySQL, if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
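    On the "salting hashes" side of the argument, here is a minimal illustrative sketch of the usual baseline (shown in C# for brevity, even though the question mentions PHP/MySQL; the PasswordHasher name and its parameters are invented for the example): store only a per-user random salt and a slow derived hash, never anything reversible.

    using System;
    using System.Security.Cryptography;

    static class PasswordHasher
    {
        // PBKDF2: a slow, salted key derivation. Store the salt and the hash.
        public static byte[] Hash(string password, byte[] salt, int iterations = 100000)
        {
            using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
                return kdf.GetBytes(32);
        }

        public static byte[] NewSalt()
        {
            byte[] salt = new byte[16];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(salt);
            return salt;
        }

        // Constant-time comparison so the check leaks no timing information.
        public static bool Verify(string password, byte[] salt, byte[] expected)
        {
            byte[] actual = Hash(password, salt);
            int diff = actual.Length ^ expected.Length;
            for (int i = 0; i < Math.Min(actual.Length, expected.Length); i++)
                diff |= actual[i] ^ expected[i];
            return diff == 0;
        }
    }

    A recovery flow built on top of this issues a one-time reset token or lets a service tech set a temporary password, rather than ever displaying the old one.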

    Read the article

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi, I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom, managed to get myself a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of HTTP, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with. When I changed over to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design, along the lines of the classic "car" class example that's so often used to teach OO. Breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET and started using it far more in the manner that it should be used; I began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3. In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture which is modelled on a lot of concepts which are new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use nHibernate a lot too. Shortly after I joined, both these guys left and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no-one to ask questions of. Except you, the internet. Frankly I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great - my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try and learn it. So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material that simply presupposes a level of understanding that I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!
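    For what "loosely coupled using a lot of interfaces and factories" tends to look like in practice, here is a tiny illustrative sketch (the type names are invented for the example, not taken from the poster's codebase): the consuming class depends only on an abstraction, and the concrete implementation (an nHibernate-backed repository, an in-memory fake for tests, and so on) is handed in from outside.

    public interface ICustomerRepository
    {
        Customer FindByName(string name);
    }

    public class Customer
    {
        public string Name { get; set; }
    }

    public class WelcomeService
    {
        private readonly ICustomerRepository repository;

        // The dependency is injected (by a factory or an IoC container)
        // rather than constructed inside the class, so it can be swapped.
        public WelcomeService(ICustomerRepository repository)
        {
            this.repository = repository;
        }

        public string Greet(string name)
        {
            var customer = repository.FindByName(name);
            return customer == null ? "Hello, stranger" : "Hello, " + customer.Name;
        }
    }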

    Read the article

  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete my picture of how the PC and the OS interact, and I am at a point where I am a little out of my depth when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know this when using a high-level programming language and WinAPI functions". I want to know; it's for study purposes. The very basic structure of how the OS and the PC (by PC I of course mean the hardware) work together, as I see it, is that everything other than the direct CPU instructions which the CPU can execute on its own (arithmetic operations, access to its registers and to memory) must pass through the OS. Mainly because from ring 3 you cannot use the in and out instructions which are used for accessing other hardware. I know that there is MMIO, but it must be set up by port communication first. It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access hardware directly, because there was no limitation, no ring mode. So to write a string to the display you could either use a DOS function or access the video card memory directly and write it yourself. But as OSes developed, this possibility went away. That is fine, since the OS now handles all the hardware communication, and frankly it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call a BIOS-mapped function or a DOS function, you call a DLL which internally handles everything you don't need to know about. I understand this. I also understand that a device driver is the piece of code that runs at ring 0, so it can do all the hardware interaction. But what I don't understand is the connection between the OS and the device driver. Let's take an example - I want to make a sound card produce a sound. So I call the Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so? And if it does call the device driver, does that mean that all device drivers which can be called by a WinAPI function must have routines named in some specific way? I mean, when I get a new sound card, must its driver have functions named the same as the old one's, so that Windows can actually call, from its perspective, the same function? Because if Windows had a predefined set of functions required of device drivers, it could not use new drivers that did not exist before the last version of the OS came out. Please help me understand this mess. I am really getting mad. Thanks.
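    To illustrate the user-mode half of that layering (this is my own sketch, not from the question): an application never touches the driver directly; it calls an exported Win32 function, and the OS routes the request through the I/O manager to whatever driver happens to be installed for the device. For example, in C#:

    using System.Runtime.InteropServices;

    class BeepDemo
    {
        // Beep is exported by kernel32.dll; inside the OS the call becomes an
        // I/O request that the kernel dispatches to the installed sound/beep driver.
        [DllImport("kernel32.dll")]
        static extern bool Beep(uint frequencyHz, uint durationMs);

        static void Main()
        {
            Beep(440, 500); // the application only ever sees the stable Win32 API
        }
    }

    (For what it's worth, on the kernel side drivers do not need specially named exports: when a driver loads it registers its own dispatch routines for a standard set of I/O request types, and Windows calls whatever routines were registered.)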

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore Sorry for my delayed post on day 3 because I had to travel from Blore to Chennai So I couldnt write for the past two days. On day 3 as usual we had lot of simultaneous tracks on various sessions. This day I choose the Your Data, Our Platform Track. It had sessions on the following 5 topics :   Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M SQL Server Utility - Its about more than 1 SQL Server - by Vinod Kumar Jagannathan Data Recovery / Consistency with CheckDB - by Vinod Kumar M Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam This was one of the superb sessions i have attended. He explained all the concepts in detail with a demo. The important thing in this is there is something called Data-Tier application project which is newly introduced in this VS2010 with which we can manage all our data along with our application inside our VS itself. We can create DB,Tables,Procs,Views etc. here itself and once we deploy it creates a compressed file called .dacpac which stores all the changes in Table Schema,Created procs, etc. on to that single file which reduces our (developer's) effort in preparing the deployment scripts and giving it to the DBA. It also has some policy configurations which can be managed easily by checking some rules like in outlook. For Ex : IF the SQL Server Version > 10 then deploy else dont. This rule specifies that even if we try to deploy on SQL Server DB with version less than 10 It will not do it. And if we deploy some .dacpac to SQL server production db with the option upgrade DB with this dacpac once everything completes successfully it will say success else it rollsback to the prior version. Even if it gets deployed successfully and later @ a point of time you wish to revert it back to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the db changes. And for the good questions that were asked in the session T-Shirts were given. SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M This one too was the best session. The speaker Vinod explained everything very much clearly. This was really useful session and you dont believe, as per my knowledge, in the total 3 days in the TechEd except the Keynote, for this session seats were full (House FULL)  People were even standing out to attend this session. Such a great one it was. The speaker did a deep dive in to the Query Plan section and showed which actually causes the problem. Its all about the thing that we need to understand about the execution of SQL server Queries. We think in a way and SQL Server never executes in that way. We need to understand that first. He also told about there might be two plans generated for a single query at a point of time because of parallel processors in the system. The Key is here in every query. There is something called Estimated Row Count and Actual Row Count in the query plan. If the estimated row count by SQL server tallies with the actual row count your performance will be awesome. He said some tweaks to achieve the same. After this as usual we had lunch SQL Server Utility - Its about more than 1 SQL Server - by Vinod Kumar Jagannathan This was more of a DBA's session. 
Am really sorry I was totally blank and I was not interested to attend this session and walked out to attend Migrating to the cloud by Harish Ranganathan (My favorite Speaker) but unfortunately that was some other persons session. There the speaker was telling about how to configure the connection strings in such a way that we can connect to the SQL Azure platform from our VS and also showed us how to deploy the same in to Windows Azure. In between there were lot of technical problems like laptop hang, user locked and he was switching between systems, also i came in the half so i wasnt able to listen that fully. In between, Since I got an MCTS certification they gave me T-Shirt with the lines 'Iam Certified. Are you?' and they asked me to wear that. If we wear that we might get spotted and they would give us some goodies  So on the 3rd day I was wearing that T-Shirt. I got spotted by the person Tarun who was coordinating things about the certification, and he was accompanied with a cameraman and they interviewed me about the certification and I was shown live in the Teched and was seen by 60000 live viewers of the TechEd. I was really happy on that. Data Recovery / Consistency with CheckDB - by Vinod Kumar M This was one of the best sessions too in the TechEd. This guy is really amazing. In front of us he crashed a DB and showed how to recover the same in 6 different ways for different no of failures. Showed about Different types of error msgs like : 823,824,825 msdb..suspect_pages DBCC CheckDB (different parameters to it) I am really waiting for his session to get uploaded live in the Teched Website. Here is his contact info If you wish to connect to him : Twitter : @vinodk_sql Website : www.ExtremeExperts.com Blog : http://blogs.sqlxml.org/vinodkumar Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave Pinal Dave is a King in SQL and he is a SQL MVP and he is the owner of SQLAuthority.com He took the session on Spatial Databases from the start. Showed about the different types of Spatial : Geometric and Geographic Geometric : x and y axis its a planar surface Geographic : Spherical surface with 3600  as the maximum which is used to represent the geographic points on the earth and easy to draw maps of different kinds. He had a lot of obstacles during his session like rain coming inside the hall, mic wires got bursted due to rain, Videos off on the display screens. In spite of that he asked the audience to come in the front rows and managed to take a good session without ppts and finally we got the displays on and he was showing demos on the same what he explained orally. That was really a fun filled informative session. He gave some books for the persons who asked good questions and answered well for his questions and I got one too  (It was a book on Data Mining - Wrox Publishers) And finally after all these things there was Keynote session for close of the TechEd. and we all assembled in a big hall where Mr.Ashok Soota, a man of age around 70  co-founder of Mindtree was called to give some lecture on his successes. He was explaining about his past and what all companies he switched and for what reasons and what are all his successes and what are all his failures and the learnings of him from his past failures. and his success and failures on his partnerships with the other concern. And there were some questions for him like What is your suggestion on young entrepreneur? How did you learn from past failures? What is reiterating your success? 
What is your suggestion on partnerships? How do you choose partnerships? etc. And they said at 7.30 PM there would be a party night, but unfortunately I was not able to attend that because I had to catch my train, and before that I had to pack things, so I left at 7 itself. That's it about the TechEd!!! Stay tuned for further technology updates.

    Read the article

  • Code Camp 2011 – Summary

    - by hajan
    Waiting whole twelve months to come this year’s Code Camp 2011 event was something which all Microsoft technologies (and even non-Microsoft techs.) developers were doing in the past year. Last year’s success was enough big to be heard and to influence everything around our developer community and beyond. Code Camp 2011 was nothing else but a invincible success which will remain in our memory for a long time from now. Darko Milevski (president of MKDOT.NET UG and SharePoint MVP) said something interesting at the event keynote that up to now we were looking at the past by saying what we did… now we will focus on the future and how to develop our community more and more in the future days, weeks, months and I hope so for many years… Even though it was held only two days ago (26th of November 2011), I already feel the nostalgia for everything that happened there and for the excellent time we have spent all together. ORGANIZED BY ENTHUSIASTS AND EXPERTS Code Camp 2011 was organized by number of community enthusiasts and experts who have unselfishly contributed with all their free time to make the best of this event. The event was organized by a known community group called MKDOT.NET User Group, name of a user group which is known not only in Macedonia, but also in many countries abroad. Organization mainly consists of software developers, technical leaders, team leaders in several known companies in Macedonia, as well as Microsoft MVPs. SPEAKERS There were 24 speakers at five parallel tracks. At Code Camp 2011 we had two groups of speakers: Professional Experts in various technologies and Student Speakers. The new interesting thing here is the Student Speakers, which draw attention a lot, especially to other students who were interested to see what their colleagues are going to speak about and how do they use Microsoft technologies in different coding scenarios and practices, in different topics. From the rest of the professional speakers, there were 7 Microsoft MVPs: Two ASP.NET/IIS MVPs, Two C# MVPs, and One MVP in SharePoint, SQL Server and Exchange Server. I must say that besides the MVP Speakers, who definitely did a great job as always… there were other excellent speakers as well, which were speaking on various technologies, such as: Web Development, Windows Phone Development, XNA, Windows 8, Games Development, Entity Framework, Event-driven programming, SOLID, SQLCLR, T-SQL, e.t.c. SESSIONS There were 25 sessions mainly all related to Microsoft technologies, but ranging from Windows 8, WP7, ASP.NET till Games Development, XNA and Event-driven programming. Sessions were going in five parallel tracks named as Red, Yellow, Green, Blue and Student track. Five presentations in each track, each with level 300 or 400. More info MY SESSION (ASP.NET MVC Best Practices) I must say that from the big number of speaking engagements I have had, this was one of my best performances and definitely I have set new records of attendees at my sessions and probably overall. I spoke on topic ASP.NET MVC Best Practices, where I have shown tips, tricks, guidelines and best practices on what to use and what to avoid by developing with one of the best web development frameworks nowadays, ASP.NET MVC. I had approximately 350+ attendees, the hall was full so that there was no room for staying at feet. 
Besides .NET developers, there were a lot of other technology oriented developers, who has also received the presentation very well and I really hope I gave them reason to think about ASP.NET as one of the best options for web development nowadays (if you ask me, it’s the best one ;-)). I have included 10 tips in using ASP.NET MVC each of them followed by a demo. Besides these 10 tips, I have briefly introduced the concept of ASP.NET MVC for those that haven’t been working with the framework and at the end some bonus tips. I must say there was lot of laugh for some funny sentences I have stated, like “If you code ASP.NET MVC, girls will love you more” – same goes for girls, only replace girls with boys :). [LINK TO SESSION WILL GO HERE, ONCE SESSIONS ARE AVAILABLE ON MK CODECAMP WEBSITE] VOLUNTEERS Without strong organization, such events wouldn’t be able to gather hundreds of attendees at one place and still stay perfectly organized to the smallest details, without dedicated organization and volunteers. I would like to dedicate this space in my blog to them and to say one big THANK YOU for supporting us before the event and during the whole day in the event. With such young and dedicated volunteers, we couldn’t achieve anything but great results. THANK YOU EVERYONE FOR YOUR CONTRIBUTION! NETWORKING One of the main reasons why we do such events is to gather all professionals in one place. Networking is what everyone wants because through this way of networking, we can meet incredible people in one place. It is amazing feeling to share your knowledge with others and exchange thoughts on various topics. Meet and talk to interesting people. I have had very special moments with many attendees especially after my presentation. Special Thank You to all of them who come to meet me in person, whether to ask a question, say congrats for my session or simply meet me and just smile :)… everything counts! Thank You! TWITTER During the event, twitter was one of the most useful event-wide communication tool where everyone could tweet with hash tag #mkcodecamp or #mkdotnet and say what he/she wants to say about the current state and happenings at that moment… In my next blog post I will list the top craziest tweets that were posted at this event… FUTURE OF MKDOT.NET Having such strong community around MKDOT.NET, the future seems very bright. The initial plans are to have sub-groups in several technologies, however all these sub-groups will belong to the MKDOT.NET UG which will be, somehow, the HEAD of these sub-groups. We are doing this to provide better divisions by technologies and organize ourselves better since our community is very big, around 500 members in MKDOT.NET.We will have five sub-groups:- Web User Group (Lead:Hajan Selmani - me)- Mobile User Group (Lead: Filip Kerazovski)- Visual C# User Group (Lead: Vekoslav Stefanovski)- SharePoint User Group (Lead: Darko Milevski)- Dynamics User Group (Lead: Vladimir Senih) SUMMARY Online registered attendees: ~1.200 Event attendees: ~800 Number of members in organization: 40+ Organized by: MKDOT.NET User Group Number of tracks: 5 Number of speakers: 24 Number of sessions: 25 Event official website: http://codecamp.mkdot.net Total number of sponsors: 20 Platinum Sponsors: Microsoft, INETA, Telerik Place held: FON University City and Country: Skopje, Macedonia THANK YOU FOR BEING PART OF THE BEST EVENT IN MACEDONIA, CODE CAMP 2011. Regards, Hajan

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

    Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it! Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey as my requests (and phone calls) haven't been answered, even though I tried it several times - well, kind of disappointing and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: Location, and Date and time You can't just change one or both on the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member which couldn't make it to our April meetup. Okay, lesson learned but now back to the actual meetup report ... Shortly after the meeting I placed the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..." Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I also like to mention that the number of attendees went back to what I would call a "standard" number of participation. This time there were 12 craftsmen, but again a good number of First Timers. Reactions of other attendees Here are some impressions and feedback from our participants: "Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments "He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting   Glad to see a couple of first time attendees, especially students from the university itself. Some details on the presentations MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9 Gimme some love ... bash and other shells Ish gave a great introduction into shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on Ubuntu 14.04 machine it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC – Let's talk about Scripting. Oddities of scripting After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not supposed for any kind of implementations in terms of software but simply to automate administrative procedures and to simplify routine jobs on a system. One of the cool oddities that he mentioned is that everything (!) 
in a shell is represented by strings; there are no other types like integer, float, date-time, etc. that you'd like to use in a full-fledged programming language. Let's have a look at his sample:  more to come... What's the output? As a conclusion, Daniel suggests that shell scripting should be limited, but not restricted, to automating repetitive command stacks and batch jobs, startup wrappers for applications in order to set up the execution environment, and other not too sophisticated jobs. But as soon as it might involve a little bit more logic, or you might rely on performance, it's better to write an application in Ruby, Python, or Perl (among others of course). This also enables the possibility to test your code properly. MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming MSCC: PowerShell as your scripting solution on Windows operating systems The path of the Enlightened is long ... and tough. Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand to give a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay, then let's see what I could do - improvised style. While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell back in 2006, and its predecessors MS-DOS and the Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their clients' needs and demands, they ignored the need to administer Windows server farms without any UI tools. PowerShell is actually a result of this, and seeing that shell scripting has been a common, reliable and fast way in an administrator's toolbox for decades, Microsoft had to expand from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets (a short C# sketch of one follows at the end of this excerpt), and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely - only without a graphical user interface, and therefore it's easier to automate and schedule regular tasks. Following is a sample of a PowerShell script file (extension .ps1):
$strComputer = "."
$colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
foreach ($objItem in $colItems) {
    write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
    write-host "BIOS Version: " $objItem.BIOSVersion
    write-host "Build Number: " $objItem.BuildNumber
    write-host "Caption: " $objItem.Caption
    write-host "Code Set: " $objItem.CodeSet
    write-host "Current Language: " $objItem.CurrentLanguage
    write-host "Description: " $objItem.Description
    write-host "Identification Code: " $objItem.IdentificationCode
    write-host "Installable Languages: " $objItem.InstallableLanguages
    write-host "Installation Date: " $objItem.InstallDate
    write-host "Language Edition: " $objItem.LanguageEdition
    write-host "List Of Languages: " $objItem.ListOfLanguages
    write-host "Manufacturer: " $objItem.Manufacturer
    write-host "Name: " $objItem.Name
    write-host "Other Target Operating System: " $objItem.OtherTargetOS
    write-host "Primary BIOS: " $objItem.PrimaryBIOS
    write-host "Release Date: " $objItem.ReleaseDate
    write-host "Serial Number: " $objItem.SerialNumber
    write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
    write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
    write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
    write-host "SMBIOS Present: " $objItem.SMBIOSPresent
    write-host "Software Element ID: " $objItem.SoftwareElementID
    write-host "Software Element State: " $objItem.SoftwareElementState
    write-host "Status: " $objItem.Status
    write-host "Target Operating System: " $objItem.TargetOperatingSystem
    write-host "Version: " $objItem.Version
    write-host
}
This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too. Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Hacking Defence (14. May 2014) WebCup Maurice (7. & 8. June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! My resume of the day Spontaneous and improvised :) The new location at the University of Mauritius turned out very well, there is plenty of space, and it could be a good choice for future meetings. Especially, having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!
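    As a side note on the cmdlet extensibility mentioned above, a custom cmdlet is just a small .NET class compiled against the PowerShell SDK. A rough sketch (my own example; the cmdlet name and parameter are invented):

    using System.Management.Automation;

    // Compiled into a DLL and loaded with Import-Module, this adds a
    // Get-Greeting cmdlet to the shell, callable like any built-in command.
    [Cmdlet(VerbsCommon.Get, "Greeting")]
    public class GetGreetingCommand : Cmdlet
    {
        [Parameter(Position = 0)]
        public string Name { get; set; }

        protected override void ProcessRecord()
        {
            WriteObject("Hello, " + (Name ?? "world") + "!");
        }
    }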

    Read the article

  • Visiting the Emtel Data Centre

    Back in February at the first event of the Emtel Knowledge Series (EKS) I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and far from successful but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to get access to the Emtel data centre it is requested that you provide full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited amount of seats available. Anyways, packed with this information I posted through the usual social media channels. Responses came in very quickly and based on First-come, first-serve (FCFS) principle I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin send me a reminder to check whether everything is still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I'm coming from Flic En Flac towards the North, we agreed that I'm going to pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after some extended stop we were complete and arrived just in time in Arsenal to meet and greet with Ish and Veer. Again, Emtel is taking access procedures to their data centre very serious and the gate stayed close until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards it was a straight-forward processing. The ward was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us brief introduction into the rules and regulations during our visit, like no photography allowed, not touching the buttons, and following his instructions through the whole visit. Of course! Inside the data centre Next, he explained us the multi-factor authentication system using a combination of bio-metric data, like finger print reader, and "classic" pin panel. The Emtel data centre provides multiple services and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that have been done in choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room. 
And wow, the whole setup is cleverly planned and outlined in the architecture. From the false floor and ceilings in order to provide optimum air flow, over to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And not surprisingly everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and the Emtel's one has been designed to be a TIER 4 building but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onwards connectivity. The data centre is part of the National Fibre Optic Gigabit Ring and has redundant internet connectivity onwards. Meanwhile, Arvin managed to join our little group of geeks and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activities on the data lines, the managed servers and the activity in and around the building was great. Even though I'm using a multi-head setup since years I cannot keep it up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room which is dedictated to shared environments. On first thought it should be well-noted that there is lots of space for full-sized racks and therefore co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering of moving one of my root servers abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency. This would be a good improvement, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started the conversation of additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today Emtel does not provide virtualised server environments but there might be ongoing plans in the future to cover this field as well. Emtel is a mobile operator and internet connectivity provider in the first place, entering a market of managed and virtualised server infrastructures including capacities in terms of cloud storage and computing are rather new and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
And I appreciate Emtel's approach towards a solid fundament and then building new services on top of that. Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's brochure of their data centre. Check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank anyone at Emtel involved in the process of making it happen, and especially to Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • CodePlex Daily Summary for Thursday, April 15, 2010

    CodePlex Daily Summary for Thursday, April 15, 2010New ProjectsApplication Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...B in conTrol project: This project enables controling log-in and locking your workstation automatically, identifyng you bluetooth.DarkBook: DarkBook is a personal library project.Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows to access Direct2D, DirectWrite and Windows Imaging Windows API f...DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing to place the user on a personnalized map. The screen requirements are VGA or WVGA but, you ...jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...KEMET_API: Java Library (open - source). This library is a help to study egyptian hieroglyphs.Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...Net.Formats.oEmbed: oEmbed format implementation in c#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...N-Twill Twitter Client for VB.NET: Proyecto de cliente twitter hecho con la libreria TwitterVB2 y hecho en VB.net 2008.SIQM: Spatial Information Quality Management Toolset TIMETABLEASY Web: Under developmentTweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...UISandbox: UISandbox is a sample C# source code showing how to deal with plugins requiring sandbox, when those plugins must interact with WPF application inte...WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...WoW Character Viewer: View your World of Warcraft character (or anyone else's character), using this application. Written using Visual Basic Express 2008, then ported t...Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.New ReleasesArkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)Bluetooth Radar: Version 2.0: Add IrDA reference for Bluetooth sending using Obex Add Project icon Add Bluetooth detection mode (Auto close application is there is no blueto...BUtil: BUtil 5.0 Alpha: Backup tasks adding.... in progressChronos WPF: Chronos v1.0 RC 1: Chronos v1.0 RC 1. Development will be feature frozen after this release, only bug fixes will be allowed. 
Updated nRoute assembly to v0.4 (http:...clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discoverd by Tschachim (thanks again!). Also, the play list and play all menu items s...DarkBook: DarkBook alpha: Hi, here comes the alpha version of Darkbook. It has all the functions already but is still in developing. I hope it's helpful for you, at least it...DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will be shortly be removed. This replaces the original 1.8.3 release from earlier today which had some late-breaking ...Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: Effect Custom Tool for Visual Studio is a visual studio 2008 extension that helps you generate c# classes from effect (*.fx) files for use with Xna...Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...gpsMe: gpsMe v0.3: Required Hardware Windows Mobile 6 .Net Compact Framework 3.5 integrated gps device VGA or WVGA screen (normally works on others)IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: Lab 4 - Enterprise Level CMS with DotNetNukeThis is the "starter kit" that you must base your Lab 4 on. This lab must be completed in-class.Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...MvcContrib: a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the Change Log for a complete list of changes. MVC BootCamp Description of the releases: MvcContrib.Release.zip MvcContrib.dll MvcC...Nito.LINQ: Beta (v0.3): New features for this release: Several new supported platforms (see below). PDBs that are source-indexed to the appropriate CodePlex changeset. ...OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll DotNetOpenAuth.xml MvcContrib.dll MvcContrib.xml OpenIdPortableArea.dll OpenIdPortableAre...PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of PokeIn library with sample. v0.2 There are new features in this release and no bug detected yet.Project Tru Tiên: Elements-test V1-fix (v2): Là EL test được fix tiếp theo bản fix V1, tạm gọi đây là bản fix V2 của ELtest Trong bản fix này EL được fix thêm vụ Quest, Quest chỉnh sửa đúng t...Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...SevenZipSharp: SevenZipSharp 0.62: Added: Extraction from SFX archives. Now it is possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc. 
Extraction from DOC, XLS, (...SharePoint Labs: SPLab3001A-FRA-Level200: SPLab3001A-FRA-Level200 This SharePoint Lab will teach the persistence object layer that SharePoint uses to centraly store configuration data and o...TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010turing machine simulator: SDS: SDS documentVecDraw: VecDraw_0.2.25: Alpha release for test purposesWinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoitn web part manager toolXrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - No sample data conversion at momentZip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for visual studio support with getting access to files in post build event; 2. Added ShowTextInToolbars to app.config ...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelpatterns & practices – Enterprise LibraryMost Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationFarseer Physics EngineIonics Isapi Rewrite FilterNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETjQuery Library for SharePoint Web ServicesDotRasFacebook Developer Toolkit

    Read the article

  • How To Remove People and Objects From Photographs In Photoshop

    - by Eric Z Goodnight
You might think that it’s a complicated process to remove objects from photographs. But really, Photoshop makes it quite simple, even when removing all traces of a person from a digital photograph. Read on to see just how easy it is. Photoshop was originally created to be an image editing program, and it excels at it. With hardly any Photoshop experience, any beginner can begin removing objects or people from their photos. Have some friends that photobombed an otherwise great pic? Tell them to say their farewells, because here’s how to get rid of them with Photoshop!
Tools for Removing Objects
Removing an object is not really “magical” work. Your goal is basically to cover up the information you don’t want in an image with information you do want. In this sample image, we want to remove the cigar-smoking man and leave the geisha. Here are a couple of the tools that can be useful when attempting this kind of task. Clone Stamp and Pattern Stamp Tool: samples parts of your image from your background, and allows you to paint into your image with your mouse or stylus. Eraser and Brush Tools: paint flat colors and shapes, and erase cloned layers of image information. Basic, down-and-dirty photo editing tools. Pen, Quick Selection, Lasso, and Crop tools: select, isolate, and remove parts of your image with these selection tools. All useful in their own way. Some, like the Pen tool, are nightmarishly tough on beginners.
Remove a Person with the Clone Stamp Tool (Video)
The video above uses the Clone Stamp tool to sample and paint with the background texture. It’s a simple tool to use, although it can be confusing and possibly counter-intuitive. Here are some pointers, in addition to the video above. Press the shortcut key (S) to choose the Clone Stamp tool from the Tools Panel. Always create a copy of your background layer before doing heavy edits by right-clicking on the background in your Layers Panel and selecting “Duplicate.” Hold Alt (Option on a Mac) with the Clone Stamp tool selected, and click anywhere in your image to sample that area. When you’re sampling an area, your cursor is “Aligned” with your sample area. When you paint, your sample area moves. You can turn the “Aligned” setting off in the Options Panel at the top of your screen if you want. Change your brush size and hardness, as shown in the video, by right-clicking in your image. Use your Lasso to copy and paste pieces of your image in order to cover up any parts that need it.
Photoshop Magic with “Content-Aware Fill”
One of the hallmark features of CS5 is Content-Aware Fill. Content-Aware Fill can be an excellent shortcut to removing objects and even people in Photoshop, but it is somewhat limited and can get confused. Here’s a basic rundown of how it works. Select an object using your Lasso tool, shortcut key (L). The Lasso works fine here, as this selection can be rough. Navigate to Edit > Fill and select “Content-Aware,” as illustrated above, from the pull-down menu. It’s surprisingly simple. After some processing, Photoshop has done the work of removing the object for you. It takes a few moments, and it is not perfect, so be prepared to touch it up with some copy-paste or some Clone Stamp action.
Content-Aware Fill Has Its Limits
Keep in mind that Content-Aware Fill is meant to be used alongside other techniques. It doesn’t always perform perfectly, but it can give you a great starting point. Take this image, for instance.
It is actually plausible to hide this figure and make the image look like he was never there at all. With a selection made with the Lasso tool, navigate to Edit > Fill and select “Content-Aware” again. The result is surprisingly good but, as you can see, worthy of some touch-up. With a result like this one, you’ll have to get your hands dirty with copy-paste to create believable lines in the background. With many photographs, Content-Aware Fill will simply get confused and give you results you won’t be happy with.
Additional Touch-Up for Bad Background Textures with the Pattern Stamp Tool
For the perfectionist, cleaning up the lumpy-looking textures that the Clone Stamp can leave is fairly simple using the Pattern Stamp tool. Sample a piece of your image with your Marquee tool, shortcut key (M). Navigate to Edit > Define Pattern to create a new pattern from your selection. Click OK to continue. Click and hold down on the Clone Stamp tool in your Tools Panel until you can select the Pattern Stamp tool. Pick your new pattern from the options at the top of your screen, in the Options Panel. Then simply right-click in your image to pick as soft a brush as possible to paint with. Paint into your image until your background is as smooth as you want it to be, making your painted-out object more and more invisible. If you get lines from your repeated texture, experiment with turning the “Aligned” setting on and off and paint over them. In addition to this, simple use of the Crop tool, shortcut key (C), can recompose an image, making it look as if it never had another object in it at all. Combine these techniques to find a method that works best for your images. Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article.
Image Credits: Geisha Kyoto Gion by Todd Laracuenta via Wikipedia, used under Creative Commons. Moai Rano raraku by Aurbina, in Public Domain. Chris Young visits Wrigley by TonyTheTiger, via Wikipedia, used under Creative Commons.

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
  android:orientation="vertical"
  android:layout_width="fill_parent"
  android:layout_height="fill_parent">
  <TextView
    android:layout_height="wrap_content"
    android:text="Custom ListView Contents"
    android:gravity="center_vertical|center_horizontal"
    android:layout_width="fill_parent" />
  <ListView
    android:id="@+id/ListView01"
    android:layout_height="wrap_content"
    android:layout_width="fill_parent"/>
</LinearLayout>

Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
  android:orientation="vertical"
  android:layout_width="fill_parent"
  android:layout_height="fill_parent">
  <TextView android:id="@+id/name"
    android:textSize="14sp"
    android:textStyle="bold"
    android:textColor="#FFFF00"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>
  <TextView android:id="@+id/cityState"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>
  <TextView android:id="@+id/phone"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"/>
</LinearLayout>

Now, add an object called SearchResults. Paste this code in:

public class SearchResults {
  private String name = "";
  private String cityState = "";
  private String phone = "";

  public void setName(String name) {
    this.name = name;
  }
  public String getName() {
    return name;
  }
  public void setCityState(String cityState) {
    this.cityState = cityState;
  }
  public String getCityState() {
    return cityState;
  }
  public void setPhone(String phone) {
    this.phone = phone;
  }
  public String getPhone() {
    return phone;
  }
}

This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer.
public class MyCustomBaseAdapter extends BaseAdapter {
  private static ArrayList<SearchResults> searchArrayList;
  private LayoutInflater mInflater;

  public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {
    searchArrayList = results;
    mInflater = LayoutInflater.from(context);
  }

  public int getCount() {
    return searchArrayList.size();
  }

  public Object getItem(int position) {
    return searchArrayList.get(position);
  }

  public long getItemId(int position) {
    return position;
  }

  public View getView(int position, View convertView, ViewGroup parent) {
    ViewHolder holder;
    if (convertView == null) {
      convertView = mInflater.inflate(R.layout.custom_row_view, null);
      holder = new ViewHolder();
      holder.txtName = (TextView) convertView.findViewById(R.id.name);
      holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);
      holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);
      convertView.setTag(holder);
    } else {
      holder = (ViewHolder) convertView.getTag();
    }

    holder.txtName.setText(searchArrayList.get(position).getName());
    holder.txtCityState.setText(searchArrayList.get(position).getCityState());
    holder.txtPhone.setText(searchArrayList.get(position).getPhone());

    return convertView;
  }

  static class ViewHolder {
    TextView txtName;
    TextView txtCityState;
    TextView txtPhone;
  }
}

(This is basically the same as the List14.java API demo.) Finally, we'll wire it all up in the main class file:

public class CustomListView extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        ArrayList<SearchResults> searchResults = GetSearchResults();

        final ListView lv1 = (ListView) findViewById(R.id.ListView01);
        lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));

        lv1.setOnItemClickListener(new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> a, View v, int position, long id) {
                Object o = lv1.getItemAtPosition(position);
                SearchResults fullObject = (SearchResults) o;
                Toast.makeText(CustomListView.this, "You have chosen: " + fullObject.getName(), Toast.LENGTH_LONG).show();
            }
        });
    }

    private ArrayList<SearchResults> GetSearchResults() {
        ArrayList<SearchResults> results = new ArrayList<SearchResults>();

        SearchResults sr1 = new SearchResults();
        sr1.setName("John Smith");
        sr1.setCityState("Dallas, TX");
        sr1.setPhone("214-555-1234");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Jane Doe");
        sr1.setCityState("Atlanta, GA");
        sr1.setPhone("469-555-2587");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Steve Young");
        sr1.setCityState("Miami, FL");
        sr1.setPhone("305-555-7895");
        results.add(sr1);

        sr1 = new SearchResults();
        sr1.setName("Fred Jones");
        sr1.setCityState("Las Vegas, NV");
        sr1.setPhone("612-555-8214");
        results.add(sr1);

        return results;
    }
}

Notice that we first get an ArrayList of SearchResults objects (normally this would come from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do.
Fire it up in the emulator, and you should wind up with something like this:
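If you later need to refresh the list after the underlying data changes - say, when a new search comes back - a minimal sketch might look like the following. It reuses the classes above and assumes you keep a reference to the adapter instead of creating it inline; the updateResults helper is hypothetical and not part of the original listing.

// Hypothetical helper added inside MyCustomBaseAdapter from the listing above.
public void updateResults(ArrayList<SearchResults> results) {
  searchArrayList = results;
  notifyDataSetChanged(); // asks the ListView to rebuild its visible rows from the new data
}

// In the activity: keep the adapter in a local/field reference so it can be refreshed later.
MyCustomBaseAdapter adapter = new MyCustomBaseAdapter(this, GetSearchResults());
lv1.setAdapter(adapter);
// ...later, when fresh data arrives:
adapter.updateResults(GetSearchResults());

Because getView() always reads from searchArrayList, the rows pick up the new values on the next layout pass without re-creating the adapter.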

    Read the article

  • WhatsApp &amp; Tasker for Android &ndash; Read &amp; Write messages

    - by Shaurya Anand
So, I finally gave up on all my previous Microsoft Mobile/Phone OS devices and made my switch to Android this year. I have been using my Samsung Galaxy Note GT-N7000 with CyanogenMod 9.1.0 (http://get.cm/get/jenkins/7086/cm-9.1.0-n7000.zip) and ClockworkMod 6.0.1.2 (http://download2.clockworkmod.com/recoveries/recovery-clockwork-6.0.1.2-n7000.zip) since August this year, and I am so happy with the performance and the flexibility it offers me. As a software developer by profession, I expect most of my gadgets to be highly customizable and programmable (one time or at intervals) to suit my needs as closely as possible. I was introduced to Automation for Android – Tasker (https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en) via reddit (http://www.reddit.com/r/tasker), and the word ‘automation’ was enough for me to dive right into this app. The only automation I had done earlier was switching profiles depending on location on those phones. And now, just imagine a complete set of possibilities that can be automated on the phone or via the phone. I did my research and found a couple of other tools that do the same, or close to what Tasker can do, and a few of them are even free. There’s even one by Microsoft called on{X} (https://play.google.com/store/apps/details?id=com.microsoft.onx.app&hl=en). Microsoft’s on{X} really caught my eye. You can write code for your phone in their web application, deploy it to your phone, and even trace the flow, all from your PC. Really brilliant; I love the fact that it’s all JavaScript. Here comes the “but”: it is still very, very young, and its policy of accessing my News Feed on Facebook is not something that I can digest. On{X} is good, but as I said earlier, the API is not very mature, and hence I gave up on it. I bought Tasker, the best 5,00 € I have spent in ages, and I want to talk about it in this post. I am still a “noob” at operating this tool, but I tried my hand at automating WhatsApp (https://play.google.com/store/apps/details?id=com.whatsapp&hl=en), a popular messenger available on various platforms. The requirement for the automation is that if I send a WhatsApp ‘wru’ message to the phone, it should respond with the location and battery level of my phone. It could be useful if you want to locate a misplaced phone or automatically reply to your partner/friend - honestly, I don’t know what you will use it for; through this post, I am just introducing automating WhatsApp using Tasker. Before we begin: the following script only works when your phone is rooted, as we will be accessing the WhatsApp database and typing some special characters like ‘:’. Let’s follow the code line by line: Profile:         Location request from XYZ. (12) // Name of your profile. Event:         Notification [ Owner Application:WhatsApp Title:* ] // When a new notification comes from WhatsApp, this event is fired. Read the end note if you face problems with the Chrome app after enabling Tasker accessibility. Enter:         A1: Run Shell [ Command:sqlite3 // We will access the WhatsApp database and check whether the message comes from the designated phone number or not. We mustn’t reply to every message.                 /data/data/com.whatsapp/databases/msgstore.db "SELECT _id, data FROM                  messages WHERE key_from_me='0' AND key_remote_jid LIKE '%XXXXXXXXXXX%' // Replace XXXXXXXXXXX with the phone number of your message sender.                 
ORDER BY _id DESC LIMIT 1;" Timeout (Seconds):10 Use Root:On Store // I made a timeout for 10 seconds, if in case WhatsApp is busy accessing the database.                 Result In:%WHATSAPP_CURRREQ ] // Store the read Id and the last message on to the variable %WHATSAPP_CURRREQ         A2: If [ %WHATSAPP_CURRREQ ~R .*[wW][rR][uU].* ] // Check if the pattern of the message is correct and we are all set to send the location.                 A3: If [ %WHATSAPP_CURRREQ !~ %WHATSAPP_LASTREQ ] // Verify that the message is different from the last request. Remember every message has a unique Id.                         A4: Notify [ Title:WhatsApp location request... Text:Sending location // Just a notification that the location message is being prepared.                                 to Krati Gupta... Icon:<icon> Number:0 Permanent:On Priority:3 ] // Make a note it is a permanent notification, we will clear it later.                         A5: Secure Settings [ Configuration:Pattern Lock Disabled // I am disabling the pattern lock, that I use using the plugin Secure Settings.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure // You can download the plugin from here: https://play.google.com/store/apps/details?id=com.intangibleobject.securesettings.plugin&hl=en                                 Settings ]                         A6: Secure Settings [ Configuration:Keyguard Disabled // Disable the keygaurd, it is useful, when your phone is on lock and you want to automate everything, even the typing.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A7: Secure Settings [ Configuration:GPS Enabled // Pretty clear, turn on the GPS and get location at A8                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A8: AutoShortcut [ Configuration:WhatsApp: Some One // I am using AutoShortcut plugin (https://play.google.com/store/apps/details?id=com.joaomgcd.autoshortcut) to start WhatsApp with the indented recipient.                                 Package:com.joaomgcd.autoshortcut Name:AutoShortcut ] // Replace Some One, actually choose it from the plugin, the right recipient.                         A9: Get Location [ Source:Any Timeout (Seconds):30 Continue Task // I am getting the location, timeout is 30 seconds, adjust it accordingly.                                 Immediately:Off Keep Tracking:Off ]                         A10: Secure Settings [ Configuration:Screen Dim // Now, this extension of the plugin Secure Settings, wakes your device so that you can type out the string on the WhatsApp app.                                 5 Seconds Package:com.intangibleobject.securesettings.plugin                                 Name:Secure Settings ]                         A11: Run Shell [ Command:input text // Now, I am using the shell script to type the text to the window, because the ‘:’ while not be typed from the Type task in Tasker.                                 LOCATION:maps.google.com/maps?q=%LOC Timeout (Seconds):0 Use Root:On // And also, this is way faster, but remember you need root for this, not for the other way of typing.                                 
Store Result In: ]                         A12: Dpad [ Button:Right Repeat Times:1 ] // Focus the Send button                         A13: Dpad [ Button:Press Repeat Times:1 ] // And press it.                         A14: Dpad [ Button:Left Repeat Times:1 ] // Get back to the typing box.                         A15: Run Shell [ Command:input text LOCATION_ACCURACY:%LOCACC Timeout                                 (Seconds):0 Use Root:On Store Result In: ]                         A16: Dpad [ Button:Right Repeat Times:1 ]                         A17: Dpad [ Button:Press Repeat Times:1 ]                         A18: Dpad [ Button:Left Repeat Times:1 ]                         A19: Run Shell [ Command:input text BATTERY_LEVEL:%BATT% Timeout // I am adding Battery level in my case as well.                                 (Seconds):0 Use Root:On Store Result In: ]                         A20: Dpad [ Button:Right Repeat Times:1 ]                         A21: Dpad [ Button:Press Repeat Times:1 ]                         A22: Variable Set [ Name:%WHATSAPP_LASTREQ To:%WHATSAPP_CURRREQ Do // And now, we say, request is done.                                 Maths:Off Append:Off ]                         A23: Button [ Button:Back ] // I am exiting the WhatsApp nicely and not killing it. If you are the murderer kind, kill it, just know, you don’t have any place in the heaven.                         A24: Button [ Button:Back ]                         A25: Notify Cancel [ Title: Warn Not Exist:Off ] // Remove the permanent notification.                         A26: Notify [ Title:WhatsApp location request Text:Location sent // Make a temporary notification, and say, location is sent.                                 successfully. Icon:<icon> Number:0 Permanent:Off Priority:3 ]                                                         A27: Secure Settings [ Configuration:GPS Disabled // Disable all the horrible things we turned on earlier.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A28: Secure Settings [ Configuration:Pattern Lock Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A29: Secure Settings [ Configuration:Keyguard Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                 A30: End If         A31: End If Download this Task from here: http://db.tt/9vRmbhyb That’s it in the above small example – you can read/write messages from/to WhatsApp app. I am using n7000-cm9.1-cwr6. Oh yea, and if you are having the Talkback auto enabled for Chrome browser, you need to turn Off the Web scripts to run. Tasker is amazing, I have automated a lot of tasks using this tool. I will share a few none generic ones with you in my coming post here.
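As an aside, if you want to prototype or debug that msgstore.db query outside of Tasker, a rough Java/Android sketch of the same lookup is below. It assumes the same messages table layout the shell command above relies on, and that the database file has been copied somewhere your app can actually read - remember, this whole exercise only works on rooted phones. The class, package, and path parameter are placeholders of my own, not part of the Tasker profile.

package com.example.whatsapppeek; // hypothetical package for this sketch

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class WhatsAppPeek {
    // Returns the newest incoming message text from one contact, or null if none found.
    public static String lastIncomingMessage(String dbPath, String phoneNumber) {
        SQLiteDatabase db = SQLiteDatabase.openDatabase(dbPath, null, SQLiteDatabase.OPEN_READONLY);
        try {
            Cursor c = db.rawQuery(
                "SELECT _id, data FROM messages " +
                "WHERE key_from_me='0' AND key_remote_jid LIKE ? " +
                "ORDER BY _id DESC LIMIT 1",
                new String[] { "%" + phoneNumber + "%" });
            try {
                return c.moveToFirst() ? c.getString(1) : null; // the data column holds the message text
            } finally {
                c.close();
            }
        } finally {
            db.close();
        }
    }
}

The Tasker profile above is still what drives the automation; this is just one way to sanity-check the SQL and the jid pattern before wiring them into A1.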

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event.  If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time.  As always, I sighed, had a soothing cup of tea, and typed it all in again.  The new code I hastily tapped in  was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain  the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better.  Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch.  If you’ve never accidentally lost  your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, threw them in the bin, and started again from scratch.  Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual:  Most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000 word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version.  Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page.  It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code.  For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive.  There were some fragments left on individual machines, but they were all of different versions.  The developers were in despair.  Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend.  Sure, that elegant universally-applicable input-form routine was‘nt quite so elegant, but it didn’t really need to be as we knew what forms it needed to support.  Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. In reading the full story of the near-loss of Toy Story 2, it set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway.  Was this an early  case of the ‘source-control wet-job’?’ It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found.  After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context.  We label these activities Modes: Explore, compare, and comprehend are three of the nine recognizable modes.  Modes describe *how* people go about realizing insights.  (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery) By analogy to languages, modes are the 'verbs' of discovery activity.  When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities, to which specific types of information visualizations better enable these scenarios for the types of data users will analyze. The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficult of communicating using only verbs.  So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics.  Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into.  We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery as with the Modes. Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sense making needs and activity in more specific, consistent, and complete fashion.  In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals.  For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject] clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods. We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides the perspective divide.  For example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.  
The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can innovate, by expanding our product portfolio to meet unmet needs?”.  Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis.  From the perspective of analysis this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’?  And how do these compare to each other?”.  Translation also happens from the perspective of analysis to the perspective of data; in terms of availability, quality, completeness, format, volume, etc. By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects.  This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one.  (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.) Compellingly, we can place the collection of subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain.  As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics. The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject.  Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (Data centric and blue), or people (User centric and orange).  This axis is shown explicitly above the Spectrum.  Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity.  This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity. Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets.  For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.  
Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.   The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible.  Accordingly, business analytics as a domain is structured around the fundamental assumption that sense making depends on analytical transformation of data.  Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data, and recombined with other Subjects.  The end goal of  ‘laddering’ successive transformations is to enable sense making from the business perspective, rather than the analytical perspective.Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods. Beginning with some motivating question such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.  More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders.  These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer.  The fulfillment process will involve other types of Entities, such as the products or services the business provides.  The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included.  All of this may be interpreted through an understanding of the operational domain of the businesses supply chain (a Domain).   We'll discuss the Spectrum in more depth in succeeding posts.

    Read the article

  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on. Skills/Equipment/Manpower We Possessed I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a country for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer. Why We Did This Our deck was relatively young – it was built in 2005. However, the pressure treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance. More bad deck boards The deck boards were in bad shape Things We Learned The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. The screw won’t seat in the board properly. Instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well. The Process How much time did it take? Well I spent about 8 hours taking the deck apart. And then the 3 of use spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures… This 6”+ pry bar made the destruction of the old deck much easier Most of the joists, once exposed, were OK. This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in. 
    Awesome… These monster lag bolts had to be accounted for when putting in the additional framing. The border pattern Sheri wanted to put in required a lot more framing. These were the first boards to go down – we screwed them in as there was no way to attach clips. I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drill on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes. This was the end of construction day #2 – we got much further than we thought we would. Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz. The stairs with stringers and toe kicks – this was worth the effort. It started raining on us as I screwed down the steps – but we managed to get our shade tent up on the deck to protect us from the rain too. The stairs, finished Finished, mostly Good corner shot The top of the stairs Stairs, looking down Celebratory beer In Summary There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish: rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would never have done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of quality time with family and building cool stuff.

    Read the article

  • A Bite With No Teeth – Demystifying Non-Compete Clauses

    - by D'Arcy Lussier
    *DISCLAIMER: I am not a lawyer and this post in no way should be considered legal advice. I’m also in Canada, so references made are to Canadian court cases. I received a signed letter the other day, a reminder from my previous employer about some clauses associated with my employment and entry into an employee stock purchase program. So since this is in effect for the next 12 months, I guess I’m not starting that new job tomorrow. I’m kidding of course. How outrageous, how presumptuous, pompous, and arrogant that a company – any company – would actually place these conditions upon an employee. And yet, this is not uncommon. Especially in the IT industry, we see time and again similar wording in our employment agreements. But…are these legal? Is there any teeth behind the threat of the bite? Luckily, the answer seems to be ‘No’. I want to highlight two cases that support this. The first is Lyons v. Multari. In a nutshell, Dentist hires younger Dentist to be an associate. In their short, handwritten agreement, a non-compete clause was written stating “Protective Covenant. 3 yrs. – 5mi” (meaning you can’t set up shop within 5 miles for 3 years). Well, the young dentist left and did start an oral surgery office within 5 miles and within 3 years. Off to court they go! The initial judge sided with the older dentist, but on appeal it was overturned. Feel free to read the transcript of the decision here, but let me highlight one portion from section [19]: The general rule in most common law jurisdictions is that non-competition clauses in employment contracts are void. The sections following [19] explain further, and discuss Elsley v. J.G. Collins Insurance Agency Ltd. and its impact on Canadian law in this regard. The second case is Winnipeg Livestock Sales Ltd. v. Plewman. Desmond Plewman is an auctioneer, and worked at Winnipeg Livestock Sales. Part of his employment agreement was that he could not work for a competitor for 18 months if he left the company. Well, he left, and took up an important role in a competing company. The case went to court and as with Lyons v. Multari, the initial judge found in favour of the plaintiffs. Also as in the first case, that was overturned on appeal. Again, read through the transcript of the decision, but consider section [28]: In other words, even though Plewman has a great deal of skill as an auctioneer, Winnipeg Livestock has no proprietary interest in his professional skill and experience, even if they were acquired during his time working for Winnipeg Livestock.  Thus, Winnipeg Livestock has the burden of establishing that it has a legitimate proprietary interest requiring protection.  On this key question there is little evidence before the Court.  The record discloses that part of Plewman’s job was to “mingle with the … crowd” and to telephone customers and prospective customers about future prospects for the sale of livestock.  It may seem reasonable to assume that Winnipeg Livestock has a legitimate proprietary interest in its customer connections; but there is no evidence to indicate that there is any significant degree of “customer loyalty” in the business, as opposed to customers making choices based on other considerations such as cost, availability and the like. So are there any incidents where a non-compete can actually be valid? Yes, and these are considered “exceptional” cases, meaning that the situation meets certain circumstances. 
    Michael Carabash has a great blog series discussing the above-mentioned cases as well as the difference between a non-compete and a non-solicit agreement. He talks about the exceptional criteria: In summary, the authorities reveal that the following circumstances will generally be relevant in determining whether a case is an “exceptional” one so that a general non-competition clause will be found to be reasonable:
    - The length of service with the employer.
    - The amount of personal service to clients.
    - Whether the employee dealt with clients exclusively, or on a sustained or recurring basis.
    - Whether the knowledge about the client which the employee gained was of a confidential nature, or involved an intimate knowledge of the client’s particular needs, preferences or idiosyncrasies.
    - Whether the nature of the employee’s work meant that the employee had influence over clients in the sense that the clients relied upon the employee’s advice, or trusted the employee.
    - If competition by the employee has already occurred, whether there is evidence that clients have switched their custom to him, especially without direct solicitation.
    - The nature of the business with respect to whether personal knowledge of the clients’ confidential matters is required.
    - The nature of the business with respect to the strength of customer loyalty, how clients are “won” and kept, and whether the clientele is a recurring one.
    - The community involved and whether there were clientele yet to be exploited by anyone.
    I close this blog post with a final quote, one from Zvulony & Co’s blog post on this subject. Again, all of this is not official legal advice, but I think we can see what all these sources are pointing towards. To answer my earlier question, there’s no teeth behind the threat of the bite. In light of this list, and the decisions in Lyons and Orlan, it is reasonably certain that in most employment situations a non-competition clause will be ineffective in protecting an employer from a departing employee who wishes to compete in the same business. The Courts have been relatively consistent in their position that if a non-solicitation clause can protect an employer’s interests, then a non-competition clause is probably unreasonable. Employers (or their solicitors) should avoid the inclination to draft restrictive covenants in broad, catch-all language. Or in other words, when drafting a restrictive covenant – take only what you need! D

    Read the article

  • Clearing C#'s WebBrowser control's cookies for all sites WITHOUT clearing for IE itself

    - by Helgi Hrafn Gunnarsson
    Hail StackOverflow! The short version of what I'm trying to do is in the title. Here's the long version. I have a bit of a complex problem which I'm sure I will receive a lot of guesses as a response to. In order to keep the well-intended but unfortunately useless guesses to a minimum, let me first mention that the solution to this problem is not simple, so simple suggestions will unfortunately not help at all, even though I appreciate the effort. C#'s WebBrowser component is fundamentally IE itself so solutions with any sorts of caveats will almost certainly not work. I need to do exactly what I'm trying to do, and even a seemingly minor caveat will defeat the purpose completely. At the risk of sounding arrogant, I need assistance from someone who really has in-depth knowledge about C#'s WebBrowser and/or WinInet and/or how to communicate with Windows's underlying system from C#... or how to encapsulate C++ code in C#. That said, I don't expect anyone to do this for me, and I've found some promising hints which are explained later in this question. But first... what I'm trying to achieve is this. I have a Windows.Forms component which contains a WebBrowser control. This control needs to: Clear ALL cookies for ALL websites. Visit several websites, one after another, and record cookies and handle them correctly. This part works fine already so I don't have any problems with this. Rinse and repeat... theoretically forever. Now, here's the real problem. I need to clear all those cookies (for any and all sites), but only for the WebBrowser control itself and NOT the cookies which IE proper uses. What's fundamentally wrong with this approach is of course the fact that C#'s WebBrowser control is IE. But I'm a stubborn young man and I insist on it being possible, or else! ;) Here's where I'm stuck at the moment. It is quite simply impossible to clear all cookies for the WebBrowser control programmatically through C# alone. One must use DllImport and all the crazy stuff that comes with it. This chunk works fine for that purpose: [DllImport("wininet.dll", SetLastError = true)] private static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int lpdwBufferLength); And then, in the function that actually does the clearing of the cookies: InternetSetOption(IntPtr.Zero, INTERNET_OPTION_END_BROWSER_SESSION, IntPtr.Zero, 0); Then all the cookies get cleared and as such, I'm happy. The program works exactly as intended, aside from the fact that it also clears IE's cookies, which must not be allowed to happen. The problem is that this also clears the cookies for IE proper, and I can't have that happen. From one fellow StackOverflower (if that's a word), Sheng Jiang proposed this to a different problem in a comment, but didn't elaborate further: "If you want to isolate your application's cookies you need to override the Cache directory registry setting via IDocHostUIHandler2::GetOverrideKeyPath" I've looked around the internet for IDocHostUIHandler2 and GetOverrideKeyPath, but I've got no idea of how to use them from C# to isolate cookies to my WebBrowser control. My experience with the Windows registry is limited to RegEdit (so I understand that it's a tree structure with different data types but that's about it... I have no in-depth knowledge of the registry's relationship with IE, for example). 
Here's what I dug up on MSDN: IDocHostUIHandler2 docs: http://msdn.microsoft.com/en-us/library/aa753275%28VS.85%29.aspx GetOverrideKeyPath docs: http://msdn.microsoft.com/en-us/library/aa753274%28VS.85%29.aspx I think I know roughly what these things do, I just don't know how to use them. So, I guess that's it! Any help is greatly appreciated.

    Read the article

  • nhibernate configure and buildsessionfactory time

    - by davidsleeps
    Hi, I'm using Nhibernate as the OR/M tool for an asp.net application and the startup performance is really frustrating. Part of the problem is definitely me in my lack of understanding but I've tried a fair bit (understanding is definitely improving) and am still getting nowhere. Currently ANTS profiler has that the Configure() takes 13-18 seconds and the BuildSessionFActory() as taking about 5 seconds. From what i've read, these times might actually be pretty good, but they were generally talking about hundreds upon hundreds of mapped entities...this project only has 10. I've combined all the mapping files into a single hbm mapping file and this did improve things but only down to the times mentioned above... I guess, are there any "Traps for young players" that are regularly missed...obvious "I did this/have you enabled that/exclude file x/mark file y as z" etc... I'll try the serialize the configuration thing to avoid the Configure() stage, but I feel that part shouldn't be that long for that amount of entities and so would essentially be hiding a current problem... I will post source code or configuration if necessary, but I'm not sure what to put in really... thanks heaps! edit (more info) I'll also add that once this is completed, each page is extremely quick... configuration code- hibernate.cfg.xml <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" /> </configSections> <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string_name">MyAppDEV</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache</property> <property name="cache.use_second_level_cache">true</property> <property name="show_sql">false</property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property> <property name="current_session_context_class">managed_web</property> <mapping assembly="MyApp.Domain"/> </session-factory> </hibernate-configuration> </configuration> My SessionManager class which is bound and unbound in a HttpModule for each request Imports NHibernate Imports NHibernate.Cfg Public Class SessionManager Private ReadOnly _sessionFactory As ISessionFactory Public Shared ReadOnly Property SessionFactory() As ISessionFactory Get Return Instance._sessionFactory End Get End Property Private Function GetSessionFactory() As ISessionFactory Return _sessionFactory End Function Public Shared ReadOnly Property Instance() As SessionManager Get Return NestedSessionManager.theSessionManager End Get End Property Public Shared Function OpenSession() As ISession Return Instance.GetSessionFactory().OpenSession() End Function Public Shared ReadOnly Property CurrentSession() As ISession Get Return Instance.GetSessionFactory().GetCurrentSession() End Get End Property Private Sub New() Dim configuration As Configuration = New Configuration().Configure() _sessionFactory = configuration.BuildSessionFactory() End Sub Private Class NestedSessionManager Friend Shared ReadOnly theSessionManager As New SessionManager() End Class End Class edit 2 (log4net 
results) will post bits that have a portion of time between them and will cut out the rest... 2010-03-30 23:29:40,898 [4] INFO NHibernate.Cfg.Environment [(null)] - Using reflection optimizer 2010-03-30 23:29:42,481 [4] DEBUG NHibernate.Cfg.Configuration [(null)] - dialect=NHibernate.Dialect.MsSql2005Dialect ... 2010-03-30 23:29:42,501 [4] INFO NHibernate.Cfg.Configuration [(null)] - Mapping resource: MyApp.Domain.Mappings.hbm.xml 2010-03-30 23:29:43,342 [4] INFO NHibernate.Dialect.Dialect [(null)] - Using dialect: NHibernate.Dialect.MsSql2005Dialect 2010-03-30 23:29:50,462 [4] INFO NHibernate.Cfg.XmlHbmBinding.Binder [(null)] - Mapping class: ... 2010-03-30 23:29:51,353 [4] DEBUG NHibernate.Connection.DriverConnectionProvider [(null)] - Obtaining IDbConnection from Driver 2010-03-30 23:29:53,136 [4] DEBUG NHibernate.Connection.ConnectionProvider [(null)] - Closing connection
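    For reference, here is roughly what the “serialize the configuration” experiment usually looks like – a sketch only, written in C# rather than VB for brevity, with an illustrative cache file name, and assuming the NHibernate Configuration object is binary-serializable (it appears to be in the 2.x builds, but check your version). The idea is that Configure() only runs when hibernate.cfg.xml or the mapping assembly is newer than the cached copy, so the 13-18 second hit is paid once rather than on every application start; BuildSessionFactory() still has to run each time.

    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using NHibernate.Cfg;

    // Sketch: cache the result of Configure() on disk so the expensive XML
    // parsing and validation only happens when the inputs actually change.
    public static class CachedConfigurationLoader
    {
        public static Configuration Load(string cacheFile, params string[] dependentFiles)
        {
            if (IsCacheCurrent(cacheFile, dependentFiles))
            {
                using (var stream = File.OpenRead(cacheFile))
                    return (Configuration)new BinaryFormatter().Deserialize(stream);
            }

            var configuration = new Configuration().Configure(); // the slow bit
            using (var stream = File.Create(cacheFile))
                new BinaryFormatter().Serialize(stream, configuration);
            return configuration;
        }

        private static bool IsCacheCurrent(string cacheFile, string[] dependentFiles)
        {
            if (!File.Exists(cacheFile)) return false;
            var cachedAt = File.GetLastWriteTimeUtc(cacheFile);
            foreach (var file in dependentFiles)
                if (File.GetLastWriteTimeUtc(file) > cachedAt) return false;
            return true;
        }
    }

    The file names in a call such as CachedConfigurationLoader.Load("nh.cfg.cache", "hibernate.cfg.xml", "MyApp.Domain.dll") are illustrative; the point is simply to invalidate the cache whenever the mapping assembly or the configuration file changes.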

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about; it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best-known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals.
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affairs, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: She writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada. 
Secondly, Babbage would have very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts his type of intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would have not had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher.  All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects, the writing of a book 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher, was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, whose understanding of the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project. 
If the three friends were actually doing the work of cracking codes by mathematical techniques that used the techniques of key periodicity, and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably on her request by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytic Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans  subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, specifically to crack the Nazi’s secret codes.

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
    All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Developing a SQL Server Function in a Test-Harness.

    - by Phil Factor
    /* Many times, it is a lot quicker to take some pain up-front and make a proper development/test harness for a routine (function or procedure) rather than think ‘I’m feeling lucky today!’. Then, you keep code and harness together from then on. Every time you run the build script, it runs the test harness too.  The advantage is that, if the test harness persists, then it is much less likely that someone, probably ‘you-in-the-future’  unintentionally breaks the code. If you store the actual code for the procedure as well as the test harness, then it is likely that any bugs in functionality will break the build rather than to introduce subtle bugs later on that could even slip through testing and get into production.   This is just an example of what I mean.   Imagine we had a database that was storing addresses with embedded UK postcodes. We really wouldn’t want that. Instead, we might want the postcode in one column and the address in another. In effect, we’d want to extract the entire postcode string and place it in another column. This might be part of a table refactoring or int could easily be part of a process of importing addresses from another system. We could easily decide to do this with a function that takes in a table as its parameter, and produces a table as its output. This is all very well, but we’d need to work on it, and test it when you make an alteration. By its very nature, a routine like this either works very well or horribly, but there is every chance that you might introduce subtle errors by fidding with it, and if young Thomas, the rather cocky developer who has just joined touches it, it is bound to break.     right, we drop the function we’re developing and re-create it. This is so we avoid the problem of having to change CREATE to ALTER when working on it. */ IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE ‘ExtractPostcode’                                      and schema_name(schema_ID)=‘Dbo’)     DROP FUNCTION dbo.ExtractPostcode GO   /* we drop the user-defined table type and recreate it */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘AddressesWithPostCodes’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.AddressesWithPostCodes GO /* we drop the user defined table type and recreate it */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘OutputFormat’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.OutputFormat GO   /* and now create the table type that we can use to pass the addresses to the function */ CREATE TYPE AddressesWithPostCodes AS TABLE ( AddressWithPostcode_ID INT IDENTITY PRIMARY KEY, –because they work better that way! Address_ID INT NOT NULL, –the address we are fixing TheAddress VARCHAR(100) NOT NULL –The actual address ) GO CREATE TYPE OutputFormat AS TABLE (   Address_ID INT PRIMARY KEY, –the address we are fixing   TheAddress VARCHAR(1000) NULL, –The actual address   ThePostCode VARCHAR(105) NOT NULL – The Postcode )   GO CREATE FUNCTION ExtractPostcode(@AddressesWithPostCodes AddressesWithPostCodes READONLY)  /** summary:   > This Table-valued function takes a table type as a parameter, containing a table of addresses along with their integer IDs. Each address has an embedded postcode somewhere in it but not consistently in a particular place. The routine takes out the postcode and puts it in its own column, passing back a table where theinteger key is accompanied by the address without the (first) postcode and the postcode. 
If no postcode, then the address is returned unchanged and the postcode will be a blank string Author: Phil Factor Revision: 1.3 date: 20 May 2014 example:      – code: returns:   > Table of  Address_ID, TheAddress and ThePostCode. **/     RETURNS @FixedAddresses TABLE   (   Address_ID INT, –the address we are fixing   TheAddress VARCHAR(1000) NULL, –The actual address   ThePostCode VARCHAR(105) NOT NULL – The Postcode   ) AS – body of the function BEGIN DECLARE @BlankRange VARCHAR(10) SELECT  @BlankRange = CHAR(0)+‘- ‘+CHAR(160) INSERT INTO @FixedAddresses(Address_ID, TheAddress, ThePostCode) SELECT Address_ID,          CASE WHEN start>0 THEN REPLACE(STUFF([Theaddress],start,matchlength,”),‘  ‘,‘ ‘)             ELSE TheAddress END            AS TheAddress,        CASE WHEN Start>0 THEN SUBSTRING([Theaddress],start,matchlength-1) ELSE ” END AS ThePostCode FROM (–we have a derived table with the results we need for the chopping SELECT MAX(PATINDEX([matched],‘ ‘+[Theaddress] collate SQL_Latin1_General_CP850_Bin)) AS start,         MAX( CASE WHEN PATINDEX([matched],‘ ‘+[Theaddress] collate SQL_Latin1_General_CP850_Bin)>0 THEN TheLength ELSE 0 END) AS matchlength,        MAX(TheAddress) AS TheAddress,        Address_ID FROM (SELECT –first the match, then the length. There are three possible valid matches         ‘%['+@BlankRange+'][A-Z][0-9] [0-9][A-Z][A-Z]%’, 7 –seven character postcode       UNION ALL SELECT ‘%['+@BlankRange+'][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%’, 8       UNION ALL SELECT ‘%['+@BlankRange+'][A-Z][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%’, 9)      AS f(Matched,TheLength) CROSS JOIN  @AddressesWithPostCodes GROUP BY [address_ID] ) WORK; RETURN END GO ——————————-end of the function————————   IF NOT EXISTS (SELECT * FROM sys.objects WHERE name LIKE ‘ExtractPostcode’)   BEGIN   RAISERROR (‘There was an error creating the function.’,16,1)   RETURN   END   /* now the job is only half done because we need to make sure that it works. So we now load our sample data, making sure that for each Sample, we have what we actually think the output should be. 
*/ DECLARE @InputTable AddressesWithPostCodes INSERT INTO  @InputTable(Address_ID,TheAddress) VALUES(1,’14 Mason mews, Awkward Hill, Bibury, Cirencester, GL7 5NH’), (2,’5 Binney St      Abbey Ward    Buckinghamshire      HP11 2AX UK’), (3,‘BH6 3BE 8 Moor street, East Southbourne and Tuckton W     Bournemouth UK’), (4,’505 Exeter Rd,   DN36 5RP Hawerby cum BeesbyLincolnshire UK’), (5,”), (6,’9472 Lind St,    Desborough    Northamptonshire NN14 2GH  NN14 3GH UK’), (7,’7457 Cowl St, #70      Bargate Ward  Southampton   SO14 3TY UK’), (8,”’The Pippins”, 20 Gloucester Pl, Chirton Ward,   Tyne & Wear   NE29 7AD UK’), (9,’929 Augustine lane,    Staple Hill Ward     South Gloucestershire      BS16 4LL UK’), (10,’45 Bradfield road, Parwich   Derbyshire    DE6 1QN UK’), (11,’63A Northampton St,   Wilmington    Kent   DA2 7PP UK’), (12,’5 Hygeia avenue,      Loundsley Green WardDerbyshire    S40 4LY UK’), (13,’2150 Morley St,Dee Ward      Dumfries and Galloway      DG8 7DE UK’), (14,’24 Bolton St,   Broxburn, Uphall and Winchburg    West Lothian  EH52 5TL UK’), (15,’4 Forrest St,   Weston-Super-Mare    North Somerset       BS23 3HG UK’), (16,’89 Noon St,     Carbrooke     Norfolk       IP25 6JQ UK’), (17,’99 Guthrie St,  New Milton    Hampshire     BH25 5DF UK’), (18,’7 Richmond St,  Parkham       Devon  EX39 5DJ UK’), (19,’9165 laburnum St,     Darnall Ward  Yorkshire, South     S4 7WN UK’)   Declare @OutputTable  OutputFormat  –the table of what we think the correct results should be Declare @IncorrectRows OutputFormat –done for error reporting   –here is the table of what we think the output should be, along with a few edge cases. INSERT INTO  @OutputTable(Address_ID,TheAddress, ThePostcode)     VALUES         (1, ’14 Mason mews, Awkward Hill, Bibury, Cirencester, ‘,‘GL7 5NH’),         (2, ’5 Binney St   Abbey Ward    Buckinghamshire      UK’,‘HP11 2AX’),         (3, ’8 Moor street, East Southbourne and Tuckton W    Bournemouth UK’,‘BH6 3BE’),         (4, ’505 Exeter Rd,Hawerby cum Beesby   Lincolnshire UK’,‘DN36 5RP’),         (5, ”,”),         (6, ’9472 Lind St,Desborough    Northamptonshire NN14 3GH UK’,‘NN14 2GH’),         (7, ’7457 Cowl St, #70    Bargate Ward  Southampton   UK’,‘SO14 3TY’),         (8, ”’The Pippins”, 20 Gloucester Pl, Chirton Ward,Tyne & Wear   UK’,‘NE29 7AD’),         (9, ’929 Augustine lane,  Staple Hill Ward     South Gloucestershire      UK’,‘BS16 4LL’),         (10, ’45 Bradfield road, ParwichDerbyshire    UK’,‘DE6 1QN’),         (11, ’63A Northampton St,Wilmington    Kent   UK’,‘DA2 7PP’),         (12, ’5 Hygeia avenue,    Loundsley Green WardDerbyshire    UK’,‘S40 4LY’),         (13, ’2150 Morley St,     Dee Ward      Dumfries and Galloway      UK’,‘DG8 7DE’),         (14, ’24 Bolton St,Broxburn, Uphall and Winchburg    West Lothian  UK’,‘EH52 5TL’),         (15, ’4 Forrest St,Weston-Super-Mare    North Somerset       UK’,‘BS23 3HG’),         (16, ’89 Noon St,  Carbrooke     Norfolk       UK’,‘IP25 6JQ’),         (17, ’99 Guthrie St,      New Milton    Hampshire     UK’,‘BH25 5DF’),         (18, ’7 Richmond St,      Parkham       Devon  UK’,‘EX39 5DJ’),         (19, ’9165 laburnum St,   Darnall Ward  Yorkshire, South     UK’,‘S4 7WN’)       insert into @IncorrectRows(Address_ID,TheAddress, ThePostcode)        SELECT Address_ID,TheAddress,ThePostCode FROM dbo.ExtractPostcode(@InputTable)       EXCEPT     SELECT Address_ID,TheAddress,ThePostCode FROM @outputTable; If @@RowCount>0        Begin        PRINT ‘The following rows gave ‘;     SELECT 
Address_ID,TheAddress,ThePostCode FROM @IncorrectRows        RAISERROR (‘These rows gave unexpected results.’,16,1);     end   /* For tear-down, we drop the user defined table type */ IF EXISTS(SELECT * FROM sys.types WHERE name LIKE ‘OutputFormat’                                    and schema_name(schema_ID)=‘Dbo’)   DROP TYPE dbo.OutputFormat GO /* once this is working, the development work turns from a chore into a delight and one ends up hitting execute so much more often to catch mistakes as soon as possible. It also prevents a wildly-broken routine getting into a build! */

    Read the article

< Previous Page | 16 17 18 19 20 21  | Next Page >