Search Results

Search found 9199 results on 368 pages for 'topological sort'.


  • Autoscaling in a modern world… Part 3

    - by Steve Loethen
    The Wasabi Hands on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical. Using a local console application to host the Autoscaler and rules files is probably (IMHO) the least likely architecture. Far more common would be hosting in a service on premise (if you want to keep the Autoscaler local) or, most likely, hosting it in an Azure role of its own. I chose to go the Azure route.

    First step was to get the rules.xml and services.xml files into the cloud. I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed the logical first step. Here are the steps:

    1) Create a container in the storage account you wish to use. The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.

    2) Copy the two files from where you created them to your container. I used the same files I had locally. I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time). The content type was set to text/xml. I found one reference claiming the importance of this step, and it makes sense.

    3) Adjust the app.config to set the location of the files. This lets you set all the storage account and key information needed to reach into the cloud from your console application. The relevant sections of your app.config will look like this:

      <rulesStores>
        <add name="Blob Rules Store"
             type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             blobContainerName="[ContainerName]"
             blobName="rules.xml"
             storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
             monitoringRate="00:00:30"
             certificateThumbprint=""
             certificateStoreLocation="LocalMachine"
             checkCertificateValidity="false" />
      </rulesStores>

      <serviceInformationStores>
        <add name="Blob Service Information Store"
             type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             blobContainerName="[ContainerName]"
             blobName="services.xml"
             storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
             monitoringRate="00:00:30"
             certificateThumbprint=""
             certificateStoreLocation="LocalMachine"
             checkCertificateValidity="false" />
      </serviceInformationStores>

    Once I had the files up in the sky, I renamed the local copies just to make myself feel better about the application using the correct set of rules and services. Deploy the web role to the cloud. Once it is up and running, start the console application. You should find the application scales up and down in response to the buttons on the web site. Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information from diagnostics into storage, and a set of discussions about certs and how they play a role.

    Read the article

  • Network Solutions Gold VIP Program

    - by GGBlogger
    Today I received an email advising me “Congratulations! You have been selected for the Network Solutions Gold VIP Program.” Now I get a bunch of messages from Network Solutions because a long time ago I chose them as my registrar. I usually just save these to my Outlook Network Solutions folder and move on. This time my wife was in the room and I chuckled and said something about “Network Solutions has made me a Gold VIP member.” She said “So what does that mean?”

    Prompted by that, I scrolled down the page to look at my “perks.” Let’s see: special pricing, 1 year free Web Forwarding, 1 hour of free support… etc., etc., until I hit “Enrollment in SafeRenew. SafeRenew simplifies your renewal process. It protects your domain name registrations and corresponding services in the event you forget or are unable to renew on time.” I’m thinking “Now ain’t that neat.” I don’t use auto renew anyway and never have. Then I continued reading: “Beginning June 24th, 2012 your domains* (that’s 10 days away) will automatically renew. To ensure continuation of service, please be certain you have a valid credit card on file. If you do not wish for your services to be automatically renewed, please click here to log into account manager and opt out of the SafeRenew™ service.”

    Whoa! Network Solutions is going to start spending my money without so much as a by-your-leave, sir! I don’t bloody think so. And how truly magnanimous of them that I can “click here” to opt out of what I didn’t opt in for. Talk about furious! I still am.

    Now the kicker: if my wife hadn’t been curious, and if I’d had a working credit card on file, I wouldn’t have known this until one of my unwanted domain names auto-renewed. Of course I was told “Oh, we’d have refunded your money,” to which I say bull. In my view Network Solutions is guilty of a crime of some sort. They DO NOT have a right to spend my money without asking! So watch out, folks.

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by Brian Dayton
    Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology vs. the technology. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:

    Context:
    - HOW: With the exception of the "Power User" persona (likely some sort of business or operations analyst), how are the others actually working with BI?
    - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP), or are they using this information for cumulative analysis and business planning? Or both?
    - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance; and why are some more prone to use data-driven analysis than others?

    Issues:
    - DELAYS & DRAG ON IT: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause users? What is the drag on IT resources who could potentially be creating instead of reporting?
    - HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, for example), is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?

    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate, and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on... or an integrated part of your daily enterprise processes?

    Read the article

  • Sites To Download Free eBooks For Kindle

    - by Gopinath
    Amazon Kindle is the top selling gadget of this holiday season and many of you would have received it as a gift. For those who got an Amazon Kindle, here are a few websites that offer free eBooks to fulfil your reading appetite at no cost.

    1. Free Kindle Books – Amazon Website – This page on Amazon lists a nice collection of free books available for Kindle, including Serial by Jack Kilborn, The Wild’s Call by Jeri Smith, Star Wars by John Jackson Miller and several other books from a list of 40.
    2. Project Gutenberg – This site has 33,000+ free books that you can read not only on Kindle but also on iPad, PCs and smart phones. This site is very popular for free ebooks.
    3. Google eBookstore – Google’s eBookstore has thousands of free ebooks for Kindle in its free books section.
    4. Internet Archive – Here you find millions of rare print works that are especially useful for academic research. Books in multiple languages are also available for Kindle.
    5. Open Library – This site is a sort of Wikipedia for eBooks, with over 20 million user-contributed books and magazines. They are all Kindle friendly.
    6. ManyBooks.net – Nearly 30,000 titles, many of which have been pulled from Project Gutenberg. Has a good collection of little-known Creative Commons works.
    7. Freebooks.com – The public domain section of this site contains many free ebooks that are perfect for your Kindle.
    8. freecomputerbooks.com, freetechbooks.com and onlinecomputerbooks.com – If you are a geek looking for technology books, these are the sites to visit to grab free books.

    Image credit: bike/flickr. This article, titled Sites To Download Free eBooks For Kindle, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Chuck Norris Be Thy Name

    - by Robz / Fervent Coder
    Chuck Norris doesn’t program with a keyboard. He stares the computer down until it does what he wants.

    All things need a name. We’ve tossed around a bunch of names for the framework of tools we’ve been working on, but one we kept coming back to was Chuck Norris. Why did we choose Chuck Norris? Well, Chuck Norris sort of chose us. Everything we talked about, the name kept drawing us closer to it. We couldn’t escape Chuck Norris, no matter how hard we tried. So we gave in. Chuck Norris can divide by zero.

    What is the Chuck Norris Framework? @drusellers and I have been working on a variety of tools:

    - WarmuP – http://github.com/chucknorris/warmup – Template your entire project/solution and create projects ready to code: from zero to a solution with everything in seconds. Your templates, your choices.
    - UppercuT – http://projectuppercut.org – Build with conventions: professional builds in moments, not days! Code also at http://github.com/chucknorris/uppercut
    - DropkicK – http://github.com/chucknorris/dropkick – Deploy fluently.
    - RoundhousE – http://projectroundhouse.org – Professional database management with versioning. Code also at http://github.com/chucknorris/roundhouse
    - SidePOP – http://sidepop.googlecode.com – Does your application need to check email?
    - HeadlocK – http://github.com/chucknorris/headlock – Hash a directory so you can later know if anything has changed.
    - Others – still in concept or vaporware.

    People ask why we chose such violent names for each tool of our framework. At first it was about whipping your code into shape, but after a while the naming became, “How can we relate this to Chuck Norris?” People also ask why we uppercase the last letter of each name. Well, that’s more about making you ask questions… but there are a few reasons for it.

    Project managers never ask Chuck Norris for estimations… ever. The class object inherits from Chuck Norris. Chuck Norris doesn’t need garbage collection because he doesn’t call .Dispose(), he calls .DropKick().

    So what are you waiting for? Join the Google group today, download and play with the tools. And lastly, welcome to Chuck Norris. Or should I say, Chuck Norris welcomes you…

    Read the article

  • Finally home - and something fully off topic

    - by Mike Dietrich
    Arrived at Munich Pasing last night at 0:50am… finally :-) On Sunday I left the Dylan Hotel in Dublin (thanks to the staff there as well: you were REALLY helpful!!) around 7:30pm to go to the port – and came home on Tuesday morning at 1:15am. So all together 29:45hrs door-to-door – not bad for nearly 2000km relying solely on public transport. And it could have been faster if there had been seats left in earlier TGVs. But I don't complain at all ;-)

    Just checked the website of Dublin Airport – it currently says: "17.00: Latest on flight disruptions at Dublin Airport. The IAA have advised us that, based on the latest from the Volcanic Ash Advisory Centre London, Dublin Airport will remain closed for all inbound and outbound commercial flights until 20.00 hours. This effectively means that no flights will land or take off at Dublin Airport until then. A further update will be posted this afternoon."

    When traveling I always have my iPod with me. It has gotten a bit old now (I think I bought it 3 years ago, in November 2007) but it has a 160GB hard disk, so it fits most of my music collection (not the entire collection anymore, as I'm currently re-ripping everything to Apple Lossless because, at least to my ears, it makes a big difference – but I listen to good ol' vinyl as well... and I don't download compressed music ;-) ). The battery of my little travel companion is still good for more than 20 hours of consistent music playback – and a band from Texas called Midlake was in my ears for most of the journey. I hadn't heard of them until I asked a lady at a Munich store a few weeks ago what she was playing on the speakers in the shop. She was amazed and came back with the CD cover, but I hesitated to buy it, as I always want to listen to the tunes first – and on that day I had no time left to do so. But in Dublin I had a bit of spare time on Saturday, and I always enter record stores – and Tower Records was the sort of store I really enjoy, so I spent nearly two hours there, leaving with 3 Midlake CDs in my bag.

    So if you are interested, just listen to those tunes, which may remind some people of Fleetwood Mac. As I said in the title, fully off topic ;-)

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface

    I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". We were constantly committing at each place to "trunk" (cvs/svn). The same was true with unit testing – it's always been a "we need to do this" mentality, but it never really materializes in a substantive form b/c there is no institutional knowledge base to do it – no mentorship.

    Version Control

    The emphasis for version control management at one place was a very strict protocol for commit messages (format & content). The other places let employees just do "whatever". The branching, tagging, committing, rolling back, and merging aspects of things were always ill-defined and almost never used. This sort of seems to leave the version control system in the position of being a fancy file-storage mechanism with a metadata component that never really gets accessed/utilized. (The same was true for unit testing and committing code to the source tree.)

    Unit tests

    It seems there's a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented, because there seems to be a very ill-defined understanding about what that means, what is going to be tested, and how to do it.

    Summary

    It seems most places I've been think version control and unit testing are "important" b/c the trendy trade journals say they are, but if there's very little mentorship to use these tools or any real business policies, then the full power of version control/unit testing is never really expressed. So grunts, like myself, never really have a complete understanding of the point beyond "it's a good thing" and "we should do it".

    Question

    I was wondering if there are blogs, books, white papers, or online journals about what one could call the business processes, "standard operating procedures", or use cases for version control and unit testing? I want to know more than the trade journals tell me and get serious about doing these things.

    PS: @Henrik Hansen had a great comment about the lack of definition in the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP) – my interest is more about workflow at the individual team/developer level than evangelism. This is more or less a by-product of the management situation I've operated under, rather than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen/read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • Search and Browse Database Objects with Oracle SQL Developer

    - by thatjeffsmith
    I was tempted to throw in another Dora the Explorer map reference here, but I came to my senses. Having trouble finding something? Maybe you’re just getting older? I know I am. But still, it’d be nice if my favorite database tool could help me out a bit. Hmmm, what’s this ‘Find Database Object‘ thing over here… sounds like a search mechanism of some sort?

    You can access this panel from the ‘View‘ menu. It’s a good bit down the screen, so I don’t blame you if you haven’t seen it before. It makes finding ‘stuff’ in your database so much easier. Let’s say I want to find my ‘beer’ objects. I simply need to type my search string and the context (in this case I want it to search EVERYTHING), and hit enter. The search results are listed below, and clicking on an object automatically opens it! I know it seems very simple, but I get asked this question a LOT. It will even search through your PL/SQL code! Finding too much? Be sure to toggle off the ‘%’ wildcard check box before doing a search.

    Working on a project? I bet you use common column names, or codes, throughout your tables. You could take advantage of this knowledge and use the Find Database Object panel as a substitute connection tree or schema browser. Working on your HR project and want to look at your employee objects? Do a column search for your column ID/key.

    Sometimes thinking outside the box actually works! Don’t be afraid to tackle a problem from a weird angle, or re-purpose your tools. I do it all the time. And I drive the developers nuts trying to do things with the tools they were never designed to do. But I digress. Back to your coding!
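    As an aside, if you prefer a query to a panel, Oracle’s data dictionary offers a rough equivalent of this search. This is just a sketch of the idea (the ‘BEER’ search string is a stand-in for whatever you are hunting), not a description of how SQL Developer implements the panel:

      -- objects whose names match a pattern, visible to the current user
      SELECT owner, object_type, object_name
        FROM all_objects
       WHERE object_name LIKE '%BEER%';

      -- tables containing a given column name
      SELECT owner, table_name, column_name
        FROM all_tab_columns
       WHERE column_name LIKE '%BEER%';

    The panel saves you from writing these by hand, PL/SQL source included (searching code by hand would mean digging through ALL_SOURCE as well).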

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have. It's unlimited. In some cases, there may be content that goes through lots of changes but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome, and going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way, through the Archiver.

    The Export Query criteria in Archiver include a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc.), but you can't really use that field to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction: the very latest revision is always 0, the previous revision is 1, the one before that is 2, and so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. As older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then, when you run that archive export, you can choose to delete and remove those revisions.

    Running that export in Archiver is normally a manual process, but with Idc Command you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The Idc Command executable is located within the \bin\ directory:

      $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log

    In this example, our IdcCommand file to run the export and do the deletions would look like:

      IdcService=EXPORT_ARCHIVE
      aArchiveName=DeleteOlderRevisions
      aDoDelete=1
      IDC_Name=idc
      dataSource=RevisionIDs
      <<EOD>>

    You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive, so you will need to delete those batches to have them fully removed (or re-import them if you need to recover them). For more information about Idc Command, see the Idc Command Reference Guide.

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I'll be sharing what I learned with you.

    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning process (documentation, requirements, policies, meetings, committees) to the extent that it possibly retarded any progress. In my opinion, the typical company resembles the quote from Tom DeMarco: "It isn't enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that."

    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of "Lean Manufacturing." A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that 'Just-in-Time' was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as 'lean production'.

    Lean thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, the principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, Lens Crafters, LLBean, SW Airlines, Digital River and eBay.

    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. The names most associated with bringing lean to software are Tom and Mary Poppendieck. Here's one of their books: Implementing Lean Software Development: From Concept to Cash.

    Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we'll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • Isometric screen to 3D world coordinates efficiently

    - by Justin
    Been having a difficult time transforming 2D screen coordinates to 3D isometric space. This is a situation where I am working in 3D but with an orthographic camera, positioned at (100, 200, 100), where the xz plane is flat and y is up and down.

    I've been able to get a sort-of-working solution, but I feel like there must be a better way. Here's what I'm doing. With my camera at (0, 1, 0) I can translate my screen coordinates directly to 3D coordinates by doing:

      mouse2D.z = ((event.clientX / window.innerWidth) * 2 - 1) * -(window.innerWidth / 2);
      mouse2D.x = ((event.clientY / window.innerHeight) * 2 + 1) * -(window.innerHeight);
      mouse2D.y = 0;

    Everything is okay so far. Now when I change my camera back to (100, 200, 100), my 3D space has been rotated 45 degrees around the y axis and then rotated about 54 degrees around a vector Q that runs along the xz plane at a 45 degree angle between the positive z axis and the negative x axis.

    So what I do to find the point is first rotate my point by 45 degrees using a matrix around the y axis. Now I'm close. Then I rotate my point around the vector Q. But my point is closer to the origin than it should be, since the y value is not 0 anymore. What I want is for the y value to be 0 after the rotation. So now I exchange the x and z coordinates of my rotated vector with the x and z coordinates of my non-rotated vector. So basically I have my old vector, but its y value is at an appropriate rotated amount. Now I use another matrix to rotate my point around the vector Q in the opposite direction, and I end up with the point where I clicked.

    Is there a better way? I feel like I must be missing something. Also, my method isn't completely accurate. I feel like it's within 5-10 units of where I click, maybe because of rounding from the many calculations. Sorry for such a long question.
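    For what it's worth, a commonly suggested alternative (a sketch of the general technique, not something from the original question) is to skip the hand-rolled rotations entirely: unproject the mouse position through the inverse of the camera's view-projection matrix to get a world-space ray with origin o and direction d (for an orthographic camera, d is simply the camera's viewing direction), then intersect that ray with the ground plane y = 0:

      $$ t = -\frac{o_y}{d_y}, \qquad \mathbf{p} = \mathbf{o} + t\,\mathbf{d} $$

    provided d_y is non-zero. One matrix inverse replaces the chain of rotations and coordinate swaps, which should also remove the accumulated rounding error behind the 5-10 unit drift.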

    Read the article

  • SQL SERVER – Introduction to PERCENTILE_DISC() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENTILE_DISC(). Books Online gives the following definition of this function: "Computes a specific percentile for sorted values in an entire rowset or within distinct partitions of a rowset in Microsoft SQL Server 2012 Release Candidate 0 (RC 0). For a given percentile value P, PERCENTILE_DISC sorts the values of the expression in the ORDER BY clause and returns the value with the smallest CUME_DIST value (with respect to the same sort specification) that is greater than or equal to P."

    If you are clear on how the function works – no need to read further. If you got lost, here it is in simple words: find the value of the column whose CUME_DIST is equal to or greater than the given percentile. Before you continue reading this blog, I strongly suggest you read about the CUME_DIST function here: Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012.

    Now let's have fun with the following query:

      USE AdventureWorks
      GO
      SELECT SalesOrderID, OrderQty, ProductID,
             CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
             PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY ProductID)
                 OVER (PARTITION BY SalesOrderID) AS PercentileDisc
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
      ORDER BY SalesOrderID DESC
      GO

    The above query gives the following result. You can see that I have used PERCENTILE_DISC(0.5) in the query, which is similar to finding the median, but not exactly. The PERCENTILE_DISC() function takes a percentile as a parameter and returns as its answer the value whose CUME_DIST is equal to or greater than the percentile passed in. For example, above we are passing 0.5 into PERCENTILE_DISC(). It goes through the resultset and identifies which rows have CUME_DIST values equal to or greater than 0.5. In the first example it found two rows equal to 0.5, and the ProductID of the first such row is the answer of PERCENTILE_DISC(). In the third windowed resultset there is only a single row, with a CUME_DIST() value of 1, which is for sure higher than 0.5, making it the answer.

    To make sure we are properly clear on this, here is one more example where I am passing 0.6 as the percentile:

      USE AdventureWorks
      GO
      SELECT SalesOrderID, OrderQty, ProductID,
             CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist,
             PERCENTILE_DISC(0.6) WITHIN GROUP (ORDER BY ProductID)
                 OVER (PARTITION BY SalesOrderID) AS PercentileDisc
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
      ORDER BY SalesOrderID DESC
      GO

    The above query gives the following result. The result of PERCENTILE_DISC(0.6) is the ProductID of the first row whose CUME_DIST() is at least 0.6. This means that for SalesOrderID 43670, the row with CUME_DIST() 0.75 is the qualifying row, resulting in the answer 773 for ProductID. I hope this explanation makes it clearer.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
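    If you do not have AdventureWorks handy, here is a minimal self-contained sketch (toy values of my own, not from the original post) that shows the same mechanics:

      SELECT val,
             CUME_DIST() OVER (ORDER BY val) AS CDist,
             PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY val) OVER () AS PercentileDisc
      FROM (VALUES (10), (20), (30), (40)) AS t(val);
      -- CDist comes out as 0.25, 0.50, 0.75, 1.00; the smallest val whose
      -- CDist >= 0.5 is 20, so PERCENTILE_DISC(0.5) returns 20 for every row.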

    Read the article

  • Scrum and Team Consolidation

    - by John K. Hines
    I’m still working my way through one of the more painful team consolidations of my career. One thing that made it hard was my assumption that the use of Agile methods and Scrum would make everything easy. Take three teams, make all work visible, track it, and presto: an efficient, functioning software development team.

    What I’ve come to realize is that the primary benefit of Scrum is that Scrum brings teams closer to their customers. Frequent meetings, short iterations, and phased deployments are all meant to keep the customer in the loop. It’s true that as teams become proficient with Scrum they tend to become more efficient. But I don’t think it’s true that Scrum automatically helps people work together. Instead, Scrum can point out when teams aren’t good at working together. And it really illustrates when teams, especially teams in sustaining mode, are reacting to their customers instead of innovating with them. At the moment we’ve inherited a huge backlog of tools, processes, and personalities. It’s up to us to sort them all out. Unfortunately, after 7½ months we’re still sorting.

    What I’d recommend for any blended team is to look at your current product lifecycles and work on a single lifecycle for all work. If you can’t objectively come up with one process, that’s a good indication that the new team might not be a good fit for being a single unit (which happens all the time in bigger companies). Go ahead and self-organize into sub-teams. Then repeat the process. If you can come up with a single process, tackle each piece and standardize all of them. Do this as soon as possible, as it can be uncomfortable. Standardize your requirements gathering and tracking, your exploration and technical analysis, your project planning, development standards, validation and sustaining processes. Standardize all of it. Make this your top priority, get it out of the way, and get back to work.

    Lastly, managers of blended teams should realize what I’m suggesting is a disruptive process. But you’ve just reorganized; the team is already disrupted. Don’t pull the bandage off slowly and force the team through a prolonged transition phase, lowering their productivity over the long term. You can role-model leadership to your team and drive a true consolidation. Destroy roadblocks, reassure those on your team who are afraid of change, and push forward to create something efficient and beautiful. Then use Scrum to re-engage your customers in a way that they’ll love.

    Technorati tags: Scrum, Scrum Process

    Read the article

  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and... sorry, I drifted off there for a second.

    On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight-fitting underwear in order to extract grateful occupants from burning database applications. Alas, it isn't true of the majority, but it isn't as dull as some people imagine, and it is a helter-skelter ride compared with some other IT roles.

    Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations:

    - Installation engineer – installs all of the company's laptops, workstations, and software, and deals with licensing, shipping and data entry. Many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts.
    - Localization engineer – not quite software engineering, not quite translation; the job is to rebuild a product in a different language and make sure everything still works.
    - QA tester – firstly, I should say that the testers at Red Gate seem to me some of the most fulfilled people in the company. I refer here to the QA tester whose job is more or less entirely to read a script, click some buttons and make sure the actual and expected values match.
    - Configuration manager – for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well paid.

    So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions.

    Cheers,
    Tony.

    Read the article

  • Idea to develop a caching server between IIS and SQL Server

    - by John
    I work on a few high-traffic websites that all share the same database and are all heavily database-driven. Our SQL server is maxed out, and although we have already implemented many changes that have helped, the server is still working too hard. We employ some caching in our websites, but the type of queries we use rules out SQL dependency caching. We tried SQL replication as a kind of load balancing, but that didn't prove very successful, because the replication process is quite demanding on the servers too and needed to be done frequently, as it is important that data is up to date. We do use a Varnish web caching server (Linux based) to take a bit of the load off both the web and database servers, but as a lot of the sites are customised per user, we can only do so much.

    Anyway, the reason for this question... Varnish gave me an idea for a possible application that might help in this situation. Just like Varnish sits between a web browser and the web server and caches responses from the web server, I was wondering about the possibility of creating something that sits between the web server and the database server. Imagine that all SQL queries go through this SQL caching server. If it's a first-time query, it gets recorded, and the result is requested from the SQL server and stored locally on the cache server. If it's a repeat request within a set time, the result is retrieved from the local copy without the query being sent to the SQL server. The caching server could also take advantage of SQL dependency caching notifications.

    This seems like a good idea in theory. There's still the same amount of data moving back and forth from the web server, but the SQL server is relieved of the work of processing the repeat queries. I wonder how difficult it would be to build a service that emulates requests and responses from SQL server, whether SQL server's own caching is already doing enough of this that this wouldn't be a benefit, or even whether someone has done this before and I haven't found it? I would welcome any feedback or any references to any relevant projects.
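    One way to gauge whether such a cache would pay off (a diagnostic sketch, not part of the original question): SQL Server caches execution plans rather than result sets, but its plan cache DMVs show how often identical statements are re-executed, which is exactly the traffic a result cache would absorb:

      -- most frequently reused cached plans, with their statement text
      SELECT TOP 20
             cp.usecounts,   -- times this cached plan has been reused
             cp.objtype,
             st.text
        FROM sys.dm_exec_cached_plans AS cp
       CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
       ORDER BY cp.usecounts DESC;

    High usecounts on a handful of statements would suggest a time-boxed result cache in front of the database could relieve real work; low reuse would suggest the effort is better spent elsewhere.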

    Read the article

  • Keyboard Shortcuts in Oracle SQL Developer

    - by thatjeffsmith
    The CTRL key, which stands for ConTRoL… aw, the good ole days.

    What keyboard shortcuts should EVERY Oracle SQL Developer user know? How do you find new shortcuts to master, and how do you change them to match ones you’ve already learned in other tools? These are the driving questions for today’s post. While some of us may be keyboard ninjas, and others are more driven to use the mouse, everyone has probably picked up a few strategic keyboard shortcuts over the years. For example, I’ve personally JUST memorized the Cmd-Shift-4 ‘trick’ in Mac OS X. And of course we all know what F1 does, right? Right?!? Here are a few more keyboard shortcuts to commit to memory.

    My Favorite SQL Developer Shortcuts

    - ctrl-enter : executes the current statement(s)
    - F5 : executes the current code as a script (think SQL*Plus)
    - ctrl-space : invokes code insight on demand (see Code Editor – Completion Insight – Enable Completion Auto-Popup; the keyword being Auto)
    - ctrl-Up/Dn : replaces the worksheet with the previous/next SQL from SQL History
    - ctrl-shift-Up/Dn : same as above, but appends instead of replaces
    - shift-F4 : opens a Describe window for the current object at the cursor
    - ctrl-F7 : format SQL
    - ctrl-/ : toggles line commenting
    - ctrl-e : incremental search

    Configuring Keyboard Shortcuts in SQL Developer

    Tools – Preferences – Shortcut Keys. Search by command name OR by the keystroke itself. Some tips: sort by category, and pay special attention to the ‘Code Editor’ and ‘Other’ categories. Mind the conflicts when you change the defaults. And be nice – share! You can save your new mappings for your co-workers using the Export and Import buttons; click on ‘More Actions’ to expose them.

    When I get ‘bored’, or if I think I might be missing something, I peruse the Code Editor and Other categories again. I’ve picked up quite a few cool editor tricks here. Then I blog about them, like they’re ‘magic.’ #EvilLaugh

    But the main tip is this: don’t let your previously memorized keyboard shortcuts SHORTCUT your usage of SQL Developer. If your fingers have already memorized some keystrokes, just re-program SQL Developer to match!

    What’s your favorite shortcut? I’ll use the most popular shortcut mentioned in the comments to round out my Top 10 list above!

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

      4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get wrong and never know it, because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
    - We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

      # DATA(i386) __iob 0x3c0
      # DATA(amd64,sparcv9) __iob 0xa00
      # DATA(sparc) __iob 0x140
      iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try to run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

      6916788 ld version 2 mapfile syntax
      PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

      6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

      We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

      % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
          -ztext -zdefs -Bdirect ...

      real        0.019708910
      user        0.010101680
      sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

    - An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top-level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
- A new target, called stub, is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them (a sketch follows this list).
- A new target, called stubinstall, is added to the Makefiles. It delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule.
- The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
- All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.
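To make that concrete, here is a hypothetical sketch of what such makefile rules might look like. This is not the actual ON makefile code; everything except the stub and stubinstall target names, the -z stub option, and the ROOT/STUBROOT macros is invented for illustration:

    # Hypothetical sketch only. LIB, OBJECTS, MAPFILES and LDFLAGS
    # are stand-ins, not the real ON macros.
    LIB =           libfoo.so.1

    # The real object, delivered to the proto by the install rule.
    $(LIB):         $(OBJECTS) $(MAPFILES)
            ld -G -h$(LIB) -o $@ $(LDFLAGS) $(OBJECTS)

    # The stub object: the same command line plus -z stub. Note that
    # no object files are needed; the interface comes from the mapfiles.
    stubs/$(LIB):   $(MAPFILES)
            ld -G -h$(LIB) -o $@ -z stub $(LDFLAGS)

    stub:           stubs/$(LIB)

    # stubinstall delivers the stub into the stub proto (STUBROOT),
    # mirroring what install does with the real proto (ROOT).
    stubinstall:    stub
            cp stubs/$(LIB) $(STUBROOT)/usr/lib/$(LIB)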
There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

6993877 ld should produce stub objects
PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

7009826 OSnet should use stub objects
4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Auto Mocking using JustMock

    - by mehfuzh
    Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. This is one sentence how you define auto mocking. Of course this is a more or less formal. In a more informal way auto mocking containers are nothing but a tool to keep your tests synced so that you don’t have to go back and change tests every time you add a new dependency to your SUT or System Under Test. In Q3 2012 JustMock is shipped with built in auto mocking container. This will help developers to have all the existing fun they are having with JustMock plus they can now mock object with dependencies in a more elegant way and without needing to do the homework of managing the graph. If you are not familiar with auto mocking then I won't go ahead and educate you rather ask you to do so from contents that is already made available out there from community as this is way beyond the scope of this post. Moving forward, getting started with Justmock auto mocking is pretty simple. First, I have to reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course) that it uses internally and next I will write my tests with mocking container. It's that simple! In this post first I will mock the target with dependencies using current method and going forward do the same with auto mocking container. In short the sample is all about a report builder that will go through all the existing reports, send email and log any exception in that process. This is somewhat my  report builder class looks like: Reporter class depends on the following interfaces: IReporBuilder: used to  create and get the available reports IReportSender: used to send the reports ILogger: used to log any exception. Now, if I just write the test without using an auto mocking container it might end up something like this: Now, it looks fine. However, the only issue is that I am creating the mock of each dependency that is sort of a grunt work and if you have ever changing list of dependencies then it becomes really hard to keep the tests in sync. The typical example is your ASP.NET MVC controller where the number of service dependencies grows along with the project. The same test if written with auto mocking container would look like: Here few things to observe: I didn't created mock for each dependencies There is no extra step creating the Reporter class and sending in the dependencies Since ILogger is not required for the purpose of this test therefore I can be completely ignorant of it. How cool is that ? Auto mocking in JustMock is just released and we also want to extend it even further using profiler that will let me resolve not just interfaces but concrete classes as well. But that of course starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of telerik’s official channels. Hope that helps

    Read the article

  • MVC data binding

    - by user441521
    I'm using MVC but I've read that MVVM is sort of about data binding and having pure markup in your views that data bind back to the backend via the data-* attributes. I've looked at knockout but it looks pretty low level and I feel like I can make a library that does this and is much easier to use where basically you only need to call 1 javascript function that will data bind your entire page because of the data-* attributes you assign to html elements. The benefits of this (that I see) is that your view is 100% decoupled from your back-end so that a given view never has to be changed if your back-end changes (ie for asp.net people no more razor in your view that makes your view specific to MS). My question would be, I know there is knockout out there but are there any others that provide this data binding functionality for MVC type applications? I don't want to recreate something that may already exist but I want to make something "better" and easier to use than knockout. To give an example of what I mean here is all the code one would need to get data binding in my library. This isn't final but just showing the idea that all you have to do is call 1 javascript function and set some data-* attribute values and everything ties together. Is this worth seeing through? <script> $(function () { // this is all you have to call to make databinding for POST or GET to work DataBind(); }); </script> <form id="addCustomer" data-bind="Customer" data-controller="Home" data-action="CreateCustomer"> Name: <input type="text" data-bind="Name" data-bind-type="text" /> Birthday: <input type="text" data-bind="Birthday" data-bind-type="text" /> Address: <input type="text" data-bind="Address" data-bind-type="text" /> <input type="submit" value="Save" id="btnSave" /> </form> ================================================= // controller action [HttpPost] public string CreateCustomer(Customer customer) { if(customer.Name == "Rick") return "success"; return "failure"; } // model public class Customer { public string Name { get; set; } public DateTime Birthday { get; set; } public string Address { get; set; } }

    Read the article

  • Detecting if someone is in a room in your house and sending you an email.

    - by mbcrump
    Let me setup this scenario: You are selling your house. You have small children. (Possibly 2 rug rats or more) The real estate company calls and says they have a showing for your house between the hours of 3pm-6pm. You have to keep the children occupied. You realize this is the 5th time you have shown your house this week. What is a programmer to do?……Setup a webcam, find a motion detection software that has support to launch a program and of course, Visual Studio 2010. First, comes the tools Some sort of webcam, I chose the WinBook because a friend of mine loaned it to me. It is a basic USB2.0 camera that supports 640x480 without software.  Next up was find webcam software that supports launching a program. WebcamXP support this. VS 2010 Console Application. A cell phone that you can check your email. You may be asking, why write code to send the email when a lot of commercial software motion detection packages include that as base functionality. Well, first it cost money and second I don’t want the picture of the person as that probably invades privacy and as a future buyer, I don’t want someone recording me in their house. Now onto the show... First, the code part. We are going to create a VS2010 or whatever version you have installed and use the following code snippet. Code Snippet using System; using System.Net.Mail; using System.Net;     namespace MotionDetectionEmailer {     class Program     {         static void Main(string[] args)         {             try             {                 MailMessage m = new MailMessage                    ("[email protected]",                     "[email protected]",                     "Motion Detected at " + DateTime.Now,                     "Someone is in the downstairs basement.");                 SmtpClient client = new SmtpClient("smtp.charter.net");                 client.Credentials = new NetworkCredential("mbcrump", "NOTTELLINGYOU");                 client.Send(m);             }               catch (SmtpException ex)             {                 Console.WriteLine("Who cares?? " + ex.ToString());             }         }       } } Second, Download and install wecamxp and select the option to launch an external program and you are finished. Now, when you are at MCDonalds and can check your email on your phone, you will see when they entered the house and you can go back home without waiting the full 3 hours. --- NICE!

    Read the article

  • How to implement line of sight restriction in actionscript?

    - by Michiel Standaert
    I have a problem with a game i am programming. I am making some sort of security game and i would like to have some visual line of sight. The problem is that i can't restrict my line of sight so my cops can't see through the walls. Below you find the design, in which they can look through windows, but not walls. Further below you find an illustration of what my problem is exactly. this is what it looks like now. As you can see, the cops can see through walls. This is the map i would want to use to restrict the line of sight. So the way i am programming the line of sight now is just by calculating some points and drawing the sight accordingly, as shown below. Note that i also check for a hittest using bitmapdata to check whether or not my player has been spotted by any of the cops. private function setSight(e:Event=null):Boolean { g = copCanvas.graphics; g.clear(); for each(var cop:Cop in copCanvas.getChildren()) { var _angle:Number = cop.angle; var _radians:Number = (_angle * Math.PI) / 180; var _radius:Number = 50; var _x1:Number = cop.x + (cop.width/2); var _y1:Number = cop.y + (cop.height/2); var _baseX:Number = _x1 + (Math.cos(_radians) * _radius); var _baseY:Number = _y1 - (Math.sin(_radians) * _radius); var _x2:Number = _baseX + (25 * Math.sin(_radians)); var _y2:Number = _baseY + (25 * Math.cos(_radians)); var _x3:Number = _baseX - (25 * Math.sin(_radians)); var _y3:Number = _baseY - (25 * Math.cos(_radians)); g.beginFill(0xff0000, 0.3); g.moveTo(_x1, _y1); g.lineTo(_x2, _y2); g.lineTo(_x3, _y3); g.endFill(); } var _cops:BitmapData = new BitmapData(width, height, true, 0); _cops.draw(copCanvas); var _bmpd:BitmapData = new BitmapData(10, 10, true, 0); _bmpd.draw(me); if(_cops.hitTest(new Point(0, 0), 10, _bmpd, new Point(me.x, me.y), 255)) { gameover.alpha = 1; setTimeout(function():void{ gameover.alpha = 0; }, 5000); stop(); return true; } return false; } So now my question is: Is there someone who knows how to restrict the view so that the cops can't look through the walls? Thanks a lot in advance. ps: i have already looked at this tutorial by emanuele feronato, but i can't use the code to restric the visual line of sight.

    Read the article

  • Book Review (Book 10) - The Information: A History, a Theory, a Flood

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for year. You can read my first book review here, and the entire list is here. The book I chose for March 2012 was: The Information: A History, a Theory, a Flood by James Gleick. I was traveling at the end of last month so I’m a bit late posting this review here. Why I chose this book: My personal belief about computing is this: All computing technology is simply re-arranging data. We take data in, we manipulate it, and we send it back out. That’s computing. I had heard from some folks about this book and it’s treatment of data. I heard that it dealt with the basics of data - and the semantics of data, information and so on. It also deals with the earliest forms of history of information, which fascinates me. It’s similar I was told, to GEB which a favorite book of mine as well, so that was a bonus. Some folks I talked to liked it, some didn’t - so I thought I would check it out. What I learned: I liked the book. It was longer than I thought - took quite a while to read, even though I tend to read quickly. This is the kind of book you take your time with. It does in fact deal with the earliest forms of human interaction and the basics of data. I learned, for instance, that the genesis of the binary communication system is based in the invention of telegraph (far-writing) codes, and that the earliest forms of communication were expensive. In fact, many ciphers were invented not to hide military secrets, but to compress information. A sort of early “lol-speak” to keep the cost of transmitting data low! I think the comparison with GEB is a bit over-reaching. GEB is far more specific, fanciful and so on. In fact, this book felt more like something fro Richard Dawkins, and tended to wander around the subject quite a bit. I imagine the author doing his research and writing each chapter as a book that followed on from the last one. This is what possibly bothered those who tended not to like it, I think. Towards the middle of the book, I think the author tended to be a bit too fragmented even for me. He began to delve into memes, biology and more - I think he might have been better off breaking that off into another work. The existentialism just seemed jarring. All in all, I liked the book. I recommend it to any technical professional, specifically ones involved with data technology in specific. And isn’t that all of us? :)

    Read the article

  • Open the SQL Server Error Log with PowerShell

    - by BuckWoody
    Using the Server Management Objects (SMO) library, you don’t even need to have the SQL Server 2008 PowerShell Provider to read the SQL Server Error Logs – in fact, you can use regular old everyday PowerShell. Keep in mind you will need the SMO libraries – which can be installed separately or by installing the Client Tools from the SQL Server install media. You could search for errors, store a result as a variable, or act on the returned values in some other way. Replace the Machine Name with your server and Instance Name with your instance, but leave the quotes, to make this work on your system: [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") $machineName = "UNIVAC" $instanceName = "Production" $sqlServer = new-object ("Microsoft.SqlServer.Management.Smo.Server") "$machineName\$instanceName" $sqlServer.ReadErrorLog() Want to search for something specific, like the word “Error”? Replace the last line with this: $sqlServer.ReadErrorLog() | where {$_.Text -like "Error*"} Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Poll Results: Foreign Key Constraints

    - by Darren Gosbell
    A few weeks ago I did the following post asking people – if they used foreign key constraints in their star schemas. The poll is still open if you are interested in adding to it, but here is what the chart looks like as of today. (at the bottom of the poll itself there is a link to the live results, unfortunately I cannot link the live results in here as the blogging platform blocks the required javascript)   Interestingly the results are fairly even. Of the 78 respondents, fractionally over half at least aim to start with referential integrity in their star schemas. I did not want to influence the results by sharing my opinion, but my personal preference is to always aim to have foreign key constraints. But at the same time, I am pragmatic about it, I do have projects where for various reasons some constraints are not defined. And I also have other designs that I have inherited, where it would just be too much work to go back and add foreign key constraints. If you are going to implement foreign keys in your star schema, they really need to be there at the start. In fact this poll was was the result of a feature request for BIDSHelper asking for a feature to check for null/missing foreign keys and I am entirely convinced that BIDS is the wrong place for this sort of functionality. BIDS is a design tool, your data needs to be constantly checked for consistency. It's not that I think that it's impossible to get a design working without foreign key constraints, but I like the idea of failing as soon as possible if there is an error and enforcing foreign key constraints lets me "fail early" if there are constancy issues with my data. By far the biggest concern with foreign keys is performance and I suppose I'm curious as to how often people actually measure and quantify this. I worked on a project a number of years ago that had very large data volumes and we did find that foreign key constraints did have a measurable impact, but what we did was to disable the constraints before loading the data, then enabled and checked them afterwards. This saved as time (although not as much as not having constraints at all), but still let us know early in the process if there were any consistency issues. For the people that do not have consistent data, if you have ETL processes that you control that are building your star schema which you also control, then to be blunt you only have yourself to blame. It is the job of the ETL process to make the data consistent. There are techniques for handling situations like missing data as well as  early and late arriving data. Ralph Kimball's book – The Data Warehouse Toolkit goes through some design patterns for handling data consistency. Having foreign key relationships can also help the relational engine to optimize queries as noted in this recent blog post by Boyan Penev

    Read the article

  • AI for a mixed Turn Based + Real Time battle system - Something "Gambit like" the right approach?

    - by Jason L.
    This is maybe a question that's been asked 100 times 1,000 different ways. I apologize for that :) I'm in the process of building the AI for a game I'm working on. The game is a turn based one, in the vein of Final Fantasy but also has a set of things that happen in real time (reactions). I've experimented with FSM, HFSMs, and Behavior Trees. None of them felt "right" to me and all felt either too limiting or too generic / big. The idea I'm toying with now is something like a "Rules engine" that could be likened to the Gambit system from Final Fantasy 12. I would have a set of predefined personalities. Each of these personalities would have a set of conditions it would check on each event (Turn start, time to react, etc). These conditions would be priority ordered, and the first one that returns true would be the action I take. These conditions can also point to a "choice" action, which is just an action that will make a choice based on some Utility function. Sort of a mix of FSM/HFSM and a Utility Function approach. So, a "gambit" with the personality of "Healer" may look something like this: (ON) Ally HP = 0% - Choose "Relife" spell (ON) Ally HP < 50% - Choose Heal spell (ON) Self HP < 65% - Choose Heal spell (ON) Ally Debuff - Choose Debuff Removal spell (ON) Ally Lost Buff - Choose Buff spell Likewise, a "gambit" with the personality of "Agressor" may look like this: (ON) Foe HP < 10% - Choose Attack skill (ON) Foe any - Choose target - Choose Attack skill (ON) Self Lost Buff - Choose Buff spell (ON) Foe HP = 0% - Taunt the player What I like about this approach is it makes sense in my head. It also would be extremely easy to build an "AI Editor" with an approach like this. What I'm worried about is.. would it be too limiting? Would it maybe get too complicated? Does anyone have any experience with AIs in Turn Based games that could maybe provide me some insight into this approach.. or suggest a different approach? Many thanks in advance!!!

    Read the article

< Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >