Search Results

Search found 1832 results on 74 pages for 'ben von handorf'.


  • Dell Docking Station Doesn’t Detect USB Mouse and Keyboard

    - by Ben Griswold
    I’ve found myself in this situation with multiple Dell docking stations and multiple Dell laptops running various Windows operating systems. I don’t know why the docking station stops recognizing my USB mouse and keyboard – it just does. It’s black magic. The last time around I just started plugging the mouse and keyboard into the docked laptop directly and went about my business (as if I wasn’t completely missing out on a couple of the core benefits of using a docking station). I guess that’s what happens when you forget how you got yourself out of the mess the last time around. I had been in this half-assed state for a couple of weeks when a coworker fortunately got themselves in and out of the same pickle this morning. Procrastinate long enough and the solution will just come to you, right? Here’s how to get yourself out of this mess:
    1. Undock your computer
    2. Unplug your docking station
    3. Count to an arbitrary number greater than 12. (Not sure this is really required, but…)
    4. Plug your docking station back in
    5. Redock your machine
    I put my machine to sleep before taking the aforementioned actions. My coworker completely shut down his laptop instead. The steps worked on both of our Win 7 machines this morning and, who knows, it might just work for you too.

    Read the article

  • When should method overloads be refactored?

    - by Ben Heley
    When should code that looks like:
      DoThing(string foo, string bar);
      DoThing(string foo, string bar, int baz, bool qux);
      ...
      DoThing(string foo, string bar, int baz, bool qux, string more, string andMore);
    be refactored into something that can be called like so:
      var doThing = new DoThing(foo, bar);
      doThing.more = value;
      doThing.andMore = otherValue;
      doThing.Go();
    Or should it be refactored into something else entirely? In the particular case that inspired this question, it's a public interface for an XSLT templating DLL where we've had to add various flags (of various types) that can't be embedded into the string XML input.
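
    One option we're considering is a parameter-object refactoring, which keeps the required arguments positional and gathers the optional flags into a settings type. A minimal C# sketch – the names and defaults here are invented for illustration, not taken from our actual DLL:

      // Optional settings collected into one object with safe defaults,
      // so the public surface stays at a single method.
      public class DoThingOptions
      {
          public int Baz { get; set; }        // assumed default: 0
          public bool Qux { get; set; }       // assumed default: false
          public string More { get; set; }
          public string AndMore { get; set; }
      }

      public static void DoThing(string foo, string bar, DoThingOptions options = null)
      {
          options = options ?? new DoThingOptions();
          // ... single implementation reads options instead of positional args
      }

      // Call sites stay readable as flags accumulate:
      // DoThing(foo, bar);
      // DoThing(foo, bar, new DoThingOptions { Qux = true, More = "..." });

    The optional-parameter syntax assumes C# 4.0 or later; on older compilers a two-overload pair (with and without the options object) gives the same shape.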

    Read the article

  • Introduction to Lean Software Development and Kanban Systems

    - by Ben Griswold
    Last year I took myself through a crash course on Lean Software Development and Kanban Systems in preparation for an in-house presentation. I learned a bunch. In this series, I’ll be sharing what I learned with you.
    If your career looks anything like mine, you have probably been affiliated with a company or two which pushed requirements gathering and documentation to the nth degree. To add insult to injury, they probably added planning processes (documentation, requirements, policies, meetings, committees) to the extent that they possibly retarded any progress. In my opinion, the typical company resembles this quote from Tom DeMarco: “It isn’t enough just to do things right – we also had to say in advance exactly what we intended to do and then do exactly that.”
    In the 1980s, Toyota turned the tables and revolutionized the automobile industry with their approach of “Lean Manufacturing.” A massive paradigm shift hit factories throughout the US and Europe. Mass production and scientific management techniques from the early 1900s were questioned as Japanese manufacturing companies demonstrated that “Just-in-Time” was a better paradigm. The widely adopted Japanese manufacturing concepts came to be known as “lean production.” Lean Thinking capitalizes on the intelligence of frontline workers, believing that they are the ones who should determine and continually improve the way they do their jobs. Lean puts its main focus on people and communication – if the people who produce the software are respected and they communicate efficiently, it is more likely that they will deliver a good product and the final customer will be satisfied. In time, the abstractions behind lean production spread to logistics, and from there to the military, to construction, and to the service industry. As it turns out, principles of lean thinking are universal and have been applied successfully across many disciplines. Lean has been adopted by companies including Dell, FedEx, LensCrafters, L.L.Bean, Southwest Airlines, Digital River and eBay.
    Lean thinking got its name from a 1990s best seller called The Machine That Changed the World: The Story of Lean Production. This book chronicles the movement of automobile manufacturing from craft production to mass production to lean production. And no introduction is complete without the authors most associated with lean software development – Tom and Mary Poppendieck, that is. Here’s one of their books: Implementing Lean Software Development: From Concept to Cash.
    Our in-house presentations are supposed to run no more than 45 minutes. I really cranked and got through my 87 slides in just under an hour. Of course, I had to cheat a little – I only covered the 7 principles and a single practice. In the next part of the series, we’ll dive into Principle #1: Eliminate Waste. And I am going to be a little obnoxious about listing my Lean and Kanban references with every series post. The references are great and they deserve this sort of attention.

    Read the article

  • T4Toolbox and Visual Studio 2010

    - by Ben Griswold
    I’ve been using the T4Toolbox to help generate my ASP.NET MVC models and scaffolding for a while now. Another developer tried using my generator project last week and ran into trouble due to a breaking change around the RenderCore() and TransformText() methods in support of VS 2010. If you upgraded to the latest version of T4Toolbox and receive a build error similar to the following, you are probably in the same boat:
      GeneratedTextTransformation.[Template].RenderCore(): no suitable method found to override
    We took the easy way out. I had him uninstall the latest version of T4Toolbox and install version 9.7.25.1, which my templates were initially coded against. For now, that worked great, but it sounds like I’ll be doing some rework of the 20+ templates in my project to support Visual Studio 2010 when we migrate later this month.
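
    If/when I do that rework, my understanding – an assumption to verify against the T4Toolbox release notes, not something I've confirmed – is that each template's override changes shape roughly like this:

      // Before (the 9.7.x-era shape our templates use) -- hypothetical sketch
      public class ModelTemplate : Template
      {
          protected override void RenderCore()
          {
              // Write(...) calls emit the generated code
          }
      }

      // After (VS 2010-era shape) -- hypothetical sketch; TransformText and
      // GenerationEnvironment come from the standard T4 TextTransformation base
      public class ModelTemplate : Template
      {
          public override string TransformText()
          {
              // Write(...) calls emit the generated code
              return this.GenerationEnvironment.ToString();
          }
      }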

    Read the article

  • Configure SQL Server to Allow Remote Connections

    - by Ben Griswold
    Okay, this post isn’t about configuring SQL to allow remote connections, but wait, I still may be able to help you out.
    "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)"
    I love this exception. It summarizes the issue and leads you down a path to solving the problem. I do wish the bit about allowing remote connections had been left out of the message, though. I can’t think of a time when having remote connections disabled caused me grief. Heck, I can’t ever remember how to enable remote connections unless I Google for the answer. Anyway, 9 out of 10 times, SQL Server simply isn’t running. That’s why the exception occurs. The next time this exception pops up, open up the services console and make sure SQL Server is started. And if that’s not the problem, only then start digging into the other possible reasons for the failure.
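
    If you'd rather probe the connection from code while you're at it, a quick try/open/catch is enough. A minimal sketch using System.Data.SqlClient, where the "(local)" data source is an assumption to swap for your own instance:

      using System;
      using System.Data.SqlClient;

      class ConnectivityCheck
      {
          static void Main()
          {
              // Short timeout so a stopped service fails fast.
              var cs = "Data Source=(local);Initial Catalog=master;" +
                       "Integrated Security=SSPI;Connect Timeout=5";
              try
              {
                  using (var conn = new SqlConnection(cs))
                  {
                      conn.Open();
                      Console.WriteLine("Connected, server version " + conn.ServerVersion);
                  }
              }
              catch (SqlException ex)
              {
                  // The error 40 text above typically lands here when the
                  // service is stopped or the instance name is wrong.
                  Console.WriteLine("Connection failed: " + ex.Message);
              }
          }
      }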

    Read the article

  • SVN Export or Recursively Remove .SVN Folders

    - by Ben Griswold
    I shared this script with a coworker yesterday. It doesn’t do much; it recursively deletes .svn folders from a source tree. It comes in handy if you want to share your codebase or you get in a terrible spot with SVN and you just want to start all over – just blow away all svn artifacts and use your mulligan. It’s true, you can nearly get the same result using the SVN export command, which copies your source sans the .svn folders to an alternate location. The catch is an export only includes those files/folders which exist under version control. If you want a clean copy of your source – versioned or not – export just might not do. The contents of the .cmd file include the following:
      for /f "tokens=* delims=" %%i in ('dir /s /b /a:d *.svn') do (
          rd /s /q "%%i"
      )
    Just download and drop the unzipped “SVN Cleanup.cmd” file into the root of the project, execute and away you go. If you search around enough, I know you can find similar scripts and approaches elsewhere, but I’m still uploading my script for completeness and future reference. Download SVN Cleanup

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything as long as it has the .udl extension. Open the file and choose a provider. Click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At this point, you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is beneficial if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you’ll find said connection string:
      Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp
    I hope this tip helps save you some time. How do you test if you don’t have SSMS installed?
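
    That string can be exercised from code exactly as the .udl built it. A minimal sketch using System.Data.OleDb (the OLE DB counterpart to the provider above), where the Data Source value is an assumption to adjust:

      using System;
      using System.Data.OleDb;

      class UdlConnectionCheck
      {
          static void Main()
          {
              // Connection string copied verbatim from the saved .udl file.
              var cs = "Provider=SQLOLEDB.1;Integrated Security=SSPI;" +
                       "Persist Security Info=False;Initial Catalog=master;" +
                       "Data Source=(local);Application Name=TestApp";
              using (var conn = new OleDbConnection(cs))
              {
                  conn.Open();  // throws if the server, provider or credentials are wrong
                  Console.WriteLine("Connection OK");
              }
          }
      }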

    Read the article

  • Forced Learning

    - by Ben Griswold
    If you ask me, it can be a little intimidating to stand in front of a group and walk through anything remotely technical. Even if you know “Technical Thingy #52” inside and out, public speaking can be unsettling. And if you don’t have your stuff together, well, it can be downright horrifying. With that said, if given the choice, I still like to schedule myself to present on unfamiliar topics. Over the past few months, I’ve talked about Aspect-Oriented Programming, Functional Programming, Lean Software Development and Kanban Systems, Domain-Driven Design and Behavior-Driven Development. What do these topics have in common? You guessed it: I was truly interested in them, and I had only a superficial understanding of each. Huh? Why in the world would I ever want to put myself in that intimidating situation? Actually, I rarely want to put myself into that situation, but I often do, as I like the results. There’s nothing remotely clever going on here. All I’m doing is putting myself into a compromising situation knowing that I’ll likely work myself out of it by learning the topic prior to the presentation. I’m simply time-boxing myself to learn something new while knowing there are negative repercussions if I fall short. So, I end up doing tons of research and I learn bunches to ensure I have my head firmly wrapped around the material before my talk. I’m not saying I become an expert overnight (or over a couple of weeks) but I’ll definitely know enough to be confident and comfortable, and I’ll know more than enough to ensure the audience will learn a thing or two from me. It’s forced learning and though it might sound a little scary to some, it works for me. Now I could very easily rename this post to something like Fear Is My Motivator because, in a sense, fear of failure and embarrassment is what’s driving my learning. However, I’m the guy signing up for the presentation, and since the entire process is self-imposed I’m not sure Fear deserves too much credit. Anyway…

    Read the article

  • upstart-supervised init script for Apache?

    - by Ben Williams
    I want to run apache on Ubuntu 10.04, and use the nice supervision stuff in upstart (I'm not just talking about the apache init script, but proper service supervision a la daemontools - which is to say, restarting apache when it dies, things like that). Does anyone have a running upstart config for supervising apache on ubuntu 10.04? The Googles have been no help to me, but it could be that my google-fu is weak.
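
    For what it's worth, here is an untested sketch of the kind of job file I have in mind – the upstart stanzas are standard, but the apache invocation and paths are assumptions to verify on your own box:

      # /etc/init/apache2-supervised.conf -- illustrative sketch only
      description "Apache 2 supervised by upstart"

      start on runlevel [2345]
      stop on runlevel [!2345]

      # restart apache if it dies, but give up if it flaps too quickly
      respawn
      respawn limit 10 5

      script
          # pull in APACHE_RUN_USER, APACHE_RUN_GROUP, etc.
          . /etc/apache2/envvars
          # NO_DETACH keeps the master process in the foreground so upstart
          # can track it (assumption: honored by this apache build)
          exec /usr/sbin/apache2 -D NO_DETACH -k start
      end script

    Presumably you'd also want to disable the stock /etc/init.d/apache2 sysv script so the two don't fight over the same pid file and ports.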

    Read the article

  • Project Euler 15: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 15. As always, any feedback is welcome.
      # Euler 15
      # http://projecteuler.net/index.php?section=problems&id=15
      # Starting in the top left corner of a 2x2 grid, there
      # are 6 routes (without backtracking) to the bottom right
      # corner. How many routes are there in a 20x20 grid?
      import time
      start = time.time()

      def factorial(n):
          if n == 0:
              return 1
          else:
              return n * factorial(n-1)

      # a route is 20 rights and 20 downs in some order,
      # so the count is the binomial coefficient C(40, 20)
      rows, cols = 20, 20
      print factorial(rows+cols) / (factorial(rows) * factorial(cols))

      print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
      a = raw_input('Press return to continue')

    Read the article

  • ReSharper File Location

    - by Ben Griswold
    By default, the ReSharper cache is stored in the solution folder.  It’s one extra folder and one extra .user file.  It’s no big deal but it does clutter up your solution a bit – especially since the files provide no real value. I prefer to store the ReSharper cache in the system Temp folder.  This setting is available by visiting ReSharper > Options > Environment > General. Just update where you’d like to store the ReSharper cache and you’re good to go.  Note, the .user file continues to linger around the solution folder but at least the _ReSharper.SolutionName folder is moved out of sight.

    Read the article

  • Project Euler 9: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 9. As always, any feedback is welcome.
      # Euler 9
      # http://projecteuler.net/index.php?section=problems&id=9
      # A Pythagorean triplet is a set of three natural numbers,
      # a < b < c, for which,
      # a^2 + b^2 = c^2
      # For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
      # There exists exactly one Pythagorean triplet for which
      # a + b + c = 1000. Find the product abc.
      import time
      start = time.time()

      def pythagorean_triplet():
          for a in xrange(1, 501):
              for b in xrange(a+1, 501):
                  c = 1000 - a - b
                  if (a*a + b*b == c*c):
                      return a*b*c

      print pythagorean_triplet()
      print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
      a = raw_input('Press return to continue')

    Read the article

  • google-chrome video application association

    - by Ben Lee
    Is there any way to tell google-chrome to launch video files of particular types in an external application (or even just to bring up the download box as if the type was unhandled), instead of showing the video inside the browser? Searching online, it seems that chrome is supposed to use xdg-mime for file associations, but apparently it ignores this for video. For example, when I do:
      xdg-mime query default video/mpeg
    it returns dragonplayer.desktop. But when I click on an mpeg video link, chrome displays it internally instead of launching Dragon Player (if I double-click on an mpeg file in my file manager, on the other hand, it does open Dragon Player). So is there a way to tell chrome to respect this setting, or another way to coax chrome into opening the file externally? If it matters, I'm running the latest version of google-chrome stable (not chromium) at the time of writing, v. 18.0.1025.151, on kubuntu 11.10.

    Read the article

  • Project Euler 5: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 5. As always, any feedback is welcome.
      # Euler 5
      # http://projecteuler.net/index.php?section=problems&id=5
      # 2520 is the smallest number that can be divided by each
      # of the numbers from 1 to 10 without any remainder.
      # What is the smallest positive number that is evenly
      # divisible by all of the numbers from 1 to 20?
      import time
      start = time.time()

      def gcd(a, b):
          while b:
              a, b = b, a % b
          return a

      def lcm(a, b):
          return a * b // gcd(a, b)

      # fold lcm across 1..20 (upper bound 21 so that 20 is included)
      print reduce(lcm, range(1, 21))

      print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
      a = raw_input('Press return to continue')

    Read the article

  • Project Euler 14: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 14. As always, any feedback is welcome.
      # Euler 14
      # http://projecteuler.net/index.php?section=problems&id=14
      # The following iterative sequence is defined for the set
      # of positive integers:
      # n -> n/2 (n is even)
      # n -> 3n + 1 (n is odd)
      # Using the rule above and starting with 13, we generate
      # the following sequence:
      # 13 40 20 10 5 16 8 4 2 1
      # It can be seen that this sequence (starting at 13 and
      # finishing at 1) contains 10 terms. Although it has not
      # been proved yet (Collatz Problem), it is thought that all
      # starting numbers finish at 1. Which starting number,
      # under one million, produces the longest chain?
      # NOTE: Once the chain starts the terms are allowed to go
      # above one million.
      import time
      start = time.time()

      def collatz_length(n):
          # 0 and 1 return self as length
          if n <= 1:
              return n
          length = 1
          while (n != 1):
              if (n % 2 == 0):
                  n /= 2
              else:
                  n = 3*n + 1
              length += 1
          return length

      starting_number, longest_chain = 1, 0
      for x in xrange(1, 1000001):
          l = collatz_length(x)
          if l > longest_chain:
              starting_number, longest_chain = x, l

      print starting_number
      print longest_chain

      # Slow 31 seconds
      print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
      a = raw_input('Press return to continue')

    Read the article

  • Building a common syntax and scoping framework.

    - by Ben DeMott
    Hello fellow programmers, I was discussing a project the other day with a colleague of mine and I was curious to see what others had to say or if such a thing already existed.
    Background: There are many programming languages. There are many IDEs and source editors that highlight and edit source code. Following perfectly and exactly the rules of a language to present auto-complete options and understand scopes in the code is rather complex. This task is complex enough that most IDEs implement different source editors as plugins that often re-implement the same features over and over, but in a different way (NetBeans). From what I can tell, most IDEs and source editors re-implement parsers that use regular expressions or some meta-syntax (e.g., Backus–Naur Form) to describe the language's grammar generically. These parsers are implemented over and over and over again.
    Question: Has anyone attempted to unify or describe a set of features through an API and have a consistent interface for parsing various programming languages and dialects? I'm not describing an IDE – but a consistent API for any program to use to parse and obtain meta-information from the source code. I realize various programming languages offer many different features which are difficult to 'abstract' into a set of features, but I feel this would be a worthwhile venture. It seems to me that this could allow the authors of interpreters to help maintain a central grammar interpreter for their language: the Python foundation could maintain the Python grammar API, ANSI the C grammar API, Oracle the Java grammar API, etc.
    Example usage: If this API existed, code documentation generators could theoretically work across all dialects and languages to some level. It wouldn't matter if your project used 5 different languages; a single application could document all of them and the comments and doc-tags within. Has anyone attempted this comprehensively?
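
    Purely to make the idea concrete, here's a hypothetical sketch of what such a language-neutral surface might look like – every type and member name below is invented for illustration and does not correspond to any existing library:

      using System.Collections.Generic;

      // One implementation per language, ideally maintained by that
      // language's stewards, all returning the same meta-model.
      public interface ILanguageGrammar
      {
          string LanguageName { get; }
          SourceModel Parse(string sourceText);
      }

      // The shared meta-information a doc generator or editor could consume.
      public class SourceModel
      {
          public IList<Symbol> Symbols = new List<Symbol>();           // types, functions, variables
          public IList<Scope> Scopes = new List<Scope>();              // nesting/visibility regions
          public IList<DocComment> DocComments = new List<DocComment>();
      }

      public class Symbol { public string Name; public string Kind; public int Line; }
      public class Scope { public int StartLine; public int EndLine; }
      public class DocComment { public string Text; public int Line; }

    A documentation generator would then depend only on ILanguageGrammar, and mixing 5 languages in one project would just mean loading 5 implementations.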

    Read the article

  • Language Club

    - by Ben Griswold
    We started a language club at work this week. Thus far, we have a collective interest in a number of languages: Python, Ruby, F#, Erlang, Objective-C, Scala, Clojure, Haskell and Go. There are more, but these 9 received the most votes. During the first few meetings we are going to determine which language we should tackle first. To help make our selection, each member will provide a quick overview of their favored language by answering the following set of questions:
    1. Why are you interested in learning “your” language(s)? (There’s lots of work, I’m an MS shill, it’s hip and fun, etc.)
    2. What type of language is it? (OO, dynamic, functional, procedural, declarative, etc.)
    3. What types of problems is your language best suited to solve? (Algorithms over big data, rapid application development, modeling, merely academic, etc.)
    4. Can you provide examples of where/how it is being used? If it isn’t being used, why not? (Erlang was invented at Ericsson to provide an extremely fault-tolerant, concurrent system.)
    5. Quick history – who created/sponsored the language? When was it created? Is it currently active?
    6. Does the language have hardware support (an attempt was made at one point to create processor instruction sets specific to Prolog), or can it run as an interpreted language inside another language (like Ruby in the JVM)?
    7. Are there facilities for programs written in this language to communicate with other languages? How does this affect its utility?
    8. Does the language have IDE tool support? (Think Eclipse or Visual Studio.)
    9. How well is the language supported in terms of books, community and documentation?
    10. What’s the number one thing which differentiates the language from others? (i.e., why is it cool?)
    11. How applicable is the language to us as consultants? What would the impact be of using the language in terms of cost, maintainability, personnel costs, etc.?
    This should provide a decent introduction into nearly a dozen languages and give us enough context to decide which single language deserves our undivided attention for the weeks to come. Stay tuned for the winner…

    Read the article

  • Remove Grub Loader from Mac

    - by ben
    I installed Ubuntu (Precise) on my Macbook Pro, but now I'd like to go back to OSX. The trouble is I can't boot off the OSX Snow Leopard DVD to do a reinstall. I have tried booting and holding down "c", or using "Option" and then selecting the OSX install media, but after selecting the OSX media the grub menu loads and tries to boot Ubuntu instead of booting off the DVD. I tried booting off my Ubuntu LiveUSB and removing all of the partitions using gparted, but the problem still persists. Any ideas? I just want to wipe everything and go back to OSX only. When I installed Ubuntu I pretty much followed the default options. Thanks.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advance warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.
    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.
    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.
    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users.”
    There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).
    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912
    For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices
    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.
    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay, isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them to find out.
    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.
    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.
    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    What we want: Each developer with their own dedicated database environment.
    What we’ve got: A single shared “development” environment, used by everyone at once.
    What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine.
    What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    What we want: A separate QA environment used explicitly for manual testing prior to release.
    What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    What we want: A proper pre-production (or “staging”) box that matches production as closely as possible.
    What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    What we want: A production environment reproducible from source control.
    What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!) and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production – the environment you use to test the production release process,
    - Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.
    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).
    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.
    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”
    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.
    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).
    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.
    There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.
    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.
    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace
    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515
    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.
    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes),
    - Socialise these changes with your development group. No one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.
    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to.
    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Project Euler 8: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 8. As always, any feedback is welcome.
      # Euler 8
      # http://projecteuler.net/index.php?section=problems&id=8
      # Find the greatest product of five consecutive digits
      # in the following 1000-digit number
      import time
      start = time.time()

      number = '\
      73167176531330624919225119674426574742355349194934\
      96983520312774506326239578318016984801869478851843\
      85861560789112949495459501737958331952853208805511\
      12540698747158523863050715693290963295227443043557\
      66896648950445244523161731856403098711121722383113\
      62229893423380308135336276614282806444486645238749\
      30358907296290491560440772390713810515859307960866\
      70172427121883998797908792274921901699720888093776\
      65727333001053367881220235421809751254540594752243\
      52584907711670556013604839586446706324415722155397\
      53697817977846174064955149290862569321978468622482\
      83972241375657056057490261407972968652414535100474\
      82166370484403199890008895243450658541227588666881\
      16427171479924442928230863465674813919123162824586\
      17866458359124566529476545682848912883142607690042\
      24219022671055626321111109370544217506941658960408\
      07198403850962455444362981230987879927244284909188\
      84580156166097919133875499200524063689912560717606\
      05886116467109405077541002256983155200055935729725\
      71636269561882670428252483600823257530420752963450'

      max = 0
      # slide a 5-digit window across the number; the last window starts
      # at len(number) - 5, so the range bound is len(number) - 4
      for i in xrange(0, len(number) - 4):
          nums = [int(x) for x in number[i:i+5]]
          val = reduce(lambda agg, x: agg*x, nums)
          if val > max:
              max = val

      print max
      print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
      a = raw_input('Press return to continue')

    Read the article

  • Managing .NET External Dependencies

    - by Ben Griswold
    Noah and I continue our screencast series by sharing our approach to managing external dependencies referenced within a .NET solution. This is another introductory episode, but you might find a hidden gem in the short 4-minute clip. ELMAH (Error Logging Modules and Handlers) is the external dependency we focus on in the presentation. If you are not familiar with ELMAH, this episode may be worth your time.
    YouTube - Managing .NET External Dependencies
    This is one of our first screencasts. If you have feedback, I’d love to hear it.

    Read the article

  • Problem with apt-get update on ubuntu server

    - by Ben
    I have the Ubuntu 12.04 32-bit server edition, but I am having issues with apt-get updating. The server is able to ping the outside world (both 8.8.8.8 and www.google.com), but when I attempt to sudo apt-get update the first bit works, but then I get this: (screenshot omitted) If I attempt to check or autoclean with sudo apt-get check, I get this: (screenshot omitted) I have tried the fix detailed here: How do I fix a "Problem with MergeList" error when trying to do an update? But it does not work; the same thing occurs again. Any suggestions? PS. The server is a fresh install; no external programs have been added. EDIT: We have tried using different sources; both aarnet and au.archive sources were tried.

    Read the article

  • Issue with multiplayer interpolation

    - by Ben Cracknell
    In a fast-paced multiplayer game I'm working on, there is an issue with the interpolation algorithm. You can see it clearly in the image below. Cyan: Local position when a packet is received. Red: Position received from packet (goal). Blue: Line from local position to goal when packet is received. Black: Local position every frame. As you can see, the local position seems to oscillate around the goals instead of moving between them smoothly. Here is the code:
      // local transform position when the last packet arrived. Will lerp from here to the goal
      private Vector3 positionAtLastPacket;

      // location received from last packet
      private Vector3 goal;

      // time since the last packet arrived
      private float currentTime;

      // estimated time to reach goal (also the expected time of the next packet)
      private float timeToReachGoal;

      private void PacketReceived(Vector3 position, float timeBetweenPackets)
      {
          positionAtLastPacket = transform.position;
          goal = position;
          timeToReachGoal = timeBetweenPackets;
          currentTime = 0;

          Debug.DrawRay(transform.position, Vector3.up, Color.cyan, 5);  // current local position
          Debug.DrawLine(transform.position, goal, Color.blue, 5);       // path to goal
          Debug.DrawRay(goal, Vector3.up, Color.red, 5);                 // received goal position
      }

      private void FrameUpdate()
      {
          currentTime += Time.deltaTime;
          float delta = currentTime / timeToReachGoal;
          transform.position = FreeLerp(positionAtLastPacket, goal, delta);

          // current local position
          Debug.DrawRay(transform.position, Vector3.up * 0.5f, Color.black, 5);
      }

      /// <summary>
      /// Lerp without being locked to 0-1
      /// </summary>
      Vector3 FreeLerp(Vector3 from, Vector3 to, float t)
      {
          return from + (to - from) * t;
      }
    Any idea about what's going on?
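
    One thing I notice in my own code: delta keeps growing past 1 whenever the next packet is late, and FreeLerp deliberately doesn't clamp, so the object extrapolates beyond the goal and then snaps toward the new goal when the next packet arrives. Would a simple clamp be enough? A minimal, untested guard (not claiming this is the whole fix):

      // Clamp the parameter so movement stops at the goal while waiting
      // for the next packet, instead of overshooting and oscillating.
      float delta = Mathf.Clamp01(currentTime / timeToReachGoal);
      transform.position = Vector3.Lerp(positionAtLastPacket, goal, delta);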

    Read the article

  • Project Euler 52: Ruby

    - by Ben Griswold
    In my attempt to learn Ruby out in the open, here’s my solution for Project Euler Problem 52. Compared to Problem 51, this problem was a snap. Brute force and pretty quick… As always, any feedback is welcome.
      # Euler 52
      # http://projecteuler.net/index.php?section=problems&id=52
      # It can be seen that the number, 125874, and its double,
      # 251748, contain exactly the same digits, but in a
      # different order.
      #
      # Find the smallest positive integer, x, such that 2x, 3x,
      # 4x, 5x, and 6x, contain the same digits.
      timer_start = Time.now

      # sorting the digits (with multiplicity) gives a canonical form;
      # 2x through 6x must all match x's digits
      def contains_same_digits?(n)
        value = n.to_s.split(//).sort.join
        2.upto(6) do |i|
          return false if (n*i).to_s.split(//).sort.join != value
        end
        true
      end

      i = 100_000
      answer = 0
      while answer == 0
        answer = i if contains_same_digits?(i)
        i += 1
      end

      puts answer
      puts "Elapsed Time: #{(Time.now - timer_start)*1000} milliseconds"

    Read the article

  • How do I simulate overprinting in Adobe Reader?

    - by Ben Everard
    On both Mac and Windows, when I print a document there is an advanced screen that allows me to select an option called Simulate Overprinting; however, such an option doesn't appear in the Ubuntu version. Wikipedia on overprinting: "Overprinting refers to the process of printing one colour on top of another in reprographics. This is closely linked to the reprographic technique of 'trapping'. Another use of overprinting is to create a rich black (often regarded as a colour that is 'blacker than black') by printing black over another dark colour." This is an issue for us, as we're trying to print documents that need flattening (this is what overprinting does). Am I missing something here – is there a way to enable overprinting on printed PDFs? Note: Please don't confuse simulate overprinting with overprint preview, which doesn't apply when printing. Just to show you what I'm looking for, this is the Print > Advanced screen… (screenshot omitted) And this is what I see on the Ubuntu screen – note there is no option for overprinting. (screenshot omitted)

    Read the article
