Search Results

Search found 3797 results on 152 pages for 'talk'.


  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about “Einstein’s Razor” yesterday, which reminded me of my favorite software development “law”, the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter’s Law – see below), but along the way I compiled the following list:

    Amara’s Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

    Brooks’ Law: Adding manpower to a late software project makes it later.

    Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

    Law of Demeter: Each unit should only talk to its friends; don't talk to strangers.

    Einstein’s Razor: “Make things as simple as possible, but not simpler” is the popular paraphrase, but what he actually said was “It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”, an overly complicated quote which is an obvious violation of Einstein’s Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)

    Finagle's Law of Dynamic Negatives: Anything that can go wrong, will - at the worst possible moment. (O'Toole's Corollary: The perversity of the Universe tends towards a maximum.)

    Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris’s Corollary: “…including Common Lisp”)

    Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

    Issawi’s Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet.

    Jackson’s Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.

    Kaner’s Caveat: A program which perfectly meets a lousy specification is a lousy program.

    Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.

    Mason’s Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.

    Nils-Peter Nelson’s Nil I/O Rule: The fastest I/O is no I/O.

    Occam's Razor: The simplest explanation is usually the correct one.

    Parkinson’s Law: Work expands so as to fill the time available for its completion.

    Quentin Tarantino’s Pie Principle: “…you want to go home have a drink and go and eat pie and talk about it.” (OK, he was talking about movies, not software, but I couldn’t find a “Q” quote about software. And wouldn’t it be cool to write a program so great that the users want to eat pie and talk about it?)

    Raymond’s Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.

    Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.

    Turing’s Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.

    Udi Dahan’s Race Condition Rule: If you think you have a race condition, you don’t understand the domain well enough. These rules didn’t exist in the age of paper; there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out the actual rules.

    Van Vleck’s Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer - and keep pouring our sewage into our water supply.

    Wheeler’s Law: All problems in computer science can be solved by another level of indirection... except for the problem of too many layers of indirection. Wheeler also said “Compatibility means deliberately repeating other people's mistakes.”

    The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back.

    Yourdon’s Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.

    Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” He once commented about X Windows widget toolkits: “Using these toolkits is like trying to make a bookshelf out of mashed potatoes.”

    Read the article

  • Do your good deed for the day: nominate an exceptional DBA

    - by Rebecca Amos
    Do you know an exceptional DBA? Think they deserve recognition at the world’s largest technical SQL Server conference? Nominate them for the Exceptional DBA Award 2011, and they could be accepting the prize at this year’s PASS Summit. Hard-working DBAs are crucial to the smooth running of the companies they work for, so we want you to help us celebrate their achievements. Nominating someone for the Exceptional DBA Award simply involves answering a few questions about the nominee’s achievements and experience as a DBA, their activities within the SQL Server community, and any mistakes they might have made along the way (we’ve all made them!) and how they handled them. They could win full conference registration to the PASS Summit (where the Award will be presented), and a copy of Red Gate’s SQL DBA Bundle. And you’ll have the feel-good satisfaction of knowing that you’ve helped a colleague or friend get the recognition they deserve (they’ll probably owe you a drink or two, too…). So do your good deed for the day: have a look at our website for all the info, and get started on your nomination: www.exceptionaldba.com

    Read the article

  • Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong

    - by The Official Microsoft IIS Site
    I did three talks at Mix 10 this year, and I'm going to do blog posts for each one, sharing what I talked about and some code if it's useful. I did a talk on Deployment called "Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong." You can download the talk here, or watch it online. Download: MP4 Video, Windows Media Video, Windows Media Video (High). I always try to sneak cooler titles into conferences if I can. It's better than "...(read more)

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about.

    I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise.

    Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that.

    Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans.

    I wonder, though. Are Google trying to subtly disrupt cognitive fluency? There's an interesting paper (pdf) about - among other things - the effects of typography on the way people answer survey questions. Participants given the slightly harder to read question gave more abstract answers. The paper references other work suggesting that, generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but they want here to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • .NET Reflector & .NET Reflector Pro 6.1 have been released

    - by Bart Read
    .NET Reflector 6.1 and .NET Reflector Pro 6.1 have been released. You can download them from: http://www.red-gate.com/products/reflector/index.htm

    .NET Reflector is a class browser and disassembler for .NET assemblies. .NET Reflector Pro is a Visual Studio debugging extension that allows you to step through third-party and framework assemblies as if they were built from your own source code.

    This release fixes several problems that were present in the 6.0 release:

    Support for using a copy of Reflector.cfg stored alongside Reflector.exe has been re-enabled, so users upgrading from 5.x releases will not lose their settings.

    Fixed an unhandled exception on exit of Visual Studio when the .NET Reflector add-in is used in conjunction with the TestDriven.NET add-in.

    Added better support for dealing with framework assemblies, which only contain metadata, in the "Referenced Assemblies" folder.

    Fixed a problem where attempted decompilation with the CppCliLanguage add-in would lead to display of a page on the Red Gate website.

    Added an option to activate .NET Reflector Pro to the .NET Reflector menu in Visual Studio, after receiving feedback from a number of users that it was hard to figure out how to activate the product.

    For more details about the products please visit: http://www.red-gate.com/products/reflector/index.htm

    Read the article

  • New site – and a special offer

    - by Red Gate Software BI Tools Team
    SSAS Compare has a brand new website! The old page was thrown together in the way that most Red Gate labs sites tend to be — as experimental sites for experimental products. We’ve been developing SSAS Compare for a while now, so we decided it was time for something a bit prettier. The new site is mostly the work of Andrew, our marketing manager, who has all sorts of opinions about websites. One of the opinions Andrew has is that his photo should be on every site on the internet, or at least every Red Gate site on the internet, and that’s why his handsome visage now appears on the SSAS Compare page. Well, that isn’t quite true. According to Andrew, people download more software when they have photos of human beings to look at. We want as many people to try SSAS Compare as possible, so we got the team together for an intimate photoshoot directed by Red Gate’s resident recorder of light, Dom Reed (aka Mr Flibble). The photo will appear on the site as soon as Dom is finished photoshopping us into something more palatable, which is a big job. Until then, you’ll have to put up with Andrew. We’ve also used the new site to announce a special offer. Right now, SSAS Compare is still a free beta, but by signing up to our Early Access Program, you’ll get a 20% discount when we release SSAS Compare as a fully-fledged product. We’ll use your email address to send you news and updates about business intelligence tools from Red Gate (and nothing else). If that sounds good to you, go to the SSAS Compare site to sign up. By the way, the BI Tools team wasn’t the only thing Dom photographed last week. Remember Noemi’s blog about the flamenco dance? We’ll be at SQL Saturday in our home town of Cambridge this Saturday (8th September), handing out flyers of a distinctly Mediterranean flavour. If you’re attending, be sure to say hello!

    Read the article

  • Exceptional DBA 2011 Jeff Moden on why you should enter in 2012

    - by Red and the Community
    My "reign" as the Red Gate Exceptional DBA is almost over and I was asked to say a few words about this wonderful award. Having been one of those folks that shied away from entering the contest during the first 3 years of the award, I thought I’d spend the time encouraging DBAs of all types to enter.

    Winning this award has some obvious benefits. You win a trip to PASS including money towards your flight, paid hotel stay, and, of course, paid admission. You win a wonderful bundle of software from Red Gate to make your job as a DBA a whole lot easier. You also win some pretty incredible notoriety for your resume. After all, it’s not everyone who wins a worldwide contest. To date, there are only 4 of us in the world who have won this award. You could be number 5!

    For me, all of that pales in comparison to what I found out during the entry process. I’m very confident in my skills, but I’m also humble. It was suggested to me that I enter the contest when it first started. I just couldn’t bring myself to nominate myself. When the 2011 nomination period opened up, several people again suggested that I enter, so I swallowed hard and asked several co-workers to have a look at the online nomination form and, if they thought me worthy, to write a nomination for me. I won’t bore you with the details, but what they wrote about me was one of the most incredible rewards that I could ever have hoped to receive. I had no idea of the impact that I’d made on my co-workers. Even if I hadn’t made it to the top 5 for the award, I had already won something very near and dear that no one can ever top.

    There’s only one named winner and 4 "runners up" in this competition every year, but don’t let that discourage you. Enter this competition. Even if you work in the proverbial "Mom’n'Pop" shop, get your boss and the people you work with directly to nominate you. Even if you don’t make it to the top 5, you might just find out that you’re more of a winner than you think. If you’re too proud to ask them, then take the time to nominate yourself instead of shying away like I did for the first 3 years. You work hard as a DBA and, as David Poole once said, if you’re the first person that people ask for help rather than one of the last, then you’re probably an Exceptional DBA. It’s time to stand up and be counted! Win or lose, the entry process can be a huge reward in itself. It was for me.

    Thank you, Red Gate, for giving me such a wonderful opportunity. Thanks for listening folks and for all that you do as DBAs. As ‘Red Green’ says, "We’re all in this together and I’m pullin’ for ya".

    –Jeff Moden, Red Gate Exceptional DBA 2011

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

    I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity:

    Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.

    Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on.

    Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers.

    Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

    In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

    When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later.
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.

    Working Asynchronously

    Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons:

    1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation.

    2 – Asynchronous modes of communication are (usually) persistent and searchable.

    You Don’t Have to Broadcast Live

    Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously.

    I Said What?!?

    Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided.
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.

    When and How To Escalate

    I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

    Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

    Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.

    Communicating Well is Hard

    At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Ad-hoc taxonomy: owning the chess set doesn't mean you decide how the little horsey moves

    - by Roger Hart
    There was one of those little laugh-or-cry moments recently when I heard an anecdote about content strategy failings at a major online retailer. The story goes a bit like this: successful company in a highly commoditized marketplace succeeds on price and largely ignores its content team. Being relatively entrepreneurial, the founders are still knocking around, and occasionally like to "take an interest". One day, they decree that clothing sold on the site can no longer be described as "unisex", because this sounds old fashioned. Sad now.

    Let me just reiterate for the folks at the back: large retailer, commoditized market place, differentiating on price. That's inherently unstable. Sooner or later, they're going to need one or both of competitive differentiation and significant optimization. I can't speak for the latter, since I'm hypothesizing off a raft of rumour, but one of the simpler paths to the former is to become - or rather acknowledge that they are - a content business. Regardless, they need highly-searchable terminology. Even in the face of tooth and claw resistance to noticing the fundamental position content occupies in driving sales (and SEO) on the web, there's a clear information problem here. Dilettante taxonomy is a disaster.

    Ok, so this is a small example, but that kind of makes it a good one. Unisex probably is the best way of describing clothing designed to suit either men or women interchangeably. It certainly takes less time to type (and read). It's established terminology, and as a single word, it's significantly better for web readability than a phrasal workaround. Something like "fits men or women" is short, but could fall foul of clause-level discard in web scanning. It's not an adjective, so for intuitive reading it's never going to be near the start of a title or description. It would also clutter up search results, and impose cognitive load in list scanning. Sorry kids, it's just worse. Even if "unisex" were an archaism (which it isn't), the only thing that would weigh against its being more usable and concise terminology would be evidence that this archaism were hurting conversions. Good luck with that.

    We once - briefly - called one of our products a "Can of worms". It was a bundle in a bug-tracking suite, and we thought it sounded terribly cool. Guess how well that sold. We have information and content professionals for a reason: to make sure that whatever we put in front of users is optimised to meet user and business goals. If that thinking doesn't inform style guides, taxonomy, messaging, title structure, and so forth, you might as well be finger painting.

    Read the article

  • Subterranean IL: Filter exception handlers

    - by Simon Cooper
    Filter handlers are the second type of exception handler that aren't accessible from C#. Unlike the other handler types, which have defined conditions for when the handlers execute, filter lets you use custom logic to determine whether the handler should be run. However, similar to a catch block, the filter block does not get run if control flow exits the block without throwing an exception.

    Introducing filter blocks

    An example of a filter block in IL is the following:

        .try {
            // try block
        } filter {
            // filter block
            endfilter
        }{
            // filter handler
        }

    or, in v1 syntax:

        TryStart:
            // try block
        TryEnd:
        FilterStart:
            // filter block
        HandlerStart:
            // filter handler
        HandlerEnd:
        .try TryStart to TryEnd filter FilterStart handler HandlerStart to HandlerEnd

    In the v1 syntax there is no end label specified for the filter block. This is because the filter block must come immediately before the filter handler; the end of the filter block is the start of the filter handler. The filter block indicates to the CLR whether the filter handler should be executed using a boolean value on the stack when the endfilter instruction is run; true/non-zero if it is to be executed, false/zero if it isn't. At the start of the filter block, and the corresponding filter handler, a reference to the exception thrown is pushed onto the stack as a raw object (you have to manually cast to System.Exception). The allowed IL inside a filter block is tightly controlled; you aren't allowed branches outside the block, rethrow instructions, and other exception handling clauses. You can, however, use call and callvirt instructions to call other methods.

    Filter block logic

    To demonstrate filter block logic, in this example I'm filtering on whether there's a particular key in the Data dictionary of the thrown exception:

        .try {
            // try block
        } filter {
            // Filter starts with exception object on stack
            // C# code: ((Exception)e).Data.Contains("MyExceptionDataKey")
            // only execute handler if Contains returns true
            castclass [mscorlib]System.Exception
            callvirt instance class [mscorlib]System.Collections.IDictionary [mscorlib]System.Exception::get_Data()
            ldstr "MyExceptionDataKey"
            callvirt instance bool [mscorlib]System.Collections.IDictionary::Contains(object)
            endfilter
        }{
            // filter handler
            // Also starts off with exception object on stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
        }

    Conclusion

    Filter exception handlers are another exception handler type that isn't accessible from C#; however, just like fault handlers, the behaviour can be replicated using a normal catch block:

        try {
            // try block
        } catch (Exception e) {
            if (!FilterLogic(e)) throw;
            // handler logic
        }

    So, it's not that great a loss, but it's still annoying that this functionality isn't directly accessible. Well, every feature starts off with minus 100 points, so it's understandable why something like this didn't make it into the C# compiler ahead of a different feature.
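
    For completeness, here is a minimal, self-contained C# sketch of that catch-and-rethrow replication, mirroring the Exception.Data filter used in the IL example above (the FilterWorkaroundSketch, RunGuarded and MightThrow names are purely illustrative, not from the original post):

        using System;

        static class FilterWorkaroundSketch
        {
            // Same test the IL filter block performs on the thrown exception
            static bool FilterLogic(Exception e)
            {
                return e.Data.Contains("MyExceptionDataKey");
            }

            static void MightThrow()
            {
                var ex = new InvalidOperationException("Something went wrong");
                ex.Data["MyExceptionDataKey"] = true;
                throw ex;
            }

            public static void RunGuarded()
            {
                try
                {
                    MightThrow();
                }
                catch (Exception e)
                {
                    // Not a match for this handler: rethrow without losing the stack trace
                    if (!FilterLogic(e)) throw;

                    // Handler logic - the IL handler simply writes the exception out
                    Console.WriteLine(e.ToString());
                }
            }
        }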

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and Installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.

    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.

    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80. If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.

    4. Open the TeamCity admin console on http://localhost, and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.

    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes.

    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.

    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.

    10. Click on Create Build Configuration, and name it something like "Integration build".

    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.

    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.

    13. In the URL field paste your repository location.
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment

    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.

    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.

    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe

    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.

    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2:

        /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.

    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.

    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.

    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below.

    Technorati Tags: SQL Server
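
    Putting steps 16 and 18 together, the build step that TeamCity runs boils down to a single SQL Compare command line along these lines (the paths, server and database names are the ones used in this walkthrough, so adjust them for your own environment):

        "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Here /scripts1:. assumes the step runs in the checkout directory containing the SQL Source Control scripts, /server2 and /db2 identify the CI test database, and /sync tells SQL Compare to apply the changes to that database rather than just report the differences.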

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team"

    I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits in to the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project.

    When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different; especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately-complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment.

    Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for like code it is bound to increase its clarity and usefulness.

    Cheers, Tony.
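
    The editorial is about T-SQL, but the same idea sketches easily in any language. Here's a hypothetical C# rendering (the class and method names are invented for illustration) of the kind of public-interface header being described - what the code does, where it fits in the broader application, and a usage example - sitting above an otherwise unremarkable body:

        using System.Collections.Generic;
        using System.Linq;

        public class BillingCalculator
        {
            private readonly Dictionary<int, List<decimal>> invoicesByCustomer =
                new Dictionary<int, List<decimal>>();

            /// <summary>
            /// Returns the total invoice value recorded for a customer.
            /// Part of the (hypothetical) billing module: the result feeds the
            /// credit-limit checks run by the nightly risk job.
            /// Usage: var total = calculator.InvoiceTotal(customerId: 42);
            /// </summary>
            public decimal InvoiceTotal(int customerId)
            {
                List<decimal> invoices;
                return invoicesByCustomer.TryGetValue(customerId, out invoices)
                    ? invoices.Sum()
                    : 0m;
            }
        }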

    Read the article

  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customers' computers, and as such, should be easy to use and not require installation. Since the project has got rather large and important, I had decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the loading and saving of results was broken by SA after using the obfuscation, rendering the loading and saving of XML results basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!)

    My initial reaction was to simply exclude the serializable results class and all of its members from obfuscation, and that was just dandy, but a few weeks on I decided to look into exactly why serialization had broken and change the code to work with SA so I could write any new code to be compatible with SmartAssembly and save me some additional testing and changes to the SA project.

    In simple terms, SA does all that it can to prevent serialization problems; for instance, it will not obfuscate public members of a DLL and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having the name changed. If the serialization is done inside the executable, however, public members have the access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!

        public class RedFlagResults : ISerializable
        {
        }

    The second problem was caused by the pruning feature. Although RedFlagResults had public members, they were not truly properties, and used the GetObjectData() method of ISerializable to serialize the members. For that reason, SA could not exclude these members from pruning and further broke the serialization.

        public class RedFlagResults : ISerializable
        {
            public List<RedFlag.Exception> Exceptions;

            #region ISerializable Members

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }

            #endregion
        }

    So to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. Also, I added the Serializable attribute so that I don't have to exclude the class from obfuscation in the SA project any more. The DoNotPrune attribute means I do not need to exclude the class from pruning.

        [Serializable, SmartAssembly.Attributes.DoNotPrune]
        public class RedFlagResults
        {
            public List<RedFlag.Exception> Exceptions { get; set; }
        }

    Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its properties are excluded from obfuscation.

    Now my project has some protection from prying eyes by scrambling up the code so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging so that the end-user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and can be run directly from the .zip archive.
When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either in the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging.

    So why did Phil qualify his recommendation to use Table Variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; they've been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have been eventually resolved. In the meantime, what is the right approach?
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article

  • Ryan Weber On KCNext | #AJIReport

    - by Jeff Julian
    We sit down with Ryan Weber of KCNext in our office to talk about the Kansas City market for technology. The Technology Council of Greater Kansas City is committed to growing the existing base of technology firms, recruiting and attracting technology companies, aggregating and promoting our regional IT assets and providing peer interaction and industry news. During this show we talk about why KCNext is great for Kansas City. They offer some great networking and educational events, but also focus on connecting companies together to help build relationships on a business level. Make sure you visit their website to see what events are coming up and link up with them on Twitter to stay on top of news from the KC technology community. Listen to the Show Site: http://www.kcnext.com/ Twitter: @KCNext LinkedIn: KCNext - The Technology Council of Greater Kansas City

    Read the article

  • ANTS Memory Profiler 8 released!

    - by Ben Emmett
    I’m excited to say that we’ve just released ANTS Memory Profiler 8! The big news is support for profiling .NET’s usage of unmanaged memory. There are two main parts to this. Firstly you can see a breakdown of unmanaged memory usage by module. This lets you see at a high level where unmanaged memory is being used – for example in the image below, it’s being used by a PDF generation library. Separately, when looking at a list of .NET classes, you can see how much unmanaged memory those classes are responsible for holding on to. You can also see that information for individual instances of those classes.

    Some clues you might need this:

    You’re using system objects or 3rd party components which deal with unmanaged memory under the hood (this includes things like the GDI+ functions used for working with bitmaps)

    Your application still relies on some legacy Delphi / C++ / etc. code left over from the days before your company moved over to using .NET

    You’ve used a previous version of ANTS Memory Profiler, and have seen a pie chart that looks something like this:

    You’ll also notice that the startup process has been entirely redesigned, bringing it in line with ANTS Performance Profiler 8, which was released earlier in the year. This makes it faster to start profiling and to run repeat profiling sessions, lets you profile using any browser instead of Internet Explorer, and also provides a host of stability improvements, particularly when launching websites in IIS.

    Download the new version (there’s a free trial), and as always I’d love to know what you think – just email [email protected]. Cheers! Ben
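
    As a concrete illustration of the GDI+ clue above, here's a small hypothetical C# sketch (not from the release notes) of the kind of code whose cost the new unmanaged-memory views would surface: each System.Drawing.Bitmap keeps its pixel data in unmanaged memory, so a cache of bitmaps that is never cleared looks tiny to the garbage collector while holding on to a much larger unmanaged footprint.

        using System.Collections.Generic;
        using System.Drawing;

        // Hypothetical example: a thumbnail cache that quietly accumulates unmanaged memory.
        class ThumbnailCache
        {
            private readonly Dictionary<string, Bitmap> cache = new Dictionary<string, Bitmap>();

            public Bitmap GetThumbnail(string path)
            {
                Bitmap thumb;
                if (!cache.TryGetValue(path, out thumb))
                {
                    using (var original = new Bitmap(path))
                    {
                        // The pixel data of each Bitmap lives in unmanaged (GDI+) memory,
                        // so the managed Bitmap object itself looks very small to the GC.
                        thumb = new Bitmap(original, new Size(128, 128));
                    }
                    cache.Add(path, thumb);
                }
                return thumb;
            }

            // Without calling this (and Dispose on each Bitmap), the unmanaged memory held
            // by the cached thumbnails only becomes visible when profiling unmanaged usage.
            public void Clear()
            {
                foreach (var bitmap in cache.Values)
                {
                    bitmap.Dispose();
                }
                cache.Clear();
            }
        }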

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than the addict. So often nowadays, we hear the sad stories of Twitter-Widows; tales of long lonely evenings spent whilst their partners are engrossed in their twittering into their 'mobiles' or indulging in their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, even stooping to complain to their followers how boring the meeting is. No; The best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

  • Inside Red Gate - Introduction

    - by Simon Cooper
    I work for Red Gate Software, a software company based in Cambridge, UK. In this series of posts, I'll be discussing how we develop software at Red Gate, and what we get up to, all from a dev's perspective. Before I start the series proper, in this post I'll give you a brief background to what I have done and continue to do as part of my job. The initial few posts will be giving an overview of how the development sections of the company work. There is much more to a software company than writing the products, but as I'm a developer my experience is biased towards that, and so that is what this series will concentrate on. My background Red Gate was founded in 1999 by Neil Davidson & Simon Galbraith, who continue to be joint CEOs. I joined in September 2007, and immediately set to work writing a new Check for Updates client and server (CfU), as part of a team of 2. That was finished at the end of 2007. I then joined the SQL Compare team. The first large project I worked on was updating SQL Compare for SQL Server 2008, resulting in SQL Compare 7, followed by a UI redesign in SQL Compare 8. By the end of this project in early 2009 I had become the 'go-to' guy for the SQL Compare Engine (I'll explain what that means in a later post), which is used by most of the other tools in the SQL Tools division in one way or another. After that, we decided to expand into Oracle, and I wrote the prototype for what became the engine of Schema Compare for Oracle (SCO). In the latter half of 2009 a full project was started, resulting in the release of SCO v1 in early 2010. Near the end of 2010 I moved to the .NET division, where I joined the team working on SmartAssembly. That's what I continue to work on today. The posts in this series will cover my experience in software development at Red Gate, within the SQL Tools and .NET divisions. Hopefully, you'll find this series an interesting look at what exactly goes into producing the software at Red Gate.

    Read the article

  • GDC 2012: Best practices in developing a web game

    GDC 2012: Best practices in developing a web game (Pre-recorded GDC content) There's a new wave of console/PC/mobile game developers moving to the web, looking to take advantage of the massive user base, alongside the powerful social graphs available there. The web as a platform is a very different technology stack from consoles and mobile, and as such requires different development processes. This talk is targeted at game developers who want to understand more about the development processes for the web, including where to host your assets, proper techniques for caching to the persistent file store, dealing with sessions, storing user state, user login, game state storage, social graph integration, localization, audio, rendering, hardware detection, and testing/distribution. If you're interested in developing a web game, you need to attend this talk! Speaker: Colt McAnlis From: GoogleDevelopers Time: 01:03:52

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries”, and first up is SQL Server 2014's In-Memory OLTP technology (formerly codenamed “Hekaton”). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can enable us to ensure that an application can find in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent Hekaton use-cases article talks about applications that need a “shock absorber” when spikes, or just a high rate of incoming workload (including data in ETL scenarios), become the primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live-streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start defining memory-optimized tables and natively compiled stored procedures. Memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links for drilling deeper, including one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.
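    To make the terminology a little more concrete, here is a minimal, hypothetical T-SQL sketch of a memory-optimized table and a natively compiled stored procedure, assuming SQL Server 2014 syntax; the database, table, procedure, and file names are illustrative and are not taken from the keynote or the whitepaper:

-- A memory-optimized table first needs a MEMORY_OPTIMIZED_DATA filegroup and a container for it
-- (WidgetStaging and the paths below are hypothetical names for this sketch)
ALTER DATABASE WidgetStaging ADD FILEGROUP WidgetStaging_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE WidgetStaging ADD FILE (NAME = N'WidgetStaging_mod', FILENAME = N'C:\Data\WidgetStaging_mod')
    TO FILEGROUP WidgetStaging_mod;
GO
-- A "shock absorber" table held entirely in memory, with a hash index on the key
CREATE TABLE dbo.IncomingOrders
(
    OrderID    INT            NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload    NVARCHAR(2000) NOT NULL,
    InsertedAt DATETIME2      NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- A natively compiled stored procedure for the hot insert path; no locks or latches are taken
CREATE PROCEDURE dbo.usp_InsertOrder @OrderID INT, @Payload NVARCHAR(2000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT dbo.IncomingOrders (OrderID, Payload, InsertedAt)
    VALUES (@OrderID, @Payload, SYSUTCDATETIME());
END;
GO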

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing to show you before we start fixing things and looking at how that affects the data collected: historical moments in time. For example, back in Post #3 I was looking at some spikes in some of the monitored resources that were taking place a couple of weeks back in time. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, Global Overview, and click on the icon. From there you can select the date and time you're interested in; for example, I saw some serious CPU queues last week. This then rolls back the time for all the information that's available to the Global Overview, and to the drill-down to the server and the SQL Server instance there. This allows me to look at the Top Queries running at that point, sort them by CPU, and identify the query that was potentially causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent way to investigate your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time. You need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.
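    If you want to sanity-check what a Top Queries view reports against the server itself, here is a rough T-SQL sketch using the plan-cache DMVs. To be clear, this is not how the Red Gate tool gathers its data, and it only sees what is still in the plan cache, so it cannot reach back to an arbitrary historical moment the way the tool can:

-- Top 10 statements by total CPU time currently recorded in the plan cache
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;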

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not exactly equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, which become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_virtual
FROM DISK = N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
     NORECOVERY, STATS = 1, REPLACE
GO
RESTORE DATABASE mydatabase WITH RECOVERY

Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. In my case the production database is 8GB, yet the .vldf and .vmdf files created by the virtual restore represent the only additional storage used for the new database. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.
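    The benefit, as noted above, is magnified the more databases you mount against the same backup file. As a hedged illustration of that point, the following sketch mounts two further virtual copies from the same backup for two developers; the database and file names are hypothetical, and it assumes SQL Virtual Restore is installed and intercepting the .vmdf/.vldf extensions as described:

-- Hypothetical: two additional virtual databases mounted against the same .bak file;
-- only the tiny virtual data/log files differ, so the extra storage is trivial
RESTORE DATABASE WidgetProduction_Dev1
FROM DISK = N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vldf',
     STATS = 1
GO
RESTORE DATABASE WidgetProduction_Dev2
FROM DISK = N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vmdf',
     MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vldf',
     STATS = 1
GO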

    Read the article

  • Exceptional DBA 2011 Jeff Moden on why you should enter in 2012

    - by RedAndTheCommunity
    My "reign" as the Red Gate Exceptional DBA is almost over and I was asked to say a few words about this wonderful award. Having been one of those folks that shied away from entering the contest during the first 3 years of the award, I thought I'd spend the time encouraging DBAs of all types to enter. Winning this award has some obvious benefits. You win a trip to PASS including money towards your flight, paid hotel stay, and, of course, paid admission. You win a wonderful bundle of software from Red Gate to make your job as a DBA a whole lot easier. You also win some pretty incredible notoriety for your resume. After all, it's not everyone who wins a worldwide contest. To date, there are only 4 of us in the world who have won this award. You could be number 5! For me, all of that pales in comparison to what I found out during the entry process. I'm very confident in my skills, but I'm also humble. It was suggested to me that I enter the contest when it first started. I just couldn't bring myself to nominate myself. When the 2011 nomination period opened up, several people again suggested that I enter, so I swallowed hard and asked several co-workers to have a look at the online nomination form and, if they thought me worthy, to write a nomination for me. I won't bore you with the details, but what they wrote about me was one of the most incredible rewards that I could ever have hoped to receive. I had no idea of the impact that I'd made on my co-workers. Even if I hadn't made it to the top 5 for the award, I had already won something very near and dear that no one can ever top. "Even if I hadn't made it to the top 5 for the award, I had already won something very near and dear that no one can ever top." There's only one named winner and 4 "runners up" in this competition every year but don't let that discourage you. Enter this competition. Even if you work in the proverbial "Mom'n'Pop" shop, get your boss and the people you work with directly to nominate you. Even if you don't make it to the top 5, you might just find out that you're more of a winner than you think. If you're too proud to ask them, then take the time to nominate yourself instead of shying away like I did for the first 3 years. You work hard as a DBA and, as David Poole once said, if you're the first person that people ask for help rather than one of the last, then you're probably an Exceptional DBA. It's time to stand up and be counted! Win or lose, the entry process can be a huge reward in itself. It was for me. Thank you, Red Gate, for giving me such a wonderful opportunity. Thanks for listening folks and for all that you do as DBAs. As 'Red Green' says, "We're all in this together and I'm pullin' for ya". --Jeff Moden Red Gate Exceptional DBA 2011

    Read the article

  • Music before bells and whistles

    - by Tony Davis
    Why is it that Windows has so much difficulty in finding content on its file system? This is not an insurmountable technical problem; on my laptop, I have a database within which I can instantly find text or names within millions of records, within 300 milliseconds. I have a copy of Google Desktop that can find phrases within emails or documents almost as quickly. It is an important, though mundane, part of an operating system to be able to find files. The first thing I notice within Windows is that the facility to find files or text within files is called 'search' rather than 'find'. Hmm. This doesn't bode well. What's this? It does a brute-force search for file names? Here we are in an age when we can breed mice that glow in the dark, and manufacture computers that fit in our shirt pockets, and we find an operating system that is still entirely innocent of managing and indexing content in hierarchical data. I can actually read the files of my PC into a database, mimic the directory/folder hierarchies and then find files in a flash; but when I do the same with Windows Vista, we are suddenly back in a 1960s time warp. Finding files based on their name is bad enough, but finding files based on the content that they contain is more or less asking for an opportunity to wait 20 minutes in order to see a "file not found" message. Sadly, with Windows 7, Microsoft seems to have fallen into the familiar trap of adding bells and whistles before finishing the song. It's certainly true that Microsoft has added new features and a certain polish to Windows Search 4.0, the latest incarnation. It works more like a web search and offers a new search syntax, called Advanced Query Syntax, which allows you to search on file author, file size, date ranges (e.g. date:=7/4/09), and so on. Despite all this, it still does not work reliably. I've experienced first-hand its stubborn refusal, despite a full index, to acknowledge the existence of a file I know exists, based on a search for a specific term within that file that I know is in there somewhere; a file that Google Desktop search, or old wingrep, finds in seconds. When users hark back to the halcyon days of Windows XP search, you know something is seriously amiss. Shouldn't applications get the functionality right before applying animated menus and Teletubby graphics, or is advancing age making me grumpy? I'd be pleased to hear your views, as always. Cheers, Tony.

    Read the article
