Search Results

Search found 31491 results on 1260 pages for 'simple talk'.


  • Defining .NET Components with Namespaces

    A .NET software component is a compiled set of classes that provides a programmable interface through which consumer applications use a service. If a component is no more than a logical grouping of classes, what then is the best way to define the boundaries of a component within the .NET framework? How should the classes inter-operate? Patrick Smacchia, the lead developer of NDepend, discusses the issues and comes up with a solution.

    Read the article

  • Social Networking at Professional Events

    Dr. Masha Petrova compresses, into a small space, much good advice on networking with other professional people. She draws from her own experience as a technical expert to provide a detailed checklist of things you should and shouldn't do at conferences or tradeshows to be a successful 'networker'. As usual, she delivers sage advice with a dash of humour.

    Read the article

  • Backing up Exchange 2010 For Free

    It's hardly surprising that many SysAdmins are willing to pay over the odds for sophisticated backup solutions which they don't necessarily need, just to make sure their data is safe ASAP. Thankfully, Antoine Khater is here to give you a short and sweet walkthrough on how to keep your Exchange 2010 Server backed up for free. And the best news? You've already got everything you need.

    Read the article

  • TortoiseSVN and Subversion Cookbook Part 3: In, Out, and Around

    Subversion doesn't have to be difficult, especially if you have Michael Sorens's guide at hand. Having dealt in previous articles with checkouts and commits, and with the various file-manipulation operations that Subversion requires, Michael now turns to file macro-management: the operations, such as putting things in and taking things out, that deal with repositories and projects.

    Read the article

  • Antenna Aligner Part 6: Little Robots

    - by Chris George
    A week ago I took temporary ownership of an HTC Desire S so that I could start testing my app under Android. Support for Android was not in my original plan, but when Nomad added support for it recently, I started thinking, why not! So with some trepidation, I clicked the Build for Android button on the Nomad toolbar... nothing. Hmm... that's not right, I was expecting something to build. After a bit of faffing around I finally realised that I hadn't read the text on the Android setup page properly (yes that's right, RTFM!), and I needed a two-part application identifier, separated by a dot. I did this (not sure what the two-part thing is all about; that one's on my list to investigate!). After making the change, the Android build worked and created the apk file. I uploaded this to the device and nervously ran it... it worked!!!

    Well, more or less! There was no splash screen, but this was no surprise because I only have the iOS icons and splash screen in my project at the moment. What was more concerning was that the compass update didn't seem to be working. I suspect this is a result of using an iOS-specific option in the Phonegap compass watcher. Another thing to investigate. I've also just noticed that the css gradient background hasn't worked either... These issues aside, it was actually more successful than I was expecting, so happy days! Right, let's get Googling...

    Next time: Preparing for submission to the App Store! :-)

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wade into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system, the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with a character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeroes or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But, when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO.

    But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896kb went to 275,871kb compressed; after SQLIO it went to 608kb compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
    Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
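    To see why a file full of 0x0 bytes makes any compression-related test meaningless, here is a small, self-contained C# sketch (my own illustration, not part of SQLIO or SQL Storage Compress; the 64MB buffer size and fixed random seed are arbitrary) that compares the GZip-compressed size of a null-filled buffer with one filled with random, incompressible data:

    using System;
    using System.IO;
    using System.IO.Compression;

    class CompressibilitySketch
    {
        // Compresses the buffer with GZip and reports the compressed length.
        static long CompressedSize(byte[] data)
        {
            using (var ms = new MemoryStream())
            {
                using (var gz = new GZipStream(ms, CompressionMode.Compress))
                {
                    gz.Write(data, 0, data.Length);
                }
                return ms.Length;
            }
        }

        static void Main()
        {
            const int size = 64 * 1024 * 1024;   // 64MB test buffer
            var nulls = new byte[size];          // what SQLIO writes: all 0x0 ("air")
            var random = new byte[size];         // stand-in for realistic, hard-to-compress data
            new Random(42).NextBytes(random);

            Console.WriteLine("null-filled: {0:N0} -> {1:N0} bytes", size, CompressedSize(nulls));
            Console.WriteLine("random data: {0:N0} -> {1:N0} bytes", size, CompressedSize(random));
        }
    }

    The null-filled buffer collapses to a tiny fraction of its original size, much like the 624,896kb file shrinking to 608kb above, while the random buffer barely compresses at all; which is exactly why a copied (and ideally already-compressed) database file is a far more honest workload for this kind of test.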

    Read the article

  • Improving Comparison Operators and Window Functions

    It is dangerous to assume that your data is sound. SQL already has intrinsic ways to cope with missing or unknown data in its comparison predicate operators, or Theta operators. Can SQL be more effective in the way it deals with data quality? Joe Celko describes how the SQL Standard could soon evolve to deal with data in ways that allow aggregation and windowing in cases where the data quality is less than perfect.

    Read the article

  • Creating WPF Prototypes with SketchFlow

    Prototyping with Sketchflow transforms what was once a frustrating and time-consuming chore. With SketchFlow, WPF prototypes can be created and changed with amazing ease. SketchFlow is WPF's secret weapon. Well, it was secret until Michael Sorens produced this article.

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code – expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

    if ([restriction is true]) { [invoke cached method] }

    Take my example from my previous post:

    public class ClassA {
        public static void TestDynamic() {
            CallDynamic(new ClassA(), 10);
            CallDynamic(new ClassA(), "foo");
        }
        public static void CallDynamic(dynamic d, object o) {
            d.Method(o);
        }
        public void Method(int i) {}
        public void Method(string s) {}
    }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

    if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) { thisClassA.Method(i); }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved: the last method invoked on the callsite; all methods that have ever been invoked on the callsite; and all methods that have ever been invoked on any callsite of the same type. We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite, and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method.
    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

    if ([restriction is true]) {
        [invoke cached method]
        return;
    }
    if (callSite._match) {
        _match = false;
        return;
    } else {
        UpdateAndExecute(callSite, arg0, arg1, ...);
    }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>.

    The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.
    This allows the code within each UpdateAndExecute method to check for cache matches like so:

    foreach (T cachedDelegate in Rules) {
        callSite._match = true;
        cachedDelegate(); // sets _match to false if restrictions do not pass
        if (callSite._match) {
            // passed restrictions, and the cached method was invoked
            // set this delegate as the primary target to invoke next time
            callSite.Target = cachedDelegate;
            return;
        }
        // no luck, try the next one...
    }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear – if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? Like so (I’ve omitted some implementation & performance details): That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
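    To make the stitching idea concrete, here is a deliberately simplified sketch of my own (it is not the real UpdateDelegates or binder code, and the method and parameter names are invented) showing how a restriction expression and a cached bind result can be combined into one compiled delegate, with a plain delegate standing in for the UpdateAndExecute fallback:

    using System;
    using System.Linq.Expressions;

    static class CallSiteSketch
    {
        // Stitches "if (restriction) { cached target } else { rebind }" into a
        // single expression tree and compiles it to IL, so later invocations run
        // the restriction check as compiled code.
        public static Action<object> Stitch(
            Expression<Func<object, bool>> restriction,
            Expression<Action<object>> cachedTarget,
            Action<object> rebind)
        {
            ParameterExpression arg = Expression.Parameter(typeof(object), "arg");

            Expression body = Expression.IfThenElse(
                Expression.Invoke(restriction, arg),                    // [restriction is true]
                Expression.Invoke(cachedTarget, arg),                   // [invoke cached method]
                Expression.Invoke(Expression.Constant(rebind), arg));   // fall back and rebind

            return Expression.Lambda<Action<object>>(body, arg).Compile();
        }

        static void Main()
        {
            Action<object> target = Stitch(
                o => o is int,                                          // reuse only for ints
                o => Console.WriteLine("cached Method(int): " + (int)o),
                o => Console.WriteLine("rebind needed for " + o.GetType().Name));

            target(10);      // restriction passes: the cached path runs
            target("foo");   // restriction fails: the fallback (rebind) runs
        }
    }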

    Read the article

  • A TDD Journey: 3- Mocks vs. Stubs; Test Frameworks; Assertions; ReSharper Accelerators

    Test-Driven Development (TDD) involves the repetition of a very short development cycle that begins with an initially-failing test that defines the required functionality, continues by producing the minimum amount of code to pass that test, and ends with refactoring the new code. Michael Sorens continues his six-part introduction to TDD (more of a journey, really) by implementing the first tests and introducing test doubles, test runners, constraints, and assertions.

    Read the article

  • Essential Tools for the WPF Novice

    When Michael sets out to do something, there are no half-measures; so when he set out to learn WPF, we all stand to benefit from the thorough research that he put into the task. He wondered what utility applications could assist programming in WPF. Here are the fruits of all his work.

    Read the article

  • What Counts for a DBA: Passion

    - by drsql
    One of my first questions, when interviewing for a DBA/Programmer position, is always: “Why do you want this job?” The answers I receive range from cheesy hyperbole (“I want to enhance your services with my vast knowledge”) to deadpan realism (“I have N kids who all have a hole in the front of their face where food goes”). Both answers are fine in their own way, at least displaying some self-confidence, humour and honesty, but once in a while, I'll hear the answer that is music to my ears... “I LOVE DATABASES!”

    Whenever I hear it, my nerves tingle in hopeful anticipation; have I found someone for whom working with databases isn't just a job, but a passion? Inevitably, I'm often disappointed. What initially seemed like passion turns out to be rather shallow enthusiasm; the person is enthusiastic about working with databases in the same way he or she might be about eating a bag of Cajun spiced kettle chips; enjoyable, but not something to think about too deeply or take too seriously. Enthusiasm comes, and enthusiasm goes. I've seen countless technical forum users burst onto the scene in a blaze of frantic question-answering, only to fade away within days, never to be heard from again.

    Passion, however, is more of a longstanding commitment. The biographies of the great technologists and authors of the recent past are full of the sort of passion and engrossment that lead a person to write a novel non-stop for a fortnight with no sleep and only dog food to eat (Philip K. Dick), or refuse to leave the works of the first tunnel under the Thames, even though it was flooded (Brunel). In a similar (though more modest) way, my passion for working with databases has led me to acts that might cause someone for whom it was "just a job" to roll their eyes in disbelief. Most evenings you're more likely to find me reading a database book than watching TV. I've spent hundreds of hours of my spare time writing blogs and articles (some of which are only read by tens of people); I've spent hundreds of dollars travelling to conferences, paying my own flight and hotel expenses, so that I can share a little of what I know, and mix with some like-minded people. And I know I'm far from alone in this, in the SQL Server community.

    Passion isn't everything, of course, and it isn't always accompanied by any great skill, but in almost every case, that skill can be cultivated over time. If you are doing what you are passionate about, work turns into more than just a way to feed your kids; it becomes your hobby, entertainment, and preoccupation. And it is this passion that gives a DBA the obsessive stubbornness, the refusal to be beaten by even the most difficult problem, which is often so crucial. A final word of warning though: passion without limits can turn weird. Never let it get in the way of your wife, kids, bills, or personal hygiene.

    Read the article

  • Resolving an App-Relative URL without a Page Object Reference

    - by Damon
    If you've worked with ASP.NET before then you've almost certainly seen an application-relative URL like ~/SomeFolder/SomePage.aspx. The tilde at the beginning is a stand-in for the application path, and it can easily be resolved using the Page object's ResolveUrl method:

    string url = Page.ResolveUrl("~/SomeFolder/SomePage.aspx");

    There are times, however, when you don't have a Page object available and you need to resolve an application-relative URL. Assuming you have an HttpContext object available, the following method will accomplish just that:

    public static string ResolveAppRelativeUrl(string url)
    {
        return url.Replace("~", System.Web.HttpContext.Current.Request.ApplicationPath);
    }

    It just replaces the tilde with the application path, which is essentially all the ResolveUrl method does.
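    One caveat with the Replace approach is worth noting: when the application is installed at the site root, ApplicationPath is just "/", so the result starts with a double slash, and any other tilde in the string gets replaced too. As an alternative sketch (assuming the standard System.Web API and a plain virtual path with no query string; the container class name here is hypothetical), VirtualPathUtility will resolve an app-relative path without needing a Page reference:

    using System.Web;

    public static class UrlHelper   // hypothetical container class
    {
        public static string ResolveAppRelativeUrl(string url)
        {
            // Resolves "~/SomeFolder/SomePage.aspx" against the application's
            // virtual root; ToAbsolute expects a path without a query string.
            return VirtualPathUtility.ToAbsolute(url);
        }
    }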

    Read the article

  • An experiment: unlimited free trial

    - by Alex.Davies
    The .NET Demon team have just implemented an experiment that is quite a break from Red Gate's normal business model. Instead of the tool expiring after the trial period, it now continues to work, but with a new message that appears after the tool has saved you a certain amount of time. The rationale is that a user that stops using .NET Demon because the trial expired isn't doing anyone any good. We'd much rather people continue using it forever, as long as everyone that finds it useful and can afford it still pays for it. Hopefully the message appearing is annoying enough to achieve that, but not for people to uninstall it. It's true that many companies have tried it before with mixed results, but we have a secret weapon. The perfect nag message? The neat thing for .NET Demon is that we can easily measure exactly how much time .NET Demon has saved you, in terms of unnecessary project builds that Visual Studio would have done. When you press F5, the message shows you the time saved, and then makes you wait a shorter time before starting your application. Confronted with the truth about how amazing .NET Demon is, who can do anything but buy it? The real secret though, is that while you wait, .NET Demon gives you entertainment, in the form of a picture of a cute kitten. I've only had time to embed one kitten so far, but the eventual aim is for a random different kitten to appear each time. The psychological health benefits of a dose of kittens in the daily life of the developer are obvious. My only concern is that people will complain after paying for .NET Demon that the kittens are gone.

    Read the article

  • Antenna Aligner Part 7: Connecting the dots

    - by Chris George
    The app is basically ready, so I eagerly started to sort out creating the application entry in iTunes Connect. It's mostly intuitive actually, although I did have to create yet another icon for iTunes, sized 512x512 pixels; damn lucky I did the original graphics as vector! It took me longer to write the application description than anything else, I'm so not a tech author! I didn't like the way you have to 'make up' an SKU (Stock Keeping Unit) number. I had to do some googling to find out that it really doesn't matter what it is! It should be more obvious what to do from the actual website itself. That aside, the rest of it was actually fairly straightforward.

    As well as the details of the application, iPhone and iPad screenshots were also required. This posed somewhat of a problem. The iPhone ones were easy (as I have one!), but I do not (yet) own an iPad. So I thought I'd leave the iPad screenshots out for now.

    Once the application details were sorted, I moved onto the rights and pricing. At the start of the project I had made the decision that I wouldn't charge any more than the lowest amount, £0.59. I believe there is a market for this, but as my first foray into app development I didn't want to take the mick. I did realise, however, that I had built my app with a developer certificate and provisioning profile. This was fairly quickly corrected, and again Nomad made this very easy to switch over to the distribution certificate and provisioning profile.

    With a sense of excitement I cracked open iTunes Connect and clicked the upload button...

    ...slight snag. When the Nomad project was started, Apple allowed uploads of these binaries via iTunes Connect. But this is no longer possible; the only upload path is via the Application Loader available from the Apple Developer program. This itself has one limitation: it only runs on a Mac! D'OH!!! Actually my language was somewhat more colourful when this fact came to light. After picking my laptop up off the floor and putting it back together... ok, only joking, but I did nearly throw it out of frustration!... I started to consider the options:

    • I briefly entertained the idea of buying a cheap Mac from ebay... no, that defeats the whole object of what I'm doing, plus my wife wouldn't be impressed.
    • There are some guys out there on the interweb who will upload your app for a small fee... but I don't really like the idea of giving some faceless email address my Apple developer login details, as well as my app binary!
    • Find some willing friend with a Mac who would kindly let me use it... obviously this is the only sensible option.

    In the meantime, I informed the Nomad team about this slight 'issue' and they are currently investigating possible solutions...

    Read the article

  • Antenna Aligner Part 8: It's Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the App Store and... "Waiting for review"... fast forward 7 days and much checking of emails later: WOO HOO! Now what? So I set my facebook page to go live, https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new facebook timeline layout is just confusing, and I'm not sure it's all public, I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions... Oh and watch out for the Android version coming soon!

    Read the article

  • Windows 8 and the future of Silverlight

    - by Laila
    After Steve Ballmer's indiscreet 'MisSpeak' about Windows 8, there has been a lot of speculation about the new operating system. We've now had a few glimpses, such as the demonstration of 'Mosh' at the D9 2011 conference, and the YouTube video, which showed a touch-centric new interface for apps built using HTML5 and JavaScript. This has caused acute anxiety to the programmers who have followed the recommended route of WPF, Silverlight and .NET, but it need not have caused quite so much panic since it was, in fact, just a thin layer to make Windows into an apparently mobile-friendly OS.

    More worryingly, the press-release from Microsoft was at pains to say that 'Windows 8 apps use the power of HTML5, tapping into the native capabilities of Windows using standard JavaScript and HTML', as if all thought of Silverlight, dominant in WP7, had been jettisoned. Ironically, this brave new 'happening' platform can all be done now in Windows 7 and an iPad, using Adobe Air, so it is hardly cutting-edge; in fact the tile interface had a sort of Retro-Zune Metro UI feel first seen in Media Centre, followed by Windows Phone 7, with any originality leached out of it by the corporate decision-making process.

    It was kinda weird seeing old Excel stodgily running along amongst all the extreme paragliding videos. The ability to snap and resize concurrent apps might be a novelty on a tablet, but it is hardly so on a PC. It was at that moment that it struck me that here was a spreadsheet application that hadn't even made the leap to the .NET platform. Windows was once again trying to be all things to all men, whereas Apple had carefully separated Mac OS X development from iOS. The acrobatic feat of straddling all mobile and desktop devices with one OS is looking increasingly implausible. There is a world of difference between an operating system that facilitates business procedures and one that drives a device for playing pop videos and your holiday photos.

    So where does this leave Silverlight? Pretty much where it was. Windows 8 will support it, and it will continue to be developed, but if these press-releases reflect the thinking within Microsoft, it is no longer seen as the strategic direction. However, Silverlight is still there and there will be a whole new set of developer APIs for building touch-centric apps. Jupiter, for example, is rumoured to involve an App store that provides new, Silverlight-based "immersive" applications that are deployed as AppX packages. When the smoke clears, one suspects that JavaScript/HTML5 is merely an alternative development environment for Windows 8 to attract the legions of independent developers outside the .NET culture who are unlikely to ever take a shine to a more serious development environment such as WPF or Silverlight.

    Cheers, Laila

    Read the article

  • 48hrs in Cambridge.

    - by Fatherjack
    In just over 2 weeks something pretty big in the SQL Server Community in the UK is taking place. We are going to witness the first SQL Saturday on these shores. The event is running in Cambridge, the home of the SQL Cambs user group and the chapter leader there (Mark Broadbent) is the lead on the SQL Saturday event too. Mark and his team are making final preparations and looking forward to this event getting started with the Pre-Con day on Friday 7th Sept. They have 3 great sessions from Buck Woody, Jen Stirrup and Mark Rasmussen for those lucky enough to be able to attend on the Friday. There are over 30 speakers providing 4 tracks of sessions on the Saturday so there will be plenty to interest and inform anyone working with SQL Server, take a look at all the sessions on the schedule. In addition to all of this you will be able to spend some quality time talking to all the other attendees, sponsors and PASS representatives to make the most of your time there. If you haven’t registered yet then head over to http://sqlcambs.org.uk/ and get your name down to attend this milestone event.

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some, due to compliance and security issues, the need for direct control over data, and so on.

    In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD) and Azure storage (the unstructured 'BLOB and Entity-Attribute-Value' NoSQL alternative, which equates more closely with folders and files than a database), Windows Azure offers a wide range of storage options, including the use of services such as OData: developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud, along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders. Once an order is captured, the financial side can be processed in our own data center.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you.

    Cheers, Tony.
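    As a purely illustrative sketch of that hybrid split (every type and member name here is hypothetical, and the in-memory queue merely stands in for whatever durable queueing mechanism, such as an Azure queue, would actually join the two halves), the cloud-hosted piece does nothing but capture orders onto a queue, while the payment side drains that queue inside the corporate datacenter:

    using System;
    using System.Collections.Generic;

    public class TicketOrder
    {
        public string EventName;
        public int Seats;
        public string PaymentToken;
    }

    public interface IOrderQueue
    {
        void Enqueue(TicketOrder order);
        bool TryDequeue(out TicketOrder order);
    }

    // Cloud-hosted component: scaled out in Azure to absorb short bursts of demand.
    public class OrderCaptureService
    {
        private readonly IOrderQueue _queue;
        public OrderCaptureService(IOrderQueue queue) { _queue = queue; }
        public void Capture(TicketOrder order) { _queue.Enqueue(order); }
    }

    // On-premises component: the financial side stays in the corporate datacenter.
    public class PaymentProcessor
    {
        private readonly IOrderQueue _queue;
        public PaymentProcessor(IOrderQueue queue) { _queue = queue; }
        public void ProcessPending()
        {
            TicketOrder order;
            while (_queue.TryDequeue(out order))
            {
                Console.WriteLine("Charging card for {0} ({1} seats)", order.EventName, order.Seats);
            }
        }
    }

    // A trivial in-memory queue so the sketch runs end-to-end; a real hybrid
    // application would use a durable queue reachable from both environments.
    public class InMemoryOrderQueue : IOrderQueue
    {
        private readonly Queue<TicketOrder> _items = new Queue<TicketOrder>();
        public void Enqueue(TicketOrder order) { _items.Enqueue(order); }
        public bool TryDequeue(out TicketOrder order)
        {
            if (_items.Count == 0) { order = null; return false; }
            order = _items.Dequeue();
            return true;
        }
    }

    public class Program
    {
        public static void Main()
        {
            IOrderQueue queue = new InMemoryOrderQueue();
            new OrderCaptureService(queue).Capture(new TicketOrder { EventName = "Concert", Seats = 2, PaymentToken = "tok-abc" });
            new PaymentProcessor(queue).ProcessPending();
        }
    }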

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as a number of our SQL friends. For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive.

    Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as does jogging around a dewy park on Saturday morning.

    In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you’ve never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child’s play.

    For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to backup, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill. Making it natural then requires repetition, and experience is the primary requirement, not simply learning about a topic.

    The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is still a task that is not yet deeply ingrained in my brain muscle memory.

    On the other hand, relational database design is something that no matter how much or how little I may get to do it, I am comfortable doing it. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people’s design issues in forums (though not nearly as much as I would like to lately). Working through other people’s problems is a great way to exercise your brain on problems with which you’re not immediately familiar.

    The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help.) Never are the designs truly fleshed out, but enough to work through structures and processes.  On “paper”, I have designed database systems to catalog things as trivial as my Lego creations, rental car companies and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red-Gate’s Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don’t run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.

    Read the article

  • What Counts for a DBA: Humility

    - by drsql
    In football (the American sort, naturally,) there are a select group of players who really hope to never have their names called during the game. They are members of the offensive line, and their job is to protect other players so they can deliver the ball to the goal to score points. When you do hear their name called, it is usually because they made a mistake and the player that they were supposed to protect ended up flat on his back admiring the clouds in the sky instead of advancing towards the goal to score points. Even on the rare occasion their name is called for a good reason, it is usually because they were making up for a teammate who had made a mistake and they covered up for them.

    The role of offensive lineman is a very good analogy for the role of the admin DBA. As a DBA, you are called on to be barely visible and rarely heard, protecting the company data assets tenaciously, even though the enemies to our craft surround us on all sides:

    Developers: Cries of ‘foul!’ often ensue when the DBA says that they want data integrity to be stringently enforced and that documentation is needed so they can support systems, mostly because every error occurrence in the enterprise will be initially blamed on the database and fall to the DBA to troubleshoot. Insisting too loudly may bring those cries of ‘foul’ that somewhat remind you of when your 2 year old daughter didn't want to go to bed. The result of this petulance is that the next "enemy" gets involved.

    Managers: The concerns that motivate DBAs to argue will not excite the kind of manager who gets his technical knowledge from a glossy magazine filled with buzzwords, charts, and pretty pictures. However, the other programmers in the organization will tickle the buzzword void with a stream of new-sounding ideas and technologies constantly, along with warnings that if we did care about data integrity and document things, the budget would explode! In contrast, the arguments for integrity of data and supportability tend to be about as exciting as watching grass grow, and far too many manager types seem to prefer to smoke it than watch it.

    Packaged Applications: The DBA is rarely given a chance to review a new application that is being demonstrated for the enterprise, and rarer still is the DBA that gets a veto of an application because the database it uses has clearly been created by an architect that won't read a data modeling book because he is already married. More often than not this leads to hours of work for the DBA trying to performance-tune a database with a menagerie of rules that must be followed to stay within the application support agreement, such as no changing indexes on a third-party schema even though there are 10 billion rows instead of the 10 thousand when the system was last optimized.

    Hardware Failures: Physical disks, networking devices, memory, and backup devices all come with a measure known as ‘mean time before failure’ and it is never listed in centuries or eons. More like years, and the term ‘mean’ indicates that half of the devices are expected to fail before that, which by my calendar means any hour of any day that it wants to fail it will.

    But the DBA sucks it up and does the task at hand with a humility that makes them nearly invisible to all but the most observant person in the organization. The best DBAs I know are so proactive in their relentless pursuit of perfection that they detect many of the bugs (which they seldom caused) in the system well before they become a problem.
    In the end the DBA gets noticed for one of the same two reasons as the offensive lineman. You make a mistake, like dropping a critical production database that had never been backed up; or a system crashes for any reason whatsoever and they are on the spot with troubleshooting and system restoration plans that have been well thought out, tested, and tested again. Not because there is any glory in it, but because it is what they do.

    Note: The characteristics of the professions referred to in this blog are meant to be overstated stereotypes for humorous effect, and even some DBAs aren't quite this perfect. If you are reading this far and haven’t hand-written a 10-page flaming comment about how you are a _______ and you aren’t like this, that is awesome. Not every situation applies to everyone, but if you have never worked with a bad packaged app, a magazine-trained manager, programmers that aren’t team players, or hardware that occasionally failed, relax and go have a unicorn sandwich before you wake up.

    Read the article

  • SSIS and Parallelism: The Unseen Minions

    Sometimes, a procedural database process cannot easily be reduced to a set-based algorithm in order to reduce the time it takes. Then, you have to find other ways to parallelise it. Other ways? Josef shows how to use SSIS to drastically reduce the time that such a process takes.

    Read the article

  • Active Directory Snapshots with Windows Server 2008

    Snapshots are a useful feature of Windows Server 2008. Taking a snapshot of Active Directory as a scheduled task can prove to be a wise precaution in case disaster strikes. Once they are mounted, they can be accessed by any LDAP tool which allows the user to specify a host name and port number. Ben Lye shows how you can restore attributes to a large number of broken distribution groups from a snapshot.

    Read the article

  • Welcome to the Red Gate BI Tools Team blog!

    - by BI Tools Team
    Welcome to the first ever post on the brand new Red Gate Business Intelligence Tools Team blog!

    About the team

    Nick Sutherland (product manager): After many years as a software developer and project manager, Nick took an MBA and turned to product marketing. SSAS Compare is his second lean startup product (the first being SQL Connect). Follow him on Twitter.

    David Pond (developer): Before he joined Red Gate in 2011, David made monitoring systems for Goodyear. Follow him on Twitter.

    Jonathan Watts (tester): Jonathan became a tester after finishing his media degree and joining Xerox. He joined Red Gate in 2004. Follow him on Twitter.

    James Duffy (technical author): After a spell as a writer in the video game industry, James lived briefly in Tokyo before returning to the UK to start at Red Gate.

    What we're working on

    We launched a beta of our first tool, SSAS Compare, last month. It works like SQL Compare but for SSAS cubes, letting you deploy just the changes you want. It's completely free (for now), so check it out. We're still working on it, and we're eager to hear what you think. We hope SSAS Compare will be the first of several tools Red Gate develops for BI professionals, so keep an eye out for more from us in the future.

    Why we need you

    This is your chance to help influence the course of SSAS Compare and our future BI tools. If you're a business intelligence specialist, we want to hear about the problems you face so we can build tools that solve them. What do you want to see? Tell us! We'll be posting more about SSAS Compare, business intelligence and our journey into BI in the coming days and weeks. Stay tuned!

    Read the article
