Search Results

Search found 31491 results on 1260 pages for 'simple talk'.


  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away and I have been receiving lots of requests about how SQL enthusiasts can make the most of the time they’ll be spending at TechEd India 2012. Trust me – TechEd is the biggest tech event in India and it is much larger in magnitude than we can imagine. There are plenty of tracks there and lots of things to do. Honestly, we would need to clone ourselves multiple times to completely cover the event. However, I am going to talk about SQL enthusiasts only right now. In this post, I’ll share a few things they can do in this big event. But before I start talking about specific things, there is one thing which is a must – the keynotes. There are amazing keynotes planned for every single day at TechEd India 2012. One should not miss them at all.
    Social Media
    I am a big believer in social media. I am everywhere - Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn as well as contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers’ names with their respective Twitter accounts.
    Networking
    Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life outside SQL. The best way to start a conversation is to talk about something new. Here are a few lines I usually use when I have to break the ice:
    SQL Server 2012 has just been released and I have installed it.
    How many SQL Server sessions are you going to attend? I am going to attend _________
    I am a big fan of SQL Server.
    Sessions
    Agenda Day 1
    T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian
    Catapult your data with SQL Server 2012 integration services - Praveen Srivatsa
    Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte
    SQL Server Misconceptions and Resolution – A Practical Perspective - Pinal Dave and Vinod Kumar
    Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar
    Agenda Day 2
    Hands-on Lab – Exploring Power View with SQL Server 2012 - Ravi S. Maniam
    Hands-on Lab - SQL Server 2012 – AlwaysOn Availability Groups - Amit Ganguli
    Agenda Day 3
    Peeling SQL Server like an Onion: Internals Debunked - Vinod Kumar
    Speed Up! – Parallel Processes and Unparalleled Performance - Pinal Dave
    Keeping Your Database Available – ‘AlwaysOn’ - Balmukund Lakhani
    Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee
    Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa
    Product Booth and Event Partners
    There will be a dedicated SQL Server booth at the event. I suggest you stop by and talk with the SQL Server experts there. Additionally, there will be booths for various event partners. Stop by their booths and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site and, if that interests you, you can talk about the subject with them.
    Bring Your Camera
    Make a list of the people you want to meet. Follow them on Twitter or send them an email to find out where they will be. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them and, later on, share the photo on social media.
    It would also be nice to email everyone a high-resolution copy of the photo afterwards, if you have their email addresses.
    After-hours Parties
    After-hours parties are not always just about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting an SQL expert, and we ended up talking for hours about various aspects of SQL Server. After four hours, we figured out that he lives in the same apartment complex as I do and, having struck up an excellent friendship, he has since become a family friend. So, my advice is to find out who is meeting where in the evening and see if you can get invited to the parties. Make new friends, but never lose mutual respect by doing something silly.
    Meet Me
    I will be at the event for three days straight, and I will be around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the community is very important, as we all speak the same language at the end of the day – SQL Server.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Why not to use StackTrace to find what method called you

    - by Alex.Davies
    Our obfuscator, SmartAssembly, does some pretty crazy reflection. It's an obfuscator, it's sort of its job to do things in the most awkward way possible. But sometimes, you can go too far. One such time is this little gem from the strings encoding feature: StackTrace stackTrace = new StackTrace(); StackFrame frame = stackTrace.GetFrame(1); Type ownerType = frame.GetMethod().DeclaringType; It's designed to find the type where the calling method is defined. A user found that strings encoding occasionally broke on x64 systems. Very strange. After some debugging (thank god for Reflector Pro, it would be impossible to debug processed assemblies without it) I found that the ownerType I got back was wrong. The reason is that the x64 JIT does tail call optimisation. This saves space on the stack, and speeds things up, by throwing away a method's stack frame if the last thing that it calls is the only thing returned. When this happens, the call to StackTrace faithfully tells you that the calling method is the one that called the one we really wanted. So using StackTrace isn't safe for anything other than debugging, and it will make your code fail in unpredictable ways. Don't use it!
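
    To make the failure mode concrete, here is a minimal C# sketch (my own example, not from the post; the class and method names are made up) contrasting the fragile stack-walking pattern with a safer alternative that simply passes the declaring type explicitly:

        using System;
        using System.Diagnostics;

        static class StringsHelper
        {
            // The fragile pattern from the post: walk the stack to find the caller.
            // On the x64 JIT, tail call optimisation can throw away the caller's frame,
            // so GetFrame(1) silently returns the caller's caller instead.
            public static Type OwnerTypeFromStack()
            {
                StackFrame frame = new StackTrace().GetFrame(1);
                return frame.GetMethod().DeclaringType;
            }

            // Safer: make the caller state its own type at compile time,
            // so there is no stack walk left for the JIT to break.
            public static Type OwnerTypeExplicit(Type owner)
            {
                return owner;
            }
        }

        static class Demo
        {
            static void Main()
            {
                Console.WriteLine(StringsHelper.OwnerTypeFromStack());            // usually "Demo", but not guaranteed
                Console.WriteLine(StringsHelper.OwnerTypeExplicit(typeof(Demo))); // always "Demo"
            }
        }

    The explicit version costs one extra argument at each call site, but it leaves nothing for the JIT to optimise away.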

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying.
    SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache, when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined.
    Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time a page gets updated and then, as the page 'ages', it ticks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans etc.)?
    This is where our 'clock hands' come in. Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands: one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure – which, just to confuse things further, seems sometimes to be referred to as physical memory pressure – can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit", the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store. During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time further into the past (in effect, setting back the wristwatch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.
    There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equate to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure whether I'd learned anything that I could put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.

    Read the article

  • What do you call an obfuscator that isn't an obfuscator?

    - by Alex.Davies
    SmartAssembly, formerly {smartassembly}, version 5 is now available as an Early Access Build. You can get it here: http://www.red-gate.com/MessageBoard/viewforum.php?f=116 We're having second thoughts about the name change though. It isn't that we like the curly brackets, far from it. The trouble is that the first rule of product naming is to name a product by what it does. SmartAssembly may make an assembly smarter, but that's not something people really google for. The trouble is, I can't think of a better name for it. That's because SmartAssembly really does two completely separate things: Obfuscates Sets up your assembly for the awesome exception reports which get sent to you whenever your application crashes. You may have been (un?)lucky enough to see one in reflector if you use it. This is what those exception reports look like when they arrive back with the developer: Look at all those local variables! If you ask me, this is much cooler than the obfuscation. So obviously we don't want to call it just "Red Gate Obfuscator" or something, because it doesn't do justice to the exception reporting. What would you call it?
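
    For a sense of what the simplest form of automated error reporting involves, here is a hypothetical C# sketch (not SmartAssembly's actual API or implementation; the names are mine) of hooking unhandled exceptions. The real feature goes much further, capturing local variable values and producing readable reports from obfuscated assemblies, which a plain handler like this cannot do:

        using System;

        static class CrashReportingDemo
        {
            static void Main()
            {
                // Conceptually, the product injects something along these lines for you.
                // This handler only sees the exception object; it has no access to the
                // local variables that make the real reports so useful.
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                {
                    var ex = (Exception)e.ExceptionObject;
                    Console.Error.WriteLine("Unhandled exception: " + ex);
                    // ...package up a report and queue it for sending back to the developer...
                };

                // Trigger the handler (the process still terminates afterwards).
                throw new InvalidOperationException("Boom");
            }
        }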

    Read the article

  • SortedDictionary and SortedList

    - by Simon Cooper
    Apart from Dictionary<TKey, TValue>, there are two other dictionaries in the BCL - SortedDictionary<TKey, TValue> and SortedList<TKey, TValue>. On the face of it, these two classes do the same thing - provide an IDictionary<TKey, TValue> interface where the iterator returns the items sorted by the key. So what's the difference between them, and when should you use one rather than the other? (As in my previous post, I'll assume you have some basic algorithm & data structure knowledge.)
    SortedDictionary
    We'll first cover SortedDictionary. This is implemented as a special sort of binary tree called a red-black tree. Essentially, it's a binary tree that uses various constraints on how the nodes of the tree can be arranged to ensure the tree is always roughly balanced (for more gory algorithmic details, see the wikipedia link above). What I'm concerned about in this post is how the .NET SortedDictionary is actually implemented. In .NET 4, behind the scenes, the actual implementation of the tree is delegated to a SortedSet<KeyValuePair<TKey, TValue>>. One example tree might look like this:
    Each node in the above tree is stored as a separate SortedSet<T>.Node object (remember, in a SortedDictionary, T is instantiated to KeyValuePair<TKey, TValue>):
    class Node {
        public bool IsRed;
        public T Item;
        public SortedSet<T>.Node Left;
        public SortedSet<T>.Node Right;
    }
    The SortedSet only stores a reference to the root node; all the data in the tree is accessed by traversing the Left and Right node references until you reach the node you're looking for. Each individual node can be physically stored anywhere in memory; what's important is the relationship between the nodes. This is also why there is no constructor to SortedDictionary or SortedSet that takes an integer representing the capacity; there are no internal arrays that need to be created and resized. This may seem trivial, but it's an important distinction between SortedDictionary and SortedList that I'll cover later on. And that's pretty much it; it's a standard red-black tree. Plenty of webpages and data structure books cover the algorithms behind the tree itself far better than I could. What's interesting is the comparison between SortedDictionary and SortedList, which I'll cover at the end. As a side point, SortedDictionary has existed in the BCL ever since .NET 2. That means that, all through .NET 2, 3, and 3.5, there has been a bona fide sorted set class in the BCL (called TreeSet). However, it was internal, so it couldn't be used outside System.dll. Only in .NET 4 was this class exposed as SortedSet.
    SortedList
    Whereas SortedDictionary doesn't use any backing arrays, SortedList does. It is implemented just as the name suggests: two arrays, one containing the keys, and one the values (I've just used random letters for the values).
    The items in the keys array are always guaranteed to be stored in sorted order, and the value corresponding to each key is stored in the same index as the key in the values array. In this example, the value for key item 5 is 'z', and for key item 8 is 'm'. Whenever an item is inserted or removed from the SortedList, a binary search is run on the keys array to find the correct index, then all the items in the arrays are shifted to accommodate the new or removed item.
    For example, if the key 3 was removed, a binary search would be run to find the array index the item was at, then everything above that index would be moved down by one; and then if the key/value pair {7, 'f'} was added, a binary search would be run on the keys to find the index to insert the new item, and everything above that index would be moved up to accommodate the new item. If another item was then added, both arrays would be resized (to a length of 10) before the new item was added to the arrays. As you can see, any insertions or removals in the middle of the list require a proportion of the array contents to be moved; an O(n) operation. However, if the insertion or removal is at the end of the array (i.e. the largest key), then it's only O(log n): the cost of the binary search to determine that it does actually need to be added to the end (excluding the occasional O(n) cost of resizing the arrays to fit more items). As a side effect of using backing arrays, SortedList offers IList Keys and Values views that simply use the backing keys or values arrays, as well as various methods utilising the array index of stored items, which SortedDictionary does not (and cannot) offer.
    The Comparison
    So, when should you use one and not the other? Well, here are the important differences:
    Memory usage
    SortedDictionary and SortedList have got very different memory profiles. SortedDictionary...
    has a memory overhead of one object instance, a bool, and two references per item. On 64-bit systems, this adds up to ~40 bytes, not including the stored item and the reference to it from the Node object.
    stores the items in separate objects that can be spread all over the heap. This helps to keep memory fragmentation low, as the individual node objects can be allocated wherever there's a spare 60 bytes.
    In contrast, SortedList...
    has no additional overhead per item (only the reference to it in the array entries); however, the backing arrays can be significantly larger than you need, as every time the arrays are resized they double in size. That means that if you add 513 items to a SortedList, the backing arrays will each have a length of 1024. To counteract this, the TrimExcess method resizes the arrays back down to the actual size needed, or you can simply assign list.Capacity = list.Count.
    stores its items in a continuous block in memory. If the list stores thousands of items, this can cause significant problems with Large Object Heap memory fragmentation as the array resizes, which SortedDictionary doesn't have.
    Performance
    Operations on a SortedDictionary always have O(log n) performance, regardless of where in the collection you're adding or removing items. In contrast, SortedList has O(n) performance when you're altering the middle of the collection. If you're adding or removing from the end (i.e. the largest item), then performance is O(log n), same as SortedDictionary (in practice, it will likely be slightly faster, due to the array items all being in the same area in memory, also called locality of reference).
    So, when should you use one and not the other? As always with these sorts of things, there are no hard-and-fast rules. But generally, if you:
    need to access items using their index within the collection,
    are populating the dictionary all at once from sorted data, or
    aren't adding or removing keys once it's populated,
    then use a SortedList.
    But if you:
    don't know how many items are going to be in the dictionary,
    are populating the dictionary from random, unsorted data, or
    are adding & removing items randomly,
    then use a SortedDictionary. The default (again, there are no definite rules on these sorts of things!) should be to use SortedDictionary, unless there's a good reason to use SortedList, due to the bad performance of SortedList when altering the middle of the collection.
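
    As a quick illustration of the practical differences described above, here is a small C# sketch (my own example, not from the original post) showing the index-based members that only SortedList exposes, and the capacity management that only it needs:

        using System;
        using System.Collections.Generic;

        static class SortedDemo
        {
            static void Main()
            {
                // Both give you a dictionary that iterates in key order...
                var dict = new SortedDictionary<int, string> { { 5, "z" }, { 1, "a" }, { 8, "m" } };
                var list = new SortedList<int, string> { { 5, "z" }, { 1, "a" }, { 8, "m" } };

                // ...but only SortedList offers positional (index-based) access,
                // because it is backed by a pair of arrays.
                Console.WriteLine(list.Keys[0]);        // 1  (the smallest key)
                Console.WriteLine(list.Values[2]);      // m  (value stored at index 2)
                Console.WriteLine(list.IndexOfKey(8));  // 2

                // Only SortedList needs its backing arrays managing.
                list.TrimExcess();                      // shrink the arrays to fit
                list.Capacity = list.Count;             // or set the capacity directly

                // SortedDictionary has no Capacity or TrimExcess - there are no
                // backing arrays, just one tree node allocated per item.
                foreach (var pair in dict)
                    Console.WriteLine("{0} -> {1}", pair.Key, pair.Value);
            }
        }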

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food. In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a Windows version of the Java runtime, with bounds-checking, automatic garbage collection, structured exception handling and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name.
    The logical step for Microsoft would be to base the entire operating system on the .NET Framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET, "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive yet dormant as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly-dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security.
    So has Microsoft converted its existing applications to the .NET Framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, as it has for server applications with IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters, who committed themselves fully to the .NET Framework, only to find parts of the Ambivalent Microsoft Empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X architecture. Cheers, Laila

    Read the article

  • What Counts For a DBA: Ego

    - by Louis Davidson
    Leaving aside, for a second, Freud’s psychoanalytical definitions, the term “ego” generally refers to a person’s sense of self, and their self-esteem. In casual usage, however, it usually appears in the adjectival form, “egotistical” (most often followed by “jerk”). You don’t need to be a jerk to be a DBA; humility is important. However, ego is important too. A good DBA needs a certain degree of self-esteem…a belief and pride in what he or she can do better than anyone else can. The ideal DBA needs to be humble enough to admit when they are wrong but egotistical enough to know when they are right, and to stand up for that knowledge and make their voice heard. In most organizations, the DBA team is seriously outnumbered by headstrong developers and clock-driven managers, and “great” DBAs will often be outnumbered by…well…the not so great. In order to be heard in this environment, a DBA will not only need to be very skilled, but will also need a healthy dose of ego.
    As Freud might have put it, the unconscious desire of the DBA (the id) is for iron-fist control over their databases, and the code that runs in them. However, the ego moderates this desire, seeking to “satisfy the id in realistic ways that, in the long term, bring benefit rather than grief”. In other words, the ego understands the need to exert a measure of control and self-belief, but also to tolerate and play nicely with developers and other DBAs. The trick, naturally, is learning how to be heard when it is important, but also to make everyone around you welcome that input, even when you have to be bold to make the “I know what I am talking about, and you…well…not so much” decisions.
    Consider a baseball team, bottom of the ninth inning of the championship game, man on first and down one run. Almost anyone on that team will have the ability to hit a home run, but only one or two will have the iron belief that they can pull it off in this critical, end-game situation. The player you need in this situation is the one who has passionately gone the extra mile preparing for just this moment, is bursting at the seams with self-confidence, and can look the coach in the eye and state, boldly, “Put me in, I am your best bet”. Likewise, on those occasions when high customer demand coincides with copious system errors, and panic is bubbling just beneath the surface, you don’t need the minimally qualified support person, armed with the “reboot and hope” technique (though that sometimes works!). You need the DBA who steps up and says, “Put me in”, and who has the skill and tenacity to back up those words, and to pinpoint and fix the problem, whatever it takes, while keeping customers and managers happy.
    Of course, the egotistical DBA will happily spend hours telling you how great they are at their job, and how brilliantly they put out a previous fire, and this is no guarantee that they can deliver. However, if an otherwise-humble DBA looks you in the eye and says, “I can do it”, then hear them out. Sometimes, this burst of ego will be exactly what’s required.

    Read the article

  • .NET Reflector Pro to the rescue

    Almost all applications have to interface with components or modules written by somebody else, for which you don't have the source code. This is fine until things go wrong, but when you need to refactor your code and you keep getting strange exceptions, you'll start to wish you could place breakpoints in someone else's code and step through it. Now, of course, you can, as Geoffrey Braaf discovered.

    Read the article

  • Using SQL Sentry Plan Explorer

    - by fatherjack
    LiveJournal Tags: How To,SSMS,Tips and tricks,Execution Plans This is a quick tip that I hope will help you use SQL Sentry's Plan Explorer tool. It's a really great tool for viewing Execution Plans - something that SSMS isn't too great at. If you don't have the tool then you can download it for free from http://www.sqlsentry.net/plan-explorer/sql-server-query-view.asp. So, just a little setup is required before I can show you the tip in full. Create a directory on your Desktop called Execution...(read more)

    Read the article

  • Why lock statements don't scale

    - by Alex.Davies
    We are going to have to stop using lock statements one day, just like we had to stop using goto statements. The problem is similar: they're pretty easy to follow in small programs, but code with locks isn't composable. That means that small pieces of program that work in isolation can't necessarily be put together and work together. Of course actors scale fine :)
    Why lock statements don't scale as software gets bigger
    Deadlocks. You have a program with lots of threads picking up lots of locks. You already know that if two of your threads both try to pick up a lock that the other already has, they will deadlock. Your program will come to a grinding halt, and there will be fire and brimstone. "Easy!" you say, "Just make sure all the threads pick up the locks in the same order." Yes, that works. But you've broken composability. Now, to add a new lock to your code, you have to consider all the other locks already in your code and check that they are taken in the right order. Algorithm buffs will have noticed this approach means it takes quadratic time to write a program. That's bad.
    Why lock statements don't scale as hardware gets bigger
    Memory bus contention. There's another headache, one that most programmers don't usually need to think about, but which is going to bite us in a big way in a few years. Locking needs exclusive use of the entire system's memory bus while taking out the lock. That's not too bad for a single or dual-core system, but already for quad-core systems it's a pretty large overhead. Have a look at this blog about the .NET 4 ThreadPool for some numbers and a weird analogy (see the author's comment). Not too bad yet, but I'm scared my 1000-core machine of the future is going to go slower than my machine today! I don't know the answer to this problem yet. Maybe some kind of per-core work queue system with hierarchical work stealing. Definitely hardware support. But what I do know is that using locks specifically prevents any solution to this. We should be abstracting our code away from the details of locks as soon as possible, so we can swap in whatever solution arrives when it does. NAct uses locks at the moment. But my advice is that you code using actors (which do scale well as software gets bigger). And when there's a better way of implementing actors that'll scale well as hardware gets bigger, only NAct needs to work out how to use it, and your program will go fast on its own.
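
    Here is a minimal C# sketch (my own illustration, not NAct code) of the composability problem: two pieces of code that are each fine in isolation, but deadlock when run together because they take the same two locks in opposite orders:

        using System;
        using System.Threading;

        static class DeadlockDemo
        {
            static readonly object LockA = new object();
            static readonly object LockB = new object();

            // Takes A then B.
            static void Worker1()
            {
                lock (LockA)
                {
                    Thread.Sleep(100);   // widen the window for the race
                    lock (LockB) { Console.WriteLine("Worker 1 got both locks"); }
                }
            }

            // Takes B then A.
            static void Worker2()
            {
                lock (LockB)
                {
                    Thread.Sleep(100);
                    lock (LockA) { Console.WriteLine("Worker 2 got both locks"); }
                }
            }

            static void Main()
            {
                var t1 = new Thread(Worker1);
                var t2 = new Thread(Worker2);
                t1.Start(); t2.Start();
                // Each thread now holds one lock and waits for the other:
                // with the sleeps in place, these joins almost never return.
                t1.Join(); t2.Join();
            }
        }

    The usual fix, always taking LockA before LockB everywhere, works, but it is exactly the global, non-composable ordering rule the post describes.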

    Read the article

  • Implementing User-Defined Hierarchies in SQL Server Analysis Services

    To be able to drill into multidimensional cube data at several levels, you must implement all of the hierarchies on the database dimensions. Then you'll create the attribute relationships necessary to optimize performance. Analysis Services hierarchies offer plenty of possibilities for displaying the data that your business requires. Rob Sheldon continues his series on SQL Server Analysis Services 2008.

    Read the article

  • Importing PSTs with PowerShell in Exchange 2010 SP1

    Unless you use Red Gate's PST Importer, the import and export of PST files with Exchange 2010 is a complex and error-prone business. Microsoft have acknowledged this in the release of Exchange 2010 SP1, since they have now re-engineered the way that PSTs are handled to try and ease the pain of importing and exporting them, but it is still a matter of using Powershell with cmdlets, rather than a GUI. Jaap Wesselius takes a look at the new process.

    Read the article

  • An Introduction to Information Rights Management in Exchange 2010

    If you’re a Systems Administrator concerned about information security, you could do worse than implementing Microsoft’s Information Rights Management system; especially if you already have Active Directory Rights Management Services in place. Elie Bou Issa talks Hub Servers, Transport Protection Rules and Outlook integration in this excellent guide to getting started with IRM.

    Read the article

  • SQLBeat Podcast – Episode 7 – Niko Neugebauer, Linguist, SQL MVP and Hekaton Lover

    - by SQLBeat
    In this episode of the SQLBeat Podcast I steal Niko Neugebauer away from his guarded post at the PASS Community Zone at Summit 2012 in Seattle to chat with me about several intriguing topics. Mainly we discuss Hekaton and in-memory databases, languages of all sorts, Microsoft’s direction, Reporting Services and Java. Or was that Java Script? Probably best that I stick with what I know, and that is SQL Server. Niko, as always, is thoughtful and straightforward, congenial and honest. I like that about him and I know you will too. Enjoy! Download the MP3

    Read the article

  • Understanding Performance Profiling Targets

    In this sample chapter from his upcoming book, Paul Glavich explains performance metrics and walks us through the steps needed to establish meaningful performance targets. He covers many metrics such as "time to first byte" and explains why you should add some contingency into your estimated performance requirements.

    Read the article

  • What's the most important trait of a programmer desired by peer programmers [closed]

    - by greengit
    Questions here talk about the most important programming skills someone is supposed to possess, and a lot of great answers spring up. The answers and the questioners seem mostly interested in qualities related to getting a better job, nailing an interview, things desired by the boss or management, or improving your programming abilities. One thing that often gets glossed over is genuine consideration for what traits peers want. They don't much value things like being one of those 'gets things done' people for the company, or never missing a deadline, or your code being the least painful to review and debug, or you being a team player with fantastic leadership abilities. They probably care more about whether you're helpful, are a refreshing person to talk to, are fun to pair program with, or whether everybody wants you to review their code, or something of that nature. I, for one, would care that I come across to my peers as a pleasant programmer to work with, as much as I care about my impression on my boss or management. Is this really significant, and if yes, what are the most desirable traits peers want from each other?

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics, the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems. Mainstream support for SQL Server 2005 ends on April 12; many DBAS, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities. Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With Servers, the extra cost at time of upgrade will generally pay you back in terms of the increased performance of your business applications, reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server licence costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server license costs. Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows sign of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade. We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • Optimizing Memory Usage in a .NET Application with ANTS Memory Profiler

    Most people have encountered an OutOfMemory problem at some point or other, and these people know that tracking down the source of the problem is often a time-consuming and frustrating task. Florian Standhartinger gives us a walkthrough of how he used the ANTS Memory Profiler to help make an otherwise painful task that little bit less troublesome.

    Read the article

  • Regional Office Virtualization - Less Can Be More

    The dream of an 'office in a box' has been around for years, but the increasing sophistication of virtualization software has turned the dream into reality. Ben Lye explains the problems and benefits of reducing the amount of physical hardware deployed in his organisation's regional offices.

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we though it would be useful for people to see the changes and have an opportunity to comment on them. Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing and there may have been new bugs introduced during the development work we’ve been doing. The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change callsites to display calls using named parameters; this looks like hard work to get right. The forthcoming version of the C# language introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#: dynamic target = MyTestObject(); target.Hello("Mum"); We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile time types. There’s also the interface, IDynamicVariableDeclaration, that can be used to determine if a particular variable is used at dynamic call sites as a target. A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target and we have done some work to get the pdb generation right for these new features. The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed. Please try it out and send your feedback to the EAP forum.

    Read the article

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It’s Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation, with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It’s ten…no eight…minutes till it’s time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He’ll just wait and see. It was unlikely to come to anything; after all, it did say “possible” schema corruption, not definite. In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least took two of those eight minutes to delegate the task to a coworker, then he wouldn’t have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed until the entire system was down. When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or ‘Greek Tragedy’ ever written. The particular joy in opera, he told me, isn’t the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in “What’s Opera, Doc?” (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend’s analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: “Error handling? We write our code to never experience errors, so nah…“ “Backups failed today, but it’s okay, we’ll back up tomorrow (we’ll back up tomorrow)“ And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or poetry of the lyrics, but at the inevitable, grisly fate of the protagonists. If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven’t had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA’s day to day work has limited potential to wreak havoc; paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which has the possibility to greatly affect your future and the future of your user’s data. Make those decisions count, and you’ll avoid the tragic fate of many an operatic hero or villain.

    Read the article

  • Inside Red Gate - Experimenting In Public

    - by Simon Cooper
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users. External testing Some external testing can be done by standard usability tests and surveys, however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds: Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses. There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this does limit the amount of useful data we'll be getting back, and possibly skew the data, there's not much we can do about this; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this. What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot & scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this. Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it). The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution. So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem!

    Read the article

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) size exceeds 64KB for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine. The error most often seen is "The Parameter is Incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here, please note that it is intended). Exceeding this limit is not unheard of – it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult. Fortunately, there is a query that you can run on a content database that can help identify the issue:
    SELECT [SiteId],
        MIN([ScopeUrl]) AS URL,
        SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
        COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY SiteId
    ORDER BY AclSizeKB DESC
    This query results in a list of ACL sizes and entry counts on a site-by-site basis. You can also remove the grouping to see a more granular breakdown:
    SELECT [ScopeUrl] AS URL,
        SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
        COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY ScopeUrl
    ORDER BY AclSizeKB DESC

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article
