Search Results

Search found 1276 results on 52 pages for 'robert davis'.


  • Troubleshooting Your Network with Oracle Linux

    - by rickramsey
    Are you afraid of network problems? I was. Whenever somebody said "it's probably the network," I went to lunch. And hoped that it was fixed by the time I got back. Turns out it wasn't that hard to do a little basic troubleshooting. Tech Article: Troubleshooting Your Network with Oracle Linux, by Robert Chase. You're no doubt already familiar with ping. Even I knew how to use ping. Turns out there's another command that can show you not just whether a system can respond over the network, but the path the packets to that system take. Our blogging platform won't allow me to write the name down, but I can tell you that if you replace the x in this word with an e, you'll have the right command: tracxroute. Once you get used to those, you can venture into the realms of mtr, nmap, and netcap. Robert Chase explains how each one can help you troubleshoot the network, and provides examples for how to use them. Robert is not only a solid writer, he is also a brilliant motorcyclist and rides an MV Agusta F4 750. About the Photograph: Photo of flowers in San Simeon, California, taken by Rick Ramsey on a ride home from the Sun Reunion in May 2014. - Rick Follow me on: Personal Blog | Personal Twitter | Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • Need to Know

    - by Tony Davis
    Sometimes, I wonder whether writers of documentation, tutorials and articles stop to ask themselves one very important question: Does the reader really need to know this? I recently took on the task of writing a concise series of articles about the transaction log, what it is, how it works, and why it's important. It was an enjoyable task; rather like peering inside a giant, complex clock mechanism. Initially, one sees only the basic components, which work to guarantee the integrity of database transactions, and preserve these transactions so that data can be restored to a previous point in time. On closer inspection, one notices all the small, arcane mechanisms that are necessary to make this happen: LSNs, virtual log files, log chains, database checkpoints, and so on. It was engrossing, escapist stuff; what I'd written looked weighty and steeped in mysterious significance. Suddenly, however, I jolted myself back to reality with the awful thought "does anyone really need to know all this?" The driver of a car needs only to be dimly aware of what goes on under the hood, however exciting the mechanism is to the engineer. Similarly, while everyone who uses SQL Server ought to be aware of the transaction log, its role in guaranteeing the ACID properties, and how to control its growth, the intricate mechanisms ticking away under its clock face are a world away from the daily work of the harassed developer. The DBA needs to know more, such as the correct rituals for ensuring optimal performance and data integrity, setting the appropriate growth characteristics, backup routines, restore procedures, and so on. However, even then, the average DBA only needs to understand enough about the arcane processes to spot problems and react appropriately, or to know how to Google for the best way of dealing with them. The art of technical writing is tied up in intimate knowledge of your audience and what they need to know at any point. It means serving up just enough at each point to help the reader in a practical way, but not to overcook it, or stuff the reader with information that does them no good. When I think of the books and articles that have helped me the most, they have been full of brief, practical, and well-informed guidance, based on experience. This seems far removed from the 900-page "beginner's guides" that one now sees everywhere. The more I write and edit, the more I become convinced that the real art of technical communication lies in knowing what to leave out. In what areas do the SQL Server technical materials suffer from "information overload"? Where else does it seem that concise, practical advice is drowned out by endless discussion of the "clock mechanisms"? Cheers, Tony.
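    As a minimal illustration of that "just enough" level of knowledge, a couple of standard commands let a DBA keep an eye on log usage without ever lifting the clock face. This is only a sketch, and the database name is hypothetical:

        -- How full is each database's transaction log right now?
        DBCC SQLPERF (LOGSPACE);

        -- If the log can't be truncated, this says why (log backup overdue,
        -- a long-running transaction, replication, and so on).
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'MyDatabase';   -- hypothetical database name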

    Read the article

  • So it comes to PASS…

    - by Tony Davis
    How does your company gauge the benefit of attending a technical conference? What's the best change you made as a direct result of attendance? It's time again for the PASS Summit and I, like most people, go with a set of general goals for enhancing technical knowledge: to learn more about PowerShell, to drill into SQL Server performance tuning techniques, and so on. Most will write up a brief report on the event for the rest of the team. Ideally, however, it will go a bit further than that; each conference should result in a specific improvement to one of your systems, or in the way you do your job. As co-editor of Simple-talk.com, and responsible for the majority of our SQL books, my “high level” goals don't vary much from conference to conference. I'm always on the lookout for good new authors. I target interesting new technologies and tools and try to learn more. I return with a list of actions, new articles to commission, and potential new authors. Three years ago, however, I started setting myself the goal of implementing “one new thing” after each conference. After one, I adopted Kanban for managing my workload, a technique that places strict limits on “work in progress” and makes the overall workload, and backlog, highly visible. After another, I trialled a community book project. At PASS 2010, one of my general goals was to delve deeper into SQL Server transaction log mechanics, but on top of that, I set a specific goal of writing something useful on the topic. I started a Stairway series and, ultimately, it's turned into a book! If you're attending the PASS Summit this year, take some time to consider what specific improvement or change you'll implement as a result. Also, try to drop by the Red Gate booth (#101). During the Vendor event on Wednesday evening, Gail Shaw and I will be there to discuss, and hand out copies of, the book. Cheers, Tony.

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of User-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of the users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help. If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page-after-page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time. Much more satisfying, and more direct to most developers and DBAs, however, would be the facility to communicate to the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!" or "Don't forget which of us has the close button"). Although, to avoid too much cultural dependence, perhaps we should take the lead from emoticons, and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or when an online-survey is getting too personal. Or a 'Lingua Glaswegia' perhaps: 'Atsabelter' ("very good") 'Atspish' ("must try harder") 'AnThenYerArsFellAff' ("I don't quite trust these results") 'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions") There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.

    Read the article

  • On Writing Blogs

    - by Tony Davis
    Why are so many blogs about IT so difficult to read? Over at SQLServerCentral.com, we do a special subscription-only newsletter called Database Weekly. Every other week, it is my turn to look through all the blogs, news and events that might be of relevance to people working with databases. We provide the title, with the link, and a short abstract of what you can expect to read. It is a popular service with close to a million subscribers. You might think that this is a happy and fascinating task. Sometimes, yes. If a blog comes to the point quickly, and says something both interesting and original, then it has our immediate attention. If it backs up what it says with supporting material, then it is more-or-less home and dry, featured in DBW's list. If it also takes trouble over the formatting and presentation, maybe with an illustration or two and any code well-formatted, then we are agog with joy and it is marked as a must-visit destination in our blog roll. More often, however, a task that should be fun becomes a routine chore, and the effort of trawling so many badly-written blogs is enough to make any conscientious Health & Safety officer whistle through their teeth at the risk to the editor's spiritual and psychological well-being. And yet, frustratingly, most blogs could be improved very easily. There is, I believe, a simple formula for a successful blog. First, choose a single topic that is reasonably fresh and interesting. Second, get to the point quickly; explain in the first paragraph exactly what the blog is about, and then stay on topic. In writing the first paragraph, you must picture yourself as a pilot, hearing the smooth roar of the engines as your plane gracefully takes to the air. Too often, however, the accompanying sound is that of the engine stuttering before the plane veers off the runway into a field, and a wheel falls off. The author meanders around the topic without getting to the point, and takes frequent off-radar diversions to talk about themselves, or the weather, or which friends have recently tagged them. This might work if you're J.D. Salinger, or James Joyce, but it doesn't help a technical blog. Sometimes, the writing is so convoluted that we are entirely defeated in our quest to shoehorn its meaning into a simple summary sentence. Finally, write simply, in plain English, and in a conversational way such that you can read it out loud, and sound natural. That's it! If you could also avoid any references to The Matrix, then that is a bonus, but it is purely personal preference. Cheers, Tony.

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server Database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data, to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen; SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types".
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
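    For anyone sitting on a database upgraded from SQL Server 2000, the check described above is a one-liner, though the clean-up is not. A sketch only; the database name is hypothetical:

        -- Hunt for out-of-range values (NaN or infinity in FLOAT columns,
        -- invalid DATETIME values, etc.) left over from an upgraded database.
        -- This reports the offending rows; fixing them is a manual job.
        DBCC CHECKDB (N'MigratedDatabase') WITH DATA_PURITY;

    Once the check completes cleanly on an upgraded database, subsequent CHECKDB runs include the purity checks by default.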

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don’t they? Well, maybe, but as with all statements regarding what “great” developers do or don’t do, it’s probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don’t repeat yourself), moving logic into specific “helper” modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse. Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion, then the potential for reuse simply “drops out” as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don’t expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which if we write with enough care we can plug in to many places. I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I’d add), he’d never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He’d often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had, in his head, most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it. Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the “same” code, it’s likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing. Even if you’re used to speaking English, you may be brought up short by this sort of verbiage… "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting" It takes some imagination to appreciate the added hassle in working out what it means, when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again. "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data" Not easy eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means … "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data" Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn’t just the problem of writers who use long words to give added dignity to their prose. It is the use of Colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, ‘International English', is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn, and use it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • It’s the thought that counts…

    - by Tony Davis
    I recently finished editing a book called Tribal SQL, and it was a fantastic experience. It’s a community-sourced book written by first-timers. Fifteen previously unpublished authors contributed one chapter each, with the seemingly simple remit to write about “what makes them passionate about working with SQL Server, something that all SQL Server DBAs and developers really need to know”. Sure, some of the writing skills were a bit rusty, as one would expect from busy people, but the ideas and energy were sheer nectar. Any seasoned editor can deal easily with the problem of fixing the output of untrained writers. We can handle the occasional technical error too, which is why we have technical reviewers. The editor’s real job is to hone the clarity and flow of ideas, making the author’s knowledge and experience accessible to as many others as possible. What the writer needs to bring, on the other hand, is enthusiasm, attention to detail, common sense, and a sense of the person behind the writing. If any of these are missing, no editor can fix it. We can see these essential characteristics in many of the more seasoned and widely-published writers about SQL. To illustrate what I mean by enthusiasm, or passion, take a look at the work of Laerte Junior or Fabiano Amorim. Both authors have English as a second language, but their energy, enthusiasm, sheer immersion in a technology and thirst to know more, drive them, with a little editorial help, to produce articles of far more practical value than one can find in the “manuals”. There’s the attention to detail of the likes of Jonathan Kehayias, or Paul Randal. Read their work and one begins to understand the knowledge, coupled with incredible rigor, the willingness to bend and test every piece of advice offered to make sure it’s correct, that marks out the very best technical writing. There’s the common sense of someone like Louis Davidson. All writers, including Louis, like to stretch the grey matter of their readers, but some of the most valuable writing is that which takes a complicated idea, or distils years of experience, and expresses it in a way that sounds like simple common sense. There’s personality and humor. Contrary to what you may have been told, they can and do mix well with technical writing, as long as they don’t become a distraction. Read someone like Rodney Landrum, or Phil Factor, for numerous examples of articles that teach hard technical lessons but also make you smile at least twice along the way. Writing well is not easy and it takes a certain bravery to expose your ideas and knowledge for dissection by others, but it doesn’t mean that writing should be the preserve only of those trained in the art, or best left to the MVPs. I believe that Tribal SQL is testament to the fact that if you have passion for what you do, and really know your topic then, with a little editorial help, you can write, and people will learn from what you have to say. You can read a sample chapter, by Mark Rasmussen, in this issue of Simple-Talk and I hope you’ll consider checking out the book (if you needed any further encouragement, it’s also for a good cause, Computers4Africa). Cheers, Tony.

    Read the article

  • Get the onended event for an AudioBuffer in HTML5/Chrome

    - by Matthew James Davis
    So I am playing an audio file in Chrome and I want to detect when playing has ended so I can delete references to it. Here is my code:

        var source = context.createBufferSource();
        source.buffer = sound.buffer;
        source.loop = sound.loop;
        source.onended = function() {
            delete playingSounds[soundName];
        };
        source.connect(mainNode);
        source.start(0, sound.start, sound.length);

    However, the event handler doesn't fire. Is this not yet supported, as described by the W3 specification? Or am I doing something wrong?

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team." I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits in to the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project. When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different; especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately-complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness. Cheers, Tony.
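    For illustration only, the sort of header I have in mind looks something like this. The procedure and table are invented, and the exact layout matters far less than the plain-English summary and the usage example:

        /*
          Procedure: dbo.GetOverdueInvoices  (hypothetical example)
          Purpose:   Returns all invoices past their due date, for the nightly
                     credit-control report and the account-manager dashboard.
          Inputs:    @AsOfDate - the date against which 'overdue' is judged;
                     defaults to today when NULL.
          Usage:     EXEC dbo.GetOverdueInvoices @AsOfDate = '2010-11-01';
          Notes:     Read-only; safe to run at any time.
        */
        CREATE PROCEDURE dbo.GetOverdueInvoices
            @AsOfDate DATE = NULL
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT InvoiceID, CustomerID, DueDate, AmountOutstanding
            FROM dbo.Invoice
            WHERE DueDate < COALESCE(@AsOfDate, CAST(GETDATE() AS DATE))
              AND AmountOutstanding > 0;
        END;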

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either in the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use Table Variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; they've been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have been eventually resolved. In the meantime, what is the right approach? 
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.
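    As a bare-bones sketch of the scoping difference under discussion (table and column names are invented):

        -- Table variable: batch-scoped, cleaned up automatically when the batch
        -- ends, but carries no distribution statistics for the optimizer.
        DECLARE @Totals TABLE (CustomerID INT PRIMARY KEY, OrderTotal MONEY NOT NULL);

        INSERT INTO @Totals (CustomerID, OrderTotal)
        SELECT CustomerID, SUM(Amount)
        FROM dbo.Orders
        GROUP BY CustomerID;

        -- Local temporary table: gets statistics in tempdb, but persists until it
        -- is dropped or the creating connection is closed (or reused by the pool).
        CREATE TABLE #Totals (CustomerID INT PRIMARY KEY, OrderTotal MONEY NOT NULL);

        INSERT INTO #Totals (CustomerID, OrderTotal)
        SELECT CustomerID, SUM(Amount)
        FROM dbo.Orders
        GROUP BY CustomerID;

        DROP TABLE #Totals;   -- explicit clean-up; the table variable needs none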

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll quickly spot trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
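    The custom metric queries themselves are nothing special; each one simply has to return a single integer for SQL Monitor to collect and plot. Something along these lines, with table and column names invented for illustration (the real Community Server schema differs):

        -- Total approved comments across all articles; polled periodically and
        -- stored by the monitoring tool so that trends over time become visible.
        SELECT COUNT(*)
        FROM dbo.ArticleComments
        WHERE IsApproved = 1;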

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries” and first up is SQL Server 2014's In-Memory OLTP technology (formerly codenamed “Hekaton”). It is a feature that provokes a lot of interest and for good reason as, without any need for application rewrites or hardware updates, it can enable us to ensure that an application can find in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent Hekaton use-cases article talks about applications that need a “Shock Absorber” when either spikes or just a high rate of incoming workload (including data in ETL scenarios) becomes a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables, and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, and includes one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.
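    To give a flavour of what "Memory Optimized = On" looks like in T-SQL, here is a minimal sketch. The table, procedure and column names are mine, not from the keynote, and the database also needs a MEMORY_OPTIMIZED_DATA filegroup before any of this will run:

        -- A memory-optimized table with a hash index on its primary key.
        CREATE TABLE dbo.SessionState
        (
            SessionId   INT       NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId      INT       NOT NULL,
            LastTouched DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
        GO

        -- A natively compiled stored procedure that operates on that table.
        CREATE PROCEDURE dbo.TouchSession @SessionId INT
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            UPDATE dbo.SessionState
            SET LastTouched = SYSUTCDATETIME()
            WHERE SessionId = @SessionId;
        END;
        GO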

    Read the article

  • Music before bells and whistles

    - by Tony Davis
    Why is it that Windows has so much difficulty in finding content on its file system? This is not an insurmountable technical problem; on my laptop, I have a database within which I can instantly find text or names within millions of records, within 300 milliseconds. I have a copy of Google Desktop that can find phrases within emails or documents, almost as quickly. It is an important, though mundane, part of an operating system to be able to find files. The first thing I notice within Windows is that the facility to find files or text within files is called 'search' rather than 'find'. Hmm. This doesn’t bode well. What’s this? It does a brute-force search for file names? Here we are in an age when we can breed mice that glow in the dark, and manufacture computers that fit in our shirt pockets, and we find an operating system that is still entirely innocent of managing and indexing content in hierarchical data. I can actually read the files of my PC into a database, mimic the directory/folder hierarchies and then find files in a flash; but when I do the same with Windows Vista, we are suddenly back in a 1960s time warp. Finding files based on their name is bad enough, but finding files based on the content that they contain is more or less asking for an opportunity to wait 20 minutes in order to see a "file not found" message. Sadly, with Windows 7, Microsoft seems to have fallen into the familiar trap of adding bells and whistles before finishing the song. It's certainly true that Microsoft has added new features and a certain polish to Windows Search 4.0, the latest incarnation. It works more like a web search and offers a new search syntax, called Advanced Query Syntax, which allows you to search on file author, file size, date ranges (e.g. date:>7/4/09), and so on. Even so, it still does not work reliably. I've experienced first-hand its stubborn refusal, despite a full index, to acknowledge the existence of a file I know exists, based on a search for a specific term within that file that I know is in there somewhere; a file that Google Desktop search, or old wingrep, finds in seconds. When users hark back to the halcyon days of Windows XP search, you know something is seriously amiss. Shouldn't applications get the functionality right before applying animated menus and Teletubby graphics, or is advancing age making me grumpy? I’d be pleased to hear your views, as always. Cheers, Tony.

    Read the article

  • Going Metro

    - by Tony Davis
    When it was announced, I confess I was somewhat surprised by the striking new "Metro" User Interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night, and seemed quite out of character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint Windowing system. Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance here has been waning for some time, anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise, desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets. There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most of the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers". Metro certainly has given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality", as both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands, and can react to, the full range of gestures and screen-sizes, and has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MSDOS character-mode applications did in the nineties. Cheers, Tony.

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data, and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small, but complete, feature at a time, to find out how long it really takes for this feature to be "done done"; in other words done to the point where its performance and scalability is understood, it is tested for all conceivable edge cases and doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, unscalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, but quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and… sorry, I drifted off there for a second. On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight-fitting underwear in order to extract grateful occupants from burning database applications. Alas, it isn't true of the majority, but it isn't as dull as some people imagine, and is a helter-skelter ride compared with some other IT roles. Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations: Installation engineer - install all of the company's laptops and workstations, and software, deal with licensing, shipping and data entry… many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts. Localization engineer - Not quite software engineering, not quite translation, the job is to rebuild a product in a different language and make sure everything still works. QA Tester - firstly, I should say that the testers at Red Gate seem to me to be some of the most fulfilled in the company. I refer here to the QA Tester whose job is more-or-less entirely to read a script, click some buttons and make sure the actual and expected values match. Configuration manager - for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well-paid. So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions. Cheers, Tony.

    Read the article

  • Copy wrongs and Copyright

    - by Tony Davis
    Recently, a Chinese blog website copied, wholesale and without permission, a Simple-Talk article on troubleshooting locking and blocking. Our initial reaction was exasperation and anger, tempered slightly by the fact that there was, at the top, a clear link to the original, and the book from which it was extracted. On the day the copy was posted, our original article saw a 30K spike in visits, so the site clearly has a substantial following! This made us pause for thought. Indeed, we wondered whether it might not be more profitable, and certainly more enjoyable, to notify the offender of similar content and serve a "put up" notice, rather than the usual DMCA "take down". The DMCA request, issued to protect our and our authors' assets, is a necessary, but tiresome, chore. So often, simple communication and negotiation could have averted the need for it. We are, after all, in the business of presenting knowledge, information and help to the SQL Server Community. If only they had asked! Of course, one's attitude changes according to the motivation behind the copying of content. One of the motivations seems to be pure vanity; they do it to try to enhance their CV, or their company's expertise, by pretending to expertise they don't possess. There is a class of plagiariser, however, that is doing it purely for money, getting advertising revenue by attracting hapless readers to their site. Not content with stealing content, sites can invest in services that provide 'load-testing' for websites so realistic that even the search engines can be fooled. Stolen content, fake visitors, swindled advertisers. Zero-tolerance is really the only way of dealing with plagiarism, and action will only be completely effective once Bing, Google, and the other search engines strike out from their listings the rogue sites that refuse to take down plagiarised content. It is, after all, in everyone else's interests. Cheers, Tony.

    Read the article

  • Black Screen on boot on a Lenovo IdeaPad Z575

    - by Davis
    So I tried to install Wubi. On boot, when I select Ubuntu it starts, then I get a purple screen, then a black screen. My monitor is like completely off. So then I have to hold down my power button and shut it off. I booted it up, then held Shift and typed in "nomodeset" after the second-to-last line, but then when it booted it went into the command prompt thing and just stopped after "checking battery" or something. This is my first time installing any Linux distro. I want to install it alongside Windows and dual-boot it with Wubi because that is easy and simple. This is the laptop I have: http://www.newegg.com/Product/Product.aspx?Item=N82E16834246328

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows SQL Database, you get a “3-copies replica set” so HA comes out of the box, and removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning, and application rewriting to avoid throttling issues, optimising application-database communication to deal with increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic and to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetric techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning, work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others, will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will. Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away. Tony.

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out. Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux. Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory. Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job. Get a central, global view of audited user sessions across your Oracle Linux environment. "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works." Mellanox’s end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and web 2.0 applications. Mellanox’s interconnect solutions accelerate Oracle RAC query throughput performance to reach 50Gb/s compared to TCP/IP based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle’s Exadata to deliver a 10X performance boost at 50% hardware cost, making it the world’s leading database appliance. Thanks for reviewing today's Partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • Oracle Linux / Symantec Partnership

    - by Ted Davis
    Fred Astaire and Ginger Rogers sang the now famous lyrics: “You like to-may-toes and I like to-mah-toes”. In the tech world, is it Semantic or is it Symantec? Ah, well, we know it’s the latter. Actually, who doesn’t know or hasn’t heard of Symantec in the tech world? Symantec is thoroughly engrained in Enterprise customer infrastructure from their Storage Foundation Suite to their Anti-Virus products. It would be hard to find anyone who doesn’t use their software. Likewise, Oracle Linux is thoroughly engrained in Enterprise infrastructure – so our paths cross quite a bit. This is why the Oracle Linux engineering team works with Symantec to make sure their applications and agents are supported on Oracle Linux. We also want to make sure the Oracle Linux / Symantec customer experience is trouble free so customer work continues at the same blistering pace. Here are a few Symantec applications that are supported on Oracle Linux: Storage Foundation, NetBackup Enterprise Server, Symantec AntiVirus for Linux, Veritas Cluster Server, and Backup Exec Agent for Linux. So, while Fred and Ginger may disagree on how to spell tomato, for our software customers, the Oracle / Symantec partnership works together so our joint customers experience and hear the sweet song of success.

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate about the techniques that should, and should not, be deemed acceptable in the search for fast SQL code.

Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update": SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft. This form of the UPDATE statement, @variable = column = expression, is documented and allows you to set a variable to the value returned by the expression. However, Microsoft does not guarantee the order in which rows are updated because, in relational theory, a table has no natural order to its rows and the UPDATE statement has no means of specifying one. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique relied on the fact that the UPDATE statement, without a WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn't one. Peter wasn't satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). In either case, however, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism or partitioning.

Many argue, with validity, that this reliance on an order that can never be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances.

Is this attitude justified? After all, both forms of the technique, using a clustered index to impose the order or using an ordered CTE, have been tested rigorously and proven robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post.

Ultimately, however, this technique has been available to programmers for as long as Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. After all, a Table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request that the database engine perform the update in the conventional order. Then perhaps everyone would be satisfied. Cheers, Tony.
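For anyone who hasn't seen the technique outside the competition thread, here is a minimal sketch of the "quirky update" running-total pattern under discussion, against a throwaway temporary table. The table, column and variable names are illustrative only (this is not Peter's actual code), and the comments restate the caveat that the ordering is relied upon, not guaranteed.

    -- Minimal sketch of the "quirky update" running-total pattern.
    -- Assumes full control of the staging table: no parallelism, no partitioning.
    CREATE TABLE #DailySubscribers
    (
        CalendarDate date NOT NULL PRIMARY KEY CLUSTERED, -- clustered index imposes the hoped-for update order
        PeopleJoined int  NOT NULL,
        PeopleLeft   int  NOT NULL,
        Subscribers  int  NULL                            -- running total, filled in by the update
    );

    INSERT INTO #DailySubscribers (CalendarDate, PeopleJoined, PeopleLeft)
    VALUES ('2009-01-01', 10, 0),
           ('2009-01-02',  5, 2),
           ('2009-01-03',  8, 1);

    DECLARE @Subscribers int = 0;

    -- The @variable = column = expression form: each row's Subscribers column is set
    -- to the new value of @Subscribers, relying on clustered-index order (not guaranteed by Microsoft).
    UPDATE #DailySubscribers
    SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft;

    SELECT CalendarDate, PeopleJoined, PeopleLeft, Subscribers
    FROM   #DailySubscribers
    ORDER BY CalendarDate;

On a small, carefully controlled temporary table like this, the Subscribers column ends up holding the running total after a single pass, which is precisely why the technique is so temptingly fast; the point above is that nothing in the documentation promises it will stay that way.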

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't, and shouldn't, need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying.

SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans, and the data cache in the buffer pool stores frequently-used pages, ensuring that they can be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk, if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache when it grows too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined.

Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time the page is used and then, as the page 'ages', it ticks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans, etc.)? This is where our 'clock hands' come in.

Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands: one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressure on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure, which just to confuse things further is sometimes referred to as physical memory pressure, can be either external (from the OS) or internal (from the process itself, for example due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit", the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store. During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time further into the past (in effect, setting back the wristwatch on the page by a couple of hours) and increasing the likelihood that it can be aged out of the cache.

There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equate to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure whether I'd learned anything that I could put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.
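If you'd like to peer at the clock hands yourself, a simple query against that DMV will show, per cache, which hands are moving and how often they have swept. This is a minimal sketch; the column list reflects my reading of the view and should be checked against sys.dm_os_memory_cache_clock_hands on your own instance.

    -- A rough look at clock-hand activity per cache; hands that sweep frequently
    -- (a high rounds_count) suggest sustained memory pressure on that cache.
    SELECT  name,                     -- cache name
            type,                     -- cache type
            clock_hand,               -- external hand (global pressure) or internal hand (local pressure)
            clock_status,             -- whether the hand is currently running or suspended
            rounds_count,             -- number of sweeps the hand has made
            removed_all_rounds_count  -- entries removed across all sweeps
    FROM    sys.dm_os_memory_cache_clock_hands
    WHERE   rounds_count > 0
    ORDER BY rounds_count DESC;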

    Read the article
