Search Results

Search found 31491 results on 1260 pages for 'simple talk'.


  • Celko's SQL Stumper: Eggs in one Basket

    Joe Celko returns with another stumper to celebrate Easter. Unsurprisingly, it involves eggs. More surprising is the nature of the puzzle: this time it is one of designing a database rather than writing a query, so the answer calls for DDL as well as DML.

    Read the article

  • Web.NET is Closing Fast

    - by Chris Massey
    The voting for sessions has now closed, and sadly only half of the potential sessions could make it through. On the plus side, the sessions that floated to the top look great and, with the votes in, Simone and Ugo have moved right along and created a draft agenda to whet our appetites. Take a look, and let them know what you think. I’d also strongly recommend that you get ready to grab your tickets when they become available next week (specifically, September 18th), as places are going to be snapped up fast.

    In case you need a reminder as to why Web.NET is worth your time:

    - Complete focus on web development
    - Awesome sessions
    - All-night hackathon
    - Free (although I urge you to make a donation to help Simone and Ugo create the best possible event)

    Put October 20th in your calendar, and start packing. I’ve already booked my flights, and am perusing the list of hotels while I eat my lunch.

    Bonus Material: There will be a full day of RavenDB training on Monday the 22nd of October, run by Ayende himself, and attending Web.NET will get you a 30% discount on the cost of the session.

    Read the article

  • error with gtkmm 3 in ubuntu 12.04

    - by Grohiik
    I installed libgtkmm-3.0-dev on Ubuntu 12.04 and am trying to learn C++ with gtkmm 3. Following "http://developer.gnome.org/gtkmm-tutorial/unstable/sec-basics-simple-example.html.en", I tried to compile the simple example program:

        #include <gtkmm.h>

        int main(int argc, char *argv[])
        {
            Glib::RefPtr<Gtk::Application> app =
                Gtk::Application::create(argc, argv, "org.gtkmm.examples.base");
            Gtk::ApplicationWindow window;
            return app->run(window);
        }

    My file is named "basic.cc", and I compiled it from a terminal with:

        g++ basic.cc -o basic `pkg-config gtkmm-3.0 --cflags --libs`

    The compile completed without any errors, but when I try to run the program I get the following error:

        ~$ ./simple
        ./simple: symbol lookup error: ./simple: undefined symbol: _ZN3Gtk11Application6createERiRPPcRKN4Glib7ustringEN3Gio16ApplicationFlagsE

    How can I solve this problem? I can compile any gtkmm 2.4 code with either "g++ basic.cc -o basic `pkg-config gtkmm-3.0 --cflags --libs`" or "g++ basic.cc -o basic `pkg-config gtkmm-2.4 --cflags --libs`". Thanks!

    Read the article

  • A Bit Cloudy

    - by Chris Massey
    "Systems Administrators, I come in peace. You have nothing to fear from me" - Office 365 Microsoft Business Productivity Online Suite recently absorbed a few other services and has been rebranded as Office 365, which is currently in private Beta and NDA-d up to the eyeballs. As Microsoft's (slightly delayed) answer to Google Apps Premier Edition, it shows a lot of promise; MS has technical expertise, market penetration, and financial capital all going for it. On the other hand, Google...(read more)

    Read the article

  • The .NET 4.5 async/await Commands in Promise and Practice

    The .NET 4.5 async/await feature provides an opportunity for improving the scalability and performance of applications, particularly where tasks are more effectively done in parallel. The question is: do the scalability gains come at the cost of slowing individual methods? In this article Jon Smith investigates the issue by conducting a side-by-side evaluation of the standard synchronous methods and the new async methods in real applications.

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but striking the right balance between disk tables and in-memory tables for optimum performance requires judgement and experimentation. What is this technology, and how can you exploit it? Rob Garrison explains.
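
    To give a flavour of the syntax, here is a minimal sketch of creating a memory-optimised table under CTP1. It assumes a database that already has a memory-optimised filegroup, and the table name, columns and bucket count are purely illustrative:

        -- Sketch only: requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
        -- Table name, columns and BUCKET_COUNT are illustrative.
        CREATE TABLE dbo.ShoppingCart
        (
            ShoppingCartId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId         INT NOT NULL,
            CreatedDate    DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);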

    Read the article

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond.

    Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality.

    Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers. Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free.

    Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome. Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors.

    If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

    Read the article

  • Database IDs

    - by fatherjack
    Just a quick post, mainly to test out the new blog format, but related to a question on the #sqlhelp hashtag. The question came from Justin Dearing (@zippy1981): "So I take it database_id isn’t an ever incrementing value. #sqlhelp"

    When a new database is created it is given the lowest available ID: either an ID in a gap where a database has been dropped or, if there are no gaps to fill, one more than the highest current ID. To see this in action, connect to your sandbox server and try this:

        USE master;
        GO
        CREATE DATABASE cherry;
        GO
        USE cherry;
        GO
        SELECT DB_ID();
        GO
        CREATE DATABASE grape;
        GO
        USE grape;
        GO
        SELECT DB_ID();
        GO
        CREATE DATABASE melon;
        GO
        USE melon;
        GO
        SELECT DB_ID();
        GO
        USE master;
        GO
        DROP DATABASE grape;
        GO
        CREATE DATABASE kiwi;
        GO
        USE kiwi;
        GO
        SELECT DB_ID();
        GO
        USE master;
        GO
        DROP DATABASE cherry;
        DROP DATABASE melon;
        DROP DATABASE kiwi;

    You should get an incrementing series of database IDs as the databases are created, until the last one, where the new database is allocated the ID left behind by the database that was dropped.
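
    If you want to see the current allocation directly, a quick query against sys.databases (a sketch; the view is available in SQL Server 2005 and later) shows the IDs and any gaps:

        -- List current database IDs; gaps indicate dropped databases
        -- whose IDs will be reused by the next CREATE DATABASE.
        SELECT database_id, name
        FROM sys.databases
        ORDER BY database_id;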

    Read the article

  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local University (it was a Technical College then). My mission was to remind them that we were a large, local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes.

    However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers. They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point and I ended up saying that we needed good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2.

    Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2’ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never ever come across it in a commercial business application. Nevertheless, it was an excellent teaching language, since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defense contractor specializing in Radar and military technology, used nothing but Modula 2. “Why teach any other programming language when they will be using Modula 2 for all their working lives?” said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT.

    Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the cold war had led to lean times for defense contractors. A digger was about to clear the rubble of the long-demolished factory, along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialization of their training in IT, as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • Antenna Aligner Part 3: Kaspersky

    - by Chris George
    Quick one today. Since starting this project, I've been encountering times when Nomad fails to build my app, and it would then take repeated attempts before a build went through successfully. Rob, who works on Nomad at Red Gate, investigated this, and it turned out that certain parts of the message required to trigger the 'cloud build' were not getting through to the Nomad app, causing the HTTP connection to stall until timeout. After much head-scratching, it turns out that the Kaspersky Internet Security system I have installed on my laptop at home was being very aggressive and was causing the problem. Perhaps it's trying to protect me from myself? Anyway, we came up with an interim solution while the Nomad guys investigate with Kaspersky: setting Visual Studio to be a trusted application in the Kaspersky settings and telling it not to scan network traffic. Hey presto! This worked and I have not had a single build problem since (other than losing my internet connection, or that embarrassing moment when you blame everyone else and then realise you've accidentally switched off the wireless on your laptop).

    Read the article

  • AsyncBridge? Async on .NET 4.0 using VS11

    - by Alex.Davies
    I've just found something quite cool: a code snippet, published by Daniel Grunwald (from SharpDevelop), that lets you use the real VS 11 C#5 compiler to write code that uses the async and await keywords, but to target .NET 4.0. That means I can stop using the Async CTP for VS2010, which is not at all supported anymore, and a pain to install if you have Windows updates turned on. Obviously I couldn't ask all my users to install .NET 4.5 beta, but .NET Demon is a VS 2010 extension, so we already have .NET 4.0. At the time of writing, VS11 is in beta still, but hopefully it's stable enough for my team to use!

    I would have written the code myself, but I was under the wrong impression that the C# 5 beta compiler only looked in mscorlib for the helper classes it needs to implement async methods. It turns out you can provide them yourself. You can get the code here: https://gist.github.com/1961087. You just add it to your project, and the compiler will apparently pick it up and use it to implement async/await. I'm at my parents' place for Easter without access to a machine with VS 11 to try it out. Let me know whether you get it to work!

    This reminds me of LINQBridge, which let us use C# 3 LINQ but only require .NET 2. We should stick up a webpage to explain, with a nice easy dll, put it in NuGet, and call it AsyncBridge. If you were really enthusiastic, you could re-implement the skeleton of the Task Parallel Library against .NET 2 to use async/await without even requiring .NET 4. Our usage stats suggest that practically everyone who uses Red Gate tools already has .NET 4 installed though, so I don't think I'll go to the effort.

    Read the article

  • Managing Data Growth in SQL Server

    'Help, my database ate my disk drives!' Many DBAs spend most of their time dealing with variations of this problem: database processes consuming too much disk space. It happens because of errors such as incorrect configuration of recovery models, data growth in large objects, and queries that overtax TempDB resources. Rodney describes, with some feeling, the errors that can lead to this sort of crisis for the working DBA, and their solution.
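
    As a first step in diagnosing that kind of crisis, a query along these lines (a sketch; works on SQL Server 2005 and later) shows each database's recovery model and the size of its data and log files, which is often enough to spot a log file that has been growing unchecked:

        -- Recovery model and file sizes for every database, largest first.
        SELECT d.name              AS database_name,
               d.recovery_model_desc,
               mf.name             AS logical_file_name,
               mf.type_desc,                        -- ROWS or LOG
               mf.size * 8 / 1024  AS size_mb       -- size is in 8KB pages
        FROM sys.databases d
        JOIN sys.master_files mf
            ON mf.database_id = d.database_id
        ORDER BY size_mb DESC;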

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing.

    In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    28                19.0               8115                 8115
        MDB           114               77.6               1449                 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.)

    So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data. What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only to the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users. About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many JUST HAPPENED to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or because they want to use X? We need to segment the data further, asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further (a sketch of the kind of query involved appears at the end of this post). Not fun.

    You can't find out why they used this feature. Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most important and more complex features which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.

    People get obsessed with significance levels. The first thing that lots of people do with this data is a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity. If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data. Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    13                9.0                1115                 1115
        MDB           5                 4.2                449                  449

    Trial users:

        Parameter     Number of users   % of total users   Number of sessions   Number of usages
        SQL Server    15                10.0               7000                 7000
        MDB           114               77.6               1000                 1000

    How do you interpret this data? It's one of:

    - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    - Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    - We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage. Metrics tend to be opt-in, and this skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

    - Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    - Minor optimizations. Is the text-box big enough for average user input?
    - Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action.

    Conclusion: Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
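
    The segmentation complained about above might look something like the query below. This is a hypothetical sketch - the table, columns and thresholds are all invented for illustration - but it shows why each new question means another layer of filtering:

        -- Hypothetical: split back-end choice by 'established' users only,
        -- defined here (arbitrarily) as paid users with 10 or more sessions.
        SELECT ur.BackEnd,                       -- 'SQL Server' or 'MDB'
               COUNT(*)             AS Users,
               SUM(ur.SessionCount) AS Sessions
        FROM dbo.UsageReports ur
        WHERE ur.SessionCount >= 10              -- exclude 20-second trialists
          AND ur.LicenceType = 'Paid'            -- post-trial users only
        GROUP BY ur.BackEnd;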

    Read the article

  • Finding bugs is difficult, right?

    - by Laila
    Something I hear developers tell us all the time is that they take pride in being a developer, and that bugs are a dent in that pride. Someone once told me, "I know I have found bugs years later, and it's the worst feeling in the world." So how can you avoid that sinking feeling when you find out a bug has been in production for months before someone lets you know about it? Besides, let's face it: hearing about a bug often means a world of pain, because it can take hours to track down where the problem is and more hours (if not days) to fix it. And during that time, you're not working on something new, and that, my friends, is really frustrating! So to cheer you up, we've created a Bug Hunt game, where you battle against the clock to spot bugs. We've really enjoyed putting this together and hope you enjoy playing it too. Once you're done with the bug hunt, we explain how easy it can be to find and fix bugs in real life, using a neat mechanism that we call Automated Error Reporting. Play the game now.

    Read the article

  • Calling all developers building ASP.NET applications

    - by Laila Lotfi
    We know that developers building desktop apps have to contend with memory management issues, and we’d like to learn more about the memory challenges ASP.NET developers are facing. To be more specific, we’re carrying out some exploratory research leading into the next phase of development on ANTS Memory Profiler, and our development team would love to speak to developers building ASP.NET applications. You don’t need to have ever used ANTS Memory Profiler – this will be a more general conversation about:

    - your current site architecture, and how you manage the memory requirements of your applications on your back-end servers and web services
    - how you currently diagnose memory leaks, and where you do this (on the production server, during the testing phase, or whether you normally manage to catch them all during local development)
    - what specific memory problems you’ve experienced – if any

    Of course, we’ll compensate you for your time with a $50 Amazon voucher (or equivalent in other currencies), and our development team’s undying gratitude. If you’d like to participate, please just drop me a line on [email protected].

    Read the article

  • Immersive UX Changing the Face of Retail

    Changing the Face of Retail is an article I've been thinking about for most of the past couple of weeks. My goal with the article is twofold: first, to talk about how technology built into the retail environment can be used to build better experiences for customers, and second, to talk about how this kind of evolutionary extension of the retail environment is better for customers AND retailers. I walked into the Microsoft Retail Store, or at least one of them (see one at Mission Viejo or Scottsdale), and it's really impressive...

    Read the article

  • Antenna Aligner part 1: In the beginning.

    - by Chris George
    Picture the scene: it's 9pm, I'm in my caravan (yes, I know, I've heard all the jokes!) with my family, and I'm trying to tune the TV by moving the aerial, retuning, moving the aerial again, retuning... 45 minutes and much cursing later, I succeed. Surely there must be an easier way than this? Aha, an app; there must be an app for that? So I searched the App Store for such an app, but curiously drew a blank. Then the seeds of the idea started to grow. I can code, I work in a software house with lots of very clever people, surely I can make an app that points to the nearest digital TV transmitter!

    Not having looked into app development before, I investigated how one goes about making an iPhone app and was quickly greeted by a now familiar answer: "Buy a Mac!". That was not an option for many reasons, mostly wife-related! My dreams were starting to fade until one of my colleagues pointed out that within Red Gate, the very company I work for, there was on-going development on a piece of software that would allow me to write an app using Visual Studio on a Windows machine: Nomad! Once I signed up for the beta program I got to work learning the jQuery Mobile / PhoneGap framework. Within a couple of hours I had written (in Visual Studio), built in the cloud (using Nomad) and published (via TestFlight) my first iPhone app onto my iPhone! It didn't do much, but it was a step in the right direction. To be continued...

    Read the article

  • Security Issues When Creating Pages in SharePoint

    - by Damon
    I was speaking (or rather IM'ing) with Ben Collins a while back, and he came across an interesting problem that I wanted to document for the sake of posterity. If you have a SharePoint user who has permissions to create a page in a page library, but that user is having security issues trying to actually make a page, then the security issue may be related to their access rights on the master page gallery. Users who create pages must have at least restricted read access to the master page gallery for page creation to succeed. That is one of the joys of working in SharePoint: if something doesn't show up, there is usually a good but obscure reason for it, but SharePoint certainly won't tell you outright what it is. All I have to say is that I'm glad he ran into that issue and not me.

    Read the article

  • In search of database delivery practitioners and enthusiasts

    - by Claire Brooking
    We know from speaking with many of you at tradeshows and user groups that database delivery is not a factory production line. With people having their say at every step - planning, evaluation, quality control, and disaster mitigation - successful database deployment is a carefully managed course of action. With so many factors involved at every stage, we would love to find a way for our software to help out, by simplifying processes, speeding them up, or joining together the people and the steps that make it all happen.

    We’re hoping our new research group for database delivery (SQL Server and Oracle) will help us understand the views and experiences of those of you out there in the trenches managing database changes. As part of our new group, we’ll be running a variety of research sessions, including surveys and phone interviews, over the coming months. If you have opinions to share on Continuous Integration or Continuous Delivery for databases, we’d love to hear from you. Your feedback really will count as the product teams at Red Gate build plans. For some of our more in-depth sessions, we’ll also be offering participants an Amazon voucher as a thank-you for your time.

    If you’re not yet practising automated database deployment processes, but are contemplating or planning it, please do consider joining our research group too. If you’d like to sign up to the group and find out more, please fill in a quick form online, and we’ll be in touch to let you know about new research opportunities you might be interested in. We look forward to hearing your stories!

    Read the article

  • Antenna Aligner part 2: Finding the right direction

    - by Chris George
    Last time, I managed to get "my first app(tm)" built, published and running on my iPhone. This was really cool: a piece of my code running on my very own device. OK, so I'm easily pleased! The next challenge was actually trying to determine what it was I wanted this app to do, and how to do it.

    Reverting back to good old paper and pen, I started sketching out designs for the app. I knew I wanted it to get a list of transmitters, and that clicking on a transmitter would display a compass-type view, with an arrow pointing the right way. I figured there would not be much point in continuing until I knew I could do the graphical part of the project, i.e. the rotating compass, so armed with that reasoning (plus the fact I just wanted to get on and code!), I once again dived into Visual Studio. Using my friend (Google) I found some example code for getting the compass data from the phone using the PhoneGap framework:

        // onSuccess: Get the current heading
        function onSuccess(heading) {
            alert('Heading: ' + heading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    Using the Ripple mobile emulator, this showed that it was successfully getting the compass heading. But it didn't work when uploaded to my phone. It turns out that the examples I had been looking at were for PhoneGap 1.0, and Nomad uses PhoneGap 1.4.1. In 1.4.1, getCurrentHeading provides a compass object to onSuccess, not just a numeric value, so the code now looks like this:

        // onSuccess: Get the current magnetic heading
        function onSuccess(heading) {
            alert('Heading: ' + heading.magneticHeading);
        }

        navigator.compass.getCurrentHeading(onSuccess, onError);

    So the lesson learnt from this: read the documentation for the version you are actually using! This does, however, lead to compatibility problems with Ripple, as it only supports 1.0, which is a real pain. I hope that the Ripple system is updated sometime soon.

    Read the article

  • DDDSouthWest 4.0 26th May 2012 - Async 20/20 presentation

    - by Liam Westley
    As my nominated sessions weren’t voted in, I presented a 20/20 talk on the new async functionality coming with the .NET Framework. This was based on the PechaKucha presentation format, where you have only 20 slides with only 20 seconds per slide, and the deck progresses automatically. It was the first I’d attempted, so thanks to the organisers for allowing me to have a go. Although creating the slide deck was definitely easier than for a one-hour presentation, giving the talk was much more stressful by the end of the 6m 40s. I’m not going to upload the slide deck (it won’t make much sense on its own), but I did record the audio and used the excellent Camtasia to create a video of the slide deck with that audio, which you can watch over here: https://vimeo.com/42957952

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great: it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started, and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!

    Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
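
    As a concrete version of that pattern, here is a minimal sketch (the schema and table are illustrative; sys.dm_os_wait_stats is a standard DMV) that captures wait statistics so they can be compared over time:

        -- One-off setup: a schema and table to hold the baseline data.
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.WaitStats
        (
            wait_type    NVARCHAR(60) NOT NULL,
            wait_time_ms BIGINT       NOT NULL,
            captureTime  DATETIME2    NOT NULL
        );
        GO
        -- The capture query: schedule this in an Agent job every hour or so.
        INSERT INTO BaseLine.WaitStats (wait_type, wait_time_ms, captureTime)
        SELECT wait_type, wait_time_ms, SYSDATETIME()
        FROM sys.dm_os_wait_stats;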

    Read the article

  • Defining .NET Components with Namespaces

    A .NET software component is a compiled set of classes providing a programmable interface that consumer applications use for a service. As a component is no more than a logical grouping of classes, what, then, is the best way to define the boundaries of a component within the .NET Framework? How should the classes interoperate? Patrick Smacchia, the lead developer of NDepend, discusses the issues and comes up with a solution.

    Read the article

  • Microsoft Access as a Weapon of War

    - by Damon
    A while ago (probably a decade ago, actually) I saw a report on a tracking system maintained by a U.S. Army artillery control unit. This system was capable of maintaining a bearing on various units in the field to help avoid friendly fire. I consider the U.S. Army to be the most technologically advanced fighting force on Earth, but to my terror I saw something on the title bar of an application displayed on a laptop behind one of the soldiers they were interviewing: Tracking.mdb. Oh yes. The Microsoft Office Suite had made it onto the battlefield. My hope is that it was just running as a front-end for a more proficient database (no offense, Access people), or that the soldier was tracking something else, like KP duty or fantasy football scores. But I could also see the corporate equivalent of a pointy-haired boss walking into a cube and asking someone who had piddled with Access to build a database for HR forms. Except this pointy-haired boss would have been a general, the cube would have been a tank, and the HR forms would have been targets that, if something went amiss, would have been hit by a 500lb artillery round. Hope that soldier could write a good query :)

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all: it really is a disk utility. They ought to call it DiskIO; they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say, or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of: I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes?

    Well, the issue is: what does SQLIO write? I don't have access to the code, but I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file, then ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896kb went to 275,871kb compressed; after SQLIO it went to 608kb compressed).

    So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.

    Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion.

    Oh, and before I go, I get to brag a bit: measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.

    Read the article
