Search Results

Search found 172 results on 7 pages for 'buckwoody'.

Page 6/7 | < Previous Page | 2 3 4 5 6 7  | Next Page >

  • Is the Internet Making us Smarter or Not?

    - by BuckWoody
    I’ve been reading recently about an exchange among some very bright folks: some posit that the Internet, with its instant-on, sometimes-right, big-statement-wins mentality, is making people think in a more shallow way, teaching us to rely on others as experts and diluting our logical thought process. Others state that it broadens our perspective and extends our mental reach. Whenever I see this kind of exchange on two ends of a spectrum, I begin to wonder if both sides might be correct.

    I can certainly say that I have changed my way of learning, reading, and social interaction because of the Internet. And my tolerance for reading long missives has indeed gone down. I tend to (mentally and literally) “bookmark” things I never seem to have time to get back to. But I also agree that I’ve been exposed to thoughts, ideas and people I never would have encountered any other way. So how to deal with this dichotomy?

    Well, I’m going to go off and think about it. No, really – I’m going to go off for a full week to a cabin I’ve rented in a National Forest in the Midwest. It has no indoor plumbing, phones, Internet connections or anything else – only a bed to sleep in and a place to cook a little. I’m taking one book, some paper, and a guitar with me, and that’s it. I plan to spend my days walking, reading a little, playing a little on the guitar, but mostly just thinking. Those of you who know me might find this unusual. I’m an always-on, hyper-caffeinated, overly busy, connected person. I haven’t taken a vacation in five years, at least not for more than two or three days at a time. Even then, I keep us on the move constantly – our vacations aren’t cruises or anything like that. I check e-mail, post and all that. When I’m not on vacation, I live with and leverage lots of technology, and work with those who do the same. This, however, is a really “unplugged” event, and I’m hoping that it will let me unpack the things I’ve been stuffing in my head. I plan to spend a lot of time on a single subject, writing notes, thinking, and writing more notes.

    So after I post tomorrow’s “quote of the day” I’ll be “going dark” for a week. No Twitter, Facebook, LinkedIn, e-mail or chat; none of my five blogs will get updated, and I’ll have to turn in my two articles for InformIT.com early. I won’t have access to my college class portal, so my students will be without me for a week. I will really be offline. I’ll see you in a week – hopefully a little more educated. See you then.

    Read the article

  • Shame on you for stealing

    - by BuckWoody
    It's become quite common for people with no morals to steal from others. But that doesn't make it right. If you're reading this on SQLBLOGS.NET, then it's stolen from my "real" blog location. Send an e-mail right now to [email protected] and let this person know that taking something without asking is stealing - and they should be ashamed of themselves.

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx

    In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx

    As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're auditing only what you need to.
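    If it helps to see the shape of it, here is a minimal sketch of SQL Server Audit in T-SQL - the audit name, file path, database, schema and user are illustrative placeholders, not from the links above:

        -- Minimal sketch of SQL Server Audit (SQL Server 2008 and later).
        -- Audit name, file path, database, schema and user are placeholders.
        USE master;
        GO
        CREATE SERVER AUDIT LoginAndSelectAudit
            TO FILE (FILEPATH = 'C:\AuditLogs\');  -- where the audit records land
        GO
        ALTER SERVER AUDIT LoginAndSelectAudit WITH (STATE = ON);
        GO
        USE SalesDB;
        GO
        -- Audit only what you need: SELECT statements on one schema by one user.
        CREATE DATABASE AUDIT SPECIFICATION SelectByReportUser
            FOR SERVER AUDIT LoginAndSelectAudit
            ADD (SELECT ON SCHEMA::dbo BY ReportUser)
            WITH (STATE = ON);
        GO

    You can then read the captured events back with sys.fn_get_audit_file('C:\AuditLogs\*', default, default), which keeps the whole loop in T-SQL.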

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but that computer was more expensive – by a lot. As time moved on and the computing industry matured, I found that I could buy a vendor’s system as cheaply – and in some cases far more cheaply – than I could build it myself.

    This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… mostly. If you need an office productivity package, you simply buy one or use open-source software. There’s really no need to write your own word processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”. Sure, if you’re after “Software as a Service”-type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure), you can simply use one of those; and if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines), you can get that. But if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue.

    It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem that we often start with the solution and try to fit it to the problem, rather than the other way around. When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution.

    So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it’s my military background, or maybe I’ve always had this predilection, but I like to use two devices when I need to make a complex decision: a checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and it brings up questions or areas that I didn’t think about when evaluating options for the decision. And the decision matrix – that’s the thing I use to actually lay out those options. It’s simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have down the left-hand side. Then in the “cells” I put whether or not the option on the left will meet the requirement in that column. I then simply “weight” each cell to organize the choices by best fit. The right answer (or answers) will float right to the top.

    I was asked yesterday about options for moving data in SQL Server to another system. There are just dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions for the top row first helped me show the person that it isn’t a particular technology that is important; it’s laying out those requirements and thinking about which elements are more important than the others. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions, or just one? All of these will help that answer jump right out. Try it sometime – it’s a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.
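    If you’d rather keep the matrix in the database than in a spreadsheet, here is a small illustrative T-SQL sketch – the options, requirement columns, scores and weights are all made-up examples – showing how the weighted best fit floats to the top:

        -- Illustrative only: a tiny decision matrix as a table.
        -- The options, requirement columns, scores and weights are made up.
        CREATE TABLE #DecisionMatrix (
            OptionName    varchar(50),
            Continuous    int,  -- 0-5: moves data all the time?
            TwoWay        int,  -- 0-5: moves data in both directions?
            LowAdminCost  int   -- 0-5: cheap to administer?
        );

        INSERT INTO #DecisionMatrix VALUES
            ('Replication',  5, 5, 2),
            ('bcp + jobs',   2, 1, 4),
            ('SSIS package', 3, 2, 3);

        -- Weight the requirements (continuous movement matters most here)
        -- and let the best fit float to the top.
        SELECT OptionName,
               (Continuous * 3) + (TwoWay * 2) + (LowAdminCost * 1) AS WeightedScore
        FROM #DecisionMatrix
        ORDER BY WeightedScore DESC;

        DROP TABLE #DecisionMatrix;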

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing – and more importantly, “-as-a-Service” models of computing – has a different cost model. This sounds obvious on the surface, but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed, not only for one project but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use – we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things.

    It’s not just that you’re using things that now cost money – it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do them, you can set a profit margin that is easy to track.

    Here’s a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize its use all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset – the design – make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:

    - Compute “Roles” – computers running an OS and optionally IIS; you can have more than one “Instance” of a given Role
    - Storage – Blobs, Tables and Queues for storage
    - Other Services – things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It’s important to understand that some of these services are stateless and others maintain state. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.

    The Compute Role Instances in Windows Azure are stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role’s Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain state. Your data will not simply disappear – it is maintained – in fact, it’s maintained three times in a single datacenter, and all those copies are replicated to another datacenter for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.

    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you have only one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea – to gain the Service Level Agreement (SLA) for uptime in Azure, it’s a requirement. We point this out right in the Management Portal when you deploy the application.

    When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have continuous service whenever you upgrade (more on upgrades in another post). Consider the scenario for an upgrade with two Roles: you have four Roles total – one Web and one Worker running the “older” code, and one of each running the new code. In all those Roles you want at least two Instances, and that way you’re covered for both High Availability and upgrade paths.

    The take-away is this – always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before on other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests – just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far; red means I haven’t.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:

    - Introducing Microsoft .NET (Pro-Developer): link
    - Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    - Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification: link
    [ ] On that page you’ll find several resources, and the first you should follow is the “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on – make sure you open the learning plan to drill into the specifics.

    [ ] Designing Data Storage Architecture (18%)
        Books I’m Reading:
        Links:
        My Notes:

    [ ] Optimizing Data Access and Messaging (17%)
        Books I’m Reading:
        Links:
        My Notes:

    [ ] Designing the Application Architecture (19%)
        Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform: link
        Links:
        My Notes:

    [ ] Preparing for Application and Service Deployment (15%)
        Books I’m Reading:
        Links:
        My Notes:

    [ ] Investigating and Analyzing Applications (16%)
        Books I’m Reading:
        Links:
        My Notes:

    [ ] Designing Integrated Solutions (15%)
        Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention)
        Links:
        My Notes:

    Read the article

  • SQL Saturday 27 (Portland, Oregon)

    - by BuckWoody
    I’m sitting in the Seattle airport, waiting for my flight to Silicon Valley, California for the SQL Server 2008 R2 Launch Event. By some quirk of nature, they are asking me to emcee the event – but that’s another post entirely.

    I’m reflecting on the SQL Saturday 27 event that was just held in Portland, Oregon this last Saturday. These are not Microsoft-sponsored events – it’s truly the community at work. Think of a big user-group meeting – I mean REALLY big – held in a central location, like at a college (as ours was) or some larger, inexpensive venue like that. Everyone there is volunteering. It’s my own money and time to drive several hours to a hotel for the night, feed myself and present. It’s their own time and money for the folks who organize the event – unless a vendor or two steps in to help. It’s their own time and money for the attendees to drive a long way, spend the night and their Saturday to listen to the speakers. Why do all this?

    Because everybody benefits. Every speaker learns something new, meets new people, and reaches a new audience. Every volunteer does the same. And the attendees? Well, it’s pretty obvious what they get: a 7 AM to 10 PM extravaganza of knowledge from every corner of the product. In fact, this year the Portland group hooked up with the CodeCamp folks and held a combined event. We had over 850 people, and I had everyone from data professionals to developers in my sessions.

    So I’ll take this opportunity to do two things: to say “thank you” to all of the folks who attended, from those who spoke to those who worked and those who came to listen, and to challenge you to attend the next SQL Saturday anywhere near you. You can find the list here: http://www.sqlsaturday.com/. Don’t see anything in your area? Start one! The PASS folks have a package that will show you how. Sure, it’s a big job, but the key is to get as many people helping you as possible. Even if you have only a few dozen folks show up the first time, no worries. The first events I presented at had about 20 in the room. But not this week.

    See you at the Launch Event if you’re near the San Francisco area tomorrow, and see you at the Redmond SQL Saturday and TechEd if not.

    Read the article

  • Help Me Help You Fix That

    - by BuckWoody
    If you've been redirected here because you posted on a forum, or asked a question in an e-mail, the person wanted you to know how to get help quickly from a group of folks who are willing to do so - but whose time is valuable. You need to put a little effort into the question first to get others to assist. This is how to do that. It will only take you a moment to read...

    1. State the problem succinctly in the title

    When an e-mail thread starts, or a forum post is the "head" of the conversation, you'll attract more helpers by using a descriptive headline than a vague one.

    This: "Driver for Epson Line Printer Not Installing on Operating System XYZ"
    Not this: "Can't print - PLEASE HELP"

    2. Explain the Error Completely

    Make sure you include all pertinent information in the request. More information is better; there's almost no way to add too much data to the discussion. What you were doing, what happened, what you saw, the error message, visuals, screen shots - whatever you can include.

    This: "I'm getting error '5203 - Driver not compatible with Operating System since about 25 years ago' in a message box on the screen when I tried to run the SETUP.COM file from my older computer. It was a 1995 Compaq Proliant and worked correctly there."
    Not this: "I get an error message in a box. It won't install."

    3. Explain what you have done to research the problem

    If the first thing you do is ask a question without doing any research, you're lazy, and no one wants to help you. Using one of the many fine search engines you can almost always find the answer to your problem. Sometimes you can't. Do yourself a favor: open a notepad app, and paste in the URLs as you look them up. If you get your answer, don't save the note. If you don't get an answer, send the list along with the problem. It will show that you've tried, and also keep people from sending you links that you've already checked.

    This: "I read the fine manual, and it doesn't mention Operating System XYZ for some reason. Also, I checked the following links, but the instructions there didn't fix the problem: "
    Not this: <NULL>

    4. Say "Please" and "Thank You"

    Remember, you're asking for help. No one owes you their valuable time. Ask politely, don't pester, endure the people who are rude to you, and when your question is answered, respond back to the thread or e-mail with a thank-you to close it out. It helps others who have your same problem know that this is the correct answer.

    This: "I could really use some help here - if you have any pointers or things to try, I'd appreciate it."
    Not this: "I really need this done right now - why are there no responses?"

    This: "Thanks for those responses - that last one did the trick. Turns out I needed a new printer anyway, didn't realize they were so inexpensive now."
    Not this: <NULL>

    There are a lot of motivated people who will help you. Help them do that.

    Read the article

  • SQL Saturday 43 (Redmond, WA) Review

    - by BuckWoody
    Last Saturday (June 12th) we held a “SQL Saturday” (more about those here) event in Redmond, Washington. The event was held at the Microsoft campus, at the Mixer in our new location called the “Commons”. This is a mall-like area that we have on campus, and the Mixer is a large building with lots of meeting rooms, so it made a perfect location for the event. There was a sign to find the parking, and once there, another sign showed how to get to the building. Since it’s a secure facility, Greg Larsen and crew had a person manning the door so that even late arrivals could get in.

    We had about 400 sign up for the event, and a little over 300 attend (official numbers later). I think we would have had a lot more, but the sun was out – and you just can’t underestimate the effect of that here in the Pacific Northwest. We joke a lot about not seeing the sun much, but when a day like the one we had on Saturday comes around, and on a weekend at that, you’d cancel your wedding to go outside and play in the sun. And your spouse would agree with you for doing it.

    We had some top-notch speakers, including Clifford Dibble and Kalen Delaney. The food was great, we had multiple sponsors (including Confio, who seems to be at all of these), and the attendees were from all over the professional spectrum, from developers to BI folks to DBAs. Everyone I saw was very engaged, and when I visited room-to-room I saw almost no one in the halls – everyone was in the sessions. I also saw a much larger Microsoft presence this year, especially from Dan Jones’ team. I had a great turnout at my session, and yes, I was wearing an Oracle staff shirt. I did that because I wanted to show that the session I gave on “SQL Server for the Oracle DBA” was non-marketing – I couldn’t exactly bash Oracle wearing their colors!

    These events are amazing. I can’t emphasize enough how much I appreciate the volunteers and how much work they put into these events – and you, for coming. If you’re reading this and you haven’t attended one yet, definitely find out if there is one in your area – and if not, start one. It’s a lot of work, but it’s totally worth it.

    Read the article

  • What 5 things should SQL Server get rid of?

    - by BuckWoody
    I’ve been “tagged” by my friend Paul Randal. It’s a high-tech way of making someone else do what you want, but since it’s Paul, well, I guess I’m OK with that. He asked in his recent blog entry, “What five things would you get rid of in SQL Server if you were in charge?”

    This is, of course, a delicate issue. After all, I work at Microsoft, so anything I say here might be taken as a criticism that would require action – but of course it really doesn’t. Interestingly, you may have more to do with what goes into SQL Server than I did, even as a Program Manager where I “owned” a feature. Unlike many places I’ve worked, Microsoft really does drive its products by what its users want – not every time, and not every user request, mind you, but overall I think we hit the mark pretty well. So, with all of that said, and of course the obligatory statement of “these are my own opinions, and have nothing to do with any official Microsoft position in any way, and do not reflect the opinions of other Microsoft employees or management”, here goes.

    1. Get rid of SQL Server Management Studio

    Does that surprise you? After all, when I was a Program Manager, I actually owned the general architecture for SSMS. But those on my team probably would have been able to guess this one for you. I think that SSMS is a fine development tool. But I think it does less of a good job for managing a system. It’s based on Visual Studio, probably one of the best development IDEs around, and when I develop code, I really like it. But for a monitoring/management tool, I prefer a snap-in to the Microsoft Management Console (MMC). I know, the old one (prior to 3.0) was kludgy and difficult to use and program in. But that’s changed. Of course, when I bring this up, you’ll probably immediately say, “But I don’t have that in XP.” And that’s one of the reasons we didn’t go there. (But I still don’t like SSMS for management.)

    2. ShrinkDB

    I think this discussion has been done to death, so I’ll leave it at that.

    3. SQL Server Agent

    Does that one surprise you as well? In my mind, since we ALWAYS ride on Windows, just use the Task Scheduler there, along with PowerShell. You could log the results in Windows logs, files, back into SQL Server, whatever. It’s just a complexity we don’t need in SQL Server.

    4. SQL Server Error Logs

    We have a full logging setup in Windows. It’s well done, easy to understand and ubiquitous. We should just use that.

    5. Several SKUs

    I won’t say which, but we have a few SKUs of SQL Server that need to go. And we need to figure out how to help you understand clearly when you need to go to Enterprise or Data Center. Most folks are trying to push Standard edition to do things it isn’t designed to do, and then they think SQL Server won’t scale. I think we can do a better job of showing you where Standard Edition will hit the wall, and I think with fewer choices it would be pretty simple for you to pick the right one.

    Well, once again I’ve probably puzzled some folks and angered others. I think my work here is done. :) Back to you, Paul.

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or for educating ourselves on broader topics so that we can solve problems in the future. Now, with dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb it all? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I – haven’t put in the effort to learn what resources we have.

    That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps by subscribing to its Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact that at Microsoft we listen – there is a real, live person who reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think.

    In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them to help. I’ll try to bring up things like this from time to time as I find them useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to filter whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them.

    Data Driven Design Concepts
    http://msdn.microsoft.com/en-us/library/windowsazure/jj156154

    I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data-end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address: [email protected]

    End-to-End Example of a complete Hybrid Application – with Live Demo
    https://azurestocktrader.cloudapp.net/Default.aspx

    I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work. If you’ve ever wanted to learn how a complex, complete, hybrid application bridges on-premises systems with cloud-based databases, code, functions and more, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else.

    Using a Cloud-Based Service
    https://azureconfigweb.cloudapp.net/Default.aspx

    Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment.

    So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let the creators know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”

    Read the article

  • TechEd 2010 Day Four: Learning how to help others learn

    - by BuckWoody
    I do quite a few presentations, teach at the University of Washington, and also teach other classes. But I'm always learning from others how to help others learn. At events like TechEd I have access to some of the best speakers around, so I try to find out what they do that works. I attended a great session by Allen White, in which he demonstrated a set of PowerShell scripts. He said that Dan Jones of the Microsoft Manageability team told him that while he demonstrated a script, he needed to provide some visual way to represent the process. Allen used one of the oldest visualizations around - a flowchart. It was the first time I'd seen one used to illustrate a PowerShell script, and it was very effective. I'm totally stealing the idea.

    All of us are teachers - we help others on our team understand what we're up to. Make sure you take notes on what you find effective when others teach you, and then meld that into your own way of teaching.

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections. The first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider:

    Server=tcp:[serverName].database.windows.net;Database=myDataBase;User ID=LoginName;Password=myPassword;Trusted_Connection=False;Encrypt=True;

    This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change:

    Server=tcp:[serverName].database.windows.net;Database=myDataBase;User ID=[LoginName]@[serverName];Password=myPassword;Trusted_Connection=False;Encrypt=True;

    Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change, so I don’t list them here) the server name parameter isn’t sent in a way the load balancer understands, so you need to include the server name right in the login so the system can parse it correctly. Keep in mind, the string limit for that field is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx

    Upshot? Include the @servername in your connection string just to be safe. And plan for that extra space…

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of distributed computing, but I've had a few friends who were surprised by a bill lately, and I wanted to quickly address at least one aspect of it.

    Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel.

    When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, and that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just as it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility.

    Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier?

    As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them - and that costs money. As for being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on.

    For now, I wanted to make sure you're aware of what you should do. By the way, you don't have to cancel your whole account to avoid being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you’ll know that the instance is at that version by using standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

    Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

    Index  Name                 Internal_Value  Character_Value
    1      ProductName          NULL            Microsoft SQL Server
    2      ProductVersion       655410          10.50.1600.1
    3      Language             1033            English (United States)
    4      Platform             NULL            NT INTEL X86
    5      Comments             NULL            SQL
    6      CompanyName          NULL            Microsoft Corporation
    7      FileDescription      NULL            SQL Server Windows NT
    8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540)
    9      InternalName         NULL            SQLSERVR
    10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
    11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
    12     OriginalFilename     NULL            SQLSERVR.EXE
    13     PrivateBuild         NULL            NULL
    14     SpecialBuild         104857601       NULL
    15     WindowsVersion       393347078       6.0 (6002)
    16     ProcessorCount       1               1
    17     ProcessorActiveMask  1               1
    18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
    19     PhysicalMemory       2047            2047 (2146934784)
    20     Product ID           NULL            NULL

    But database properties are separate from the Instance. After an upgrade, you always want to make sure that the compatibility options (which have much to do with how NULLs and other objects are treated) are at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was “10.0” or “10”. You can do this with the ALTER DATABASE command, or you can just right-click the database and select “Properties” and then “Database Options” in SQL Server Management Studio. To check the database compatibility level, I use this query:

    SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning I saw that the databases (all of them) were at 10.0 – not 10.5 like the Instance. That’s expected – we didn’t revise the database format up with the Instance for this particular release. Didn’t want to catch you by surprise on that. While your databases should be at the “proper” level for your situation, you can’t rely on the compatibility level to indicate the Instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
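    As a quick sketch of that ALTER DATABASE route – the database name below is a placeholder, not from the post – raising one database to the SQL Server 2008/2008 R2 level looks like this:

        -- Check the compatibility level of every database on the instance
        SELECT name, cmptlevel FROM sys.sysdatabases;

        -- Raise one database to the 2008/2008 R2 level (100).
        -- 'MyDatabase' is a placeholder; use your own database name.
        ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100;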

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    With almost every cloud vendor I've studied, to ensure your application is protected by any level of HA you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason), access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. That means you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are in the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and… well, you know all about that.

    Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked, “Where do I start?” Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things, or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance-tuning gig on yourself!

    If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say “If a man tells you that his car squirts milk in his eye when you lift the hood, don’t bet against that. You’ll end up with milk in your eye.” My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work.

    I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant “bet against the impossible” the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn’t exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said “well, must be something else.” She just couldn’t admit she was wrong.

    So don’t go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend who recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number for the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn’t change the connection string information, and that’s provided by the target system, so this is impossible. But I won’t tell him that. Not until I look a little more. :)

    Read the article

  • What are the most effective learning methods?

    - by BuckWoody
    After I got done speaking at the SQL Server 2008 R2 Launch Event yesterday, I came back to the hotel room for a web-meeting with some of the other teachers at the University of Washington. As teachers we are always looking to improve the knowledge transfer to our students – and the Program Director found an interesting study that I thought I might share here.

    Below is an un-labeled chart showing the effectiveness of learning methods according to a recent study. At the top are the labels. (“Teaching” here means students teaching each other.) Try the experiment we did: place the labels where you think they’ll go. I’ll post the completed chart tomorrow.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing the security, strings, and exploration features up all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for you or your users to access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets.

    Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options.

    You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938

    Read the article

< Previous Page | 2 3 4 5 6 7  | Next Page >