Search Results

Search found 1116 results on 45 pages for 'bill the ape'.


  • Enter comments on queries in TraceTune

    - by Bill Graziano
    I’m trying to make TraceTune (and eventually ClearTrace) work the way I do.  My typical query tuning session goes like this: (1) run a trace and upload it to TraceTune/ClearTrace, (2) tune the slowest queries, (3) go back to step 1.  I might do this two or three times in one day and then not come back to it again for weeks or even months.  This is especially true for those clients that I only visit a few times per month.  In many cases I’ll look at a query, decide I can’t do much with it and move on.  I needed a way to capture that information. TraceTune now lets you enter a comment for a query.  It can be as simple or as complex as you like.  The comment will be shown inline with the execution history of that query. This should let you walk back through your history with a query and decide whether you should spend more time tuning it.

    Read the article

  • PowerShell Active Directory Account Attribute to a variable

    - by Bill Garrett
    Sorry for the newbie question. I am using PowerShell 3 to get a list of all user accounts. I am trying to generate output that shows each account as either "Enabled" or "Disabled". I am able to get the account status code from Active Directory using: $rc = $Rech.PropertiesToLoad.Add("userAccountControl"); That will display the correct account status code. When I try to use an if statement on the value, I don't get any result. How do I put this value into a variable so I can use some logic with it? In the end, my requirement is to have the output go to a CSV file that I can send to HR so they can examine it, and instead of a code I would like it to say "Enabled" or "Disabled". Thank you.
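    A minimal sketch of one way to do this, assuming $Rech is the System.DirectoryServices.DirectorySearcher from the question; the property handling and the accounts.csv file name are illustrative, and the only fact relied on is that bit 0x2 (ACCOUNTDISABLE) of userAccountControl marks a disabled account:

    # Load the attributes we need (assumes $Rech is an existing DirectorySearcher)
    $Rech.PropertiesToLoad.Add("userAccountControl") | Out-Null
    $Rech.PropertiesToLoad.Add("sAMAccountName")     | Out-Null

    $Rech.FindAll() | ForEach-Object {
        # Property values come back as collections; take the first element
        $uac = [int]$_.Properties["useraccountcontrol"][0]
        [pscustomobject]@{
            Account = $_.Properties["samaccountname"][0]
            # Bit 0x2 (ACCOUNTDISABLE) set means the account is disabled
            Status  = $(if ($uac -band 2) { "Disabled" } else { "Enabled" })
        }
    } | Export-Csv -Path "accounts.csv" -NoTypeInformation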

    Read the article

  • ClearTrace Shows Execution History

    - by Bill Graziano
    The latest release of ClearTrace (Build 38) now shows the execution history of a particular statement. You’ll need to save the trace files to a trace group instead of just using the default.  That’s as easy as typing something into the trace group name when you upload the trace.  I usually put the server name in this field. Build 38 also re-enables support for statement level events.  If your trace includes RPC:StmtCompleted or SQL:StmtCompleted events those will be processed and saved.  In the results tab you can choose to view statement level or batch level events.  Please note that saving statement level events in a trace can generate HUGE trace files very quickly.

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation and identify configuration changes. I’ve released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.  For disaster planning, ScriptSqlConfig generates scripts for logins, jobs and linked servers.  It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don’t include the password if you use a SQL Server account to connect to the linked server. You’ll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership.  The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster. Or you could DIFF them with the configuration on the new server. For configuration changes, these scripts and files are all designed to be checked into a version control system.  The scripts themselves don’t include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory.  The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up.  To see any changes I just need to query the version control system to show me any changes to the files. As for database scripting, utilities that script database objects are plentiful.  CodePlex has at least a dozen of them including one I wrote years ago. The code is so easy to write it’s hard not to include that functionality. This functionality wasn’t high on my list because it’s included in a database backup.  Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven’t gotten around to it yet. If there’s something you need, please log an issue and get it added. Since it scripts one object per file these really aren’t appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control but a little extra insurance never hurts. In conclusion, I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren’t contained in user databases. This should help with configuration changes and especially disaster recovery.
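    To make the login scripting concrete, here is a hypothetical example of the kind of script such a utility can generate for a SQL Server login; the login name, password hash and SID below are placeholder values, not output from the actual tool:

    -- Hypothetical generated script: recreate a SQL Server login with its
    -- original SID and hashed password so database users match up automatically
    IF NOT EXISTS (SELECT * FROM sys.server_principals WHERE [name] = N'AppUser')
        CREATE LOGIN [AppUser]
            WITH PASSWORD = 0x0200A1B2C3D4E5F60718293A4B5C6D7E8F90 HASHED,
            SID = 0x8B2F0E6A46A1493CA2D101F27E3C9A11,
            DEFAULT_DATABASE = [master];

    -- Server role membership scripted alongside the login
    EXEC sp_addsrvrolemember @loginame = N'AppUser', @rolename = N'dbcreator';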

    Read the article

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, it should just return a 404. If the user requests an unexpired item, the CDN should just serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?

    Read the article

  • BizTalk – Mapping repeating EDI segments using a Table Looping functoid

    - by Bill Osuch
    BizTalk’s HIPAA X12 schemas have several repeating date/time segments in them, where the XML winds up looking something like this: <DTM_StatementDate> <DTM01_DateTimeQualifier>232</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120301</DTM02_ClaimDate> </DTM_StatementDate> <DTM_StatementDate> <DTM01_DateTimeQualifier>233</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120302</DTM02_ClaimDate> </DTM_StatementDate> The corresponding EDI segments would look like this: DTM*232*20120301~ DTM*233*20120302~ The DateTimeQualifier element indicates whether it’s the start date or end date – 232 for start, 233 for end. So in this example (an X12 835) we’re saying the statement starts on 3/1/2012 and ends on 3/2/2012. When you’re mapping from some other data format, many times your start and end dates will be within the same node, like this: <StatementDates> <Begin>20120301</Begin> <End>20120302</End> </StatementDates> So how do you map from that and create two repeating segments in your destination map? You could connect both the <Begin> and <End> nodes to a looping functoid, and connect its output to <DTM_StatementDate>, then connect both <Begin> and <End> to <DTM_StatementDate> … this would give you two repeating segments, each with the correct date, but how to add the correct qualifier? The answer is the Table Looping Functoid! To test this, let’s create a simplified schema that just contains the date fields we’re mapping. First, create your input schema: And your output schema: Now create a map that uses these two schemas, and drag a Table Looping functoid onto it. The first input parameter configures the scope (or how many times the records will loop), so drag a link from the StatementDates node over to the functoid. Yes, StatementDates only appears once, so this would make it seem like it would only loop once, but you’ll see in just a minute. The second parameter in the functoid is the number of columns in the output table. We want to fill two fields, so just set this to 2. Now drag the Begin and End nodes over to the functoid. Finally, we want to add the constant values for DateTimeQualifier, so add a value of 232 and another of 233. When all your inputs are configured, it should look like this: Now we’ll configure the output table. Click on the Table Looping Grid, and configure it to look like this: Microsoft’s description of this functoid says “The Table Looping functoid repeats with the looping record it is connected to. Within each iteration, it loops once per row in the table looping grid, producing multiple output loops.” So here we will loop (# of <StatementDates> nodes) * (Rows in the table), or 2 times. Drag two Table Extractor functoids onto the map; these are what are going to pull the data we want out of the table. The first input to each of these will be the output of the TableLooping functoid, and the second input will be the row number to pull from. So the functoid connected to <DTM01_DateTimeQualifier> will look like this: Connect these two functoids to the two nodes we want to populate, and connect another output from the Table Looping functoid to the <DTM_StatementDate> record. You should have a map that looks something like this: Create some sample xml, use it as the TestMap Input Instance, and you should get a result like the XML at the top of this post. Technorati Tags: BizTalk, EDI, Mapping

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation John Courtright, Structural Integrity Engineering There is “heightened” FAA attention to technical issues related to IFE and Wi-Fi Systems Installations The Aging Aircraft Safety Rule – EWIS & Damage Tolerance Analysis The Challenge: Maximize Flight Safety While Minimizing Costs Issue Papers & Testing, Testing, Testing The role of Airworthiness Directives (ADs) on the design of many IFE systems and all antenna systems. Goal is safety AND cost-effective maintenance intervals and inspection techniques The STC Process Briefly Stated Type Certifications (TC) Supplemental Type Certifications (STC) The STC Process Project Specific Certification Plan (PSCP) Managed by FAA Aircraft Certification Office (ACO) Type of Project (Electrical/Mechanical Systems or Structural) Specific Type of Aircraft Being Modified Schedule Design & Installation Location What does the STC Plan (PSCP) Cover? System Description – What does the system do? System qualification – Are the components qualified? Certification requirements – What FARs are applicable? Installation detail – what is being modified? Prototype installation – What is new? Functional hazard Assessment (FHA) – is it safe? EZAP-EWIS Requirements – Any aging aircraft issues? Certification Data – How is compliance achieved? Delegation and FAA involvement – Who is doing the work? Proposed certification schedule – When is the installation? Certification documentation – What the FAA Expects to see Cabin Systems Certification Concerns In addition to meeting the requirements for DO-160, Cabin System Certification needs to address issues related to: Power management: Generally, IFE and Wi-Fi Systems are classified as “Non-Essential Equipment” from a certification viewpoint. Connected to “non-essential” power buses Must be able to shed IFE & Wi-Fi Systems in a smoke/fire event or Other electrical emergency (FAA Policy 00-111-160) FAA is more relaxed with testing wi-fi. It used to be that you had to have 150 seats with laptops running wi-fi, but now it is down to around 50. Aging aircraft concerns – electrical and structural Issue papers addressing technical concerns involving: “Structural Certification Criteria for Large Antenna Installations” Antenna “Vibration/Buffeting Compliance Criteria” DO-160 : Environmental Test Procedures DO 160 – “Environmental Conditions and Test Procedures for Airborne Equipment”, Issued by RTCA Provides guidance to equipment manufacturers as to testing requirements Temperature: –40C to +55C Vibration and Shock Contaminant susceptibility – fluids and dust Electro-magnetic Interference Cabin systems are generally classified as “non-essential” Swissair 111 crashed (in part) due to non-standard wiring practices. EWIS Design Implications Installation design must take EWIS Requirements into account. This generally means: Aircraft surveys are needed to identify proper wire routing Ensure existing wiring diagrams are correct Identify primary/Secondary/Tertiary bus locations Verify proper separation of wire bundles exists Required separation from fuel quantity indicator system (FQIS) to prevent fuel tank ignition Enhanced Zonal Analysis Procedure (EZAP) Performed EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) EZAP is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.
Certification Considerations for Wi-Fi Systems Electrical – All existing DO 160 testing required Issue papers required Onboard EMI testing – any interference with aircraft systems when multiple wi-fi users are logged on? Vibration/Buffeting compliance criteria – what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria – what are the stress loads on the aircraft at the antenna location and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis required Goal – minimize maintenance inspection intervals

    Read the article

  • PASS Budget Posted

    - by Bill Graziano
    If you’re a member of PASS you can view our FY2011 budget at http://www.sqlpass.org/AboutPASS/Governance.aspx.  Our detailed budget is 29 pages long and provides an incredibly detailed snapshot of where our money comes from and how we spend it.  I’ve also written a summary highlighting some of the changes from last year.  If you have any questions about the budget you can ask them here or on the PASS site.

    Read the article

  • What should I wear to a job interview with a game development company?

    - by Bill
    Many game development companies are less formal in terms of workplace attire than other types of software development houses. For example, I know that one place at which I will be interviewing soon has a predominant workplace culture of jeans and polos or t-shirts. Should I wear a suit? Shirt and tie? Shirt and sport jacket, with or without tie? I want to show that I'm serious about the job, but that I understand the culture, too.

    Read the article

  • One of my user accounts logs in without desktop environment

    - by Bill Cheatham
    When I log in to my main user account on Ubuntu 11.10, the desktop environment (unity bar, clock, volume control, etc.) is not there. All I have is the desktop background with a menu bar across the top which appears to be for Nautilus (options like File > New Folder). My other accounts log in like normal. I have recently followed these instructions to give my main user account access to an OSX partition, but I think I have logged in successfully since then. I am able to get a terminal by pressing ctrl+alt+t, but when I typed unity the whole thing crashed. Is there anything I can do to fix this? I have a separate administrator account I can use if needed.

    Read the article

  • Samba network sharing NTFS drives and root permissions from local drives

    - by Bill
    I'm able to share my internal secondary NTFS drives (sdb1, 2 and 3) on the network with Windows computers now, but even though Samba read/write is enabled, Windows network computers can only open files "read-only" and can't save files to the Samba-shared drives/folders. I try to set permissions in Ubuntu via folder and/or file properties, even logged in as root via Nautilus, but all the Samba-shared folders and files are set as owner = root, accessible, and it does not allow me to change them to read/write; it just resets to root, accessible. In other words, I can't change permissions. I'm running Ubuntu 11.04 Gnome on an old Dell Dimension 2400. Also, in order for me to copy or move any files from the Ubuntu drive to the sdb1, 2 or 3 drives, I have to gksu nautilus. This consequently prevents me from copying .ISO files to my "Multisys" thumb drive too.

    Read the article

  • How do I fix dragging windows to the adjacent workspace?

    - by Bill O'Dwyer
    I installed CompizConfig-Settings-Manager and I put on all the settings I liked and had in 11.10, including the ability to drag my windows to the adjacent workspace. It's under the Desktop Wall section, on the Edge Flipping tab and I've checked "Edge Flip Move" and "Edge Flip DnD." In 11.10, the movement was smooth between each workspace, and the window would still be "grabbed" in the same place. In 12.04, it's leaving the window behind and the mouse appears to be "grabbing" nothing, but I'm still holding onto the window, and I can still move and place it within the workspace (or indeed the previous workspace as it won't appear in the desired place until I drag the mouse all the way to the edge of the screen). Any way to fix this? I'm running 12.04 beta 2.

    Read the article

  • Help make the next Summit even better

    - by Bill Graziano
    After the Summit we send out a survey to capture feedback.  We ask a consistent set of questions so we get good year over year results.  I’ve watched blog posts and email threads with ideas for a better Summit.  I got to sit with Denny and crew again on Saturday night and talk about what worked and what didn’t.  We’d like to capture those ideas in a way that you can vote on what’s important to you.  Please take a second and visit http://feedback.sqlpass.org/.  You can make suggestions, vote on the ideas already posted and add your own comments.  Help PASS make next year’s Summit “The Best Summit Ever!”

    Read the article

  • Tracking Redirects Leading to your site

    - by Bill
    Is there a way in which I can find out if a user arrived at my site via a redirect? Here's an example: There are two sites, first.com & second.com. Any request to first.com will do a 302 redirect to second.com. When the request at second.com arrives, is there any way to know it was redirected from first.com? Note that in this example you have no control over first.com. (In fact, it could be something bad, like kiddieporn.com.) Also note that, because it is a redirect, it will not show up in the HTTP Referer header.

    Read the article

  • Join me to register for the Summit

    - by Bill Graziano
    This year the Summit registration opens at 6PM on Sunday at the Seattle convention center.  Last year we had a dozen people hanging out, watching the twitter feed on the big monitor and catching up.  All we really needed was a bar and we’d have our own little party going. So this year I’m adding a bar.  I’ve arranged for a cash bar and some stand up tables.  I’m buying the first round for the first 40 or so people that come by.  Come by, register and say Hi.  I’d especially like to encourage first-time attendees to stop by.  This is a low key way to meet some people that will be at the conference.

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations.  I come across all kinds of strange settings.  I’ve been thinking about a way to aggregate people’s configurations and see what’s common and what’s unique.  I used to do that with polls on SQLTeam.com.  I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I’ve been working on an application for another project that is similar.  It will be fairly easy to use that code for this.  I can have something up and running in a few days – if people are interested in it.  I admit that I often come up with ideas that just don’t make sense.  This may be one of them.  One of your biggest concerns has to be how secure your data is.  My solution is not to store anything identifying.  The instance name and database names can both be “anonymized” and I don’t store the machine name or IP address or anything to do with logins. Some of the questions I’m curious about are: At what size database does the Enterprise Edition become prevalent? Given the total size of the databases how much RAM is common? How many people have multiple data files?  At what size does that become prevalent? How common is database mirroring?  Replication?  Log shipping? How common is full recovery mode?  At what data size does it become prevalent? I think those are all questions that are easy to answer -- with the right data.  The big question is whether or not people will share their SQL Server configurations.  I understand that organizations in regulated or high security environments can’t participate.  But I think that leaves many, many people that can.  Are you willing to share your configuration and learn about others?  I have a simple sign up form here.  It’s actually a mailing list signup that also captures your edition, number of servers and largest database.  The list will only be used for this project.  Is your SQL Server configured correctly?  Do you wonder what the next step is as your data grows?  Take a second and sign up.
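    To make the survey idea concrete, here is a rough sketch of the kind of information it would collect; the queries and column choices below are my own illustration, not part of any existing PASS or SQLTeam tool:

    -- Edition and version
    SELECT SERVERPROPERTY('Edition') AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion;

    -- Recovery model, data file count and total (data + log) size per database
    -- (sys.master_files reports size in 8 KB pages)
    SELECT d.name AS DatabaseName,
           d.recovery_model_desc AS RecoveryModel,
           SUM(CASE WHEN mf.type_desc = 'ROWS' THEN 1 ELSE 0 END) AS DataFiles,
           SUM(mf.size) * 8 / 1024 AS TotalSizeMB
    FROM sys.databases d
    JOIN sys.master_files mf ON mf.database_id = d.database_id
    GROUP BY d.name, d.recovery_model_desc;

    -- Any databases participating in mirroring
    SELECT DB_NAME(database_id) AS DatabaseName, mirroring_state_desc
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL;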

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all the details like this: This single statement consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements.  That graph looks like this: On average, the highest ranked statement consumed just under 50% of the reads on the system.  When I tune a system, I’m usually starting in one of two modes: this “piece” is slow or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune it.  You can make that individual piece faster but you may not affect the whole system. When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries. Here are some of the things I’ve learned querying this data: The highest ranked query averages just under 50% of the total reads on the system. The top 3 ranked queries average 73% of the total reads on the system. The top 10 ranked queries average 91% of the total reads on the system. Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems so your mileage may vary. I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  The graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity. The take away from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.
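    TraceTune does this ranking from uploaded trace files, but the same idea can be sketched against the plan cache; this is an approximation I am adding for illustration, not how TraceTune itself works:

    -- Rank cached statements by total logical reads and show each one's
    -- share of the total (an approximation of the trace-based ranking above)
    SELECT TOP (10)
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2) + 1) AS statement_text,
           qs.execution_count,
           qs.total_logical_reads,
           CAST(100.0 * qs.total_logical_reads
                / SUM(qs.total_logical_reads) OVER () AS DECIMAL(5, 2)) AS pct_of_total_reads
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_logical_reads DESC;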

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
    The PASS Board recently decided that we wouldn’t put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member. The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We’ve completed the two events and now it’s time to step back and review the program. This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been “Because that’s the way we did it last year”. And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we’re doing makes sense for the organization and our members. I have no doubt that this was a great event for the attendees. We just need to validate it’s the best use of our resources. Please keep in mind that we haven’t cancelled the event. We’ve just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I’ve come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas and visiting other similar events. I should point out that their views aren’t unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing. I think it’s helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people which means they are nearly the same size as our recent SQLRally. Our goals included: Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that’s still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQLSaturday doesn’t? Speaker growth. One of the stated goals was to build a “farm club” for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren’t providing? Pre-Conference speakers are one obvious answer here. Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point. International Event Model.  (I need to write more about this but I’m out of time.  I’ll cover it in the next installment.) There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. 
That’s more difficult to do at a SQL Saturday. We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that’s reasonable but on an absolute basis it’s not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don’t have detailed financials back yet but that’s our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won’t have the full picture around the financial impact. This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don’t underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs. Last I wanted to talk about some of the things I’m thinking about. I don’t think it’s as simple as saying if we just fix “X” it all gets better. Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn’t as viable. With some of those events over 500 we need to reconsider this. We need to get back to a hotel venue. That will help with cost and networking. Is this the best use of the 1,000 HQ hours that we invested in the event? Is our price-point correct? I’m leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure. Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS or is it something that chapters, etc. drive people to? Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. This gives you the two day aspect as well as offering options for paid and free attendees. I’m very intrigued by this. Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally be focused on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday? These are all questions that I think should be asked and answered before we do this event again. And we can’t do that if we don’t take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I’d like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • PASS: Budget Status

    - by Bill Graziano
    Our budget situation is a little different this year than in years past.  We were late getting an initial budget approved.  There are a number of different reasons this occurred.  We had different competing priorities and the budget got pushed down the list.  And that’s completely my fault for not making the budget a higher priority and getting it completed on time. That left us with initial budget approval in early August rather than prior to June 30th.  Even after that there were a number of small adjustments that needed to be made.  And one large glaring mistake that needed to be fixed.  We had a typo in the budget that made it through twelve versions of review.  In my defense I can only say that the cell was red so of course it had to be negative!  And that’s one more mistake I can add to my long and growing list of Mistakes I’ll Never Make Again. Last week we passed a revised budget (version 17) with this corrected.  This is the version we’re cleaning up and posting to the web site this week or next.

    Read the article

  • Update to SQL Server Configuration Scripting Utility

    - by Bill Graziano
    Last spring I released a utility to script SQL Server configuration information on CodePlex.  I’ve been making small changes in this application as my needs have changed.  The application is a .NET 2.0 console application.  This utility serves two needs for me.  First it helps with disaster recovery.  All server level objects (logins, jobs, linked servers, audits) are scripted to a single file per object type.  This enables the scripts to be easily run against a DR server.  If these are checked into source control you can view the history of the script and find out what changed and when. The second goal is to capture what changed inside a database.  Objects inside a database (tables, stored procedures, views, etc.) are each scripted to their own file.  This makes it easier to track the changes to an object over time.  This does include permissions and role membership so you can capture security changes.  My assumption is that a database backup is the primary method of disaster recovery for databases so this utility is designed to capture changes to objects.  You can find the full list of changes from the original on the Downloads page on CodePlex.

    Read the article

  • Digital HD Transition

    - by Bill Evjen
    The HD Experience: Roughly 53% of the viewing public has HD-capable devices in their home; 24% think they are watching HD while they have no subscription to any HD content. Today’s HD Considerations: Choices abound in format, resolution (720p, 1080i/p), frame rates, compression and wrapping, audio compression and delivery, metadata packaging, delivery and usage, and content delivery protocols. Metadata is going to be a part of the overall experience, with emerging technologies: Super Hi-Vision (SHV, UHDTV 4320p), 3D, HEVC/H.265, WEBM/VP8, HDBaseT, P2PTV, Dolby Pulse/HE-AAC. Industry standardization: metadata registration, packaging, and delivery standards. Improved picture and sound quality is a logical next step, but we need to also think about the end-to-end viewing experience, including: 3D video and audio content; mixed-mode viewing to bring interactive and immersive experiences; content transportability both on-to and off-of the aircraft. High Definition Standardization: Analog switch-off around the world – DTV transition completed: 17 countries; DTV transition in progress: 45 countries. The EU has mandated the end of 2012 as the final date for analog switch-off. D-Cinema was standardized by SMPTE in 2006. Airlines are installing HD displays today and passengers are bringing their own devices now. HD TVs on airlines are getting bigger and bigger – bigger than SD was – now up to 23”. Gray scale data input for color – 6 to 8 bit; contrast – 400 to 700; backlit – LED. Encryption – can it be the same for HD? PPV in the cabin?

    Read the article

  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend.  I increased the trace file upload size to 20MB.  We’ve processed over half a million rows of trace data so far and I’m confident this won’t kill the server. I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file. I only added these two.  I’m pretty sure average writes isn’t that important.  I’m still thinking about average duration.  I’m trying to balance showing you what you need with a clean, simple interface.  Plus I have a way to see the averages that I describe further down. TraceTune now keeps the last 10 files that you’ve uploaded and will give you some basic details about each file. I think the last change I made is the most interesting. For each SQL statement, I show the history of that statement. You’ll see each trace file where this statement was found.  It will list the averages for CPU, reads, writes and duration.  This will quickly show you if you’re improving the performance of that query.  In my screen shot above you can see that even though the execution counts are very different the averages are consistent. If you want to see what queries are consuming the most resources on your server give TraceTune a try.

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night.  These are mainly lookup tables.  I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements.  And being lazy I didn’t want to write them myself.  The stored procedure below will generate a MERGE statement for the table you pass it.  If you have identity values it populates those properly.  You need to have primary keys on the table for the joins to be generated properly.  The only thing hard coded is the source database.  You’ll need to update that for your environment.  We actually used a linked server in our situation.

    CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128))
    AS
    set nocount on;
    declare @return int;
    PRINT '-- ' + @table + ' -------------------------------------------------------------'
    --PRINT 'SET NOCOUNT ON;--'

    -- Set the identity insert on for tables with identities
    select @return = objectproperty(object_id(@table), 'TableHasIdentity')
    if @return = 1
        PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; '

    declare @sql varchar(max) = ''
    declare @list varchar(max) = '';

    SELECT @list = @list + [name] + ', '
    from sys.columns
    where object_id = object_id(@table)

    SELECT @list = @list + [name] + ', '
    from sys.columns
    where object_id = object_id(@table)

    SELECT @list = @list + 's.' + [name] + ', '
    from sys.columns
    where object_id = object_id(@table)

    -- --------------------------------------------------------------------------------
    PRINT 'MERGE [dbo].[' + @table + '] AS t'
    PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'

    -- Get the join columns ----------------------------------------------------------
    SET @list = ''
    select @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND '
    from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
    where pk.TABLE_NAME = @table
    and CONSTRAINT_TYPE = 'PRIMARY KEY'
    and c.TABLE_NAME = pk.TABLE_NAME
    and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

    SELECT @list = LEFT(@list, LEN(@list) - 3)
    PRINT 'ON ( ' + @list + ')'

    -- WHEN MATCHED ------------------------------------------------------------------
    PRINT 'WHEN MATCHED THEN UPDATE SET'
    SELECT @list = '';
    SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] + '],'
    from sys.columns
    where object_id = object_id(@table)
    -- don't update primary keys
    and [name] NOT IN (SELECT [column_name]
        from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        where pk.TABLE_NAME = @table
        and CONSTRAINT_TYPE = 'PRIMARY KEY'
        and c.TABLE_NAME = pk.TABLE_NAME
        and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)
    -- and don't update identity columns
    and columnproperty(object_id(@table), [name], 'IsIdentity ') = 0
    --print @list
    PRINT left(@list, len(@list) - 3)

    -- WHEN NOT MATCHED BY TARGET ------------------------------------------------
    PRINT ' WHEN NOT MATCHED BY TARGET THEN';
    -- Get the insert list
    SET @list = ''
    SELECT @list = @list + '[' + [name] + '], '
    from sys.columns
    where object_id = object_id(@table)
    SELECT @list = LEFT(@list, LEN(@list) - 1)
    PRINT ' INSERT(' + @list + ')'
    -- get the values list
    SET @list = ''
    SELECT @list = @list + 's.[' + [name] + '], '
    from sys.columns
    where object_id = object_id(@table)
    SELECT @list = LEFT(@list, LEN(@list) - 1)
    PRINT ' VALUES(' + @list + ')'

    -- WHEN NOT MATCHED BY SOURCE
    print 'WHEN NOT MATCHED BY SOURCE THEN DELETE; '
    PRINT ''
    PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';
    PRINT ''

    -- Set the identity insert OFF for tables with identities
    select @return = objectproperty(object_id(@table), 'TableHasIdentity')
    if @return = 1
        PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; '
    PRINT ''
    PRINT 'GO'
    PRINT '';
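    For example, to generate the script for a single lookup table and copy the PRINT output from the Messages tab (the table name here is just a placeholder):

    -- Generate the MERGE script for one table; 'Currency' is only an example name
    EXEC dba_GenerateMergeStatement @table = N'Currency';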

    Read the article

  • Experimenting with other search engines

    - by Bill Graziano
    I’ve been a Google user so long I can hardly remember what I used before it.  Alta Vista maybe?  Or Yahoo.  I’ve tried Bing off and on but it never really stuck.  I probably care more about search engines than your average user because of their impact on SQLTeam.com.  Lately I’ve been trying two other search engines and actually switched to one of them. I’ve played with Blekko a little in the past.  They have some interesting ways to “slice up” your results.  For example, searching on “SQL Server /blogs /date” should just search all the recently updated blogs.  Those two extra words on the search are slashtags.  The full list of slashtags runs from /forums to just see forums to /twitter to /nikon to /reviews and on and on and on.  I laughed when I saw they had slashtags for both liberal and conservative.  I’d hate to find any search results that don’t match my existing worldview :)  You can also create your own slashtags.  I created a mini-search engine for the SQL Server blogs that I read.  You can search it for “backup” at http://blekko.com/ws/backup+/billgraziano/sql-sites.  I uploaded my OPML and it limited the search to just those sites.  It seems like the site is focusing more on curating results and less on algorithms.  This is an interesting site for those power searchers.  There are some great ways to curate results using slashtags.  For 99% of my searches (type words, click on one of the first few links) slashtags are overkill.  They do have some good information on page and site ranking though so I’ll probably spend some time looking through that. Blekko recently got my attention again when they said they were banning “content farms” - and that includes eHow and experts-exchange.  I always feel used when I click on a link to EE and find myself scrolling all the way to the bottom to see if I can find the answer.  Sometimes it’s there but sometimes it tells me I need to pay first.  I’ve longed for a way to always exclude certain sites.  Blekko might be taking a hammer to a problem that needs a scalpel but it’s an interesting choice.  (And some of the comments in the TechCrunch link are interesting if you’re a search nerd.) DuckDuckGo is an odd name for a search engine.  Their big hook is that they don’t have search history.  If you wade through your Google account you can probably find the page where it stores your search history.  It was pretty enlightening to find mine.  It was easy to disable but that got me started looking at other search engines.  DDG (or DukGo) just feels like Google used to in the old days.  The results are good enough and the site is fast. Searches will return a snippet from Wikipedia or another site (like StackOverflow) at the top.  I think the idea is to answer the question without needing to visit the site.  I’m not sure that’s a good thing for SQLTeam.com. The only thing I really miss is image search.  You can add a “!i” at the end of any search and it will search the images on Bing.  Bing doesn’t have a great image search but it works for most of what I need.  They call these exclamation marks “!bangs” and they are kinda, sorta like slashtags.  I’ve been using DuckDuckGo now for a few weeks and I’m pretty happy with it.  I use Chrome for my browser and it was an easy switch to make.  It’s still a little surprising seeing my search results come up in a different format.  I’m starting to get used to it though.

    Read the article
