Search Results

Search found 5162 results on 207 pages for 'bill white'.


  • How can I selectively update XNA GameComponents?

    - by Bill
    I have a small 2D game I'm working on in XNA. So far, I have a player-controlled ship that operates on vector thrust and is terribly fun to spin around in circles. I've implemented this as a DrawableGameComponent and registered it with the game using game.Components.Add(this) in the Ship object constructor. How can I implement features like pausing and a menu system with my current implementation? Is it possible to set certain GameComponents to not update? Is this something for which I should even be using a DrawableGameComponent? If not, what are more appropriate uses for this?
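    One option worth noting (a minimal sketch, not the only approach): GameComponent exposes an Enabled property that the framework checks before calling Update, and DrawableGameComponent adds a Visible property checked before Draw, so a pause or menu state can simply toggle those flags on the gameplay components while leaving the menu's own component running. The class and member names below (PauseExample, RegisterGameplay, gameplayComponents) are made up for illustration:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        public class PauseExample : Game
        {
            // Components that should freeze while the game is paused.
            private readonly List<GameComponent> gameplayComponents = new List<GameComponent>();
            private bool paused;

            public void RegisterGameplay(GameComponent component)
            {
                gameplayComponents.Add(component); // remember it so we can pause it later
                Components.Add(component);         // normal registration with the framework
            }

            public void TogglePause()
            {
                paused = !paused;
                foreach (GameComponent component in gameplayComponents)
                {
                    component.Enabled = !paused;   // Update() is skipped while Enabled is false

                    DrawableGameComponent drawable = component as DrawableGameComponent;
                    if (drawable != null)
                    {
                        drawable.Visible = true;   // keep drawing the frozen scene behind a menu
                    }
                }
            }
        }

    A menu or pause screen can then be its own DrawableGameComponent that stays enabled, so DrawableGameComponent is still a reasonable fit for the ship; the main alternative is a hand-rolled screen/state manager that calls Update and Draw itself.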

    Read the article

  • Visiting China

    - by Bill Graziano
    This summer I had the chance to visit China.  My brother and his wife are living in China and teaching English.  I spent a little over two weeks in Shanghai, Suzhou and Yancheng.  During that time I wrote some detailed updates for family and a few close friends on the impressions of a good Midwestern kid visiting the Middle Kingdom. I dumped them all into one document, did a little editing and now they’re posted.  You can download it here.  Below you can see my futile attempts to eat using chopsticks and me posing as a tourist on Nanjing Road in Shanghai.  The only thing I can say about chopsticks is that I didn’t starve.

    Read the article

  • Default User Id at Login different from User Name in Terminal Shell

    - by Bill
    During the Ubuntu 12.04 LTS installation, I was prompted to enter a user name and password, so that a corresponding account could be created and set up for login. I replaced the one that was provided by default (i.e. '70319', which is the Windows 7 admin id) with a user name/id of my choosing. Now, when I turn on the computer, and choose to enter the Ubuntu operating system, the login id that is displayed is 70319 - that is, the one provided by Windows 7. However, when I open up a Unix/Terminal shell, the user id that is displayed at the prompt is the one I entered during installation. Otherwise, the installation of Ubuntu was a success! Is there some way of changing the user id that is displayed at the Login screen, so that it is consistent with the one I entered during installation? If it's any help, I installed Ubuntu using wubi on an ASUS Eee PC 1011PX running Windows 7, and ASUS Express Gate Cloud. Further details regarding the setup/installation can be found at the following link: Installing Ubuntu on an Eee PC 1011PX

    Read the article

  • PASS: The Legal Stuff

    - by Bill Graziano
    I wanted to give a little background on the legal status of PASS. The Professional Association for SQL Server (PASS) is an American corporation chartered in the state of Illinois. In America a corporation has to be chartered in a particular state. It has to abide by the laws of that state and potentially pay taxes to that state. Our bylaws and actions have to comply with Illinois state law and United States law. We maintain a mailing address in Chicago, Illinois but our headquarters is currently in Vancouver, Canada.

    We have roughly a dozen people that work in our Vancouver headquarters and 4-5 more that work remotely on various projects. These aren't employees of PASS. They are employed by a management company that we hire to run the day to day operations of the organization. I'll have more on this arrangement in a future post.

    PASS is a non-profit corporation. The terms non-profit and not-for-profit are used interchangeably. In a for-profit corporation (or LLC) there are owners that are entitled to the profits of a company. In a non-profit there are no owners. As a non-profit, all the money earned by the organization must be retained or spent. There is no money that flows out to shareholders, owners or the board of directors. Any money not spent in furtherance of our mission is retained as financial reserves.

    Many non-profits apply for tax exempt status. Being tax exempt means that an organization doesn't pay taxes on its profits. There are a variety of laws governing who can be tax exempt in the United States. There are many professional associations that are tax exempt; however, PASS isn't tax exempt. Because our mission revolves around the software of a single company we aren't eligible for tax exempt status.

    PASS was founded in the late 1990s by Microsoft and Platinum Technologies. Platinum was later purchased by Computer Associates. As the founding partners, Microsoft and CA each have two seats on the Board of Directors. The other six directors and three officers are elected as specified in our bylaws.

    As a non-profit, our bylaws lay out our governing practices. They must conform to Illinois and United States law. These bylaws specify that PASS is governed by a Board of Directors elected by the membership, with two members each from Microsoft and CA. You can find our bylaws as well as a proposed update to them on the governance page of the PASS web site.

    The last point that I'd like to make is that PASS is completely self-funded. All of our $4 million in revenue comes from conference registrations, sponsorships and advertising. We don't receive any money from anyone outside those channels. While we work closely with Microsoft we are independent of them and only derive a very small percentage of our revenue from them.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar

    Aspect ratio is very important to home video.

    What is aspect ratio? The ratio of the width to the height.
    - 2.35:1 means the image is 2.35 times as wide as it is high. Pixar uses this for half of our movies. This is called a widescreen image.
    - When modified to fit your television screen, they cut this to fit the box of your screen. When a comparison is made, huge chunks of the picture are missing, and it is harder to tell what is going on when those pieces are missing.
    - The whole is greater than the pieces themselves. If you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie.

    Why different aspect ratios?
    - Film before the 1950s: 1.33:1, the Academy Standard. There were images in all sorts of aspect ratios, though; there was no standard. Thomas Edison developed projecting images onto a wall/screen but didn't patent it, as he saw no value in it.
    - Then 1.37:1 came about to add a strip of sound. This is the same size as a 35mm film frame.
    - Around 1952, TV comes along. NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4x1 ... really wide.
    - 1.85:1 is Academy Flat; 2.35:1 is Anamorphic Scope (aka Panavision/CinemaScope). Almost all movies are made in these two aspect ratios; Pixar has done half in one and half in the other.

    Why choose one over the other? Artist choice. It is part of the story the director wants to tell.

    Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide. Historical options:
    - Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio.
    - Pan and scan: cut and then scan left or right depending on where the action is.
    - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended.
    - Re-staging: for animated movies, you can move characters around and restage the scene. It is a new, completely different version of the film. This is the best possible option that Pixar came up with, but they have mostly stopped doing it as the demand has pretty much dropped off.

    Why not 1.33 today? There has been an evolution of taste and demands.
    - VHS is a linear medium; the focus was portability, not quality. Most of it was pan and scan and the quality was so bad that people didn't notice.
    - DVD was introduced in 1996. You could have more content, two versions of the film: the widescreen version and the 1.33 version. People realized that they were seeing more of the movie with the widescreen.
    - High-def televisions (16x9 monitors) were introduced in 2005, and Blu-ray Disc in 2006. This is all widescreen. You cannot find a square TV anymore; TVs are roughly a 1.85:1 aspect ratio.
    - There is a change in demand: users are used to black bars, are used to widescreen, and are educated now.

    What's next for in-flight entertainment? High-def IFE, personal electronic devices, and 3D in flight.
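    A quick back-of-the-envelope check of the "losing 30%" figure above, assuming a straight center crop of a widescreen frame down to a 4:3 (1.33:1) television (illustrative arithmetic, not from the talk):

        \text{fraction of width kept} = \frac{4/3}{1.85} \approx 0.72 \qquad \text{(about 28\% of an Academy Flat frame lost)}

        \text{fraction of width kept} = \frac{4/3}{2.35} \approx 0.57 \qquad \text{(about 43\% of an anamorphic frame lost)}

    So roughly 30% matches 1.85:1 material; for 2.35:1 material the loss from a simple crop is closer to 40-45%.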

    Read the article

  • Android – Finding your SDK debug certificate MD5 fingerprint using Keytool

    - by Bill Osuch
    I recently upgraded to a new development machine, which means the certificate used to sign my applications during debug changed. Under most circumstances you'll never notice a difference, but if you're developing apps using Google's Maps API you'll find that your old API key no longer works with the new certificate fingerprint.

    Google's instructions walk you through retrieving the MD5 fingerprint of your SDK debug certificate (the certificate that you're probably signing your apps with before publishing), but they don't talk much about the Keytool command. The thing to remember is that Keytool is part of Java, not the Android SDK, so you'll never find it searching through your Android and Eclipse directories. Mine is located in C:\Program Files\Java\jdk1.7.0_02\bin, so you should find yours somewhere similar. From a command prompt, navigate to this directory and type:

    keytool -v -list -keystore "C:/Documents and Settings/<user name>/.android/debug.keystore"

    That's assuming the path to your debug certificate is in the typical location. If this doesn't work, you can find out where it's located in Eclipse by clicking Window –> Preferences –> Android –> Build. There's no need to use the additional commands shown on Google's page. You'll be prompted for a password; just hit enter. The last line shown, Certificate fingerprint, is the key you'll give Google to generate your new Maps API key.

    Technorati Tags: Android Mapping

    Read the article

  • Enter comments on queries in TraceTune

    - by Bill Graziano
    I'm trying to make TraceTune (and eventually ClearTrace) work the way I do. My typical query tuning session goes like this:

    1. Run a trace and upload to TraceTune/ClearTrace
    2. Tune the slowest queries
    3. Goto 1

    I might do this two or three times in one day and then not come back to it again for weeks or even months. This is especially true for those clients that I only visit a few times per month. In many cases I'll look at a query, decide I can't do much with it and move on. I needed a way to capture that information.

    TraceTune now lets you enter a comment for a query. It can be as simple or as complex as you like. The comment will be shown inline with the execution history of that query. This should let you walk back through your history with a query and decide whether you should spend more time tuning it.

    Read the article

  • Powershell Active Directory Account Attribute to a variable

    - by Bill Garrett
    Sorry for the newbie question. I am using PowerShell 3 to get a list of all user accounts, and I am trying to generate output that shows each account as either "Enabled" or "Disabled". I am able to get the account status code from Active Directory using:

    $rc = $Rech.PropertiesToLoad.Add("userAccountControl");

    That will display the correct account status code. When I try to use an if statement on the value, I don't get any result. How do I put this value into a variable so I can use some logic with it? In the end, my requirement is for the output to go to a CSV file that I can send to HR for them to examine, and instead of a code I would like it to say "Enabled" or "Disabled". Thank you.
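    As a sketch of the underlying logic only (not the asker's script): userAccountControl is a bit field, and the "account disabled" flag is the 0x2 bit, so Enabled/Disabled comes from a bitwise test rather than a lookup. Written in C#/.NET terms, using the same System.DirectoryServices types PowerShell sits on top of, with a placeholder LDAP path:

        using System;
        using System.DirectoryServices;

        class AccountStatusSketch
        {
            const int AccountDisabled = 0x2; // ADS_UF_ACCOUNTDISABLE flag in userAccountControl

            static void Main()
            {
                // "LDAP://DC=example,DC=com" is a placeholder; point it at your own domain.
                using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
                using (DirectorySearcher searcher = new DirectorySearcher(root, "(&(objectCategory=person)(objectClass=user))"))
                {
                    searcher.PropertiesToLoad.Add("sAMAccountName");
                    searcher.PropertiesToLoad.Add("userAccountControl");

                    foreach (SearchResult result in searcher.FindAll())
                    {
                        int uac = (int)result.Properties["userAccountControl"][0];
                        string status = (uac & AccountDisabled) != 0 ? "Disabled" : "Enabled";

                        // CSV-style line suitable for the HR export described above.
                        Console.WriteLine("{0},{1}", result.Properties["sAMAccountName"][0], status);
                    }
                }
            }
        }

    In PowerShell the equivalent test is a -band against 2: assign "Disabled" when ($uac -band 2) is non-zero, otherwise "Enabled".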

    Read the article

  • ClearTrace Shows Execution History

    - by Bill Graziano
    The latest release of ClearTrace (Build 38) now shows the execution history of a particular statement. You'll need to save the trace files to a trace group instead of just using the default. That's as easy as typing something into the trace group name when you upload the trace. I usually put the server name in this field.

    Build 38 also re-enables support for statement level events. If your trace includes RPC:StmtCompleted or SQL:StmtCompleted events, those will be processed and saved. In the results tab you can choose to view statement level or batch level events. Please note that saving statement level events in a trace can generate HUGE trace files very quickly.

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility:

    - Assist with disaster recovery preparation
    - Identify configuration changes

    I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

    Disaster Planning

    ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server; you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster, or you could DIFF them with the configuration on the new server.

    Configuration Changes

    These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

    Database Scripting

    Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file these really aren't appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

    Conclusion

    I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
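    For a feel of what the SMO side of this looks like, here is a minimal sketch of the same idea (not the project's actual source; the server name and output paths are placeholders): connect to an instance and write each job and linked server to its own .sql file.

        using System.Collections.Specialized;
        using System.IO;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Smo.Agent;

        class ScriptConfigSketch
        {
            static void Main()
            {
                Server server = new Server("MYSERVER");           // placeholder instance name
                string outputDir = @"C:\SqlConfig\MYSERVER";      // placeholder output directory
                Directory.CreateDirectory(Path.Combine(outputDir, "Jobs"));
                Directory.CreateDirectory(Path.Combine(outputDir, "LinkedServers"));

                foreach (Job job in server.JobServer.Jobs)
                    WriteScript(Path.Combine(outputDir, @"Jobs\" + job.Name + ".sql"), job.Script());

                foreach (LinkedServer linked in server.LinkedServers)
                    WriteScript(Path.Combine(outputDir, @"LinkedServers\" + linked.Name + ".sql"), linked.Script());
            }

            // SMO's Script() returns a StringCollection of T-SQL batches; write them out one per line.
            static void WriteScript(string fileName, StringCollection batches)
            {
                string[] lines = new string[batches.Count];
                batches.CopyTo(lines, 0);
                File.WriteAllLines(fileName, lines);
            }
        }

    The logins file is the one piece the post generates by hand rather than through a plain Script() call, since it needs the hashed passwords and SIDs included.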

    Read the article

  • Looking for a CDN

    - by Bill
    Most of the CDNs that I've seen require you to upload your content in advance. I'm looking for a CDN that, upon receiving a request for a resource it hasn't seen, will contact my application server. If the application server returns something, it should be sent to the user and then cached in the CDN. If not, it should just return a 404. If the user requests an unexpired item, the CDN should just serve it without bothering my app server. Does anything like this exist? Is there a way to get CloudFront to work like this?

    Read the article

  • BizTalk – Mapping repeating EDI segments using a Table Looping functoid

    - by Bill Osuch
    BizTalk's HIPAA X12 schemas have several repeating date/time segments in them, where the XML winds up looking something like this:

    <DTM_StatementDate>
      <DTM01_DateTimeQualifier>232</DTM01_DateTimeQualifier>
      <DTM02_ClaimDate>20120301</DTM02_ClaimDate>
    </DTM_StatementDate>
    <DTM_StatementDate>
      <DTM01_DateTimeQualifier>233</DTM01_DateTimeQualifier>
      <DTM02_ClaimDate>20120302</DTM02_ClaimDate>
    </DTM_StatementDate>

    The corresponding EDI segments would look like this:

    DTM*232*20120301~
    DTM*233*20120302~

    The DateTimeQualifier element indicates whether it's the start date or end date: 232 for start, 233 for end. So in this example (an X12 835) we're saying the statement starts on 3/1/2012 and ends on 3/2/2012. When you're mapping from some other data format, many times your start and end dates will be within the same node, like this:

    <StatementDates>
      <Begin>20120301</Begin>
      <End>20120302</End>
    </StatementDates>

    So how do you map from that and create two repeating segments in your destination map? You could connect both the <Begin> and <End> nodes to a looping functoid, connect its output to <DTM_StatementDate>, and then connect both <Begin> and <End> to <DTM_StatementDate>. This would give you two repeating segments, each with the correct date, but how do you add the correct qualifier? The answer is the Table Looping functoid!

    To test this, let's create a simplified schema that just contains the date fields we're mapping. First, create your input schema, and then your output schema. Now create a map that uses these two schemas, and drag a Table Looping functoid onto it.

    The first input parameter configures the scope (or how many times the records will loop), so drag a link from the StatementDates node over to the functoid. Yes, StatementDates only appears once, so this would make it seem like it would only loop once, but you'll see in just a minute. The second parameter in the functoid is the number of columns in the output table. We want to fill two fields, so just set this to 2. Now drag the Begin and End nodes over to the functoid. Finally, we want to add the constant values for DateTimeQualifier, so add a value of 232 and another of 233. When all your inputs are configured, it should look like this.

    Now we'll configure the output table. Click on the Table Looping Grid and configure it to look like this. Microsoft's description of this functoid says "The Table Looping functoid repeats with the looping record it is connected to. Within each iteration, it loops once per row in the table looping grid, producing multiple output loops." So here we will loop (# of <StatementDates> nodes) * (rows in the table), or 2 times.

    Drag two Table Extractor functoids onto the map; these are what are going to pull the data we want out of the table. The first input to each of these will be the output of the Table Looping functoid, and the second input will be the row number to pull from. So the functoid connected to <DTM01_DateTimeQualifier> will look like this.

    Connect these two functoids to the two nodes we want to populate, and connect another output from the Table Looping functoid to the <DTM_StatementDate> record. You should have a map that looks something like this.

    Create some sample XML, use it as the TestMap Input Instance, and you should get a result like the XML at the top of this post.

    Technorati Tags: BizTalk, EDI, Mapping
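    The screenshots from the original post aren't reproduced here, so as a rough reconstruction inferred from the description above (not the original image), the Table Looping Grid would pair each constant qualifier with the matching date node, one row per output segment:

        Row | Column 1 (feeds DTM01_DateTimeQualifier) | Column 2 (feeds DTM02_ClaimDate)
        ----+------------------------------------------+---------------------------------
         1  | 232 (constant)                           | <Begin>
         2  | 233 (constant)                           | <End>

    With a grid like that, one Table Extractor supplies the qualifiers and the other supplies the dates as the functoid iterates over the two rows.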

    Read the article

  • What should I wear to a job interview with a game development company?

    - by Bill
    Many game development companies are less formal in terms of workplace attire than other types of software development houses. For example, I know that one place at which I will be interviewing soon has a predominant workplace culture of jeans and polos or t-shirts. Should I wear a suit? Shirt and tie? Shirt and sport jacket, with or without tie? I want to show that I'm serious about the job, but that I understand the culture, too.

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation
    John Courtright, Structural Integrity Engineering

    - There is "heightened" FAA attention to technical issues related to IFE and Wi-Fi systems installations.
    - The Aging Aircraft Safety Rule: EWIS and Damage Tolerance Analysis.
    - The challenge: maximize flight safety while minimizing costs. Issue papers and testing, testing, testing.
    - The role of Airworthiness Directives (ADs) on the design of many IFE systems and all antenna systems. The goal is safety AND cost-effective maintenance intervals and inspection techniques.

    The STC Process Briefly Stated
    - Type Certifications (TC) and Supplemental Type Certifications (STC).
    - The STC process uses a Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO), covering the type of project (electrical/mechanical systems or structural), the specific type of aircraft being modified, the schedule, and the design and installation location.

    What does the STC Plan (PSCP) cover?
    - System description: what does the system do?
    - System qualification: are the components qualified?
    - Certification requirements: what FARs are applicable?
    - Installation detail: what is being modified?
    - Prototype installation: what is new?
    - Functional Hazard Assessment (FHA): is it safe?
    - EZAP-EWIS requirements: any aging aircraft issues?
    - Certification data: how is compliance achieved?
    - Delegation and FAA involvement: who is doing the work?
    - Proposed certification schedule: when is the installation?
    - Certification documentation: what the FAA expects to see.

    Cabin Systems Certification Concerns

    In addition to meeting the requirements for DO-160, cabin system certification needs to address issues related to:
    - Power management: generally, IFE and Wi-Fi systems are classified as "non-essential equipment" from a certification viewpoint. They are connected to "non-essential" power buses and must be able to shed IFE and Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160).
    - The FAA is more relaxed with testing Wi-Fi. It used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50.
    - Aging aircraft concerns, electrical and structural.
    - Issue papers addressing technical concerns involving "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria".

    DO-160: Environmental Test Procedures
    - DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", is issued by RTCA and provides guidance to equipment manufacturers as to testing requirements.
    - Temperature: -40C to +55C.
    - Vibration and shock.
    - Contaminant susceptibility: fluids and dust.
    - Electro-magnetic interference.
    - Cabin systems are generally classified as "non-essential".
    - Swissair 111 crashed (in part) due to non-standard wiring practices.

    EWIS Design Implications

    Installation design must take EWIS requirements into account. This generally means:
    - Aircraft surveys are needed to identify proper wire routing.
    - Ensure existing wiring diagrams are correct.
    - Identify primary/secondary/tertiary bus locations.
    - Verify proper separation of wire bundles exists, including the required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition.
    - An Enhanced Zonal Analysis Procedure (EZAP) is performed. EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) and is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.

    Certification Considerations for Wi-Fi Systems
    - Electrical: all existing DO-160 testing is required, and issue papers are required.
    - Onboard EMI testing: any interference with aircraft systems when multiple Wi-Fi users are logged on?
    - Vibration/buffeting compliance criteria: what is the effect of the antenna on aircraft flight characteristics?
    - Structural certification criteria: what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis is required.
    - Goal: minimize maintenance inspection intervals.

    Read the article

  • PASS Budget Posted

    - by Bill Graziano
    If you’re a member of PASS you can view our FY2011 budget at http://www.sqlpass.org/AboutPASS/Governance.aspx.  Our detailed budget is 29 pages long and provides an incredibly detailed snapshot of where our money comes from and how we spend it.  I’ve also written a summary highlighting some of the changes from last year.  If you have any questions about the budget you can ask them here or on the PASS site.

    Read the article

  • Samba network sharing NTFS drives and root permissions from local drives

    - by Bill
    I'm able to share my internal secondary NTFS drives (sdb1, 2 and 3) on the network with Windows computers now, but even though Samba read/write is enabled, the Windows network computers can only open files read-only and can't save files to the Samba-shared drives/folders. I've tried to set permissions in Ubuntu via the folder and/or file properties, even logged in as root via Nautilus, but all the Samba-shared folders and files show owner = root, accessible, and won't let me change them to read/write; the setting just resets to root, accessible. In other words, I can't change permissions. I'm running Ubuntu 11.04 Gnome on an old Dell Dimension 2400. Also, in order for me to copy or move any files from the Ubuntu drive to the sdb1, 2 or 3 drives, I have to gksu nautilus. This consequently prevents me from copying .ISO files to my "Multisys" thumb drive too.

    Read the article

  • One of my user accounts logs in without desktop environment

    - by Bill Cheatham
    When I log in to my main user account on Ubuntu 11.10, the desktop environment (Unity bar, clock, volume control, etc.) is not there. All I have is the desktop background with a menu bar across the top which appears to be for Nautilus (options like File - New Folder). My other accounts log in like normal. I have recently followed these instructions to give my main user account access to an OSX partition, but I think I have logged in successfully since then. I am able to get a terminal by pressing Ctrl+Alt+T, but when I typed unity the whole thing crashed. Is there anything I can do to fix this? I have a separate administrator account I can use if needed.

    Read the article

  • How do I fix dragging windows to the adjacent workspace?

    - by Bill O'Dwyer
    I installed CompizConfig-Settings-Manager and turned on all the settings I liked and had in 11.10, including the ability to drag my windows to the adjacent workspace. It's under the Desktop Wall section, on the Edge Flipping tab, and I've checked "Edge Flip Move" and "Edge Flip DnD." In 11.10, the movement was smooth between each workspace, and the window would still be "grabbed" in the same place. In 12.04, it leaves the window behind and the mouse appears to be "grabbing" nothing, but I'm still holding onto the window, and I can still move and place it within the workspace (or indeed the previous workspace, as it won't appear in the desired place until I drag the mouse all the way to the edge of the screen). Any way to fix this? I'm running 12.04 beta 2.

    Read the article

  • Help make the next Summit even better

    - by Bill Graziano
    After the Summit we send out a survey to capture feedback.  We ask a consistent set of questions so we get good year over year results.  I’ve watched blog posts and email threads with ideas for a better Summit.  I got to sit with Denny and crew again on Saturday night and talk about what worked and what didn’t.  We’d like to capture those ideas in a way that you can vote on what’s important to you.  Please take a second and visit http://feedback.sqlpass.org/.  You can make suggestions, vote on the ideas already posted and add your own comments.  Help PASS make next year’s Summit “The Best Summit Ever!”

    Read the article

  • Tracking Redirects Leading to your site

    - by Bill
    Is there a way in which I can find out if a user arrived at my site via a redirect? Here's an example: there are two sites, first.com and second.com. Any request to first.com will do a 302 redirect to second.com. When the request arrives at second.com, is there any way to know it was redirected from first.com? Note that in this example you have no control over first.com. (In fact, it could be something bad, like kiddieporn.com.) Also note, because it is a redirect, it will not be in the HTTP referrer header.

    Read the article

  • Join me to register for the Summit

    - by Bill Graziano
    This year the Summit registration opens at 6PM on Sunday at the Seattle convention center.  Last year we had a dozen people hanging out, watching the twitter feed on the big monitor and catching up.  All we really needed was a bar and we’d have our own little party going. So this year I’m adding a bar.  I’ve arranged for a cash bar and some stand up tables.  I’m buying the first round for the first 40 or so people that come by.  Come by, register and say Hi.  I’d especially like to encourage first-time attendees to stop by.  This is a low key way to meet some people that will be at the conference.

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations. I come across all kinds of strange settings. I've been thinking about a way to aggregate people's configurations and see what's common and what's unique. I used to do that with polls on SQLTeam.com. I think we can find out more interesting things if we look at combinations of settings in relation to size and volume.

    I've been working on an application for another project that is similar. It will be fairly easy to use that code for this. I can have something up and running in a few days, if people are interested in it. I admit that I often come up with ideas that just don't make sense. This may be one of them.

    One of your biggest concerns has to be how secure your data is. My solution is not to store anything identifying. The instance name and database names can both be "anonymized" and I don't store the machine name or IP address or anything to do with logins.

    Some of the questions I'm curious about are:

    - At what size database does the Enterprise Edition become prevalent?
    - Given the total size of the databases, how much RAM is common?
    - How many people have multiple data files? At what size does that become prevalent?
    - How common is database mirroring? Replication? Log shipping?
    - How common is full recovery mode? At what data size does it become prevalent?

    I think those are all questions that are easy to answer -- with the right data. The big question is whether or not people will share their SQL Server configurations. I understand that organizations in regulated or high security environments can't participate. But I think that leaves many, many people that can. Are you willing to share your configuration and learn about others? I have a simple sign up form here. It's actually a mailing list signup that also captures your edition, number of servers and largest database. The list will only be used for this project. Is your SQL Server configured correctly? Do you wonder what the next step is as your data grows? Take a second and sign up.
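    The post doesn't say how the anonymization would work. Purely as an illustration of one way it could (an assumption, not the actual implementation), a one-way hash keeps the token stable across uploads without the real name ever leaving the machine:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class NameAnonymizer
        {
            // Returns a short, stable, non-reversible token for a server or database name.
            public static string Anonymize(string name)
            {
                using (SHA256 sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(name.ToUpperInvariant()));
                    return BitConverter.ToString(hash, 0, 8).Replace("-", ""); // first 8 bytes as hex
                }
            }
        }

    The same instance name always maps to the same opaque token, so settings can still be grouped per server without anyone learning which server it is.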

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
    The PASS Board recently decided that we wouldn't put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member.

    The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We've completed the two events and now it's time to step back and review the program.

    This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been "Because that's the way we did it last year". And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we're doing makes sense for the organization and our members.

    I have no doubt that this was a great event for the attendees. We just need to validate it's the best use of our resources. Please keep in mind that we haven't cancelled the event. We've just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I've come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas, and visiting other similar events. I should point out that their views aren't unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing.

    I think it's helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people, which means they are nearly the same size as our recent SQLRally. Our goals included:

    - Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that's still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQL Saturday doesn't?
    - Speaker growth. One of the stated goals was to build a "farm club" for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren't providing? Pre-conference speakers are one obvious answer here.
    - Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point.
    - International Event Model. (I need to write more about this but I'm out of time. I'll cover it in the next installment.)

    There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. That's more difficult to do at a SQL Saturday.

    We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that's reasonable but on an absolute basis it's not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don't have detailed financials back yet but that's our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won't have the full picture around the financial impact.

    This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don't underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs.

    Last, I wanted to talk about some of the things I'm thinking about. I don't think it's as simple as saying if we just fix "X" it all gets better.

    - Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn't as viable. With some of those events over 500 we need to reconsider this.
    - We need to get back to a hotel venue. That will help with cost and networking.
    - Is this the best use of the 1,000 HQ hours that we invested in the event?
    - Is our price-point correct? I'm leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure.
    - Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS or is it something that chapters, etc. drive people to?
    - Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. This gives you the two day aspect as well as offering options for paid and free attendees. I'm very intrigued by this.
    - Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally focused on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday?

    These are all questions that I think should be asked and answered before we do this event again. And we can't do that if we don't take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I'd like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start to look at the results in aggregate. The first thing I want to look at is logical reads. This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads. I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all the details like this. This single statement consumed 61.4% of the total logical reads on the system while we were tracing it. I also wanted to see the distribution of reads across statements. That graph looks like this. On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow or the whole system is slow. If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune it. You can make that individual piece faster but you may not affect the whole system. When you're trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate. Fixing those will make them faster and it will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data:

    - The highest ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember these are averages across all the traces that have been uploaded. And I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you'd see that the highest ranked queries also have the highest execution counts. The graph would look very similar to the one above but flatter. These queries don't look that bad individually but run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.
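    As a sketch of the ranking calculation described above (not TraceTune's actual code; the statements and read counts are invented sample data): group the trace rows by statement, sum logical reads, order descending, and compute each statement's share of the total.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TraceRow { public string StatementText; public long LogicalReads; }

        class ReadDistributionSketch
        {
            static void Main()
            {
                // Hypothetical rows standing in for an uploaded trace.
                List<TraceRow> rows = new List<TraceRow>
                {
                    new TraceRow { StatementText = "SELECT * FROM Orders WHERE ...",    LogicalReads = 480000 },
                    new TraceRow { StatementText = "SELECT * FROM Customers WHERE ...", LogicalReads = 150000 },
                    new TraceRow { StatementText = "UPDATE Inventory SET ...",          LogicalReads =  70000 },
                };

                long totalReads = rows.Sum(r => r.LogicalReads);

                // Rank statements by total logical reads and compute each one's share of the total.
                var ranked = rows
                    .GroupBy(r => r.StatementText)
                    .Select(g => new { Statement = g.Key, Reads = g.Sum(r => r.LogicalReads) })
                    .OrderByDescending(x => x.Reads)
                    .Select((x, i) => new { Rank = i + 1, x.Statement, x.Reads,
                                            Percent = 100.0 * x.Reads / totalReads });

                foreach (var item in ranked)
                    Console.WriteLine("{0}. {1:F1}% of reads  {2}", item.Rank, item.Percent, item.Statement);
            }
        }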

    Read the article
