Search Results

Search found 1470 results on 59 pages for 'rob mcafee'.


  • T-SQL Tuesday - the swag

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Kendal van Dyke (@SQLDBA), and is on the topic of swag. He asks about the best SQL Server swag that we’ve ever received from a conference. I can’t say I ever focus on getting the swag at conferences, as I see some people doing. I know there are plenty of people that get around all the sponsors as soon as they’ve arrived, collecting whatever goodies they can, sometimes as token gifts for those at home, sometimes as giveaways for the user groups they attend. I remember a few years ago at my first PASS Summit, the SQLCAT team gave me a large pile of leftover SQL Server swag to give away to my user group – piles of branded things to stop your phone sliding off your car dashboard, and other things. The user group members thought it was great, and over the course of a few months, happily cleared me out of it all. I tend to consider swag to be something that you haven’t earned except by being at a conference, with no winning associated with it; it was simply a giveaway item at a sponsor booth. That means I don’t include the HP Mini laptop that was given away at TechEd Australia a few years ago to every attendee, or the SQL Server bag and Camelbak bottle that I was given as a thank-you for writing a guest blog post (which I use as my regular laptop bag and water bottle for work). I don’t even include the copy of Midtown Madness that I got as a door prize at my very first TechEd event in 1999 (that was a really good game, and even meant that when I went to Chicago last year, I felt a strange familiarity about the place). I don’t want to include shirts in the mix either. I was given a nice SQL Server shirt about five years ago at TechEd Australia. It’s a business shirt (buttons, cuffs, pocket on the chest), black with the SQL Server logo on it. It was such a nice shirt that I commented about it to the Product Marketing Manager for Australia (Christine, at the time), who unexpectedly arranged for me to get another one. That was certainly an improvement on the tent I was given at one of the MVP conferences I attended. So when I consider these ‘rules’, two pieces of swag come to mind, and I think both were at PASS Summits (although I can’t be sure). One was a hand-warmer from HP, one of the “crystallisation-type” ones, which proved extremely popular when I got home, until one day when it didn’t survive being recharged – not overly SQL related, but still it was good swag. The other was an umbrella, from expressor, which was from the PASS Summit in 2010, my first PASS Summit. I remember it well – Blythe Morrow (now Gietz) (@blythemorrow) was working the booth, having stopped working for PASS some time before, but she’d been on my list of people to meet, as I’d had plenty of contact with her while she’d worked at PASS, my being a chapter leader and general volunteer. There had been an expressor dinner on one of the first evenings, which I’d been asked to be at, which is when I’d met lots of SQL people in person for the first time, including Ted Krueger (@onpnt), Jessica Moss (@jessicamoss) and Blythe. Anyway, at some point the next day I swung by their booth to say hello and thank them for the dinner, and Blythe said “Oh, we have the best swag – here!” and handed me an umbrella. And she was right. It’s excellent. @rob_farley

    Read the article

  • My Lightning Talk in MP3 format

    - by Rob Farley
    Download it now via http://bit.ly/RFCollation  Lots of people tell me they wish they’d heard my Lightning Talk from the PASS Summit. This was the one that was five minutes, in which I explained Collation using examples comparing US English, UK English and Australian English. At the end, I showed my Arsenal thongs. You can see a picture of them below. There was a visual joke involving the name Arsenal too... After the recordings became available, I asked the PASS legal people, and they said I could do what I liked with my own five-minute set, so long as I didn’t sell it. So I made an MP3. I’ve uploaded it to the LobsterPot Solutions web server, and provided an easy link via http://bit.ly/RFCollation. It’s a link straight to the MP3, and you’re welcome to download it, put it on your iPod, whatever you like. And also feel free to write comments here, to let me know what you think.

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So Ubuntu 12.04, Likewise latest from the beyondtrust website. Joins domain fine. Gets proper information from lw-get-status. Can use lw-find-user-by-name to retrieve/locate users. Can use lw-enum-users to get all users. Attempting to login with an AD user via SSH generates the following errors in the auth.log file: Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth] Attempting to login via the LightDM itself generates similar errors in the auth.log file. Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name" Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so Attempting to login via a console on the system itself generates slightly different errors: Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022] Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info I am baffled. The errors obviously are correct: the file /lib/security/pam_winbind.so does not exist. If it's a dependency/requirement, surely it should be part of the package? I've installed/reinstalled, I've used the downloaded package from the beyondtrust website, I've used the repository, nothing seems to work; every method of installing this application generates the same errors for me. UPDATE : Hrmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (likewise) and generates failures when installing if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open, same issue as above. The file pam_winbind.so does not exist in that location. Setting up pbis-open-legacy (7.0.1.918) ... Installing Packages was successful This computer is joined to DOMAIN.LOCAL New libraries and configurations have been installed for PAM and NSS. Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind. 
Does anyone have a working likewise-open installation and does the /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password reference pam_winbind.so? I'm unsure if those entries are just legacy or setup by likewise. UPDATE 2 : Complete reinstall of OS fixed it and it worked seamlessly, like it was meant to and those 2 PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.

    Read the article

  • Top 10 Reasons to Use MySQL and MySQL Cluster as an Embedded Database

    - by Rob Young
    If you are considering using MySQL and/or MySQL Cluster as the embedded database solution for your application, you should join us for today's webcast where we will discuss how you can cut costs, add flexibility and benefit from new performance and scalability enhancements that are now available in MySQL 5.6 and MySQL Cluster 7.2.  We will cover the top 10 reasons that make MySQL and MySQL Cluster the best solutions for embedding in both shrink wrapped and SaaS provided applications, how industry leaders leverage MySQL products and how you can get started with the latest innovations and support offerings across the MySQL product line. You can learn more and reserve your seat here. As always, thanks for your support of MySQL!

    Read the article

  • PowerShell to fetch a SQL Execution Plan

    - by Rob Farley
    With PowerShell becoming the scripting language of choice for many people, I’ve occasionally wondered about using it to analyse execution plans. After all, an execution plan is just XML, and PowerShell is just one tool which will very easily handle XML. The thing is – there’s no Get-SqlPlan cmdlet available, which has frustrated me in the past. Today I figured I’d make one. I know that I can write T-SQL to get an execution plan using SET SHOWPLAN_XML ON, but the problem is that this must be the only statement in a batch. So I used go, and a couple of newlines, and whipped up the following one-liner: function Get-SqlPlan([string] $query, [string] $server, [string] $db) { return ([xml] (invoke-sqlcmd -Server $server -Database $db -Query "set showplan_xml on;`ngo`n$query").Item(0)) } (but please bear in mind that I have the SQL Snapins installed, which provides invoke-sqlcmd) To use this, I just do something like: $plan = get-sqlplan "select name from Production.Product" "." "AdventureWorks" And then find myself with an easy way to navigate through an execution plan! At some point I should make the function more robust, but this should be a good starter for any SQL PowerShell enthusiasts (like Aaron Nelson) out there.
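    For clarity, the batch that the function hands to invoke-sqlcmd expands (roughly) to the plain T-SQL below, using the AdventureWorks example query from above; the go separator is what lets SET SHOWPLAN_XML ON sit in its own batch:

        set showplan_xml on;
        go
        select name from Production.Product

    The estimated plan comes back as a single XML value, which is why the function grabs Item(0) and casts the result to [xml].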

    Read the article

  • June 25 changes to BIS 742.15: How does it impact SSL iPhone App export compliance?

    - by Rob
    This question isn't strictly development-related but I hope it's still acceptable :) On June 25, 2010 the BIS updated 742.15, and of interest to me is the new 742.15(b)(4) "Exclusions from mass market classification request, encryption registration and self-classification reporting requirements" and 742.15(b)(4)(ii), which states… (ii) Foreign products developed with or incorporating U.S.-origin encryption source code, components, or toolkits. Foreign products developed with or incorporating U.S. origin encryption source code, components or toolkits that are subject to the EAR, provided that the U.S. origin encryption items have previously been classified or registered and authorized by BIS and the cryptographic functionality has not been changed. Such products include foreign developed products that are designed to operate with U.S. products through a cryptographic interface. I take this to mean that my Canadian-produced product that uses HTTPS is now excluded from requiring a CCATS. What does everyone else think?

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.   The code is pretty small: declare @time binary(128)=cast(getdate() as binary(8)) set context_info @time   ... rest of Big Adventure code ...   go use master; insert mirror_test(server,role,partner,db,state,safety,start,duration) select @@servername, mirroring_role_desc, mirroring_partner_instance, db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc, cast(cast(context_info() as binary(8)) as datetime), datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate()) from sys.database_mirroring where db_name(database_id) like 'Adv%';   I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. 
:) As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious  how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase): declare @msg binary(128) set @msg=cast('Altering bigProduct.ProductID' as binary(128)) set context_info @msg go ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL GO set context_info 0x0 go declare @msg1 binary(128) set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128)) set context_info @msg1 go ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID) GO set context_info 0x0 go declare @msg2 binary(128) set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128)) set context_info @msg2 go ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL GO set context_info 0x0 go declare @msg3 binary(128) set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128)) set context_info @msg3 go ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID) GO set context_info 0x0 go declare @msg4 binary(128) set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128)) set context_info @msg4 go CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost) GO set context_info 0x0   This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want): select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51   Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
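    As a rough sketch of that closing exercise (my own guess at an approach, not code from the post): with 128 bytes to play with, you could spend the first 8 bytes on a binary-packed datetime and the rest on the message, then split the two apart when querying the sessions:

        declare @msg binary(128)
        set @msg = cast(getdate() as binary(8)) + cast('Altering bigProduct.ProductID' as binary(120))
        set context_info @msg

        -- from another session (or an automated job), recover both pieces
        select cast(substring(context_info,1,8) as datetime) as stage_started,
               cast(substring(context_info,9,120) as varchar(120)) as stage_message
        from sys.dm_exec_sessions
        where session_id=51

    An agent job that logs those two columns to a table on a schedule would cover the "how long did each statement run" part without needing Profiler.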

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run: killall beam.smp beam rm ~/.config/desktop-couch/desktop-couchdb.ini dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort This is what my terminal came back with: robin@robin-MIDWAY:~$ killall beam.smp beam beam: no process found robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. robin@robin-MIDWAY:~$ I'm a computer "newbie" so it's possible I'm doing something wrong, are there any tutorials out there on how to use the CouchDB? I have Bindwood installed.

    Read the article

  • Will upgrading disrupt online backup software (Crashplan)?

    - by Rob
    Has anyone had any experience upgrading to Ubuntu 11.04, and how this might affect online backup software processes? Specifically, I'm currently running Ubuntu 10.10 and using Crashplan v3. Crashplan backup engine runs constantly in the background to monitor files for changes and back up accordingly. Will upgrading to Ubuntu 11.04 affect any of the Crashplan backup process? Has anyone done this successfully? I'd like to upgrade to the latest and greatest Ubuntu release, but not at the risk of affecting the backup, or worse, having to start from scratch again and upload the entire library!

    Read the article

  • SQL Saturday #162 Cambridge

    - by Most Valuable Yak (Rob Volk)
    Despite the efforts of American Airlines, this past weekend I attended the first SQL Saturday in the UK!  Hosted by the SQLCambs Chapter of PASS and organized by Mark (b|t) & Lorraine Broadbent, ably assisted by John Martin (b|t), Mark Pryce-Maher (b|t) and other folks whose names I've unfortunately forgotten, it was held at the Crowne Plaza Hotel, which is completely surrounded by Cambridge University. On Friday, they presented 3 pre-conference sessions given by the brilliant American Cloud & DBA Guru, Buck Woody (b|t), the brilliant Danish SQL Server Internals Guru, Mark Rasmussen (b|t), and the brilliant Scottish Business Intelligence Guru and recent Outstanding Pass Volunteer, Jen Stirrup (b|t).  While I would have loved to attend any of their pre-cons (having seen them present several times already), finances and American Airlines ultimately made that impossible.  But not to worry, I caught up with them during the regular sessions and at the speaker dinner.  And I got back the money they all owed me.  (Actually I owed Mark some money) The schedule was jam-packed even with only 4 tracks, there were 8 regular slots, a lunch session for sponsor presentations, and a 15 minute keynote given by Buck Woody, who besides giving an excellent history of SQL Server at Microsoft (and before), also explained the source of the "unknown contact" image that appears in Outlook.  Hint: it's not Buck himself. Amazingly, and against all better judgment, I even got to present at SQL Saturday 162!  I did a 5 minute Lightning Talk on Regular Expressions in SSMS.  I then did a regular 50 minute session on Constraints.  You can download the content for the regular session at that link, and for the regular expression presentation here. I had a great time and had a great audience for both of my sessions.  You would never have guessed this was the first event for the organizers, everything went very smoothly, especially for the number of attendees and the relative smallness of the space.  The event sponsors also deserve a lot of credit for making themselves fit in a small area and for staying through the entire event until the giveaways at the very end. Overall this was one of the best SQL Saturdays I've ever attended and I have to congratulate Mark B, Lorraine, John, Mark P-M, and all the volunteers and speakers for making this an astoundingly hard act to follow!  Well done!

    Read the article

  • Coming to a City Near You: Oracle Business Analytics Summits

    - by Rob Reynolds
    More and more organizations use analytics to identify new business opportunities, reduce costs, and optimize business processes. How? By making business information available throughout the enterprise—and making sure that it is relevant, actionable, and easy to access.Oracle invites you to join us for an information-packed event where you’ll learn about the latest trends, best practices, and innovations in business intelligence, analytic applications, and data warehousing.If you are an IT professional involved in BI strategy, program management, systems management, architecture, or deployment, this event is for you. You’ll find out about: New ways of deploying and delivering business intelligence on premise, in the cloud, and on mobile devices to a diverse base of business users New approaches for integrating, storing, managing, securing, and accessing your ever-growing volumes of structured and unstructured data The latest strategies for dramatically increasing the ROI of your ERP and CRM deployments Click here to view the presentation abstracts. Agenda 9:00 a.m. Registration 10:00 a.m. Keynote: Business Analytics—Be the First to Know 11:00 a.m. Break Breakout Sessions Technology and Architecture Strategy Track Business Insight and Analytic Delivery Track 11:15 a.m. Emerging Trends in Enterprise BI Platforms 11:15 a.m. Mobile BI—More than Dashboards on a Tablet 12:00 noon Networking Lunch 12:00 noon Networking Lunch 1:00 p.m. Is Your Business Intelligence Data at Risk? 1:00 p.m. Geospatial Intelligence—Location, Location, Location! 1:45 p.m. What Extreme Performance Means for Your Business 1:45 p.m. The Role of BI in Your ERP and Performance Management Initiatives 2:30 p.m. Become a BI Architect 2:30 p.m. BI Applications: Step 1 in Your ERP Upgrade or Expansion 3:00 p.m. Partner Spotlight Registration links for each city are below: New York , NY- July 26 Miami, FL - July 27 Reston, VA, July 27 Atlanta, GA - July 28 Boston, MA - July 28 Rochester, NY - Aug 2 (event link coming soon!) Menlo Park, CA - August 2 Charlotte, NC - August 3 Newport Beach, CA - August 3 Register online at the links above or call 1.800.820.5592 ext. 9218 to reserve your place.

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley

    Read the article

  • Postfix and tmpfs for /var/spool

    - by Rob Fisher
    My main disk is an SSD so in order to preserve its lifetime by reducing writes I followed some advice and made /var/spool a ram disk by adding this line to /etc/fstab: tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0 Later I configured postfix because I have a RAID array on my system and mdadm wants to send me email if the RAID array fails which sounds like a fine idea. Email sending worked fine until I rebooted, at which point: postfix: fatal: open /etc/postfix-out/main.cf: No such file or directory The fix for this is apparently: mkdir /var/spool/postfix postfix check Then I found I also had to do: mkfifo /var/spool/postfix/public/pickup service postfix restart Now sending emails works fine...until the next reboot. So: what is the most correct way to recreate the contents of /var/spool/postfix automatically at boot time if it does not exist? I am using Ubuntu Server 12.04.

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are: A student looking to work in the database or business intelligence fields A database professional who is between jobs or wants a better one A developer looking to step up to something new On a limited budget and can’t afford professional SQL Server training Able to attend training from 9 to 5 on May 17, 2013 AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend: Denny Cherry: SQL Server Security http://sqlsecurity.eventbrite.com/ Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/ Stacia Misner: Languages of BI http://languagesofbi.eventbrite.com/ Bill Pearson: Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/ Eddie Wuerch: The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/ If you are interested in attending these pre-cons send an email by April 30, 2013 to [email protected] and tell us: Why you are a good candidate to receive this scholarship Which sessions you’d like to attend, and why (list multiple sessions in order of preference) What the session will teach you and how it will help you achieve your goals The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch operators going Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
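    To make the hint concrete, this is roughly what the simple Weight example from earlier in the post looks like with the hint attached (a sketch based on the description above; the post itself only shows the plans, and uses FAST 1 for the small example because there aren't 10,000 weights):

        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 1);    -- use FAST 10000 to match the default SSIS buffer size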

    Read the article

  • A view from the call center for the Nashville Flood telethon

    - by Rob Foster
    I want to break away from my usual topic of something technical and talk about what I experienced tonight while working in the call center for the Nashville Flood telethon, which was broadcast on WSMV, CNN, and The Weather Channel.  We started receiving calls about 7pm local time and to be honest, I had no idea what to expect when going into this.  I mean, I'm a pretty good talker, but this is different... We had a good script of what to say and how we were supposed to say it, as well as paper forms and pens that we used to collect information from people who wanted to donate their money to help.  I took my first few calls pretty easily and it went pretty quick and easy.  Everyone was upbeat and happy to be in the call center as well as people happy to be donating money. Pizza, snacks, and soft drinks were flowing well.  Everyone is smiling and happy.  :) About 3 or 4 calls into my night, I got a call from a lady that had lost 2 family members in West Nashville who drowned in the floods.  She was crying when she called and I of course tried to console her.  She told me how bad her situation was, losing family members and much of her neighborhood.  After all this, she still just wanted to help other people.  She was donating all the money that she could to the telethon and I want to share a direct quote from her: "I want to donate this instead of buying flowers for my family members' funeral because people out there need help."  Please let me pause while I get myself together <again>.  That caught me so off guard (and still does). I had kids calling wanting to donate their allowance, open their piggy banks, whatever they could do.  These are kids.  Kids not much older than my boys.  Kids who should be focused on buying the next cool video game or toy or whatever but wanted to do something.  Everyone just seemed to want to help. I took calls from as far away as British Columbia as well and pretty much coast to coast.  How cool is that? Yet another thing that caught me off guard.  This kind lady that called from British Columbia told me how much she loved visiting Nashville and just hated to see this happen.  I believe that she said that she will be attending the CMA Fest this year too.  I was sure to tell her not to cancel her plans!  :) It felt like every call I took (and I took A LOT, as did everyone else) was very personal and heartfelt.  I've never had the privilege to do anything like this and feel lucky to have been able to help out with answering phones and logging donations.  Nashville will bounce back very quickly, people are out there day and night helping each other, and the spirits are very high here.  I hope that one day, my kids read this blog and better understand who they are, where they come from, and what the human spirit is and can be.  I love this city, I love the people here, I love the culture and even more than ever am proud to say that this is me.  This is us.  We are Nashville!

    Read the article

  • 24 Hours of PASS scheduling

    - by Rob Farley
    I have a new appreciation for Tom LaRock (@sqlrockstar), who is doing a tremendous job leading the organising committee for the 24 Hours of PASS event (Twitter: #24hop). We’ve just been going through the list of speakers and their preferences for time slots, and hopefully we’ve kept everyone fairly happy. All the submitted sessions (59 of them) were put up for a vote, and over a thousand of you picked your favourites. The top 28 sessions as voted were all included (24 sessions plus 4 reserves), and duplicates (when a single presenter had two sessions in the top 28) were swapped out for others. For example, both sessions submitted by Cindy Gross were in the top 28. These swaps were chosen by the committee to get a good balance of topics. Amazingly, some big names missed out, and even the top ten included some surprises. T-SQL, Indexes and Reporting featured well in the top ten, and in the end, the mix between BI, Dev and DBA ended up quite nicely too. The ten most voted-for sessions were (in order): Jennifer McCown - T-SQL Code Sins: The Worst Things We Do to Code and Why Michelle Ufford - Index Internals for Mere Mortals Audrey Hammonds - T-SQL Awesomeness: 3 Ways to Write Cool SQL Cindy Gross - SQL Server Performance Tools Jes Borland - Reporting Services 201: the Next Level Isabel de la Barra - SQL Server Performance Karen Lopez - Five Physical Database Design Blunders and How to Avoid Them Julie Smith - Cool Tricks to Pull From Your SSIS Hat Kim Tessereau - Indexes and Execution Plans Jen Stirrup - Dashboards Design and Practice using SSRS I think you’ll all agree this is shaping up to be an excellent event.

    Read the article

  • Handy SQL Server Functions Series (HSSFS) Part 2.0 - Prelude to Parsing Patterns Properly

    - by Most Valuable Yak (Rob Volk)
    In Part 1 of the series I wrote about 2 lesser-known and somewhat undocumented functions. In this part, I'm going to cover some familiar string functions like Substring(), Parsename(), Patindex(), and Charindex() and delve into their strengths and weaknesses. I'm also splitting this part up into sub-parts to help focus on a particular technique and/or problem with the technique, hence the Part 2.0. Consider this a composite post, or com-post, if you will. (It may just turn out to be a pile of sh_t after all) I'll be using a contrived example, perhaps the most frustratingly useful, or usefully frustrating, function in SQL Server: @@VERSION. Contrived, because there are better ways to get the information (which I'll cover later); frustrating, because of the way Microsoft formatted the value; and useful because it does have 1 or 2 bits of information not found elsewhere. First let's take a look at the output of @@VERSION: Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3) There are 4 lines, with lines 2-4 indented with a tab character.  In case your browser (or this blog software) doesn't show it correctly, I gave each line a different color.  While this PRINTs nicely, if you SELECT @@VERSION in grid mode it all runs together because it ignores carriage return/line feed (CR/LF) characters.  Not fatal, but annoying. Note that @@VERSION's output will vary depending on edition and version of SQL Server, and also the OS it's installed on.  Despite the differences, the output is laid out the same way and the relevant pieces are in the same order. I'll be using the following view for Parts 2.1 onward, so we have a nice collection of @@VERSION information: create view version(SQLVersion,VersionString) AS ( select 2000, 'Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 19:46:53 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)' union all select 2005, 'Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86) May 26 2009 14:24:20 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 3)' union all select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Developer Edition on Windows NT 5.1 <X86> (Build 2600: Service Pack 3)' union all select 2005, 'Microsoft SQL Server 2005 - 9.00.3080.00 (Intel X86) Sep 6 2009 01:43:32 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)' union all select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)' union all select 2008, 'Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (Hypervisor)' ) Feel free to add your own @@VERSION info if it's not already there. In Part 2.1 I'll focus on extracting the SQL Server version number (10.50.1600.1 in first example) and the Edition (Developer), but will have a solution that works with all versions.  Stay tuned!
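    As an aside (my addition, not part of the original post): the "better ways to get the information" that the author alludes to are most likely the SERVERPROPERTY values, which return the version and edition directly and avoid string parsing altogether, although @@VERSION still carries the OS and build-date details that make it worth dissecting:

        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,
               SERVERPROPERTY('Edition')        AS Edition;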

    Read the article

  • MCM Lab exam this week

    - by Rob Farley
    In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously. It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?) The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word “Anything”. The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but tells you a list of people who have already qualified. Three in Australia who have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock. I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me. I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?” I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them. Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straight forward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping half way if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room. 
I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time... ...so it should be fine, I’ve done that before. :) If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me. @rob_farley PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

    Read the article

  • Improve efficiency of web building setup and processes - Wordpress on Mac

    - by Rob
    Can anyone see any ways in which I can improve my speed and efficiency with the following setup? Or if there are any obvious holes in my building process? This is for building Wordpress websites on Mac: 1) I have a standard Wordpress setup that I work from which includes various plugins that I tend to use across all setups - thus cutting out the step of having to download them all the time! 2) My standard WP files are copied into a Dropbox folder - thus creating backups of the files. 3) I then open up MAMP and setup a local version. 4) I open up Coda and setup the FTP details so files can be uploaded to the live domain by using the publish button. If anyone has any advice on how to improve this process then please let me know!

    Read the article

  • Buck Woody in Adelaide via LiveMeeting

    - by Rob Farley
    The URL for attendees is https://www.livemeeting.com/cc/usergroups/join?id=ADL1005&role=attend . This meeting is with Buck Woody . If you don’t know who he is, then you ought to find out! He’s a Program Manager at Microsoft on the SQL Server team, and anything else I try to say about him will not do him justice. So it’s great to have him present to the Adelaide SQL Server User Group this week. The talk is on the topic of Data-Tier Applications (new in SQL 2008 R2), and I’m sure it will be a great...(read more)

    Read the article

  • Do you ever worry that you're more concerned with how something is built rather than what you are actually building?

    - by Rob Stevenson-Leggett
    As a programmer I have an inherent nagging annoyance at my tools, other peoples code, my code, the world in general. I always want to improve it. So I refactor, I stay on top of the latest techniques. I try and learn patterns, I try to use frameworks so as not to reinvent the wheel. I can write a tech spec that will blow your socks off with the amount of patterns I can squeeze in. However, lately I feel I actually know more about the tools I use than how to actually implement successful software. I feel like I'm lacking in the human factors skill set and I believe that to be a successful software engineer takes more than knowing the coolest framework. I think it needs some of the following skillsets too. Interaction design User experience Marketing I've got a bit of this that I've learned from people I've worked with and great projects I've worked on but I don't feel like I "own" these skills. Am I right? Should I be trying to develop these skills further, or should these be left to the people who do these for a career? How do you make sure you don't get too tied up in how you're doing something and make sure you "make your users awesome"? Does anyone know of good resources for learning these skills from a programming point of view?

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Con Scholarship Winners!

    - by Most Valuable Yak (Rob Volk)
    A few weeks ago, AtlantaMDF offered scholarships for each of our upcoming Pre-conference sessions at SQL Saturday #220. We would like to congratulate the winners! David Thomas SQL Server Security http://sqlsecurity.eventbrite.com/ Vince Bible Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/ Mostafa Maged Languages of BI http://languagesofbi.eventbrite.com/ Daphne Adams Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/ Tim Lawrence The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/ Thanks to everyone who applied! And once again we must thank Idera's generous sponsorship, and the time and effort made by Bobby Dimmick (w|t) and Brian Kelley (w|t) of Midlands PASS for judging all the applicants. Don't forget, there's still time to attend the Pre-Cons on May 17, 2013! Click on the EventBrite links for more details and to register!

    Read the article

  • T-SQL Tuesday #025 &ndash; CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks.  And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them.  You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted.  Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway. Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such.  CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables.  The idea was to use a CHECK constraint in a way that didn't depend on the actual data: -- create a table that cannot accept data CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY, CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1)) INSERT dbo.JustTryIt VALUES(1)   I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted.  I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people.  I come up with a lot of examples like that. However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful.  I'll let the names describe what they do instead of explaining them all: CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa')) CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0)) CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL)) CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0)) CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0)) CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16)) CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1)) GO -- log in as sa or a sysadmin server role member, and try this: INSERT NoSA VALUES(1) INSERT NoSysAdmin VALUES(1) -- note the difference when using sa vs. non-sa -- then try it again with a non-sysadmin login -- see if this works: INSERT NoAdHoc VALUES(1) INSERT NoAdHoc2 VALUES(1) GO -- then try this: CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS SET NOCOUNT ON; INSERT NoAdHoc VALUES(@val1) INSERT NoAdHoc2 VALUES(@val2) GO EXEC NotAdHoc 2,2 -- which values got inserted? SELECT * FROM NoAdHoc SELECT * FROM NoAdHoc2   -- and this one just makes me happy :) INSERT NoCursors VALUES(1) DECLARE curs CURSOR FOR SELECT 1 OPEN curs INSERT NoCursors VALUES(2) CLOSE curs DEALLOCATE curs INSERT NoCursors VALUES(3) SELECT * FROM NoCursors   I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own, I think you get the idea.  
(Also take a look at the NoCursors example, notice anything interesting?)  The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects.  I'm sure you can see how and why this would come up when discussing Revenge: The SQL.;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things.  There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay
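    For the two tables left as an exercise, here is a quick sketch of the sort of test the author probably had in mind (my guess, not his code):

        -- TimeOfDay only accepts rows while DATEPART(hour,GETDATE()) is 0 or 1,
        -- i.e. between midnight and 1:59 AM; at any other time the CHECK fails
        INSERT TimeOfDay VALUES(1)

        -- ANSI_PADDING_ON tests @@OPTIONS & 16 (the ANSI_PADDING bit),
        -- so the first insert fails while the setting is OFF and the second succeeds
        SET ANSI_PADDING OFF
        INSERT ANSI_PADDING_ON VALUES(1)
        SET ANSI_PADDING ON
        INSERT ANSI_PADDING_ON VALUES(2)
        SELECT * FROM ANSI_PADDING_ON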

    Read the article

  • Should I be using Lua for game logic on mobile devices?

    - by Rob Ashton
    As above really, I'm writing an android based game in my spare time (android because it's free and I've no real aspirations to do anything commercial). The game logic comes from a very typical component based model whereby entities exist and have components attached to them and messages are sent to and fro in order to make things happen. Obviously the layer for actually performing that is thin, and if I were to write an iPhone version of this app, I'd have to re-write the renderer and core driver (of this component based system) in Objective C. The entities are just flat files determining the names of the components to be added, and the components themselves are simple, single-purpose objects containing the logic for the entity. Now, if I write all the logic for those components in Java, then I'd have to re-write them on Objective C if I decided to do an iPhone port. As the bulk of the application logic is contained within these components, they would, in an ideal world, be written in some platform-agnostic language/script/DSL which could then just be loaded into the app on whatever platform. I've been led to believe however that this is not an ideal world though, and that Lua performance etc on mobile devices still isn't up to scratch, that the overhead is too much and that I'd run into troubles later if I went down that route? Is this actually the case? Obviously this is just a hypothetical question, I'm happy writing them all in Java as it's simple and easy get things off the ground, but say I actually enjoy making this game (unlikely, given how much I'm currently disliking having to deal with all those different mobile devices) and I wanted to make a commercially viable game - would I use Lua or would I just take the hit when it came to porting and just re-write all the code?

    Read the article
