Search Results

Search found 1513 results on 61 pages for 'adam heath'.


  • Do professional software developers still dream of creating industry/world-changing apps?

    - by Andrew Heath
    I'm a hobby programmer. The absence of real-world deadlines, customer feedback, or performance reviews leaves me free to daydream about having and implementing The Next Great Idea That Changes the World. Of course I'm aware I probably have a better chance of winning the lottery, but it's fun to imagine knocking out some fully homebrewed app that destroys the status quo. I know many professional programmers have side projects, some for profit, others not. On the way to my (boring, non-IT) work this morning, I was wondering whether having to code for your food tends to dampen the dreaming. Does greater experience leave you jaded and more focused on the projects at hand? Not trying to be a downer, just interested in the mindset of the real software professional :-)

    Read the article

  • Adding a forum to an existing site

    - by Andrew Heath
    I've got a site with ~500 registered members, 300 of whom are what you'd call "active". Site data is kept in a MySQL database. I'd like to add a myBB forum to the site, but this question really applies to any forum. What I very much want to avoid is requiring my users to register both on the site and on the forum, because my userbase is not technically literate and this would confuse a lot of them. However, the forum software has its own registration, login, cookie, and password management system, which is naturally different from the site's mechanics. I envision the following possibilities:
    - install myBB into the existing database and customize the login code to unify the two systems. This would probably mean changing the site's code to use the myBB system, as that would likely be less painful to refactor and wouldn't hurt future myBB upgradability.
    - install myBB into a separate database and write a bridging script of some sort that auto-registers existing site users with the forum if they elect to participate, and also checks new forum registrations against the site's username list to prevent newcomers from taking existing names. (A sketch of this follows below.)
    - run them fully separately and force users to re-register (easiest for me, but least desirable for them).
    I would like a suggested course of action from those who have trod this path before... Thank you.
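
    For the bridging option, the core of the script could be as simple as a one-off (or scheduled) SQL sync. The sketch below is illustrative only: the mybb_users columns and the site_users table are assumptions to check against the real schemas, and passwords cannot be copied across because the two systems hash them differently, so bridged users would need to set a forum password via myBB's "lost password" flow.

        -- Hypothetical sync: register site members who are not yet on the forum.
        -- Column names are assumptions; usergroup 2 is myBB's stock "Registered"
        -- group, but verify against your install before running anything like this.
        INSERT INTO mybb_users (username, email, regdate, usergroup)
        SELECT s.username, s.email, UNIX_TIMESTAMP(s.registered_at), 2
        FROM site_users AS s
        LEFT JOIN mybb_users AS u ON u.username = s.username
        WHERE u.uid IS NULL;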

    Read the article

  • Setting up UPS monitoring

    - by Andrew Heath
    I have acquired a second-hand Uninterruptible Power Supply (UPS) that I have refurbished (new battery) and hope to use with my Ubuntu 12.10 system. It's a SOLA 330 with serial out. I have installed the NUT metapackage and NUT Monitor from the Software Centre, but am not sure how to go about setting it all up. A Google search brings up several ways of configuring Network UPS Tools (NUT) or HAL-Drivers; however, HAL-Drivers appears to be obsolete, and many of the commands and config files mentioned do not exist in 12.10 or the current version of NUT (most articles are a few years old). One tutorial seemed to work, except for the error "no UPS definitions found in ups.conf", even though ups.conf has values in it as laid out in the tutorial. How do I go about setting my system to monitor the UPS for a shutdown signal? Also, is there a command to determine whether the UPS is communicating through the serial connection and on what port (to help with setup and configuring; e.g. /dev/ttyS0 is mentioned in one of the tutorials I read)?
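
    For what it's worth, a minimal NUT setup on a recent Ubuntu usually comes down to one stanza in /etc/nut/ups.conf plus a driver test. The driver name below is only a guess for the SOLA 330 and should be checked against NUT's hardware compatibility list:

        # /etc/nut/ups.conf -- "sola" is an arbitrary name; the driver is an assumption
        [sola]
            driver = blazer_ser
            port = /dev/ttyS0
            desc = "SOLA 330"

    On the serial-port question: dmesg | grep ttyS lists the serial ports the kernel detected, sudo upsdrvctl start starts the configured driver, and upsc sola@localhost dumps the UPS variables if, and only if, communication over the serial link is actually working.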

    Read the article

  • Is php|architect any good?

    - by Andrew Heath
    Kind of a hard topic to search for, as "architect" turns up a lot about software architects instead. After 8 months of PHP self-study, I finally stumbled across the php|architect site. The length of time it took me to find it makes me suspicious of its quality. Three related questions:
    - do professional PHP coders read/care about php|architect?
    - is it a good source for PHP beginners?
    - assuming yes to either of the above, how far back in the archives do articles remain relevant? (e.g. does stuff written about PHP4 still matter?)

    Read the article

  • handling a GET error properly

    - by Andrew Heath
    I have a website that takes two primary GET strings:
    - ?type=GAME&id=SomeGameID
    - ?type=SCENARIO&id=SomeScenarioID
    For reasons unknown, I have recently begun receiving requests for erroneous GET strings from both Yandex and Baidu. They are always in the form of ?type=GAME&id=SomeScenarioID. None of my users are triggering these errors, so I am (sort of) confident that this is not due to an HTML template error somewhere on my part. There is also no HTTP_REFERER showing up in the $_SERVER array, so I'm guessing these are direct requests from bad database data on their part. I see two options for dealing with these bad requests, and would like to know which is recommended, or whether there are other, better options I have not thought of:
    - simply 404 the request, since it is incorrect
    - redirect the request to ?type=SCENARIO&id=SomeScenarioID, because the scenario IDs are always valid; the breakage is due to asking for the wrong type. (A sketch of this follows below.)
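
    If the redirect route wins, a sketch along these lines would 301 fixable requests and 404 the rest. The lookup helpers are hypothetical placeholders for whatever the real data layer provides:

        // Hypothetical helpers: is_valid_game_id() / is_valid_scenario_id()
        // stand in for the site's real lookup functions.
        if ($_GET['type'] === 'GAME' && !is_valid_game_id($_GET['id'])) {
            if (is_valid_scenario_id($_GET['id'])) {
                // Permanently redirect crawlers to the URL they should have asked for.
                header('Location: /?type=SCENARIO&id=' . urlencode($_GET['id']), true, 301);
            } else {
                header('HTTP/1.0 404 Not Found');
            }
            exit;
        }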

    Read the article

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from Stack Overflow and closed, so I cannot edit it. The basic gist is that a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc.) back to the main corporate site (acme.com).

    All of the company history, product info, etc. (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc. for the store-exclusive content, and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve:

    1. They want to see site stats based on which store site the visitor came from. If a user comes from acme-store1.com, is redirected to acme.com, and hits several pages, don't I need to somehow keep that original site in the new URL to track each page in that user's session and show they originally came from acme-store1.com?

    2. Each store is still independently owned and is essentially still in competition with the other stores, albeit in less competition than they are with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing-list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it should still look to the user as if it's all about store 1, even though 90% of the content will be exactly the same as it is on the store 2, store 3 and corporate sites. For example, acme.com/our-company would have the same company history and the same header/footer/navigation, BUT depending on the original site the user came from, it would display contact info and links for THAT store. If someone came directly to the corporate site, it would display the corporate contact and links (they have their own as well).

    I was considering that all redirects would be to store1.acme.com, store2.acme.com, etc. (or acme.com/store1), and then I could dynamically add the contact info and appropriate links based on the subdomain or subfolder (see the sketch below). But then I have to worry about duplicate-content penalties because, again, about 90% of the text in these "subdomains" is all the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod_rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?
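
    One possible shape for the "same content, per-store chrome" idea, sketched in PHP (the host pattern and variable names are assumptions): derive the store from the subdomain, let it drive the contact blocks and tracking, and emit a rel="canonical" tag pointing at the corporate copy of shared pages so that search engines treat the 90% of duplicated text as one document.

        // Hypothetical: pick the store from the subdomain, default to corporate.
        $host  = $_SERVER['HTTP_HOST'];                       // e.g. store1.acme.com
        $store = preg_match('/^(store\d+)\./', $host, $m) ? $m[1] : 'corporate';

        // Shared pages point search engines at the one corporate copy.
        $canonical = 'http://acme.com' . htmlspecialchars($_SERVER['REQUEST_URI']);
        echo '<link rel="canonical" href="' . $canonical . '">';

        // $store can now select the header/footer contact info and analytics tag.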

    Read the article

  • Can non-IT people handle a wiki?

    - by Andrew Heath
    (I'm hoping that some of you will have encountered this issue before and can offer some insights...)

    My company is looking to improve their market research data management. Current data management style:

    - "Hey Jimbo, where's that picture of our WhatZit 2.0?"
    - "Yeah, I remember that email about that company from that guy; gimme a few minutes to search my Outlook."
    - "Who has the newest copy of the Important Competitor's product catalogue? Mine is from '09."
    - "Colleen does, and she's on maternity leave. You'll have to call her to get her workstation password..."

    Desired data management style:

    - data organized neatly by topic (legal, economic, industrial, competitor)
    - for each topic, multiple media types stored together (company product images, press releases, contact info) but still neatly sorted by type
    - data editing histories
    - communal access (no data silos)

    I was thinking about setting up a department wiki for all users to access. It seems to satisfy the four criteria above, but I'm a little concerned about how user-friendly (read: decipherable to non-technical people) it is for the more advanced features like image galleries, article formatting, and the like. Has anyone here set up a wiki for non-IT people and had it not catch on fire//become a ghost town//look like Geocities?

    Bonus question: can you see any obvious drawbacks to my choice of MediaWiki (or any other wiki) for solving this problem? Thank you.

    Read the article

  • Blurry printed raster images with Brother MFC-8840D

    - by Adam Monsen
    (NOTE: crossposted here: ubuntuforums.org/showthread.php?t=1621795) I've got a Brother MFC-8840D. Works great with Ubuntu server! Setting up a CUPS print server was pretty straightforward, and I also finally got network scanning working reliably with saned. Printing documents and web pages works well: fonts are crisp/clear, etc. One issue has got me completely vexed: printing raster (e.g. JPEG) images. They are blurry. For example, I can scan a page of black-and-white text at 150 or 300 dpi. The grayscale image looks perfect on my monitor. But the printed version is much blurrier than the original, regardless of the "print resolution" dpi I choose. As a counterexample, if I use the "copy" function of the MFC-8840D, the copy looks excellent, and this function is much, much faster than scanning and then printing the same page. I've googled around a bunch and tried different tricks (printing a PDF with the image from Evince, printing with GIMP, EOG and other applications), but I just can't print anything that looks as good as a copy made with the MFC-8840D. Any ideas? I'm using Ubuntu 10.04.1 LTS server. I'm using the PPD file from solutions.brother.com. Thanks, -Adam

    Read the article

  • Recommendations for finding part-time consultancy work

    - by Mark Heath
    Although I have a full-time development job, I have occasionally done some part-time paid work in evenings / weekends for various people who have contacted me as a result of open-source projects I have worked on. It's a nice way to earn a bit of extra cash, but obviously it is not always available. My question is, what is a good way of getting your name out there to do some small projects? I have seen a few programmers-for-hire type websites, but I don't know which I can trust or whether there are too many people willing to work for very low prices. Also, being UK based, I would want something which did not assume I have a US bank account.

    Read the article

  • Best Practices for serializing/persisting String Object Dictionary entities

    - by Mark Heath
    I'm noticing a trend towards using a dictionary of string to object (or sometimes string to string), instead of strongly typed objects. For example, the new Katana project makes heavy use of IDictionary<string,object>. This approach avoids the need to continually update your entity classes/DTOs, and the database tables that persist them, with new properties. It also avoids the need to create new derived entity types to support new types of entity, since the dictionary is flexible enough to store any arbitrary properties. Here's a contrived example:

        class StorageDevice
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        class NetworkShare : StorageDevice
        {
            public string Path { get; set; }
            public string LoginName { get; set; }
            public string Password { get; set; }
        }

        class CloudStorage : StorageDevice
        {
            public string ServerUri { get; set; }
            public string ContainerName { get; set; }
            public int PortNumber { get; set; }
            public Guid ApiKey { get; set; }
        }

    versus:

        class StorageDevice
        {
            public IDictionary<string, object> Properties { get; set; }
        }

    Basically I'm on the lookout for any talks, books or articles on this approach, so I can pick up on any best practices / difficulties to avoid. Here are my main questions:

    1. Does this approach have a name? (The only thing I've heard used so far is "self-describing objects".)
    2. What are the best practices for persisting these dictionaries into a relational database, especially the challenge of deserializing them successfully with strongly typed languages like C#?
    3. Does it change anything if some of the objects in the dictionary are themselves lists of strongly typed entities?
    4. Should a second dictionary be used if you want to temporarily store objects that are not to be persisted/serialized across a network, or should you use some kind of namespacing on the keys to indicate this?
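
    As a point of comparison while researching: one common way to persist such a dictionary relationally is to flatten it into key/value ("entity-attribute-value") rows, serializing each value. The C# sketch below is illustrative only, is not drawn from Katana, and assumes Json.NET for value serialization:

        using System.Collections.Generic;
        using System.Linq;
        using Newtonsoft.Json;   // assumption: Json.NET handles value serialization

        class PropertyRow
        {
            public int EntityId { get; set; }
            public string Key { get; set; }
            public string Value { get; set; }    // value stored as JSON text
        }

        static class PropertyMapper
        {
            // Flatten one entity's property bag into rows for a key/value table.
            public static IEnumerable<PropertyRow> Flatten(
                int entityId, IDictionary<string, object> props)
            {
                return props.Select(p => new PropertyRow
                {
                    EntityId = entityId,
                    Key = p.Key,
                    Value = JsonConvert.SerializeObject(p.Value)
                });
            }
        }

    Deserializing back into strong types is the hard half (which is exactly question 2); the row form at least keeps the keys queryable.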

    Read the article

  • Ubuntu + Raid on HP proliant DL 160 G6

    - by Adam Matan
    Hi, I'm trying to install Ubuntu 8.04 Server on an HP ProLiant DL160 G6. The HP hardware is certified by Ubuntu for the 9.04 version, which I can't install due to company policy. The problem is that Ubuntu will not recognize the RAID 1+0 disk configured by the BIOS. The RAID creates one ~470GB disk from two 500GB physical disks. Any ideas? Adam

    Read the article

  • webcam streaming on mobile phone

    - by adam
    Hey guys, I'm stuck on a project. I need to find a way to stream my home webcam onto my mobile phone, in this case a BlackBerry Bold 9700; I also have an iPod touch (2nd generation). The problem I keep coming across is that every program I find asks for money, and I am looking for a way to do this for free. I've been trying VLC but have so far been unable to get the stream to work (a sketch of the usual command is below). Any help would be great. Thanks a lot, Adam
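
    The usual free route is VLC's stream output. A sketch of the sender side on a Linux box (device path, codec settings and port are assumptions to adapt; on Windows the capture source would be dshow:// instead of v4l2://):

        # Serve the webcam over HTTP as an MPEG-TS stream on port 8080.
        cvlc v4l2:///dev/video0 \
          --sout '#transcode{vcodec=h264,vb=512,acodec=none}:std{access=http,mux=ts,dst=:8080}'

    The phone then opens http://<home-ip>:8080 in a player that supports HTTP TS streams; whether the BlackBerry's built-in player does is a separate question.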

    Read the article

  • MySQL: Load database to memory

    - by Adam Matan
    Hi, Is there a way to load an entire MySQL database into RAM, especially on an EC2 server?

    - The database is quite small (~500 megabytes).
    - I have enough memory.
    - Speed is crucial: the resulting queries are used to serve a dynamic webpage.

    Thanks, Adam
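
    For InnoDB, the usual trick is less a special "load to RAM" command and more giving the buffer pool comfortably more room than the data needs, so everything stays cached after warm-up. A my.cnf sketch (the 1G figure is an assumption sized to the ~500 MB database):

        [mysqld]
        # A buffer pool larger than the data set keeps it memory-resident.
        innodb_buffer_pool_size = 1G

    Running a full-scan query against each table after startup (e.g. a SELECT COUNT(*) over a non-indexed column) warms the cache so the first real visitors don't pay the disk cost.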

    Read the article

  • Monitor utilities

    - by Adam Davis
    I'm a big fan of good monitor usage, but only currently use a few utilities to help me attain display nirvana across several systems and monitors. Part of this is due to not knowing what's available. Please list one monitor utility that you use and what it does for you per answer, and avoid duplicates - comment on and vote up the existing answers rather than adding a duplicate. Also, if there are existing questions that delve more specifically into one area of monitor utilities link that in a separate answer. -Adam

    Read the article

  • Internal Server Error issue with CodeIgniter: can't open linked page

    - by Adam Perlis
    Hi, I am VERY new to backend development. I am trying to set up user registration using CodeIgniter. I have gotten pretty far, but for some reason I can't get the "Create Account" button working. I have my site up and running at t-mrkt.com/mycisite; would you guys mind taking a look and seeing where I might be going wrong? Here is a link to my files: https://www.dropbox.com/sh/z54j4qd50yf3t0p/MnnghGBB_D Thanks for your help! Adam

    Read the article

  • Random password generator: many, in columns, on command line, in Linux

    - by Adam Backstrom
    A while back, I came across a random password generator for the command line that displayed a grid of "memorable" passwords. Output was something like this:

        adam@host:~$ CantRememberThisCommand
        lkajsdf aksjdfl kqwrupo
        qwerpoi qwerklw zxlkelq

    The idea was that you could run this utility while someone was looking over your shoulder, and still pick a password with some level of secrecy due to the large number of choices. I cannot remember what this utility was called. Oh interwebs, can you help?
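
    (For anyone landing here with the same question: one utility with exactly this grid-of-candidates behaviour is pwgen, which with no arguments fills the terminal with columns of pronounceable passwords, though whether it is the specific tool remembered above is uncertain.)

        # prints a screenful of "memorable" 8-character passwords in columns
        pwgen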

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
    If you don’t follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff), along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX), which you can read about in Adam Mahood’s blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs, and according to Adam’s post it incorporates a new feature that I think is very interesting indeed:

        Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in package replaces the existing data in the target database.

    In short, .dacpacs can now include data as well as schema. I’m very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema, it does not provide anything similar for data: if you want to deploy data from your SSDT projects then you have to write post-deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet, however it is a very important pre-requisite, so I am hoping that a feature to provide declaration of data (in addition to the declaration of schema that we have today) is going to light up in SSDT in the not too distant future.

    Read more about the latest SSDT, Power Tools & DacFX releases at:

    - Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding
    - New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt
    - SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood

    @Jamiet
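
    For the curious, the extract-with-data flow surfaces through SqlPackage.exe. The sketch below shows the general shape; the exact parameter spellings should be verified against the September 2012 DacFX documentation:

        REM Extract schema plus user-table data into a .dacpac (parameters indicative)
        SqlPackage.exe /Action:Extract /SourceServerName:. /SourceDatabaseName:MyDb ^
                       /TargetFile:MyDb.dacpac /p:ExtractAllTableData=true

        REM Publish it, replacing the target database's data
        SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac ^
                       /TargetServerName:. /TargetDatabaseName:MyDbCopy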

    Read the article

  • Oracle Developer Day OOP 2013 – become a Java expert & get a free ticket

    - by JuergenKress
    Want to become a Java expert? Want to learn more about the Java roadmap, Java EE, JavaFX, Java Cloud, ADF Mobile, REST and big data, and try it hands-on? Make sure you attend the Oracle Developer Day 2013 with Adam Bien, Markus Eisele, Torsten Winterberg, Guido Schmutz, Wolfgang Weigend and Peter Doschkinow!

    Thursday, January 24th 2013, Munich Conference Center

    Agenda:
    - 9.00-9.30: Java overview and roadmap – Wolfgang Weigend
    - 9.30-10.00: JavaFX – Peter Doschkinow
    - 10.00-10.30: ADF Mobile – Torsten Winterberg
    - 10.30-11.00: Break
    - 11.00-11.45: Java EE – Adam Bien
    - 11.45-12.15: Java Cloud – Markus Eisele
    - 12.15-12.45: Java, big data, service bus & Twitter – Guido Schmutz
    - 12.45-14.30: Lunch
    - 14.30-16.30: Hands-on workshops (parallel): JavaFX, ADF Mobile, GlassFish

    Website with details and agenda. Free registration for the exhibition and Oracle Developer Day. For more information about Java please visit www.oracle.com/java

    WebLogic Partner Community: for regular information, become a member in the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum | Wiki

    Technorati Tags: OOP 2013, Oracle Developer Day, OOP Oracle, Adam Bien, Markus Eisele, Guido Schmutz, Torsten Winterberg, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donators have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:

    - [dbo].[bigProduct] with 25200 rows
    - [dbo].[bigTransactionHistory] with 7827579 rows

    The credentials to login and use AdventureWorks on Azure are as they were before:

    - Server: mhknbn2kdz.database.windows.net
    - Database: AdventureWorks2012
    - User: sqlfamily
    - Password: sqlf@m1ly

    Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form; all you have to do is login, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate.

    I mentioned that I had to adapt Adam's script, the main reasons being:

    - Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master].
    - SELECT…INTO is not supported in SQL Azure (a sketch of the workaround follows below).
    - The 1GB limit of SQL Azure web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows.

    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755

    @Jamiet
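
    For anyone making similar adaptations, the SELECT…INTO limitation just means declaring the table first and using INSERT…SELECT. An illustrative fragment (column list abbreviated; SQL Azure of this era also required a clustered index before rows could be inserted, hence the primary key):

        -- SQL Azure: SELECT...INTO is unavailable, so create the table, then insert.
        CREATE TABLE dbo.bigProduct
        (
            ProductID int NOT NULL PRIMARY KEY CLUSTERED,
            Name      nvarchar(50) NOT NULL
        );

        INSERT INTO dbo.bigProduct (ProductID, Name)
        SELECT ProductID, Name
        FROM Production.Product;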

    Read the article

  • APress Deal of the Day - 1/June/2012 - Introducing Visual C# 2010

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/9781430231714 is Introducing Visual C# 2010. "If you're new to C# programming, this book is the ideal way to get started. Respected author Adam Freeman guides you through the C# language by carefully building up your knowledge from fundamental concepts to advanced features." Adam Freeman is an excellent author. This is an excellent introduction to C# programming and a manual for those with experience. Having read through the book, I am very impressed by its practical approach to C#. I cannot improve on the by-line "Get started on your C# journey with an expert by your side leading by example". Adam Freeman teaches C# by precept and example. I suspect he drives a Volvo C30, as it comes up in many of the code examples! Throughout the book there are numerous links back and forth so as to avoid over-complicating the current topic. I have no hesitation in recommending this book both to programmers starting out with C# and to the seasoned professional. It is a book that should be on every C# development team's bookshelf.

    Read the article

  • Machine only responds to network requests from machines it is pinging

    - by ILikeFood
    I have two machines:

    - WOPR: Ubuntu Server Edition 10.10, 32-bit
    - Adam Selene: Windows 7 Home Premium 64-bit / Ubuntu Desktop 10.10, 64-bit

    I want to be able to SSH from Adam Selene to WOPR, so I connect them to the same network. Here's where things get weird. I cannot connect to WOPR in any way under normal circumstances. But if WOPR is pinging Adam Selene, then it starts responding to ping requests, HTTP GETs, and SSH tunnels. I'm an amateur, and brand new to Ubuntu Server, so I suspect there's a misconfiguration somewhere, but there's an off chance it's a bug in the OS. Does anyone know what might cause this behavior? Thanks a lot!
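
    (One low-level thing worth checking, since the symptom resembles one-way ARP resolution: compare WOPR's ARP traffic in both states. The commands below are standard Linux tools; the interface name is an assumption.)

        # On WOPR, while NOT pinging, after a failed SSH attempt from Adam Selene:
        arp -n                        # is Adam Selene's IP/MAC in the table?
        sudo tcpdump -n -i eth0 arp   # do ARP who-has requests arrive and get answered?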

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. I now have, in vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?
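
    For reference, a minimal plugin Rakefile of that era wires tests up with Rake::TestTask; the sketch below (paths are assumptions) may be useful to compare against the plugin's own Rakefile and test_helper.rb, since the loader output above shows the file being picked up but no tests being run.

        # vendor/plugins/ingenious_record/Rakefile -- minimal test task sketch
        require 'rake'
        require 'rake/testtask'

        Rake::TestTask.new(:test) do |t|
          t.libs << 'lib' << 'test'
          t.pattern = 'test/**/*_test.rb'
          t.verbose = true
        end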

    Read the article

  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I am wishing to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220

    I have attempted a variety of the methods mentioned there. First I tried to use the method described by Carrus85:

        Just take the ratio of the two triangle hypotenuses (doesn't matter which triangle you use for the other, I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset.

    But I could not find a way to calculate how far the object is away from the edge of the screen. I then tried using ray casting (which I have never done before), suggested by 23yrold3yrold:

        Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There's your coordinates.

    I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped through that vector until either the x coordinate or the y coordinate was off the screen. The two current x and y values then form the x and y of the arrow. Here is the code for my ray casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = screenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;

            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            float unitX = dx / hypot;
            float unitY = dy / hypot;

            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();
            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;

            while (posFound == false)
            {
                rayX += unitX;
                rayY += unitY;

                if (rayX <= 0 || rayX >= screenWidth ||
                    rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }

            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. Arrows are displayed in the bottom-right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn had been rotated 180 degrees around the center of the screen. I assumed this was because, when I was calculating the hypotenuse of the triangle, it would always be positive regardless of whether the difference in x or the difference in y is negative.

    Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a potentially long loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam
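
    A sketch of a loop-free alternative, under the assumption that the screen rectangle is centred at (centreX, centreY) with half-extents halfW and halfH: scale the centre-to-object vector so it just touches the nearest edge. Because the direction is taken as object minus centre, the signs are preserved, which also avoids the 180-degree mirroring described above.

        #include <algorithm>
        #include <cfloat>
        #include <cmath>

        void edgeArrowPosition(float objX, float objY,
                               float centreX, float centreY,
                               float halfW, float halfH,
                               float& arrowX, float& arrowY)
        {
            float dx = objX - centreX;   // direction from screen centre to object
            float dy = objY - centreY;   // (signs preserved, so no 180-degree flip)

            // Scale factor that carries (dx, dy) to each pair of edges;
            // guard against division by zero for axis-aligned directions.
            float tx = (dx != 0.0f) ? halfW / std::fabs(dx) : FLT_MAX;
            float ty = (dy != 0.0f) ? halfH / std::fabs(dy) : FLT_MAX;
            float t  = std::min(tx, ty); // nearest edge wins

            // Note: t >= 1 means the object is already on screen (no arrow needed).
            arrowX = centreX + dx * t;
            arrowY = centreY + dy * t;
        }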

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it the fight to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article
