Occasionally I run into a limitation in SQL Server 2000: a row cannot exceed 8,060 bytes (roughly 8K). SQL Server 2000 isn't exactly state of the art, but it's still in production code, and because some of our tables are denormalized this is a real problem.
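A minimal sketch of the kind of table where I hit this (the table and column names are made up for illustration):

```sql
-- Two varchar(8000) columns give a maximum row size well over 8,060 bytes.
-- SQL Server 2000 creates the table but emits a warning that any INSERT
-- or UPDATE producing a row longer than 8,060 bytes will fail at runtime.
CREATE TABLE WideTable (
    Id     int IDENTITY PRIMARY KEY,
    NotesA varchar(8000),
    NotesB varchar(8000)
);
```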
However, this seems to be a non-issue in SQL Server 2005. At least it doesn't complain when a row grows past 8K, but what happens instead, and why was this a problem in SQL Server 2000 in the first place?
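For example, if I understand correctly, this insert fails on 2000 with a row-size error but succeeds on 2005, which apparently moves variable-length data that no longer fits onto separate row-overflow pages:

```sql
-- 16,000 bytes of variable-length data in one row: rejected by SQL Server
-- 2000, but accepted by SQL Server 2005 via its row-overflow storage.
INSERT INTO WideTable (NotesA, NotesB)
VALUES (REPLICATE('a', 8000), REPLICATE('b', 8000));
```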
Do I need to care about my rows growing? Should I try to avoid large rows? Are varchar(max) and varbinary(max) a solution, or are they expensive in terms of on-disk size and/or CPU time?
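The alternative I'm weighing would look something like this (again, names are illustrative):

```sql
-- varchar(max) holds up to 2 GB per value; values are stored in-row while
-- they fit and pushed off-row to LOB pages once they don't.
CREATE TABLE WideTableMax (
    Id     int IDENTITY PRIMARY KEY,
    NotesA varchar(max),
    NotesB varchar(max)
);
```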
And why should I care about specifying the length of a particular column at all, when it seems like it's only a matter of time before someone hits that upper limit?