Do database engines other than SQL Server behave this way?
- by Yishai
I have a stored procedure that goes something like this (pseudocode; the names and parameter types below are placeholders):
create procedure storedprocedure
    @param1 int, @param2 int, @param3 int, @param4 char(1)  -- placeholder types
as
begin
    if (@param4 = 'Y')
    begin
        -- full, ordered dump of the view
        select * from SOME_VIEW order by SomeColumn
    end
    else if (@param1 is null)
    begin
        -- optional filters: each one is skipped when its parameter is null
        select * from SOME_VIEW
        where (@param2 is null or @param2 = SOME_VIEW.SomeColumn2)
          and (@param3 is null or @param3 = SOME_VIEW.SomeColumn3)
        order by SomeColumn
    end
    else
        select somethingcompletelydifferent
end
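The two interesting branches get called like this (the parameter values here are made up for illustration):

-- the full, ordered dump of the view (the branch that went bad)
exec storedprocedure @param1 = null, @param2 = null, @param3 = null, @param4 = 'N'

-- a filtered lookup through the optional parameters
exec storedprocedure @param1 = null, @param2 = 42, @param3 = null, @param4 = 'N'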
This all ran well for a long time. Then, suddenly, the query started running forever when @param4 was 'Y'. Changing the code to this:
create procedure storedprocedure
    @param1 int, @param2 int, @param3 int, @param4 char(1)
as
begin
    if (@param4 = 'Y')
    begin
        -- a full dump is just the filtered query with no filters applied
        set @param2 = null
        set @param3 = null
    end
    if (@param1 is null)
    begin
        select * from SOME_VIEW
        where (@param2 is null or @param2 = SOME_VIEW.SomeColumn2)
          and (@param3 is null or @param3 = SOME_VIEW.SomeColumn3)
        order by SomeColumn
    end
    else
        select somethingcompletelydifferent
end
And it runs again within expected bounds (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is this: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general, that:
- Queries that ran fine for two years just stop working as the data grows, and
- the "new" execution plan destroys the database server's ability to execute the query, even though a logically equivalent alternative runs just fine? (Comparing the plans of the two versions, as sketched below, should show the difference.)
This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know whether others see this kind of behavior with Oracle, DB2, or any other RDBMS. Although I have some experience with those, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others running large, complex databases have had similar experiences with other products.
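P.S. For what it's worth, I assume a statement-level recompile hint on the problem branch would also have worked around it, by forcing a fresh plan instead of reusing the cached one, though I haven't tested that:

if (@param4 = 'Y')
begin
    select * from SOME_VIEW
    order by SomeColumn
    option (recompile)  -- compile this statement fresh on every execution
end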