Search Results

Search found 62705 results on 2509 pages for 'sql calc found rows'.


  • SSIS Prehistory video

    - by jamiet
    I’m currently spending my Easter bank holiday putting together my presentation SSIS Dataflow Performance Tuning for the upcoming SQL Bits conference in London, and in doing so I’m researching some old material about how the dataflow actually works. Boring as it is, I’ve gotten easily sidelined and have chanced upon an old video on Channel 9 entitled Euan Garden - Tour of SQL Server Team (part I). Euan is a former member of the SQL Server team, and in this series of videos he walks the halls of the SQL Server building on Microsoft’s Redmond campus talking to some of the various protagonists; in this one he happens upon the SQL Server Integration Services team. The video was shot in 2004, so this is a fascinating (to me anyway) glimpse into the development of SSIS from before it ever shipped, and if you’re a geek like me you’ll really enjoy this behind-the-scenes look at how and why the product was architected. The video is also notable for the presence of the cameraman – none other than the now-rather-more-famous-than-he-was-then Robert Scoble. See it at http://channel9.msdn.com/posts/TheChannel9Team/Euan-Garden-Tour-of-SQL-Server-Team-part-I/ Enjoy! @Jamiet

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully, but immediately after the job success is written to the event log there is a run of entries that say: SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller. There are five of these logged - the server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a reboot on Friday to install some patches, which included KB960089. Edit: After getting the errors for a few days they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • SQL Server on Linux

    - by TimothyAWiseman
    For a particular project that is coming up, I am trying to expand my knowledge of Linux, so I am going to set up a Linux system at home. Rather than dual booting, I am thinking about putting SQL Server on a Windows virtual machine with Linux as the host, at least until this project is over, when I will probably switch back to Linux. So, I have a couple of different, but interrelated questions: How well does this work? This is only a test machine at home, so I can easily accept a fair bit of degradation, but if it is going to be a horrible reduction in performance I will dual boot instead. Is there a particular virtual machine manager I should look at to go this route? Since this is my personal machine, price is an issue, but I am quite happy to pay a reasonable amount. And finally, given the choice of VMM, is there a particular Linux distro I should be looking at? [This has been cross-posted at Ask.SqlServerCentral.com. I think it may be appropriate at both sites.]

    Read the article

  • SQL Server Backup File Significantly Smaller After Table Recreation

    - by userx
    We run automated weekly backups of our SQL Server. The database in question is configured for Simple Recovery. We backup using Full, not differential. Recently, we had to re-create one of our tables with data in it (making 2 varchar fields a couple of characters longer). This required running a script which created a new table, copied the data over, and then dropped the old. This worked correctly. Oddly though, our weekly backup files now SHRANK by over 75%! The tables don't have large indexes. All data was copied over correctly (and verified). I've verified that we are doing full and not incremental backups. The new files restore just fine. I can't seem to figure out why the backup files would have shrunk so much. I've also noticed that they get about 10 MB larger every week, even though less than that amount of data is being added. I'm guessing that I'm simply not understanding something. Any insight would be appreciated.
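
    The rebuild described above typically follows a pattern like the sketch below. This is a hedged illustration only - the table and column names are invented, and the original script may also have handled constraints, indexes and identity values.

        -- Hypothetical illustration of the rebuild pattern (names invented).
        BEGIN TRANSACTION;

        CREATE TABLE dbo.Customer_New
        (
            CustomerID   int          NOT NULL PRIMARY KEY,
            CustomerName varchar(60)  NOT NULL,  -- widened from varchar(50)
            Notes        varchar(260) NULL       -- widened from varchar(255)
        );

        INSERT INTO dbo.Customer_New (CustomerID, CustomerName, Notes)
        SELECT CustomerID, CustomerName, Notes
        FROM   dbo.Customer;

        DROP TABLE dbo.Customer;
        EXEC sp_rename 'dbo.Customer_New', 'Customer';

        COMMIT TRANSACTION;

    A rebuild like this rewrites the table into freshly allocated, densely packed pages, which is one plausible explanation for the smaller files: a full backup only contains allocated extents, so less internal free space means a smaller backup.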

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself with any user besides 'sa'. I am running in mixed mode. I have a login/user that has a known username. Using that user/login, I can properly connect when directly on the server. When I attempt to login from anywhere else, I receive a "Login failed for user ''", with Error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided.". However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch--same result, locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue--the clients were trying to connect via the default instance. I don't know why. So, once proper ports were opened in the firewall, and a static port assigned to the named instance, I can now connect--BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
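
    For anyone hitting the same wall, a hedged sketch of the two checks that usually help here is below. The instance name and port are made up, and xp_readerrorlog is undocumented but widely used.

        -- Run on the server: confirm which TCP port the named instance is
        -- actually listening on (SQL Server writes this to the error log
        -- at startup).
        EXEC xp_readerrorlog 0, 1, N'Server is listening on';

        -- From a client, connect with an explicit server,port pair, e.g.
        --   tcp:MYSERVER\NAMEDINSTANCE,1435
        -- If that works but MYSERVER\NAMEDINSTANCE alone does not, the SQL
        -- Browser service (UDP 1434) is being blocked or is not resolving
        -- the instance name to its port.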

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let’s have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table. SELECT sp.* FROM sys.stats s CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp WHERE sp.object_id = object_id('Sales.SalesOrderHeader') The first two columns returned by this function are the object_id and the stats_id columns. The next column, ‘last_updated’, gives you the date and the time that a particular statistic was last updated. The next column, ‘rows’, gives you the total number of rows in the table as of the last statistic update date. The ‘rows_sampled’ column gives you the number of rows that were sampled to create the statistic. The ‘steps’ column represents the number of specific value ranges from the statistic histogram. The ‘unfiltered_rows’ column represents the number of rows before any filters are applied. If a particular statistic is not filtered, the ‘unfiltered_rows’ column will always equal the ‘rows’ column. Lastly we have the ‘modification_counter’ column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the ‘last_updated’ column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
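
    As a follow-on, here is a hedged example of how this DMF is often used in practice: flagging statistics that look stale. The 30-day and 1,000-modification thresholds are arbitrary illustration values, not recommendations.

        SELECT  OBJECT_NAME(s.object_id) AS table_name,
                s.name                   AS stats_name,
                sp.last_updated,
                sp.rows,
                sp.rows_sampled,
                sp.modification_counter
        FROM    sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE   sp.last_updated < DATEADD(DAY, -30, SYSDATETIME())
           OR   sp.modification_counter > 1000
        ORDER BY sp.modification_counter DESC;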

    Read the article

  • Finding rows that intersect with a date period

    - by DavidWimbush
    This one is mainly a personal reminder but I hope it helps somebody else too. Let's say you have a table that covers something like currency exchange rates with columns for the start and end dates of the period each rate was applicable. Now you need to list the rates that applied during the year 2009. For some reason this always fazes me and I have to work it out with a diagram. So here's the recipe so I never have to do that again: select * from ExchangeRate where StartDate < '31-DEC-2009' and EndDate > '01-JAN-2009' That is all!
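
    A parameterised version of the same recipe is sketched below. The ExchangeRate table name follows the post; whether you use strict (< / >) or inclusive (<= / >=) comparisons depends on whether your StartDate and EndDate values are themselves inclusive.

        DECLARE @PeriodStart date = '2009-01-01',
                @PeriodEnd   date = '2009-12-31';

        -- Two periods overlap when each one starts before the other ends.
        SELECT  *
        FROM    ExchangeRate
        WHERE   StartDate <= @PeriodEnd
          AND   EndDate   >= @PeriodStart;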

    Read the article

  • Writing a SQL Azure Book - Notes

    - by Herve Roggero
    Over the last few months I have had the opportunity to ramp up significantly on SQL Azure. In fact I will be the co-author of Pro SQL Azure, published by Apress. This is going to be a book on how to best leverage SQL Azure, both from a technology and design standpoint. Talking about design, one of the things I realized is that understanding the key limitations and boundary parameters of Azure in general, and more specifically SQL Azure, will play an important role in making sound design decisions that both meet reasonable performance requirements and minimize the costs associated with running a cloud computing solution. The book touches on many design considerations including link encryption, pricing model, design patterns, and also some important performance techniques that need to be leveraged when developing in Azure, including Caching, Lazy Properties and more. Finally I started working with Shards and how to implement them in Azure to ensure database scalability beyond the current size limitations. Implementing shards is not simple, and the book will address how to create a shard technology within your code to provide a scale-out mechanism for your SQL Azure databases. As you can see, there are many factors to consider when designing a SQL Azure database. While we can think of SQL Azure as a cloud version of SQL Server, it is best to look at it as a new platform to make sure you don’t make any assumptions on how to best leverage it.

    Read the article

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007. Right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows: RAID 1 for the OS, RAID 10 for logs, RAID 10 fiber on the SAN for high-IO databases, and RAID 10 SATA on the SAN for content databases. My question in regards to the principal server is where should I place tempdb? I thought about placing it on the fiber RAID 10 which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I’m sure you guys will be against. Now let’s talk about the mirror server: it is not connected to the SAN, it is all local - 6 15k SAS drives. Now my question is the same: do I put tempdb on the OS partition, or do I leave the OS partition and use a single RAID 10 for everything? Any help you can provide is much appreciated.
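
    Whichever volume wins, relocating tempdb is straightforward; a hedged sketch is below. The file paths are invented, and the move only takes effect after the SQL Server service is restarted.

        ALTER DATABASE tempdb
            MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
        -- Restart the SQL Server service for the new locations to take effect.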

    Read the article

  • How to detach a SQL Server 2008 database that is not in the database list?

    - by Amir
    I installed SQL Server 2008 on Windows 7. Then I created a database. After 2 days I reinstalled Windows and SQL Server. Now I am trying to attach my database file, but I have encountered the error below. I think that the files are like an attached file and I can't attach them. What is the difference between an attached file and a non-attached file? How can I attach this file? Please help me. Error text: TITLE: Microsoft SQL Server Management Studio Attach database failed for Server 'AMIR-PC'. (Microsoft.SqlServer.Smo) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1+((KJ_RTM).100402-1540+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Attach+database+Server&LinkId=20476 ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Unable to open the physical file "F:\Company.mdf". Operating system error 5: "5(Access is denied.)". (Microsoft SQL Server, Error: 5120) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=5120&LinkId=20476
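
    Error 5120 with operating system error 5 is almost always an NTFS permission problem rather than a damaged file: after the reinstall, the new SQL Server service account has no rights on F:\Company.mdf. Once the service account has been granted Full Control on the file, an attach along these lines should work - the database name below is assumed from the file name.

        -- Attach the data file; rebuild the log if no matching .ldf exists.
        CREATE DATABASE Company
        ON (FILENAME = N'F:\Company.mdf')
        FOR ATTACH_REBUILD_LOG;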

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch operators go Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.
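
    For reference, a minimal sketch of what the hint looks like in a source query, using the simple Weight example from earlier in the post (Production.Product is the AdventureWorks table; 10,000 matches the default SSIS buffer row count):

        SELECT DISTINCT Weight
        FROM   Production.Product
        OPTION (FAST 10000);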

    Read the article

  • Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found

    To associate an assembly with a strong key file to store it in the GAC, we should include the following line after all the imports and before defining the namespace. For VB.NET:  <Assembly: AssemblyKeyFile("c:\path\mykey.snk")> For C#:    [assembly: AssemblyKeyFile(@"c:\path\mykey.snk")] But you might encounter the following two errors at the time of creating the assembly for the GAC. 1. The type or namespace name 'AssemblyKeyFileAttribute' could not be found (are you missing a using directive or an assembly reference?) 2. The type or namespace name 'AssemblyKeyFile' could not be found (are you missing a using directive or an assembly reference?) How to resolve these errors: just include the "System.Reflection" namespace. It resolves the above two errors.

    Read the article

  • Fresh install of SQL Server 2008 doesn't install Management Studio. Help!

    - by Jordan S
    Ok, I am running Windows 7, 64 bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting and the Install Center. I don't have SQL Server Management Studio. So I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning that says this program has known compatibility issues and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I installed. So I tried to continue running the install, but I then get an error message that says Invoke or BeginInvoke cannot be called on a Form before it is opened... How can I check whether Service Pack 1 is installed or not? What should I do?
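
    To answer the last question first: the instance itself will report whether Service Pack 1 is applied. Assuming you can connect with sqlcmd or any other client, a query like this shows it (SQL Server 2008 SP1 reports ProductVersion 10.0.2531 and ProductLevel 'SP1'):

        SELECT  SERVERPROPERTY('ProductVersion') AS product_version,
                SERVERPROPERTY('ProductLevel')   AS product_level,   -- 'RTM', 'SP1', ...
                SERVERPROPERTY('Edition')        AS edition;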

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly. 8pm Mon-Fri: full backup, followed by a log backup. 7am-7pm Mon-Fri, at half-hour intervals: log backups. Our backups have been working in this manner since we migrated from SQL Server Standard 2000 to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for... All this week, every full backup on our biggest two databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue is to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete these backups. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be... Code: 0xC002F210 (Always this code, but a mix of the following descriptions...) "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." Please help save my hair and sanity!!

    Read the article

  • Funny statements, quotes, phrases, errors found on technical Books [closed]

    - by Felipe Fiali
    I found some funny or redundant statements in technical books I've read that I'd like to share. And I mean good, serious, technical books. Ok, so starting it all: The .NET Framework doesn't support teleportation From the MCTS 70-536 Training Kit book - .NET Framework 2.0 Application Development Foundation Teleportation in science fiction is a good example of serialization (though teleportation is not currently supported by the .NET Framework). C# 3 is sexy From Jon Skeet's C# in Depth, Second Edition You may be itching to get on to the sexy stuff from C# 3 by this point, and I don’t blame you. Instantiating a class From Introduction to Development II in Microsoft Dynamics AX 2009 To instantiate a class is to create a new instance of it. Continue or break From Introduction to Development II in Microsoft Dynamics AX 2009 Continue and break commands are used within all three loops to tell the execution to break or continue. These are just a few. I'll post some more later.

    Read the article

  • PHP ORM style of querying

    - by Petah
    Ok, so I have made an ORM library for PHP. It uses syntax like so: *(assume that $business_locations is an array)* Business::type(Business::TYPE_AUTOMOTIVE)-> size(Business::SIZE_SMALL)-> left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, Business::id())-> left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id())-> where(Business::location_id(), SQL::in($business_locations))-> group_by(Business::id())-> select(SQL::count(BusinessOwner::id())); Which can also be represented as: $query = new Business(); $query->set_type(Business::TYPE_AUTOMOTIVE); $query->set_size(Business::SIZE_SMALL); $query->left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, $query->id()); $query->left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id()); $query->where(Business::location_id(), SQL::in($business_locations)); $query->group_by(Business::id()); $query->select(SQL::count(BusinessOwner::id())); This would produce a query like: SELECT COUNT(`business_owners`.`id`) FROM `businesses` LEFT JOIN `business_owners` ON `business_owners`.`business_id` = `businesses`.`id` LEFT JOIN `owners` ON `owners`.`id` = `business_owners`.`owner_id` WHERE `businesses`.`type` = 'automotive' AND `businesses`.`size` = 'small' AND `businesses`.`location_id` IN ( 1, 2, 3, 4 ) GROUP BY `businesses`.`id` Please keep in mind that the syntax might not be perfectly correct (I only wrote this off the top of my head). Anyway, what do you think of this style of querying? Is the first method or the second better/clearer/cleaner/etc? What would you do to improve it?

    Read the article

  • Data from a table in 1 DB needed for filter in different DB...

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), a WorkerName (varchar), and a ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-DB queries are not allowed in a filter statement. What are my options? Seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether? SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME()) That allows you to chain filters together; this next one sits below the first one, so it only pulls from the first's filtered set. SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
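
    For what it's worth, a hedged sketch of the mapping table implied by that filter is below. The column names come from the description (the filter compares WorkerID with SUSER_SNAME(), so it is assumed to hold the login name); the data types are assumptions. The table would also need to be published as an article so it exists on every subscriber.

        CREATE TABLE dbo.tblWorkerOwnership
        (
            PrimaryID uniqueidentifier NOT NULL
                      CONSTRAINT PK_tblWorkerOwnership PRIMARY KEY
                      CONSTRAINT DF_tblWorkerOwnership_PrimaryID DEFAULT NEWID(),
            WorkerID  nvarchar(128)    NOT NULL,  -- login name matched against SUSER_SNAME()
            ClientID  int              NOT NULL
        );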

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling improve performance, and by how much? Would it be good to have a default at application config level or at DBML level to specify whether all select queries are to be compiled? And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I found an answer from the author: ricom 6 Jul 2007 3:08 AM Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason. Linq now has analogs. Also from 10 Tips to Improve your LINQ to SQL Application Performance: If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query. However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check the queries and suggest whether compiling is recommended. Related Articles for LINQ to SQL: MSDN How to: Store and Reuse Queries (LINQ to SQL) 10 Tips to Improve your LINQ to SQL Application Performance Related Articles for Entity Framework: MSDN: CompiledQuery Class Exploring the Performance of the ADO.NET Entity Framework - Part 1 Exploring the Performance of the ADO.NET Entity Framework - Part 2 ADO.NET Entity Framework 4.0: Making it fast through Compiled Query

    Read the article

  • SQL SERVER 2005 with Windows 7 Problems

    - by azamsharp
    First of all, I restored the database from another server, and now all the stored procedures are named as [azamsharp].[usp_getlatestposts]. I think [azamsharp] is prefixed since it was the user on the original server. Now, on my local machine this does not run. I don't want the [azamsharp] prefix on all the stored procedures. Also, when I right-click on the sproc I cannot even see the Properties option. I am running SQL Server 2005 on Windows 7. UPDATE: The weird thing is that if I access the production database from my machine I can see the Properties option. So, there is really something wrong with Windows 7 security. UPDATE 2: When I ran the orphan users stored procedure it showed two users, "azamsharp" and "dbo1". I fixed the "azamsharp" user but "dbo1" is not getting fixed. When I run the following script: exec sp_change_users_login 'update_one', 'dbo1', 'dbo1' I get the following error: Msg 15291, Level 16, State 1, Procedure sp_change_users_login, Line 131 Terminating this procedure. The Login name 'dbo1' is absent or invalid.
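
    For reference, the orphaned-user commands being referred to look like this in SQL Server 2005. The final error simply means that no server login called 'dbo1' exists, so there is nothing to map that user to until such a login is created (or the user is dropped).

        -- List database users whose SIDs match no login on this server.
        EXEC sp_change_users_login 'Report';

        -- Remap a user to an existing login of the same name.
        EXEC sp_change_users_login 'Update_One', 'azamsharp', 'azamsharp';

        -- 'dbo1' will keep failing until a login with that name exists:
        -- CREATE LOGIN dbo1 WITH PASSWORD = '<strong password here>';
        -- EXEC sp_change_users_login 'Update_One', 'dbo1', 'dbo1';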

    Read the article

  • GLIBC_2.8 not found

    - by Thomas Nilsson
    As a newbie I seem to have messed up my upgrade, leaving my system in a very unstable state. I attempted an upgrade from 8.04 LTS which ended in an error about libc and kernel upgrades. I tried to upgrade the kernel but am now unsure if that worked, because when I retried my dist-upgrade there were a lot of errors about pre-dependencies and leaving packages unconfigured. Now I have a system that answers almost every command with: /lib/libc.so.6: version `GLIBC_2.8' not found (required by /lib/libselinux.so.1) I probably should try a complete re-installation, but I'm investigating if there is any possibility of getting a working glibc so that I can at least have some commands working to ensure that my backups are recent etc. before doing the clean install. Not even 'ls' works without saying "glibc_2.8 not found".

    Read the article

  • Cloning a NAS drive which hosts a SQL Server DB

    - by Adrian Hand
    We have a system in the field running a server application which is suffering with major performance issues. The system in question has 2 onboard 300gb sas drives in RAID 5 from which it boots Windows Server 2003, and a 6tb buffalo terastation NAS unit (also RAID 5) to which the server app does all of its reading and writing. I believe the terastation is the source of all our woes. Whilst under load, reads and writes tick by at something of the order of 1meg/sec, though the network in question is hardly utilised. The terastation contains various data, but crucially hosts a full instance worth of SQL Server .mdf and .ldf files (master etc - the whole shooting match) I wish to stop all the services on the server, then take everything on the terastation and essentially clone it to some alternative onboard storage, so as to eliminate the terastation from the equation as far as poor performance is concerned. ie the terastation is currently drive D: - I want to copy everything off and then have the duplicate assume the drive letter so that as far as the software is aware, nothing is different. This is tricky because of the mdf and ldf files - everything else will work with a straight up file copy. Can anyone suggest a means to achieve what I am describing? Many thanks!
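
    One hedged way to handle the .mdf/.ldf files during the copy window, assuming the services really can be stopped for it: detach the database cleanly, copy its files along with everything else, then re-attach them from the new volume once it has taken over the drive letter. The database name and paths below are invented.

        USE master;
        EXEC sp_detach_db @dbname = N'AppDatabase';

        -- ...copy D:\SQLData\AppDatabase*.* with the rest of the share,
        --    point drive D: at the new storage, then:

        CREATE DATABASE AppDatabase
        ON (FILENAME = N'D:\SQLData\AppDatabase.mdf'),
           (FILENAME = N'D:\SQLData\AppDatabase_log.ldf')
        FOR ATTACH;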

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is about how you can backup databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created.Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.It's important to consider the cost impacts of this as well. You are charged for how ever many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
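
    The transactionally-consistent copy step described above is a single statement issued against the logical server's master database; the database names here are hypothetical, and progress can be watched in sys.dm_database_copies while the copy runs.

        CREATE DATABASE SalesDb_Export AS COPY OF SalesDb;

        -- In master, while the copy is in flight:
        SELECT * FROM sys.dm_database_copies;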

    Read the article

  • SQL Server DBA - How to get a good one!

    - by ETFairfax
    I'm a lone developer. I am currently developing an application which is seeing me get way way way out of my depth when it comes to SQL DBA'ing, and have come to realise that I should hire a DBA to help me (which has full support from the company). Problem is - who? This SO thread sees someone hire a DBA only to realise that they will probably cause more harm than good! Also, I have just had a bad experience with an ASP.NET/C# contractor that has let us down. So, can anyone out there on SO either... a) Offer their services. b) Forward me onto someone that could help. c) Give some tips on vetting a DBA. I know this isn't a recruitment site, so maybe some good answers for c) would be a benefit for other readers!! BTW: The database is SQL Server 2008. I'm running into performance issues (mainly timeouts) which I think would be sorted out by some proper indexing. I would also need the DBA to provide some sort of maintenance plan, and to review how our database will deal with what we intend to throw at it in the future!

    Read the article

  • Upgrade SQLServer 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. Ok, so here is the gotcha. We need to do this with little or no downtime. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime where we bring up a new cluster on the new hardware and then migrate the databases 1 by 1.

    Read the article
