Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL:

        internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
        {
            ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
            {
                ProcessCode = processCode,
                SubId = workflowCreated.SubId,
                EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
            };
            this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
        }

    This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger:

        ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
        ON [Master].[ProcessLogging]
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET IDENTITY_INSERT [Master].[ProcessLogging] ON;

            INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
            SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted

            SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
        END

    Without getting into all of the variations I've tried, this last attempt produces this error: InvalidOperationException: Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert. I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs? Note: I do not want to use C#'s DateTime.Now for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure). 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
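
    One commonly suggested alternative (a hedged sketch, not from the original post, and untested against the AutoSync error described above) is to drop the INSTEAD OF trigger and let a DEFAULT constraint supply the server clock, so that any insert that omits EventTime picks up GETDATE() on the server:

        -- Hypothetical sketch: assumes the trigger can be removed and that callers
        -- (including the insert issued from the other stored procedure) omit EventTime.
        ALTER TABLE [Master].[ProcessLogging]
            ADD CONSTRAINT DF_ProcessLogging_EventTime DEFAULT (GETDATE()) FOR EventTime;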


  • Azure VM with many IPs or SSL certificates

    - by timmah.faase
    I am looking to move our hosting environment to Azure and by doing so have created a sandpit VM to figure things out. We host around 300-400 websites in IIS, and about 2% of these sites have unique, non-wildcard certificates, each requiring a unique public IP in our current setup. Can you get a range of IPs pointing to one VM/endpoint? Or is it possible to create an SSL proxy? I've never created an SSL proxy but like the idea of it. I'd need advice here on how to proceed if this is the best option. Sorry if this has been answered! Sorry also if my question isn't worded eloquently.


  • MS SQL datetime precision problem

    - by Nailuj
    I have a situation where two persons might work on the same order (stored in an MS SQL database) from two different computers. To prevent data loss in the case where one would save his copy of the order first, and then a little later the second would save his copy and overwrite the first, I've added a check against the lastSaved field (datetime) before saving. The code looks roughly like this:

        private bool orderIsChangedByOtherUser(Order localOrderCopy)
        {
            // Look up fresh version of the order from the DB
            Order databaseOrder = orderService.GetByOrderId(localOrderCopy.Id);
            if (databaseOrder != null && databaseOrder.LastSaved > localOrderCopy.LastSaved)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    This works most of the time, but I have found one small bug. If orderIsChangedByOtherUser returns false, the local copy will have its lastSaved updated to the current time and then be persisted to the database. The value of lastSaved in the local copy and the DB should now be the same. However, if orderIsChangedByOtherUser is run again, it sometimes returns true even though no other user has made changes to the DB. When debugging in Visual Studio, databaseOrder.LastSaved and localOrderCopy.LastSaved appear to have the same value, but when looking closer they sometimes differ by a few milliseconds. I found this article with a short notice on the millisecond precision for datetime in SQL: "Another problem is that SQL Server stores DATETIME with a precision of 3.33 milliseconds (0.00333 seconds)." The solution I could think of for this problem is to compare the two datetimes and consider them equal if they differ by less than, say, 10 milliseconds. My question to you is then: are there any better/safer ways to compare two datetime values in MS SQL to see if they are exactly the same?
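
    For reference, the tolerance comparison proposed above could look roughly like the following T-SQL (a sketch only; the values are invented, and a rowversion or datetime2 column is the usual alternative when exact change detection is needed):

        -- Hypothetical sketch: treat two datetime values as equal if they are
        -- within 10 ms of each other, absorbing datetime's 3.33 ms rounding.
        DECLARE @localLastSaved datetime, @dbLastSaved datetime;
        SET @localLastSaved = '2010-01-05 10:30:00.007';
        SET @dbLastSaved    = '2010-01-05 10:30:00.010';

        IF ABS(DATEDIFF(millisecond, @dbLastSaved, @localLastSaved)) < 10
            PRINT 'Treated as unchanged';
        ELSE
            PRINT 'Changed by another user';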


  • Does Azure only support ASP.NET MVC applications and if so how should I adapt my design?

    - by RPK
    I am writing a small ASP.NET web application. My worry is that I want to keep the architecture the same, giving me the option to install it on an intranet or on a cloud platform. I am not using MVC, but lately learned that Azure only supports ASP.NET MVC applications. I want to know whether ASP.NET Web Forms applications work on Azure/AppHarbor or not. Do I need to convert this application to MVC if Web Forms is not supported? Will the same application run on an intranet as well?


  • Instead of trigger in SQL Server - loses SCOPE_IDENTITY?

    - by kastermester
    Hey StackOverflow, I am (once again) having some issues with some SQL. I have a table on which I have created an INSTEAD OF trigger to enforce some business rules (the rules are not really important). This works as intended. My issue is that now, when inserting data into this table, SCOPE_IDENTITY() returns a NULL value rather than the actual inserted identity. My guess is that this is because it is now out of scope - but then how do I get this in scope? I am using SQL Server 2008. Per request, here's the SQL. Insert + scope code:

        INSERT INTO [dbo].[Payment]([DateFrom], [DateTo], [CustomerId], [AdminId])
        VALUES ('2009-01-20', '2009-01-31', 6, 1)

        SELECT SCOPE_IDENTITY()

    Trigger:

        CREATE TRIGGER [dbo].[TR_Payments_Insert]
        ON [dbo].[Payment]
        INSTEAD OF INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            IF NOT EXISTS(SELECT 1 FROM dbo.Payment p INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                          WHERE (i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                             OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo))
            AND NOT EXISTS (SELECT 1 FROM Inserted p INNER JOIN Inserted i ON p.CustomerId = i.CustomerId
                            WHERE (i.DateFrom <> p.DateFrom AND i.DateTo <> p.DateTo)
                              AND ((i.DateFrom >= p.DateFrom AND i.DateFrom <= p.DateTo)
                                OR (i.DateTo >= p.DateFrom AND i.DateTo <= p.DateTo)))
            BEGIN
                INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
                SELECT DateFrom, DateTo, CustomerId, AdminId FROM Inserted
            END
            ELSE
            BEGIN
                ROLLBACK TRANSACTION
            END
        END

    The code did work before the creation of this trigger. I am using LINQ to SQL in C#, and as far as I can see I have no way of changing SCOPE_IDENTITY to @@IDENTITY - is there really no way out of this one?
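
    One workaround sometimes used with INSTEAD OF INSERT triggers (a sketch only, not from the original post, and it may or may not satisfy LINQ to SQL's identity handling) is to return the new identity from inside the trigger, where the real INSERT runs and SCOPE_IDENTITY() is still in scope:

        -- Hypothetical sketch: after the trigger's own INSERT, SCOPE_IDENTITY()
        -- is valid inside the trigger body and can be handed back to the caller.
        INSERT INTO dbo.Payment (DateFrom, DateTo, CustomerId, AdminId)
        SELECT DateFrom, DateTo, CustomerId, AdminId FROM Inserted;

        SELECT SCOPE_IDENTITY() AS PaymentId;  -- returned as a result set to the caller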


  • Slides and links from Cloud Computing Congress session on Windows Azure Platform

    - by Eric Nelson
    On Tuesday (16th March 2010) I presented on Azure to a non-technical audience at the Cloud Computing Congress. Great audience, lots of folks, lots of questions during and after – although it did feel odd to do a session with no code :-) Lots of people asked me for my slide deck – which is a 30-minute non-technical overview. I will get it on my slideshare.net (which is being temperamental), but in the meantime I have hosted it on SkyDrive (or use the download link). Related Links: Steve Ballmer on Cloud Computing – We’re all in; UK Azure Online Community – join today; UK Windows Azure Site; Start working with Windows Azure; TCO and ROI calculator for Windows Azure.


  • do I have both sql 05 and 08 installed?

    - by Blankman
    In SQL Server Configuration Manager, under 'SQL Server Services', I see two items that look like SQL Server instances: sql server (sqlexpress) and sql server (mssqlserver). Does that mean I have two versions installed at the same time? The 'sql server (mssqlserver)' instance is currently stopped.
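
    A quick way to check what each entry actually is (a sketch, assuming you can connect to each instance in turn, e.g. .\SQLEXPRESS and the default instance) is to ask the instance for its version and edition:

        -- Run against each instance; the product version reveals whether it is
        -- SQL Server 2005 (9.x) or 2008 (10.x), and the edition distinguishes Express.
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('Edition')        AS Edition,
               SERVERPROPERTY('InstanceName')   AS InstanceName;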


  • Live examples of the Windows Azure Platform running Java and Ruby on Rails

    - by Eric Nelson
    At QCon in March we had a booth focused on interoperability, out of which came the idea to create an application implemented in both Java and Ruby on Rails, running on top of the Windows Azure Platform. Nothing fancy, just an application to capture attendees' views on Microsoft and interoperability. It was implemented by Active Web Solutions, long-time fans of Azure. Worth a quick squint :-) Check out the related links below for info to get you up and running. Check out the Java version http://ukinterop.cloudapp.net/ and the Ruby on Rails version http://rubyukinterop.cloudapp.net/ (ran out of time to finish this one). Related Links: Tomcat Solution Accelerator http://code.msdn.microsoft.com/winazuretomcat AzureRunMe http://azurerunme.codeplex.com/ UK Azure Online Community – join today. UK Windows Azure Site Start working with Windows Azure


  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment.
    Momma Bear Level - Just the right level... ;0) Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution. Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress).
    Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos. Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better. SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.


  • How to set minimum SQL Server resource allocation for a database?

    - by Jeff Widmer
    Over the past Christmas holiday week, when the website I work on was experiencing very low traffic, we saw several Request timed out exceptions (one on each day 12/26, 12/28, 12/29, and 12/30) on several pages that require user authentication. We rarely saw Request timed out exceptions prior to this very low traffic week. We believe the timeouts were due to the database that it uses being "spun down" on the SQL Server and taking longer to spin up when a request came in. There are 2 databases on the SQL Server (SQL Server 2005), one which is specifically for this application and the other for the public facing website and for authentication; so in the case where users were not logged into the application (which definitely could have been for several hours at a time over Christmas week) the application database probably received no requests. We think at this point SQL Server reallocated resources to the other database and then when a request came in, extra time was needed to spin up the application database and the timeout occurred. Is there a way to tell SQL Server to give a minimum amount of resources to a database at all times?
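
    As far as I know there is no per-database reservation in SQL Server 2005; the closest knob is the instance-level minimum server memory setting. A hedged sketch of how that is usually set (the value is a placeholder, and it applies to the whole instance rather than just the application database):

        -- Hypothetical sketch: keep the instance from trimming its memory below a floor.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'min server memory (MB)', 1024;  -- placeholder value
        RECONFIGURE;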


  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all SP'd up. I'm getting occasional issues with SQL Server hanging with the CPU usage on 100% on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD. Here is the stored proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue.

        ALTER PROCEDURE [dbo].[MostPopularRead]
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT c.ForeignId
                 , ct.ContentSource as ContentSource
                 , sum(ch.HitCount * hw.Weight) as Popularity
                 , (sum(ch.HitCount * hw.Weight) * 100) / @Total as Percent
                 , @Total as TotalHits
            from ContentHit ch WITH (NOLOCK)
            join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId
            join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId
            join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId
            where ch.CreatedDate between @Then and @Now
            group by c.ForeignId
                   , ct.ContentSource
            order by sum(ch.HitCount * hw.HitWeightMultiplier) desc
        END

    The stored proc reads from the table "ContentHit", which is a table that tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table; it's pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:

        BEGIN TRAN
        insert into [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment)
        values (@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment)
        COMMIT TRAN

    The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the stored proc executes for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs. Can someone please advise how best I can solve this problem? I don't care about reading dirty data... Thanks
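
    To confirm that SOS_SCHEDULER_YIELD really dominates during a hang (a diagnostic sketch, not from the original post), the aggregate wait statistics can be checked with:

        -- Snapshot of cumulative waits since the instance started; run during a
        -- hang and compare against a baseline taken when the server is healthy.
        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;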


  • Windows Azure Tools for Microsoft Visual Studio 1.2 (June 2010)

    - by Jim Duffy
    I have good news! Microsoft has released the June 2010 Windows Azure Tools + SDK. These new tools extend Visual Studio (both VS 2010 & VS 2008) for Windows Azure development. With these tools you can create, configure, debug, build, run, and deploy scalable web apps on Windows Azure. At first glance, some of the most interesting points are that Visual Studio 2010 RTM is fully supported, as is .NET 4. You can choose to build your apps with the .NET 3.5 or .NET 4 frameworks. Another area of interest that I’ll be digging into is the cloud storage explorer. It provides a read-only view of your Windows Azure tables and blob containers from within Visual Studio via the Server Explorer. I’m sure I’ll have more to say about the Windows Azure Tools as I dig deeper… Have a day. :-|


  • How can you connect to a SQL Server not on your domain?

    - by scotty2012
    I have a test machine that's not allowed on our domain because we are testing corporately unsupported applications (SQL 2008 and Server 2008). I want to use Management Studio to connect to the SQL 2008 server but can't get it working. I have authentication set to mixed mode and I've checked 'allow remote connections to this server', but when I try to access it, I get the error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

    Since it says the provider is Named Pipes, I enabled Named Pipes on the server, but still no dice. I've tried connecting to the system name, the IP, the system name\instance and IP\instance, all to no avail. Is what I'm trying to do not possible? Edit: Well, through some basic troubleshooting, I've found that I can't ping the server from my client computer, but I can ping the client computer from the server. They are both plugged into the same switch and are sitting next to each other. The Windows firewall on the server is turned on; are there some specific settings I need to enable? DAH! So it was the firewall blocking me. How can I enable the firewall and still connect?
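
    When the firewall goes back on, the port SQL Server is actually listening on needs an inbound rule; one way to find it (a sketch, run from a connection made locally on the server) is:

        -- Shows the TCP port of the current connection (NULL for shared memory or
        -- named pipes); a firewall rule can then be opened for that port, plus UDP
        -- 1434 for the SQL Server Browser if a named instance is used.
        SELECT local_net_address, local_tcp_port
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;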


  • configure time sync for azure VMs

    - by Pharao2k
    I have several ExtraSmall-sized Azure VMs (PaaS / Cloud Service based) that are all experiencing drifting of the Windows clock. Research showed that this is quite common, especially in VMs with shared cores. Unfortunately even after configuring the w32time service to sync with time.windows.com and forcing a resync (w32tm /resync), there seems to be a time difference of 2 seconds to the configured NTP server. Though Microsoft states that w32tm is not meant as a high-precision sync tool, a difference of 2 seconds is (IMO) quite a lot for server-activities/processing. What does one have to do to get more accurate time sync?


  • How do I login to SQL Server without having to use "Run as Administrator" when starting Management Studio?

    - by MedicineMan
    When I start Management Studio, unless I use the "Run as Administrator" selection, I cannot login to my local SQL Server. Is this normal? I am a normal developer and don't believe I have a need for high security on my local machine. I'm running SQL Server 2008, Windows 7. The error I get is:

        Cannot connect to (local)
        Additional Information: Login failed for user 'MYCOMPUTER\MyName'. (Microsoft SQL Server, Error: 18456)
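
    This is typical on Windows 7 with UAC: an administrators-group login only takes effect when Management Studio runs elevated. A hedged sketch of how a personal Windows login is usually granted rights on a local development instance (run once from an elevated connection; the login name is a placeholder):

        -- Hypothetical sketch for a local development box only.
        CREATE LOGIN [MYCOMPUTER\MyName] FROM WINDOWS;
        EXEC sp_addsrvrolemember @loginame = 'MYCOMPUTER\MyName', @rolename = 'sysadmin';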


  • Remote SQL server connection failure

    - by Sevki
    I am trying to connect to my MS SQL Server 2008 Web instance and I'm failing horribly... I get error 26, and before you jump on me, I have done these:

        Check the spelling of the SQL Server instance name that is specified in the connection string.
        Use the SQL Server Surface Area Configuration tool to enable SQL Server to accept remote connections over the TCP or named pipes protocols. For more information about the SQL Server Surface Area Configuration Tool, see Surface Area Configuration for Services and Connections.
        Make sure that you have configured the firewall on the server instance of SQL Server to open ports for SQL Server and the SQL Server Browser port (UDP 1434).
        Make sure that the SQL Server Browser service is started on the server.

    In addition to these, I have disabled the firewall completely and tried other ports; nothing works. The same credentials work on the server but not on the client. This is the exact error message:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)

    Can anybody help?


  • Predefined column names in SQL Server pivot table

    - by Marcos Buarque
    Hi, the other day I opened a topic here in StackOverflow (stackoverflow.com/questions/4663698/how-can-i-display-a-consolidated-version-of-my-sql-server-table). At that time I needed help on how to show data on a pivot table. From the help I got here in the forum, my research led me to this page about dynamic SQL: www.sommarskog.se/dynamic_sql.html. And then it led me to this awesome SQL script by Itzik Ben-Gan that will create a stored procedure that outputs a pivot table exactly the way I want: sommarskog.se/pivot_sp.sp. Well, almost. I need one change in this stored procedure. Instead of having dynamic column names pulled from the @on_cols variable in the SPROC, I need the output table to hold generic column names in simple ASC order. Could be, for example, col1, col2, col3, col4 ... The dynamic column names are a problem for me. So I need them named by their index in the order they appear. I have tried all sorts of things changing this great SQL script, but it won't work. I did not paste the code from the author because it is too long, but the link above will get us there. Any help appreciated. Thank you very much
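
    As an illustration of the idea (a hedged sketch only, not the Ben-Gan procedure itself; the table and column names are invented), the dynamic column list can be built with ROW_NUMBER so the output columns come out as col1, col2, col3, ... in ascending order of the pivoted values:

        -- Hypothetical sketch: source table dbo.Sales(Customer, Product, Amount).
        DECLARE @inList nvarchar(max), @selList nvarchar(max), @sql nvarchar(max);

        -- Pivoted values in ascending order...
        SELECT @inList = STUFF((
            SELECT ', ' + QUOTENAME(Product)
            FROM (SELECT DISTINCT Product FROM dbo.Sales) AS d
            ORDER BY Product
            FOR XML PATH('')), 1, 2, '');

        -- ...and the same values renamed col1, col2, col3, ... by position.
        SELECT @selList = STUFF((
            SELECT ', p.' + QUOTENAME(Product) + ' AS col'
                   + CAST(ROW_NUMBER() OVER (ORDER BY Product) AS varchar(10))
            FROM (SELECT DISTINCT Product FROM dbo.Sales) AS d
            ORDER BY Product
            FOR XML PATH('')), 1, 2, '');

        SET @sql = N'SELECT p.Customer, ' + @selList
                 + N' FROM (SELECT Customer, Product, Amount FROM dbo.Sales) AS s'
                 + N' PIVOT (SUM(Amount) FOR Product IN (' + @inList + N')) AS p;';

        EXEC sp_executesql @sql;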


  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you’re using Azure SDK 1.1, which you probably will be if you’ve moved to Visual Studio 2010. To change the default database to something other than SQLEXPRESS for Development Storage, look at http://msdn.microsoft.com/en-us/library/dd203058.aspx - at the bottom it states:

        Using Development Storage with SQL Server Express 2008: By default the local Windows group BUILTIN\Administrator is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations. Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008. See SQL Server 2008 Security Changes for more information.

        Changing the SQL Server instance used by Development Storage: By default, Development Storage will use the SQL Express instance. This can be changed by calling "DSInit.exe /sqlinstance:<SQL Server instance>" from the Windows Azure SDK command prompt.


  • Migrating an Existing ASP.NET App to run on Windows Azure

    - by kaleidoscope
    Converting an existing ASP.NET application to Windows Azure:

        Method 1: 1. Add a Windows Azure Cloud Service to the existing solution. 2. Add a WebRole project to the solution and select the existing ASP.NET project in it.
        Method 2: Create a new Cloud Service project and add the ASP.NET project (which you want to deploy on Azure) to it using: 1. Solution | Add | Existing Project, then 2. Add | Web Role Project in solution.

    Converting a SQL Server database to SQL Azure:

        Step 1: Migrate the <Existing Application>.MDF.
        Step 2: Migrate the ASP.NET providers.
        Step 3: Change the connection strings.

    More details can be found at http://blogs.msdn.com/jnak/archive/2010/02/08/migrating-an-existing-asp-net-app-to-run-on-windows-azure.aspx

    Ritesh, D
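
    As a hedged illustration of step 3 (the server name, database and credentials are placeholders; the exact string should be taken from the SQL Azure portal), a SQL Azure connection string typically looks something like:

        Server=tcp:yourserver.database.windows.net;Database=MyDatabase;User ID=myuser@yourserver;Password=...;Trusted_Connection=False;Encrypt=True;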


  • Windows SQL wildcards and ASP.net parameters

    - by Vinzcent
    Hey, in my SQL statement I use wildcards, but when I try to select something, it never selects anything, while when I execute the query in Microsoft SQL Studio it works fine. What am I doing wrong? Click handler:

        protected void btnTitelAuteur_Click(object sender, EventArgs e)
        {
            cvalTitelAuteur.Enabled = true;
            cvalTitelAuteur.Validate();
            if (Page.IsValid)
            {
                objdsSelectedBooks.SelectMethod = "getBooksByTitleAuthor";
                objdsSelectedBooks.SelectParameters.Clear();
                objdsSelectedBooks.SelectParameters.Add(new Parameter("title", DbType.String));
                objdsSelectedBooks.SelectParameters.Add(new Parameter("author", DbType.String));
                objdsSelectedBooks.Select();
                gvSelectedBooks.DataBind();
                pnlZoeken.Visible = false;
                pnlKiezen.Visible = true;
            }
        }

    In my data access layer:

        public static DataTable getBooksByTitleAuthor(string title, string author)
        {
            string sql = "SELECT 'AUTHOR' = tblAuthors.FIRSTNAME + ' ' + tblAuthors.LASTNAME, tblBooks.*, tblGenres.GENRE " +
                         "FROM tblAuthors INNER JOIN tblBooks ON tblAuthors.AUTHOR_ID = tblBooks.AUTHOR_ID INNER JOIN tblGenres ON tblBooks.GENRE_ID = tblGenres.GENRE_ID " +
                         "WHERE (tblBooks.TITLE LIKE '%@title%');";
            SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
            da.SelectCommand.Parameters.Add("@title", SqlDbType.Text);
            da.SelectCommand.Parameters["@title"].Value = title;
            DataSet ds = new DataSet();
            da.Fill(ds, "Books");
            return ds.Tables["Books"];
        }

    Thanks, Vincent
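
    For reference, the problem is that '%@title%' sits inside the quoted literal and is treated as plain text rather than as the parameter; the usual fix is to concatenate the wildcards around the parameter in the SQL (or, equivalently, to pass the wildcards in the parameter value). A minimal T-SQL demonstration of the difference (the strings are illustrative):

        DECLARE @title nvarchar(50);
        SET @title = N'azure';

        SELECT CASE WHEN N'sql azure notes' LIKE '%@title%' THEN 1 ELSE 0 END AS literal_match,          -- 0: looks for the literal text "@title"
               CASE WHEN N'sql azure notes' LIKE '%' + @title + '%' THEN 1 ELSE 0 END AS bound_match;    -- 1: actually uses the parameter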


  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure. UPDATE [DataSource]    SET       [Flags] = [Flags] & 0x7FFFFFFD, -- broken link       [Link] = NULL FROM    [Catalog] AS C    INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link] WHERE    (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*') Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. 
    Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source being broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
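
    The post says to find the new shared data source's ItemID by querying Catalog using Path and Name; a sketch of that lookup (the path is a placeholder for your own catalogue) would be:

        -- Find the ItemID of the recreated shared data source so it can be
        -- plugged into the UPDATE above.
        SELECT ItemID, Path, Name, [Type]
        FROM dbo.[Catalog]
        WHERE [Type] = 5  -- 5 = data source, per the Type values described above
          AND Path = '/Data Sources/MySharedDataSource';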


  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine that has SQL Server 2008 Standard installed. Periodically (about once an hour) I am getting Event ID 17890 several times in a row. An example:

        6:28:54 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
        6:34:27 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
        6:38:55 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."

    This pattern repeated at 7:26 - 7:37, 8:26 - 8:36, 9:24 - 9:35 and so on, with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time. Backups run at 2:00. This subsided from 11:00 at night until it resumed at 4:00 in the morning and has been continuing the intermittent 10-minute glitch periods. As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google and not finding any of them changing the situation. The pattern I am documenting here is from after removing the paging files, so I'm not even sure where the paged-out SQL process memory could be going. I also changed the SQL process memory to have a minimum of 500MB and a maximum of 2GB of RAM (as this is a light-duty database server serving only a small workgroup). Has anyone encountered this? Prior to disabling the page files this error would cause 5 minutes of disk thrashing that disabled access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but I'm not seeing a performance drop at least. Any suggestions would be welcome.
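
    One way to watch what the engine itself thinks is happening to its memory (a diagnostic sketch, available on SQL Server 2008) is to poll sys.dm_os_process_memory alongside the event log entries:

        -- Snapshot of the SQL Server process's view of its own memory; the
        -- *_memory_low flags and page_fault_count are useful next to Event 17890.
        SELECT physical_memory_in_use_kb,
               locked_page_allocations_kb,
               page_fault_count,
               memory_utilization_percentage,
               process_physical_memory_low,
               process_virtual_memory_low
        FROM sys.dm_os_process_memory;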


  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be to import data from an Excel file on a weekly or monthly basis. At first, I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand or use the services of an SSIS package? The basic process will be as follows: A separate process will download an .xls file to a local file share. The xls file will have a filename like 'myfilename MON YY'. I will need to read the month and year from the filename, reformat it to a SQL date and then query a DimDate table to find the corresponding date key. For each row (after the first 2 header rows), insert the data with the date key, unless the row is a total row, then ignore. Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat file datasource, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this:

        select DateKey from DimDate where ActualDate = convert(datetime, '01-' + 'JAN-10', 120)

    but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?
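
    If the work ends up in plain T-SQL rather than SSIS expressions, a hedged sketch of the date-key lookup (DimDate and its columns are assumed to match the post; the filename is a placeholder) could look like:

        -- Hypothetical sketch: turn 'myfilename JAN 10' into the first of the month
        -- and look up the corresponding DimDate key.
        DECLARE @fileName varchar(100), @monthYear varchar(6), @firstOfMonth datetime;

        SET @fileName     = 'myfilename JAN 10';
        SET @monthYear    = RIGHT(@fileName, 6);                         -- 'JAN 10'
        SET @firstOfMonth = CONVERT(datetime, '01 ' + @monthYear, 6);    -- style 6 = dd mon yy

        SELECT DateKey
        FROM dbo.DimDate
        WHERE ActualDate = @firstOfMonth;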


  • Windows Azure openSUSE: rcnetwork not starting

    - by djechelon
    I have a Linux VM in Azure, created from their default image. My problem is simply that the network init script doesn't seem to start, so dependent services (apache, postfix...) won't start. If I run yast runlevel and try to start postfix, it asks me to start network first: if I accept, network is started without errors and then postfix is started. While network is configured to start on boot, it just doesn't appear to have started. Anyway, SSH connections work fine. For now, I have had to edit my init scripts and remove network from the Required-Start list, but that didn't work for postfix (even after running systemctl --system daemon-reload). How can I fix all this?


  • How does azure memory usage work?

    - by Jed Grant
    I have a Windows Azure website. In the dashboard it shows me that I have used 1.51 GB of the 2 GB available per hour. I keep increasing the number of instances available in the shared node so the site doesn't shut down. After each hour finishes, the memory usage still shows 1.51 GB used. I assume this would start at zero and then be used as time goes on, but that doesn't appear to be the case. How does server memory work? What are some reasons my application is using this much memory? (I use no output caching and generally have just built off of the basic MVC templates provided in Visual Studio.) What other considerations should I be making to decrease the amount of memory needed?

