Search Results

Search found 111053 results on 4443 pages for 'system data sqlclient sql'.


  • SQL Server 2008 Restore from Backup fails with error 3241 'cannot process this media family'

    - by pearcewg
    I am attempting to back up a database from a SQL Server instance on one machine and restore it to another, and I am encountering the frequently discussed 'SQL Server cannot process this media family' error. Both of my instances are SQL Server 2008, but with different patch levels: the restore side is 10.0.2531.0 and the backup side is 10.0.1600.22 ((SQL_PreRelease).080709-1414). The restore DB is Express; I'm not sure about the backup version. The backup instance is on a virtual private server and the restore is on my development box. When I restore to a different database on the source (backup) server, it restores fine. There is lots of material on Google, and some on Stack Overflow, about this issue, but nothing covering this exact situation. Any thoughts? It should be straightforward to do a backup and restore from one machine to another (having done this thousands of times with SQL 6.5, 7, 2000 and 2005).
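
    A quick way to see what is actually inside the backup file, and which server build wrote it, is to inspect the media header before restoring; the path below is illustrative:

        RESTORE HEADERONLY FROM DISK = N'C:\Backups\MyDb.bak';   -- shows backup set, media family and the writing server's version
        RESTORE VERIFYONLY FROM DISK = N'C:\Backups\MyDb.bak';   -- checks that the media is readable and complete

    If HEADERONLY itself fails with the same 3241 error, the file was most likely damaged in transit (copying in a non-binary mode is a common culprit); if it succeeds, compare the version numbers it reports against the target server.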

    Read the article

  • SQL Compact import DB from SQL Server Express with Server Management Studio

    - by Sasha
    Hi! I am trying to import an SQL script, generated with Server Management Studio, into SQL Compact 3.5 and I get a lot of errors. What am I doing wrong? I generate the script with the "Task/Generate Script" context menu. Part of my script:

        CREATE TABLE [LogMagazines](
            [IdUser] [int] NOT NULL,
            [Text] [nvarchar](500) NULL,
            [TypeLog] [int] NOT NULL,
            [DateAndTime] [datetime] NOT NULL,
            [DetailMessage] [nvarchar](max) NULL,
            [Id] [int] IDENTITY(1,1) NOT NULL,
            CONSTRAINT [PK_LogMagazines] PRIMARY KEY ([Id] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    Knowledge Base:
    http://stackoverflow.com/questions/1525063/how-to-import-data-in-sql-compact-edition
    http://stackoverflow.com/questions/1515969/exporting-data-in-sql-server-as-insert-into/1515975
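
    SQL Compact 3.5 rejects several things that Management Studio emits for full SQL Server: the filegroup clauses (ON [PRIMARY]), the index WITH (...) options, and the nvarchar(max) type. A Compact-friendly equivalent of the table above might look roughly like this (ntext is substituted for nvarchar(max) as an assumption; adjust to your needs):

        CREATE TABLE [LogMagazines](
            [IdUser] int NOT NULL,
            [Text] nvarchar(500) NULL,
            [TypeLog] int NOT NULL,
            [DateAndTime] datetime NOT NULL,
            [DetailMessage] ntext NULL,                 -- SQL Compact has no nvarchar(max)
            [Id] int IDENTITY(1,1) NOT NULL,
            CONSTRAINT [PK_LogMagazines] PRIMARY KEY ([Id])
        );

    SQL Compact also executes one statement per command, so the GO separators and multi-statement batches in the generated script may need to be split up before running them.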

    Read the article

  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service which does a lot of very heavy load stuff and populates a number of SQL Server database tables. However, of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I call Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this database table. As far as I understand, my options are: (1) move everything onto Azure (but that involves a massive development overhead and risk), or (2) have another Windows Service on the dedicated server which essentially looks at changed records since the last update and then manually updates the SQL Azure table.
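
    If option (2) is chosen, SQL Server 2008's built-in Change Tracking can do the "what changed since my last sync" bookkeeping instead of hand-rolled timestamp columns. A minimal sketch, assuming a table dbo.Resolved with primary key Id (database, table and column names are hypothetical):

        ALTER DATABASE MyDb
            SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

        ALTER TABLE dbo.Resolved ENABLE CHANGE_TRACKING;

        -- On each sync pass, ask for everything changed since the version stored from the previous run:
        DECLARE @last_sync_version bigint = 0;   -- persist this value between runs
        SELECT ct.SYS_CHANGE_OPERATION, ct.Id, r.*
        FROM CHANGETABLE(CHANGES dbo.Resolved, @last_sync_version) AS ct
        LEFT JOIN dbo.Resolved AS r ON r.Id = ct.Id;

        SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- store this as the new @last_sync_version

    The follower service would then apply the returned insert/update/delete rows to the SQL Azure table; the Microsoft Sync Framework is the packaged alternative to writing that plumbing yourself.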

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague.

    So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports.

    Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too…

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead.

    dbo.DataSource

    The Primary key is DSID. Yes it’s a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though.

    Flags is an int, and I’ll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM [Catalog] AS C
           INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field.

    What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I’ll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10 get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of where the broken reports are – in case you get a surprise from a different data source being broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
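
    As a small aid to the "find its ItemID" step above, a lookup like the following returns the GUID to paste into that UPDATE; the data source name here is hypothetical:

        SELECT ItemID, Path, Name
        FROM dbo.[Catalog]
        WHERE Type = 5                           -- 5 = shared data source, per the Type values described above
          AND Name = N'MyNewSharedDataSource';   -- the shared data source you just recreated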

    Read the article

  • How to install SQL Server 2005 Configuration Manager without installing SQL Server Management Studio

    - by Arnold Zokas
    Hi, I need to configure SQL Server aliases on a public-facing production server. To do that, I need to install SQL Server Configuration Manager. I was not able to find a standalone installer for that, so I am having to install SQL Server 2005 Client Components. This approach is not ideal as we don't want to have SSMS on a public-facing production server. Is there a way to install SQL Server 2005 Configuration Manager without installing SQL Server Management Studio? Thanks, Arnold

    Read the article

  • SQL Server connection error (a weird one) (unsolved yet)

    - by Pinchy
    SQL Server can be connected to from the local system but cannot be connected to from a remote system on the network. The error code is 40 from Visual Studio and 1326 when I try to connect to SQL Server from Management Studio. The firewall isn't the problem, and TCP/IP connections are enabled in SQL Server. There are two PC terminals that can connect to the SQL Server, but the third one cannot, even though it uses the same connection string, so the connection string is right. It is SQL Server 2000. Any help will be appreciated, thanks.

    Read the article

  • An XEvent a Day (12 of 31) – Using the Extended Events SSMS Addin

    - by Jonathan Kehayias
    The lack of SSMS support for Extended Events, coupled with the fact that a number of the existing Events in SQL Trace were not implemented in SQL Server 2008, has no doubt been a key factor in its slow adoption rate. Since the release of SQL Server Denali CTP1, I have already seen a number of blog posts that talk about the introduction of Extended Events in SQL Server, because there is now a stub for it inside of SSMS. Don’t get excited yet; the functionality in CTP1 is very limited at this point,...(read more)

    Read the article

  • Keeping up with SQL Server cumulative updates

    - by AaronBertrand
    Yesterday, a conversation on twitter reminded me that I haven't been keeping up with posting cumulative updates. I missed these updates for SQL Server 2008 on March 15: Cumulative Update #7 for SQL Server 2008 SP1 (10.00.2766) Cumulative Update #10 for SQL Server 2008 RTM (10.00.1835) And yesterday Glenn Berry ( blog | twitter ) was the first I know of to blog about Cumulative Update #9 for SQL Server 2005 SP3 (9.00.4294). He also shares some interesting information about changes to the support policy...(read more)

    Read the article

  • SQL Query that can return intersecting data

    - by Alex
    I have a hard time finding a good question title - let me just show you what I have and what the desired outcome is. I hope this can be done in SQL (I have SQL Server 2008).

    1) I have a table called Contacts, and in that table I have fields like these: FirstName, LastName, CompanyName

    2) Some demo data:

        FirstName   LastName   CompanyName
        John        Smith      Smith Corp
        Paul        Wade
        Marc        Andrews    Microsoft
        Bill        Gates      Microsoft
        Steve       Gibbs      Smith Corp
        Diane       Rowe       ABC Inc.

    3) I want to get an intersecting list of people and companies, but companies only once. This would look like this:

        Name
        ABC Inc.
        Bill Gates
        Diane Rowe
        John Smith
        Marc Andrews
        Microsoft
        Smith Corp
        Steve Gibbs
        Paul Wade

    Can I do this with SQL? How?
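
    One way this kind of combined list is commonly produced is with UNION, which both merges the two lists and removes the duplicate company rows; a sketch against the table shown above, assuming missing companies are stored as NULL:

        SELECT FirstName + ' ' + LastName AS Name
        FROM dbo.Contacts
        UNION                                -- UNION (without ALL) de-duplicates, so each company appears once
        SELECT CompanyName
        FROM dbo.Contacts
        WHERE CompanyName IS NOT NULL
        ORDER BY Name;

    If empty companies are stored as '' rather than NULL, the filter would become WHERE CompanyName <> '' instead.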

    Read the article

  • An XEvent a Day (25 of 31) – The Twelve Days of Christmas

    - by Jonathan Kehayias
    In the spirit of today’s holiday, a couple of people have been posting SQL-related renditions of holiday songs. Tim Mitchell posted his 12 days of SQL Christmas, and Paul Randal and Kimberly Tripp went as far as to record themselves singing SQL Carols on their blog post Our Christmas Gift To You: Paul and Kimberly Singing! For today’s post on Extended Events I give you the 12 days of Christmas, Extended Events style (all of these are based on true facts about Extended Events in SQL Server)....(read more)

    Read the article

  • An XEvent a Day (21 of 31) – The Future – Tracking Blocking in Denali

    - by Jonathan Kehayias
    One of my favorite features that was added to SQL Server 2005 has been the Blocked Process Report trace event which collects an XML report whenever a process is blocked inside of the database engine longer than the user configurable threshold.  I wrote an article about this feature on SQL Server Central  two years ago titled Using the Blocked Process Report in SQL Server 2005/2008 .  One of the aspects of this feature is that it requires that you either have a SQL Trace running that...(read more)
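
    For reference, the user-configurable threshold the post refers to is the 'blocked process threshold' server option; a minimal sketch of turning it on (the 5-second value is just an example):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'blocked process threshold', 5;   -- report any block lasting longer than 5 seconds
        RECONFIGURE;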

    Read the article

  • An XEvent a Day (29 of 31) – The Future – Looking at Database Startup in Denali

    - by Jonathan Kehayias
    As I have said previously in this series, one of my favorite aspects of Extended Events is that it allows you to look at what is going on under the covers in SQL Server, at a level that has never previously been possible. SQL Server Denali CTP1 includes a number of new Events that expand on the information that we can learn about how SQL Server operates and in today’s blog post we’ll look at how we can use those Events to look at what happens when a database starts up inside of SQL Server. First...(read more)

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell).

    Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. This focuses on how we collect data.

    In data mining, we focus on the use of data, that is data that has already been collected. In some cases, we may have all the data (all purchases made by all customers), in others the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample.

    Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

    Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing.

    Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value! For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets.

    In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
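
    As a concrete illustration of the sampling and 60/40 split ideas in Oracle SQL (the database this blog targets), something like the following could be used; the customers table and customer_id column are hypothetical:

        -- Roughly a 1% simple random sample of a table, using Oracle's SAMPLE clause
        SELECT * FROM customers SAMPLE (1);

        -- A rough 60/40 build/test split, hashing the key so the split is repeatable
        SELECT * FROM customers WHERE ORA_HASH(customer_id, 99) <  60;  -- build set (~60%)
        SELECT * FROM customers WHERE ORA_HASH(customer_id, 99) >= 60;  -- test set  (~40%)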

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam’s or Kalen’s posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I’ll be delivering a pre-con on Data Warehouse Modeling:

    Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time.

    You'll learn: data warehouse architecture and justification; Agile methodology; dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions; best practices, naming conventions, and lessons learned; loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load); data warehouses and Big Data (Hadoop); unit testing; and tracking historical changes and managing large sizes.

    With all the Self-Service BI hype, data warehousing is becoming more and more central every day, since if everyone is able to analyze data using self-service tools, it’s better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop and seats are limited, so don’t miss this opportunity to attend a workshop that is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • How to sysprep SQL Server Express?

    - by Jim
    We plan to deploy a Hyper-V VHD with Windows Server 2008 R2 and SQL Server 2012 Express installed to multiple hosts. From my understanding, the correct way to do this is to install SQL Server in preparation mode, sysprep Windows, then complete the SQL Server installation when the VHD is deployed. I mostly followed the process in this blog post: http://sethusrinivasan.com/category/sysprep/ However, after the VHD is deployed, I'm unable to complete the SQL Server installation. It keeps saying "Upgrade matrix is incorrect". It seems that it's trying to upgrade itself to Enterprise edition (I was asked for a product key during install, but I skipped it). Could anyone share their experience in deploying VHDs with SQL Server (we're fine with either SQL Server 2008 R2 or 2012)? I think the source of my issue is that I can't select "Express Edition" when entering the product key at the completion stage, so the installation is trying to do an upgrade to Enterprise Edition. I have no idea why the drop-down list is empty.

    Read the article

  • The new free Oracle Data Miner 11gR2, with graphical workflows, is available to try

    - by Fekete Zoltán
    Oracle Data Mining technology information page. Oracle Data Miner 11g Release 2 - Early Adopter page. The new graphical interface for Oracle Data Mining, Oracle's data mining offering, has been released: Oracle Data Miner 11gR2 is now available to download and try. To get Oracle Data Miner you simply download SQL Developer, since the data mining interface is launched from it. Oracle Data Mining is a data mining engine embedded in the Oracle database server, an option of Oracle Database Enterprise Edition. Data mining is a sophisticated tool and process for analyzing data warehouses. Oracle Data Mining delivers the advantages of in-database mining: there is no unnecessary data movement, as the data stays in the database throughout the entire mining process; the mining models themselves live in the Oracle database; and the mining results (cluster data, decisions, probabilities, and so on) are also produced in the database, where they can be analyzed directly. The new free Data Miner interface has been "enormously enriched" compared to the previous version: graphical data mining workflow editing and execution has arrived; it is still free; the interface has been expanded; new analysis capabilities have been added; and it launches from SQL Developer 3.0, which also makes it easy to invoke the data mining functions directly from the database when you don't want to use the graphical interface. The free Data Miner interface is available as an extension of Oracle SQL Developer, so analysts can work directly with the data in the database and with the Data Miner GUI: they can build, evaluate and run models, make and analyze predictions, with support for putting a data mining methodology into practice. The earlier Oracle Data Miner interface now runs under the name Data Miner Classic and can still be downloaded from OTN. (A screenshot of the new Data Miner GUI accompanied the original post.) What kinds of tasks can Oracle Data Mining help solve: predicting customer behavior; effectively targeting the "best" customers; customer retention and churn management; finding and examining customer segments, clusters and profiles; detecting anomalies and fraud; and more.

    Read the article

  • System.InvalidOperationException compiling ASP.NET app on Mono

    - by Radu094
    This is the error I get when I start my ASP.NET application in Mono:

        System.InvalidOperationException: The process must exit before getting the requested information.
          at System.Diagnostics.Process.get_ExitCode () [0x00044] in /usr/src/mono-2.6.3/mcs/class/System/System.Diagnostics/Process.cs:149
          at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode ()
          at Mono.CSharp.CSharpCodeCompiler.CompileFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x001ee] in /usr/src/mono-2.6.3/mcs/class/System/Microsoft.CSharp/CSharpCodeCompiler.cs:267
          at Mono.CSharp.CSharpCodeCompiler.CompileAssemblyFromFileBatch (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00011] in /usr/src/mono-2.6.3/mcs/class/System/Microsoft.CSharp/CSharpCodeCompiler.cs:156
          at System.CodeDom.Compiler.CodeDomProvider.CompileAssemblyFromFile (System.CodeDom.Compiler.CompilerParameters options, System.String[] fileNames) [0x00014] in /usr/src/mono-2.6.3/mcs/class/System/System.CodeDom.Compiler/CodeDomProvider.cs:119
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath, System.CodeDom.Compiler.CompilerParameters options) [0x0022f] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/AssemblyBuilder.cs:804
          at System.Web.Compilation.AssemblyBuilder.BuildAssembly (System.Web.VirtualPath virtualPath) [0x00000] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/AssemblyBuilder.cs:730
          at System.Web.Compilation.BuildManager.GenerateAssembly (System.Web.Compilation.AssemblyBuilder abuilder, System.Web.Compilation.BuildProviderGroup group, System.Web.VirtualPath vp, Boolean debug) [0x00254] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/BuildManager.cs:624
          at System.Web.Compilation.BuildManager.BuildInner (System.Web.VirtualPath vp, Boolean debug) [0x0011c] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/BuildManager.cs:411
          at System.Web.Compilation.BuildManager.Build (System.Web.VirtualPath vp) [0x00050] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/BuildManager.cs:356
          at System.Web.Compilation.BuildManager.GetCompiledType (System.Web.VirtualPath virtualPath) [0x0003a] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/BuildManager.cs:803
          at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath (System.Web.VirtualPath virtualPath, System.Type requiredBaseType) [0x0000c] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.Compilation/BuildManager.cs:500
          at System.Web.UI.PageParser.GetCompiledPageInstance (System.String virtualPath, System.String inputFile, System.Web.HttpContext context) [0x0001c] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.UI/PageParser.cs:161
          at System.Web.UI.PageHandlerFactory.GetHandler (System.Web.HttpContext context, System.String requestType, System.String url, System.String path) [0x00000] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web.UI/PageHandlerFactory.cs:45
          at System.Web.HttpApplication.GetHandler (System.Web.HttpContext context, System.String url, Boolean ignoreContextHandler) [0x00055] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web/HttpApplication.cs:1643
          at System.Web.HttpApplication.GetHandler (System.Web.HttpContext context, System.String url) [0x00000] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web/HttpApplication.cs:1624
          at System.Web.HttpApplication+<Pipeline>c__Iterator2.MoveNext () [0x0075f] in /usr/src/mono-2.6.3/mcs/class/System.Web/System.Web/HttpApplication.cs:1259

    I checked the source code indicated by the stacktrace, namely CSharpCodeCompiler.cs:267:

        mcs.WaitForExit();
        result.NativeCompilerReturnValue = mcs.ExitCode; // this throws the exception

    I have no idea if this is a bug in Mono, or if my app is doing something it shouldn't. A simple "Hello World" application indicates that Mono is properly installed and working; it is just my app that is causing this exception to be thrown. Hoping some enlightened minds have more on the issue.

    Read the article

  • Attaching databases of sql 2000

    - by jbp117
    I have a few databases of SQL 2000 on Windows 2000. Can I attach these databases to another instance of SQL Server 2000 on another machine that has Windows 2003 installed? Is attaching and detaching databases platform independent?
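
    The Windows version shouldn't matter here; what matters is the SQL Server version and build on each side. A minimal detach/copy/attach sketch for SQL Server 2000 (database name and file paths are illustrative):

        -- On the source server
        EXEC sp_detach_db 'MyDatabase';

        -- Copy the .mdf and .ldf files to the new server, then on the target server:
        EXEC sp_attach_db @dbname    = N'MyDatabase',
                          @filename1 = N'C:\Data\MyDatabase.mdf',
                          @filename2 = N'C:\Data\MyDatabase_log.ldf';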

    Read the article

  • SQL Server 2005 Table Alter History

    - by Kayes
    Hi. Does SQL Server maintain any history to track table alterations like column add, delete, rename, type/length change, etc.? I have found many suggestions to use stored procedures to do this manually, but I'm curious whether SQL Server keeps such a history in any system tables. Thanks.
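
    Out of the box, SQL Server 2005 doesn't keep a permanent alteration history in its system tables (the default trace captures some recent DDL, but it rolls over). A common way to build your own history is a database-level DDL trigger; a minimal sketch with illustrative object names:

        CREATE TABLE dbo.TableAlterLog (
            EventTime  datetime NOT NULL DEFAULT GETDATE(),
            LoginName  sysname  NOT NULL,
            EventData  xml      NOT NULL
        );
        GO
        CREATE TRIGGER trg_TrackTableChanges
        ON DATABASE
        FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
        AS
            INSERT INTO dbo.TableAlterLog (LoginName, EventData)
            VALUES (ORIGINAL_LOGIN(), EVENTDATA());   -- EVENTDATA() returns the DDL statement and its context as XML
        GO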

    Read the article

  • Dump Microsoft SQL Server database to an SQL script

    - by Matt Sheppard
    Is there any way to export a Microsoft SQL Server database to an SQL script? I'm looking for something which behaves similarly to mysqldump: taking a database name and producing a single script which will recreate all the tables, stored procedures, reinsert all the data, etc. I've seen http://vyaskn.tripod.com/code.htm#inserts, but ideally I want something that recreates everything (not just the data) and works in a single step to produce the final script.

    Read the article

  • DiscountASP.NET Launches SQL Server Profiling as a Service

    - by wisecarver
    DiscountASP.NET is enhancing our SQL Server hosting with the launch of SQL Server Profiling as a service. SQL Profiler is a powerful tool that allows application and database developers to troubleshoot general SQL locking problems and performance issues, and to perform database tuning. With our SQL Profiling as a Service, customers can schedule a database trace at a specific time of their choosing, which offers a new way to help our customers troubleshoot. For more information, visit: http://www...(read more)

    Read the article

  • SQL Server 2012 content on Channel 9

    - by jamiet
    A mountain of SQL Server 2012 video content featuring Greg Low, Jonathan Kehayias, Joe Sack and Roger Doherty has just been released on Channel 9. Channel 9 has great support for tags and RSS feeds, so if you want to automatically download all of that content you can simply add the following RSS feed: http://channel9.msdn.com/Tags/sql+server+2012/RSS to your podcast reader of choice and have fun learning about all the new features in SQL Server 2012, such as: AlwaysOn, Power View, SSDT, SSRS Data Alerts, SSAS Tabular Modelling, DAX improvements, MDS improvements, SSIS improvements, DQS, StreamInsight improvements, Data-Tier Apps (DACs), LocalDB, FileTable, Spatial improvements, T-SQL paging, Distributed Replay, XEvents improvements, ADO.Net Code-first, T-SQL improvements, Server roles, Partitioning improvements and ColumnStore. Whew, quite a list! @jamiet
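
    As a taste of one item on that list, the new T-SQL paging syntax in SQL Server 2012 uses OFFSET/FETCH on an ORDER BY; the table and column names below are illustrative:

        SELECT OrderID, OrderDate
        FROM dbo.Orders
        ORDER BY OrderDate DESC
        OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;   -- skip the first 20 rows and return rows 21-30 (page 3 at 10 rows per page)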

    Read the article
