Search Results

Search found 847 results on 34 pages for 'sqlserver'.

Page 25 of 34

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have this ASP.NET/SQLServer2005 website running on a production server (Win2003, QuadCore, 4GB). The site normally runs smoothly, but after 2-3 weeks I notice slow performance on the site (specifically on one particular page). I also notice that the SQL Server process is using around 2GB of RAM. So I restart the service, the site runs fast again and the process drops back to 300-400MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes up so much space and degrades the performance? What can I do to avoid this? I'm trying to avoid restarting SQLServer every time this happens. Thank you!
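
    SQL Server caches data pages in its buffer pool and, by design, keeps growing toward all available memory, releasing it only under OS pressure, so a 2GB working set is usually normal rather than a leak. A common way to keep it from crowding out the web server without restarts is to cap 'max server memory'. A minimal sketch, where the 2048 MB cap is purely an illustrative value to be sized for the actual workload:

      -- Cap the buffer pool so SQL Server stops short of starving the rest of the box.
      -- The 2048 MB value is an assumption for illustration; size it for your workload.
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 2048;
      RECONFIGURE;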

    Read the article

  • Schema compare with MS Data Tools in VS2008

    - by rdkleine
    When performing a schema compare with db_owner rights on the target database, I get the following error: The user does not have permission to perform this action. Using the SQL Server Profiler I figured out this error occurs when executing a query against the master db view [sys].[dm_database_encryption_keys]. Since I am specifically ignoring all object types but Tables, one would presume the schema compare doesn't need access to the db encryption keys. Also note: http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/c11a5f8a-b9cc-454f-ba77-e1c69141d64b/ One solution would be to GRANT VIEW SERVER STATE to the db user, but in my case I'm not hosting the database services and won't get rights to the server state. I also tried excluding the DatabaseEncryptionKey element in the compare file: <PropertyElementName> <Name>Microsoft.Data.Schema.Sql.SchemaModel.SqlServer.ISql100DatabaseEncryptionKey</Name> <Value>ExcludedType</Value> </PropertyElementName> Does anyone have a workaround for this?
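
    For reference, VIEW SERVER STATE is a server-scoped permission, so the grant mentioned above goes to a login rather than a database user. A minimal sketch, where 'schemaCompareLogin' is a placeholder name:

      -- VIEW SERVER STATE is granted at the server level, to a login.
      -- 'schemaCompareLogin' is a placeholder; substitute the login used for the compare.
      USE master;
      GRANT VIEW SERVER STATE TO schemaCompareLogin;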

    Read the article

  • SyncFramework upgrade from 1.0 to 2.0 Sql Server CE database change tracking issue

    - by Andronicus
    I'm trying to upgrade an application that uses Sync Framework 1.0 to synchronise a SqlServerCe database with SqlServer 2005. On the client, the existing database already has change tracking enabled, but when the sync is initiated, SyncFramework 2.0 fails to find the last Sync Received anchor and then tries to re-initialize the change tracking, which fails. I get the exception... {System.Exception} = {"The specified change tracking operation is not supported. To carry out this operation on the table, disable the change tracking on the table, and enable the change tracking."} It seems like all I can do is delete the local database and recreate it, which is not a great solution for us, since some of the data in the client's database is not synced with the server, and our users would prefer not to lose this data in the upgrade. Is there any reason why SyncFramework 2.0 cannot locate the existing Last Received Sync anchor?

    Read the article

  • TIBCO ActiveDatabase Error

    - by George
    Folks, I am getting the following error when trying to start an instance of the Publish ActiveDatabase adapter. I'm using Designer 5.7.2 and trying to access a SqlServer 2008 database. Has anyone seen this error before? I can't find any reference on Google or other sites! Adaptador_M2M.Adaptador_M2M Error [Adapter] AEADB-910005 Startup Error. SDK Exception Code = AESDKC-0087, Category = Metadata, Severity = errorRole, Description = Class description not available for: ADB_PREREGLISTENER, File = C:/suren/workspace/Maverick/maverick-5.6.1-dev/libmaverick/MInstanceImpl.cpp, line = 71 received on starting the adapter after initialization. The Repository URL is D:\TEMP\AT_adadb_61214.dat and the Configuration URL is Corporativo/IntegracaoM2M/Adaptadores/Adaptador_M2M.

    Read the article

  • ASPNETDB and ASPSTATE database. How to change the connectionstrings?

    - by George
    I have two ASP-specific SQL Server databases: 1) ASPState - to store session state, and 2) ASPNETDB - to store Security/Role stuff. In my web.config, I am specifying the connection string used to identify the location of the ASPState database: <sessionState mode="SQLServer" sqlConnectionString="server=(local)\sql2008b;uid=sa;pwd=iainttelling;" timeout="120"/> Where is the connection string specified for the ASPNETDB database? I am trying to point it to a db on a remote server. I have a feeling it is somewhere in IIS or the Machine Config. I'd like to add it to my WEB.CONFIG. Could someone help me to do this?

    Read the article

  • MSSQL2008: DTC Transaction - Internal abort

    - by Teutales
    Hi all, I wrote a small replication of my own - a trigger which fires a DTC INSERT to another server (one reason for my own "replication": while the trigger is running it calculates some data; another: it works from an express version to an express version). When I do the initial insert from the same host with Windows authentication it works fine. But there is a webserver on another host, which uses a SQL Server login (for testing, sa). When this host does the initial insert I get an Internal abort after the enlisting and creating phase in the DTCTransaction EventClass (Profiler). The strange thing is: when I first fire it from the same host with Windows authentication, I can then fire it from the webserver and it works fine. But if I just wait a few minutes, it won't work anymore. Where is my error in reasoning... Thanks! Greetz Teutales Here is my initial server script: EXEC master.dbo.sp_addlinkedserver @server = @Servername, @srvproduct=N'SQL Server' EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = @Servername, @locallogin = NULL , @useself = N'False', @rmtuser = @Serverlogin, @rmtpassword = @Serverpwd
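
    One thing worth checking is whether the SQL login the web host connects with has its own mapping on the linked server, in addition to the catch-all NULL mapping above. A sketch reusing the same variables as the script in the question (the 'sa' local login follows the question and is illustrative only):

      -- Explicit mapping for the web server's SQL login on the linked server.
      -- @Servername, @Serverlogin and @Serverpwd are the same variables as above.
      EXEC master.dbo.sp_addlinkedsrvlogin
           @rmtsrvname  = @Servername,
           @locallogin  = N'sa',        -- the SQL login the web host connects with
           @useself     = N'False',
           @rmtuser     = @Serverlogin,
           @rmtpassword = @Serverpwd;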

    Read the article

  • How do you fix TSD02016 error in Database

    - by Keith Sirmons
    Howdy, I have a database that I am using the Visual Studio 2010 Database Project tool vsdbcmd.exe to create a schema from: vsdbcmd /a:Import /dsp:Sql /model:"Database" /cs:"Server=SqlServer; Initial Catalog=DatabaseName; Integrated Security=SSPI;" The tool is reporting an error: Error TSD02016, Gen-259 (12,50) The column name is not valid. No table name was specified. How would I go about pinpointing where this error is originating? I have found one resource on the internet (http://social.msdn.microsoft.com....) that points to the possibility of a keyword being used incorrectly, but the error messages are not the same. What is Gen-259? Thank you, Keith

    Read the article

  • Timer or Thread?

    - by Pradeep Singh
    I have a method (say method1) that writes to a database (SqlServer) and another method (say method2) that tries to access the same database after some time and updates the data row that was created by method1. The problem arises when method1 fails to access the db due to the LAN being disconnected (this is not an exception, this is a scenario that will definitely arise in my software; getting into details will make the question too complex). If method1 fails to access the db, method2 cannot work. What I want to do is to make method1 store values in a local db instead of the server if the LAN is disconnected, and as soon as it enters a value in the local db the application should start trying to reach the server every 10-15 seconds. What should I use: a timer, or should I create a new thread?

    Read the article

  • How to create a UDF that takes a query string and returns the query's resultset

    - by Martin
    I want to create a stored procedure that takes a simple SELECT statement and returns the resultset as a CSV string. So the basic idea is to get the sql statement from user input, run it using EXEC(@stmt) and convert the resultset to text using cursors. However, SQLServer doesn't allow select * from storedprocedure(@sqlStmt) or a UDF with EXEC(@sqlStmt), so I tried INSERT INTO #tempTable EXEC(@sqlStmt), but this doesn't work (error = "invalid object name #tempTable"). I'm stuck. Could you please shed some light on this matter? Many thanks EDIT: Actually the output (e.g. CSV string) is not important. The problem is I don't know how to assign a cursor to the resultset returned by EXEC. SPs and UDFs do not work with Exec(), while creating a temp table before inserting values is impossible without knowing the input statement. I thought of OPENQUERY but it does not accept variables as its parameters.
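
    One workaround, since a local #temp table created inside EXEC() vanishes when the dynamic batch ends, is to let the dynamic SQL itself materialize a global ##temp table whose shape matches the unknown resultset, then read it from the outer code. A rough sketch, where the statement value, the ##qryResult name and the derived-table wrapping are illustrative and concurrent callers are ignored:

      -- @sqlStmt holds the caller's SELECT statement (illustrative value below).
      DECLARE @sqlStmt nvarchar(max);
      SET @sqlStmt = N'SELECT * FROM SomeTable';

      -- The dynamic batch creates the global temp table itself, so its shape
      -- matches whatever the SELECT returns (the SELECT must not end with ';').
      EXEC (N'SELECT * INTO ##qryResult FROM (' + @sqlStmt + N') AS src;');

      -- ##qryResult outlives the dynamic batch; a cursor (or the CSV-building
      -- logic) can now read from it before cleaning up.
      SELECT * FROM ##qryResult;
      DROP TABLE ##qryResult;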

    Read the article

  • IBM WebSphere vs Oracle Fusion

    - by Hal
    I have been asked to research the advantages/disadvantages of the two application servers, but am new to the space and am having a terrible time finding an unbiased comparison of the two platforms. I understand that this is a broad question and I hate that I can't give a very specific use case (other than that it will be an implementation in an organization without a full-time admin dedicated to management, and it will be running in a mixed environment against JD Edwards/Oracle and SQLServer). Does anyone know of any (recently published) content that does a reasonable comparison, or can anyone offer any insight into which might be the better choice and why? Any help would be greatly appreciated. Best regards, Hal

    Read the article

  • Connect to SQL Server Compact Edition in SSIS

    - by user218795
    Until now I've been exporting data INTO the SQL Server CE database, but now in SSIS I'm attempting to source data that was inserted in a previous step of the import. The SSIS package fails during validation with the following error: Error: Microsoft.SqlServer.Dts.Runtime.DtsCouldNotCreateManagedConnectionException: Could not create a managed connection manager. I am attempting to use the ADO NET Source that connects to the data source using the Microsoft.SqlServerCe.Client.3.5 provider (i.e. ADO.NET connection manager). I CAN preview the data in the ADO.NET Source editor, but the package validation fails due to the above error.

    Read the article

  • Can I modify package.xml file in SQL bootstrapper to install a named SQL server instance

    - by jonmiddleton
    I want to use the SqlExpress2008 bootstrapper for a new installation on Windows 7, but I do not want to use the default SQLEXPRESS instance. I have attempted to edit the package.xml file located in C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bootstrapper\Packages\SqlExpress2008\en\package.xml and updated the command argument instancename=CUSTOMINSTANCE, but unfortunately it still creates the default SQLEXPRESS instance, not CUSTOMINSTANCE. The wix tag is as follows: <sql:SqlDatabase Id="SqlDatabaseCore" ConfirmOverwrite="yes" ContinueOnError="no" CreateOnInstall="yes" CreateOnReinstall="no" CreateOnUninstall="no" Database="MyDatabase" DropOnInstall="no" DropOnReinstall="no" DropOnUninstall="no" Instance="[SQLINSTANCE]" Server="[SQLSERVER]"> <sql:SqlFileSpec Id="SqlFileSpecCore" Filename="[CommonAppDataFolder]MyCompany\Database\MyDatabase.mdf" Name="MyDatabase" /> <sql:SqlLogFileSpec Id="SqlLogFileSpecCore" Filename="[CommonAppDataFolder]MyCompany\Database\MyDatabase.ldf" Name="MyDatabaseLog" /> Is this the standard way to accomplish this?

    Read the article

  • How to execute T-SQL for several databases whose names are stored in a table

    - by emzero
    Hey guys, so here's the deal. I have several databases (SqlServer 2005) on the same server with the same schema but different data. I have one extra database which has one table storing the names of the mentioned databases. So what I need to do is to iterate over those database names and actually "switch" to each one (use [dbname]) and execute a T-SQL script. Am I clear? Let me give you an example (simplified from the real one): CREATE TABLE DatabaseNames ( Id int, Name varchar(50) ) INSERT INTO DatabaseNames SELECT 'DatabaseA' INSERT INTO DatabaseNames SELECT 'DatabaseB' INSERT INTO DatabaseNames SELECT 'DatabaseC' Assume that DatabaseA, DatabaseB and DatabaseC are real existing databases. So let's say I need to create a new SP on those DBs. I need some script that loops over those databases and executes the T-SQL script I specify (maybe stored in a varchar variable or wherever). Any ideas? Thanks!
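
    One common pattern for this is a cursor over DatabaseNames that prefixes the script with USE and runs it through a nested sp_executesql, so statements like CREATE PROCEDURE still start their own batch. A sketch reusing the table from the question, with a throwaway procedure body as a placeholder for the real script:

      -- The script to run in every database; the procedure here is just a placeholder.
      DECLARE @script nvarchar(max);
      SET @script = N'CREATE PROCEDURE dbo.MyNewProc AS SELECT 1;';

      DECLARE @db sysname, @sql nvarchar(max);
      DECLARE db_cursor CURSOR FOR SELECT Name FROM DatabaseNames;

      OPEN db_cursor;
      FETCH NEXT FROM db_cursor INTO @db;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          -- USE only lasts for the EXEC'd batch; sp_executesql gives CREATE PROCEDURE
          -- a batch of its own inside the right database context.
          SET @sql = N'USE ' + QUOTENAME(@db) + N'; EXEC sp_executesql N'''
                   + REPLACE(@script, N'''', N'''''') + N''';';
          EXEC (@sql);
          FETCH NEXT FROM db_cursor INTO @db;
      END
      CLOSE db_cursor;
      DEALLOCATE db_cursor;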

    Read the article

  • Troubleshooting source of heavy resource-usage on a server 2008 running multiple sites

    - by batman_man
    Hi, I am running about 10 asp.net websites on a hosted virtual server. The server runs Server 2008 - each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of discovery I could think of doing was looking in Task Manager, where I can see w3wp and sqlserver.exe jumping to 40% CPU usage every 5-10 seconds. What are the steps I can take to determine which of my websites is taking these resources and/or which database is getting hit the most? I have, of course, SSMS installed on the machine as well. As you can tell, my sysadmin skills are very very limited - any help would be much appreciated.
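
    On the SQL side, one quick way to see which database is doing the work is to aggregate the plan cache by database. A rough sketch against the standard DMVs (available on SQL Server 2005 and later); it only sees plans still in cache, so treat the numbers as indicative:

      -- Total worker time (CPU, in microseconds) per database for cached plans.
      SELECT DB_NAME(qt.dbid)          AS database_name,
             SUM(qs.total_worker_time) AS total_cpu_microseconds,
             SUM(qs.execution_count)   AS executions
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
      GROUP BY DB_NAME(qt.dbid)
      ORDER BY total_cpu_microseconds DESC;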

    Read the article

  • How do you tell if your migrations are up to date with migratordotnet?

    - by Lance Fisher
    I'm using migratordotnet to manage my database migrations. I'm running them on application setup like this, but I would also like to check on application startup whether the migrations are up to date, and provide the option to migrate to the latest. How do I tell if there are available migrations that need to be applied? I see that I can get the migrations that were applied like this: var asm = Assembly.GetAssembly(typeof(Migration_0001)); var migrator = new Migrator.Migrator("SqlServer", setupInfo.DatabaseConnectionString, asm); var applied = migrator.AppliedMigrations; I'd like to do something like this: var available = migrator.AvailableMigrations; //this property does not exist.

    Read the article

  • updateBinaryStream on ResultSet with FileStream

    - by Lieven Cardoen
    If I try to update a FileStream column I get the following error: com.microsoft.sqlserver.jdbc.SQLServerException: The result set is not updatable. Code: System.out.print("Now, let's update the filestream data."); FileInputStream iStream = new FileInputStream("c:\\testFile.mp3"); rs.updateBinaryStream(2, iStream, -1); rs.updateRow(); iStream.close(); Why is this? Table in Sql Server 2008: CREATE TABLE [BinaryAssets].[BinaryAssetFiles]( [BinaryAssetFileId] [int] IDENTITY(1,1) NOT NULL, [FileStreamId] [uniqueidentifier] ROWGUIDCOL NOT NULL, [Blob] [varbinary](max) FILESTREAM NULL, [HashCode] [nvarchar](100) NOT NULL, [Size] [int] NOT NULL, [BinaryAssetExtensionId] [int] NOT NULL, Query used in Java: String strCmd = "select BinaryAssetFileId, Blob from BinaryAssets.BinaryAssetFiles where BinaryAssetFileId = 1"; stmt = con.createStatement(); rs = stmt.executeQuery(strCmd);

    Read the article

  • Connect rails application to MsSQL 2005 from Windows

    - by Enrico Carlesso
    Hi guys. I (sadly) have to deploy a rails application on Windows XP which has to connect to Microsoft SQL Server 2005. Searching the web, there are a lot of hits for connecting from Linux to MsSQL, but I cannot find out how to do it from Windows. Basically I followed these steps: Install dbi gem Install activerecord-sql-server-adapter gem My database.yml now looks like this: development: adapter: sqlserver mode: odbc dsn: test_dj host: HOSTNAME\SQLEXPRESS database: test_dj username: guest password: guest But I'm unable to connect. When I run rake db:migrate I get IM002 (0) [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified I'm not a Windows user, so I don't really understand the meaning of the dsn element and so on. Does someone have an idea how to solve this? Thanks in advance

    Read the article

  • Compare two String with MySQL

    - by Scorpi0
    Hi, I want to compare two strings in a SQL query so I can retrieve the best match; the aim is to propose to an operator the best zip code possible. For example, in France we have integer zip codes, so I made an easy query: SELECT * FROM myTable ORDER BY abs(zip_code - 75000) This query returns the data closest to Paris first. Unfortunately, the United Kingdom has zip codes like AB421RS, so my query can't handle them. I see that SQL Server has a 'Difference' function: http://www.java2s.com/Code/SQLServer/String-Functions/DIFFERENCEworkoutwhenonestringsoundssimilartoanotherstring.htm But I use MySQL.. Does anyone have a good idea to do the trick in one simple query? PS: the Levenshtein distance will not do it, as I really want to compare strings as if they were numbers. ABCDEF has to be closer to AWXYZ than to ZBCDEF.
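
    A crude way to get the "compare strings as if they were numbers" behaviour is to weight the leading characters most heavily, e.g. treat the first three characters as digits of one big number and order by the distance to the reference code. A sketch (the functions used exist in both MySQL and SQL Server; the three-character depth and the 'AB4' reference are illustrative choices):

      -- Order by how far the first three characters are from the reference 'AB4...',
      -- with the first character weighted most heavily.
      SELECT *
      FROM myTable
      ORDER BY ABS(
            (ASCII(SUBSTRING(zip_code, 1, 1)) * 65536
           + ASCII(SUBSTRING(zip_code, 2, 1)) * 256
           + ASCII(SUBSTRING(zip_code, 3, 1)))
          - (ASCII('A') * 65536 + ASCII('B') * 256 + ASCII('4')));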

    Read the article

  • How to implement paging in NHibernate with a left join query

    - by Gabe Moothart
    I have an NHibernate query that looks like this: var query = Session.CreateQuery(@" select o from Order o left join o.Products p where (o.CompanyId = :companyId) AND (p.Status = :processing) order by o.UpdatedOn desc") .SetParameter("companyId", companyId) .SetParameter("processing", Status.Processing) .SetResultTransformer(Transformers.DistinctRootEntity); var data = query.List<Order>(); I want to implement paging for this query, so I only return x rows instead of the entire result set. I know about SetMaxResults() and SetFirstResult(), but because of the left join and DistinctRootEntity, that could return less than x Orders. I tried "select distinct o" as well, but the sql that is generated for that (using the sqlserver 2008 dialect) seems to ignore the distinct for pages after the first one (I think this is the problem). What is the best way to accomplish this?
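
    Whatever the HQL ends up looking like, the generated SQL usually has to page the distinct root Orders first and only then join the Products in. A sketch of that shape in raw SQL for the 2008 dialect, where the table and column names (Orders, Products, OrderId, etc.) are guesses from the mapping rather than the real schema:

      DECLARE @companyId int = 1, @processing int = 2;   -- illustrative parameter values
      DECLARE @first int = 11, @pageSize int = 10;       -- e.g. the second page of 10

      -- Page the distinct Orders first, then join Products for just that page.
      WITH PagedOrders AS (
          SELECT o.Id,
                 ROW_NUMBER() OVER (ORDER BY o.UpdatedOn DESC) AS rn
          FROM Orders AS o
          WHERE o.CompanyId = @companyId
            AND EXISTS (SELECT 1 FROM Products AS p
                        WHERE p.OrderId = o.Id AND p.Status = @processing)
      )
      SELECT o.*, p.*
      FROM PagedOrders AS po
      JOIN Orders   AS o ON o.Id = po.Id
      JOIN Products AS p ON p.OrderId = o.Id AND p.Status = @processing
      WHERE po.rn BETWEEN @first AND @first + @pageSize - 1
      ORDER BY po.rn;

    In NHibernate terms that usually means two round trips: a paged query for the Order ids, then a second query fetching those Orders with their Products.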

    Read the article

  • Changing the column order/adding a new column for an existing table in SQLServer2008

    - by rmdussa
    Hi, I have a situation where I need to change the order of the columns / add new columns to an existing table in SQLServer2008, and it is not letting me do this without dropping and recreating the table. But that table is in a production system and has data in it. I can take a backup of the data, drop the existing table, change the order / add new columns, recreate it, and insert the backup data into the new table. Is there any better way to do this without dropping and recreating? I think SQLServer 2005 allowed changing an existing table's structure without dropping and recreating it. Thanks
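
    If scripting it by hand, the rebuild that the SSMS designer does behind the scenes looks roughly like the sketch below (placeholder table and column names); note that if the physical column order does not actually matter, a plain ALTER TABLE ... ADD avoids the rebuild entirely:

      BEGIN TRANSACTION;

      -- New layout built alongside the old table (placeholder columns).
      CREATE TABLE dbo.MyTable_New
      (
          Id        int           NOT NULL PRIMARY KEY,
          NewColumn nvarchar(50)  NULL,       -- the column being added
          OldColumn nvarchar(50)  NOT NULL
      );

      -- Copy the existing data across.
      INSERT INTO dbo.MyTable_New (Id, OldColumn)
      SELECT Id, OldColumn
      FROM dbo.MyTable;

      -- Swap the tables.
      DROP TABLE dbo.MyTable;
      EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

      COMMIT TRANSACTION;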

    Read the article

  • Server database -> client database update based on version

    - by user296191
    Hi, What is the recommended method of collecting items in a server database, versioning the database, and then deploying only the version differences to a client? Should it be a field in the table (i.e. Version: 3.3.9876) against each record? Should it be a DateTime (server based) in each record? And what's the best way to deploy just the changes to a client with an older version of the database? Is it a DUMP to a file with a bulk import of some description? Open to comments and suggestions. The database can be anything (firebird, mysql, sqlserver, sqlite)... Any info greatly appreciated.
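
    For the SQL Server case specifically, one way to get the "only the differences" part is a rowversion column per tracked table plus a high-water mark stored on the client; the other engines on the list would need a different change marker (e.g. a trigger-maintained counter). A minimal sketch with made-up table names:

      -- Server side: every tracked table carries a row version that SQL Server
      -- bumps automatically on each insert or update.
      CREATE TABLE dbo.Items
      (
          ItemId int           IDENTITY(1,1) PRIMARY KEY,
          Name   nvarchar(100) NOT NULL,
          RowVer rowversion    NOT NULL
      );

      -- Client side: remember the highest version already received (stored locally
      -- after each sync), then pull only the newer rows.
      DECLARE @lastSyncedVersion binary(8);
      SET @lastSyncedVersion = 0x0000000000000000;

      SELECT ItemId, Name, RowVer
      FROM dbo.Items
      WHERE RowVer > @lastSyncedVersion
      ORDER BY RowVer;

    Deletes still need their own handling (e.g. a tombstone table), since a vanished row carries no version to compare.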

    Read the article

  • Dataset field DBNull -> int?

    - by BobClegg
    SQLServer int field. Value sometimes null. The DataAdapter fills the dataset OK and can display the data in a DataGridView OK. When trying to retrieve the data programmatically from the dataset, the typed dataset's field retrieval code throws a StrongTypingException: [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] public int curr_reading { get { try { return ((int)(this[this.tableHistory.curr_readingColumn])); } catch (global::System.InvalidCastException e) { throw new global::System.Data.StrongTypingException("The value for column \'curr_reading\' in table \'History\' is DBNull.", e); } I got past this by checking for DBNull in the get accessor and returning null, but... when the dataset structure is modified (still developing), my changes (unsurprisingly) are gone. What is the best way to handle this situation? It seems I am stuck with dealing with it at the dataset level. Is there some sort of attribute that can tell the auto code generator to leave the changes in place?

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information such as index stuff) and the biggest table is 1.1 million records, which takes up 150MB of data. We have less than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SqlServer (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project, except this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week. sp_spaceused reports the following: Navigator-Production 19184.56 MB 3.02 MB And the second part of it: 19640872 KB 19512112 KB 108184 KB 20576 KB While I've found a few other scripts (such as the ones from two of the server database size questions here), they all report the same information found above or below. The script I am using is from SqlTeam. Here is the header info: * BigTables.sql * Bill Graziano (SQLTeam.com) * graz@<email removed> * v1.11 The top few tables show this (table, rows, reserved space, data, index, unused, etc):
      Activity 1143639 131 MB 89 MB 41768 KB 1648 KB 46% 1%
      EventAttendance 883261 90 MB 58 MB 32264 KB 328 KB 54% 0%
      Person 113437 31 MB 15 MB 15752 KB 912 KB 103% 3%
      HouseholdMember 113443 12 MB 6 MB 5224 KB 432 KB 82% 4%
      PostalAddress 48870 8 MB 6 MB 2200 KB 280 KB 36% 3%
    The rest of the tables are either the same in size or smaller. No more than 50 tables.
    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything. I ran the dbcc shrink command as well as updating the usage before and after, over and over. An interesting thing I found is that when I restarted the server and confirmed no one was using it (and no maintenance procs were running; this is a very new application -- under a week old), every now and then the shrink would say something about data having changed. Googling yielded too few useful answers, with the obvious not applying (it was 1am and I disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows. Even if those were the biggest rows, that wouldn't be more than 100MB I would imagine. When I go to shrink via the GUI it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go.
    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money): Activity 1143639 134488 KB 91072 KB 41768 KB 1648 KB Fill factor was 90. All primary keys are ints. Here is the command I used to 'updateusage': DBCC UPDATEUSAGE(0);
    Update 3: Per Edosoft's request: Image 111975 2407773 19262184 It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB or is it misrepresented?
    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also stated the potential for that. The only index on the Image table is a clustered PK. Is this something I can fix or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.
    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to be roughly 2-5KB each, and on a normal file system they don't consume much space, but on SqlServer they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
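
    For chasing down where the space actually sits, a per-table view of reserved pages straight from the allocation DMVs (SQL Server 2005 and later) includes LOB/image allocations and tends to be more trustworthy than stale usage counters. A sketch:

      -- Reserved space per table, LOB/image pages included.
      SELECT o.name                                 AS table_name,
             SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
             SUM(CASE WHEN ps.index_id IN (0, 1)
                      THEN ps.row_count ELSE 0 END) AS row_count
      FROM sys.dm_db_partition_stats AS ps
      JOIN sys.objects AS o ON o.object_id = ps.object_id
      WHERE o.is_ms_shipped = 0
      GROUP BY o.name
      ORDER BY reserved_mb DESC;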

    Read the article
