Search Results

Search found 101927 results on 4078 pages for 'ms sql server'.

  • Fun with upgrading and BCP

    - by DavidWimbush
    I just had trouble with using BCP out via xp_cmdshell. Probably serves me right but that's a different issue. I got a strange error message 'Unable to resolve column level collations' which turned out to be a bit misleading. I wasted some time comparing the collations of the server, the database and all the columns in the query. I got so desperate that I even read the Books Online article. Still no joy, but then I tried the interweb. It turns out that calling bcp without qualifying it with a path causes Windows to search the folders listed in the Path environment variable - in that order - and execute the first version of BCP it can find. But when you do an in-place version upgrade, the new paths are added on the end of the Path variable, so you don't get the latest version of BCP by default. To check which version you're getting, execute bcp -v at the command line. The version number will correspond to SQL Server version numbering (e.g. 10.50.n = 2008 R2). To examine and/or edit the Path variable, right-click on My Computer, select Properties, go to the Advanced tab and click on the Environment Variables button. If you change the variable you'll have to restart the SQL Server service before it takes effect.
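    A minimal T-SQL sketch of the workaround: call bcp through xp_cmdshell with a fully qualified path so the 2008 R2 binary runs regardless of Path ordering. The Tools\Binn location shown is the typical default, and the database, table and output file names are made up for illustration:

        -- Qualify the bcp executable explicitly rather than relying on the Path variable.
        -- Adjust the path, table and output file for your own environment.
        DECLARE @cmd varchar(1000);
        SET @cmd = '"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\bcp.exe" '
                 + 'MyDb.dbo.MyTable out C:\Temp\MyTable.dat -c -T -S ' + @@SERVERNAME;
        EXEC master.dbo.xp_cmdshell @cmd;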

  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is that my DataContext for the SQL Compact database inherits from DbContext, while the SQLite database inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side:

        public interface IRepository<TEntity>
        {
            TEntity Add(TEntity entity);
        }

        public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable
            where TEntity : class
            where TContext : DbContext
        {
            private readonly TContext _context;

            public Repository(DbContext dbContext)
            {
                _context = dbContext as TContext;
            }

            public virtual TEntity Add(TEntity entity)
            {
                return _context.Set<TEntity>().Add(entity);
            }
        }

    And on the SQLite side:

        public class ElverDatabase : SQLiteConnection
        {
            static readonly object Locker = new object();

            public ElverDatabase(string path) : base(path)
            {
                CreateTable<Ticket>();
            }

            public int Add<T>(T item) where T : IBusinessEntity
            {
                lock (Locker)
                {
                    return Insert(item);
                }
            }
        }

  • Unable to sync time using `ntpdate`, error: "no server suitable for synchronization found"

    - by William Ting
    My ntp.conf file:

        user@pc[0][07:37:40]:/etc$ cat /etc/ntp.conf
        idriftfile /var/lib/ntp/ntp.drift
        server 0.pool.ntp.org
        server 1.pool.ntp.org
        server 2.pool.ntp.org
        server pool.ntp.org

    Command output:

        user@pc[0][07:37:24]:/etc$ sudo ntpdate -dv pool.ntp.org
        18 Jun 07:37:35 ntpdate[10737]: ntpdate [email protected] Tue Apr 19 07:15:05 UTC 2011 (1)
        Looking for host pool.ntp.org and service ntp
        host found : conquest.kjsl.com
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        transmit(198.137.202.16)
        transmit(216.45.57.38)
        transmit(64.6.144.6)
        198.137.202.16: Server dropped: no data
        216.45.57.38: Server dropped: no data
        64.6.144.6: Server dropped: no data

        server 198.137.202.16, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [198.137.202.16], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time:      00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        originate timestamp: 00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        transmit timestamp:  d1a71a93.1f16c1e3  Sat, Jun 18 2011  7:37:39.121
        filter delay:  0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000

        server 216.45.57.38, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [216.45.57.38], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time:      00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        originate timestamp: 00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        transmit timestamp:  d1a71a93.524a05dd  Sat, Jun 18 2011  7:37:39.321
        filter delay:  0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000

        server 64.6.144.6, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [64.6.144.6], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time:      00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        originate timestamp: 00000000.00000000  Thu, Feb  7 2036  0:28:16.000
        transmit timestamp:  d1a71a93.857c6fbd  Sat, Jun 18 2011  7:37:39.521
        filter delay:  0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000

        18 Jun 07:37:40 ntpdate[10737]: no server suitable for synchronization found

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem
    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology - between two or more servers - there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column which already exists at the destination database.

    Solution
    There are two ways to address this situation:
    1. Assign a range of identity values to each server.
    2. Work with parallel identity values.
    The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place from the range that the identity value belongs to.

    In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only in the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
            c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED,
            c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
            c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED,
            c2 int NOT NULL
        );

    When rows are inserted into these two tables, the identity values look like this:

        Server1:  1, 6, 11, 16, 21, 26...
        Server2:  2, 7, 12, 17, 22, 27...

    This assures no identity value conflicts while leaving room for three additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of 5 as I used in this example. Continues...
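    A minimal sketch (not taken from the article's scripts) of how the first method's ranges might be set with DBCC CHECKIDENT; the table name and boundary values are illustrative, and each statement is run on the corresponding server:

        -- On server1: reseed so new identity values are allocated from the low range.
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1);

        -- On server2: reseed so new identity values are allocated from the high range.
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000001);

        -- Periodic maintenance: check how far each server has progressed through its range.
        SELECT IDENT_CURRENT('dbo.T1') AS current_identity_value;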

  • Server Migration Checklist II

    - by merrillaldrich
    Easy Breezy Login Audit for your Ol’ 2000 Server In the last post on this topic I put up the preparatory steps I’ve been using for server migrations. Here I am posting some code that has worked well for us to trace who/what is connecting to our older SQL Server 2000 machines. It’s a simple audit of login events, tracing the login name, host name, database, and last login time for connections to the server, and gave us valuable insight into who was really using the machines and which databases might...(read more)
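    The author's code itself is behind the link, so as a rough stand-in here is a hedged sketch of one way to collect the same information (login name, host name, database and last-seen time) on SQL Server 2000 by sampling master.dbo.sysprocesses from a scheduled job. The original post traces login events rather than polling, and the table and column names below are invented:

        -- A sketch (not the original post's code) of a simple connection audit for SQL 2000.
        -- Run the UPDATE/INSERT pair periodically, e.g. from a SQL Agent job.
        CREATE TABLE dbo.LoginAudit (
            login_name    sysname       NOT NULL,
            host_name     nvarchar(128) NOT NULL,
            database_name sysname       NOT NULL,
            last_seen     datetime      NOT NULL,
            CONSTRAINT PK_LoginAudit PRIMARY KEY (login_name, host_name, database_name)
        );
        GO

        -- Refresh the timestamp for connections we have already seen.
        UPDATE a
        SET    a.last_seen = GETDATE()
        FROM   dbo.LoginAudit a
        JOIN   master.dbo.sysprocesses p
               ON  a.login_name    = p.loginame
               AND a.host_name     = p.hostname
               AND a.database_name = DB_NAME(p.dbid)
        WHERE  p.spid > 50;   -- exclude system spids

        -- Record connections we have not seen before.
        -- sysprocesses.login_time could be used instead of GETDATE() if preferred.
        INSERT dbo.LoginAudit (login_name, host_name, database_name, last_seen)
        SELECT DISTINCT RTRIM(p.loginame), RTRIM(p.hostname), DB_NAME(p.dbid), GETDATE()
        FROM   master.dbo.sysprocesses p
        WHERE  p.spid > 50
          AND  NOT EXISTS (SELECT 1
                           FROM dbo.LoginAudit a
                           WHERE a.login_name    = p.loginame
                             AND a.host_name     = p.hostname
                             AND a.database_name = DB_NAME(p.dbid));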

  • What do you use for a RAM disk on Windows Server?

    - by thelsdj
    We currently use AR Soft RAM Disk on some Windows 2003 servers for storing short lived temporary files. Looking forward to a move to 64-bit Windows Server 2008 I'm wondering what options there are for a RAM disk since it appears AR Soft RAM Disk was discontinued in 2005. I'm not looking for any physical disk backing, just a pure RAM disk that appears like a normal drive to Windows. Does anyone have any experience with RAM disks on Windows Server 2008, especially for 64-bit?

  • Best Practices for "temporary" accounts in a Windows server environment?

    - by Millman
    Are there any best practices for "temporary worker" accounts in a Windows server environment? We have a couple of contractors joining the organization temporarily. They only need access to a few folders. Aside from joining them to the "Domain Guests" group and granting them access only to the specified folders, are there any other issues to be aware of? We are in a Windows Server 2003 domain environment.

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions

    Scalar User-Defined Function
    A scalar user-defined function returns one of the scalar data types. The text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages.

    Inline Table-Valued User-Defined Function
    An inline table-valued user-defined function returns a table data type and is an exceptional alternative to a view, as the function can accept parameters in its T-SQL SELECT command and, in essence, provide a parameterized, non-updateable view of the underlying tables.

    Multi-Statement Table-Valued User-Defined Function
    A multi-statement table-valued user-defined function returns a table and is also an exceptional alternative to a view, as the function can contain multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters into the T-SQL statements again gives us, in essence, a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, you can use it in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets but cannot be referenced in a FROM clause.
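    Minimal sketches of the three flavours; the dbo.Orders table and all object names are invented for illustration:

        -- Scalar UDF: returns a single value.
        CREATE FUNCTION dbo.ufn_OrderTotal (@Price money, @Qty int)
        RETURNS money
        AS
        BEGIN
            RETURN @Price * @Qty;
        END
        GO

        -- Inline table-valued UDF: a single parameterized SELECT.
        CREATE FUNCTION dbo.ufn_OrdersByCustomer (@CustomerId int)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT OrderId, OrderDate, Total
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
        );
        GO

        -- Multi-statement table-valued UDF: the returned table is declared explicitly
        -- and can be built up with multiple statements.
        CREATE FUNCTION dbo.ufn_TopOrders (@CustomerId int, @TopN int)
        RETURNS @Result TABLE (OrderId int, OrderDate datetime, Total money)
        AS
        BEGIN
            INSERT @Result (OrderId, OrderDate, Total)
            SELECT TOP (@TopN) OrderId, OrderDate, Total
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
            ORDER BY Total DESC;
            RETURN;
        END
        GO

        -- Table-valued functions can be used in a FROM clause:
        SELECT * FROM dbo.ufn_OrdersByCustomer(42);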

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling: Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn: data warehouse architecture and justification; Agile methodology; dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, junk and degenerate dimensions, and huge dimensions; best practices, naming conventions, and lessons learned; loading the data warehouse, including loading Dimensions and loading Facts (full load, incremental load, partitioned load); data warehouses and Big Data (Hadoop); unit testing; and tracking historical changes and managing large sizes. With all the Self-Service BI hype, the data warehouse is becoming more and more central every day, since if everyone is able to analyze data using self-service tools, it's better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop and seats are limited, so don't miss this unique opportunity to attend this workshop, which is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job, then select Steps and edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)
    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule]
            'd'                  /* Frequency */,
            'BTS'                /* Name */,
            '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

        exec [dbo].[sp_BackupAllFull_Schedule]
            'd'                  /* Frequency */,
            'BTS'                /* Name */,
            '<destination path>' /* location of backup files */,
            0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll]
            'BTS'                /* Log mark name */,
            '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll]
            'BTS'                /* Log mark name */,
            '<destination path>' /* location of backup files */,
            1

    Clear Backup History:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)
    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with an error visible in SQL Server Management Studio (Job Activity Monitor > View History) against the Archive and Purge step:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        The @nvcFolder parameter cannot be null.

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options:

    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to; it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details.

    Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address (e.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it). Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds, so time your action accordingly. Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@' and save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.
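    For reference, the relevant entry in rsreportserver.config ends up looking something like this (the domain shown is a placeholder; the element is empty by default):

        <DefaultHostName>example.com</DefaultHostName>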

  • Use Those Schemas, People!

    - by BuckWoody
    Database Schemas are just containers – they aren’t users or anything else – think of a sub-directory on the hard drive. In early versions of SQL Server we “hid” schemas, placing all objects under “dbo”, which gave the erroneous perception that Schemas are users. In SQL Server 2005, we “un-hid” or re-introduced schemas within the database. Users can have a default schema (a place where their new objects go), you can add new schemas and transfer objects between them, and they have many other benefits. But I still see a lot of applications, developed by shops I know as well as vendors, that don’t make use of a Schema. Everything is piled under dbo. I completely understand this – since permissions can be granted to a schema, they feel a lot like a user, so it’s just easier not to worry about both users and schemas when you create a database. But if you’ll use them properly you can make your application more understandable and portable. You should at least take a few minutes and read more about them – you owe it to your users: http://msdn.microsoft.com/en-us/library/ms190387.aspx
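    A few illustrative statements covering what the article describes; the schema, table, user and role names are made up:

        -- Create a schema and put a new table in it instead of piling everything under dbo.
        CREATE SCHEMA Sales AUTHORIZATION dbo;
        GO

        CREATE TABLE Sales.Orders (OrderId int NOT NULL PRIMARY KEY, OrderDate datetime NOT NULL);
        GO

        -- Move an existing object from dbo into the new schema (assumes dbo.OrderLines exists).
        ALTER SCHEMA Sales TRANSFER dbo.OrderLines;
        GO

        -- Give a user a default schema so their new objects land in it.
        ALTER USER AppUser WITH DEFAULT_SCHEMA = Sales;
        GO

        -- Permissions can be granted at the schema level.
        GRANT SELECT ON SCHEMA::Sales TO ReportingRole;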

  • The Exceptional EXCEPT clause

    - by steveh99999
    Ok, I exaggerate, but it can be useful… I came across some 'poorly-written' stored procedures on a SQL Server recently that were using sp_xml_preparedocument. Unfortunately these procs were not properly removing the memory allocated to XML structures - i.e. they were not subsequently calling sp_xml_removedocument… I needed a quick way of identifying how many stored procedures on the server this affected. Here's what I used:

        EXEC sp_msforeachdb 'USE ?
        SELECT DB_NAME(), OBJECT_NAME(s1.id)
        FROM syscomments s1
        WHERE [text] LIKE ''%sp_xml_preparedocument%''
        EXCEPT
        SELECT DB_NAME(), OBJECT_NAME(s2.id)
        FROM syscomments s2
        WHERE [text] LIKE ''%sp_xml_removedocument%'''

    There are three nice features about the code above:
    1. It uses sp_msforeachdb. There's a nice blog on this statement here.
    2. It uses the EXCEPT clause. So in the above query I get all the procedures which include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures which contain sp_xml_removedocument. Read more about EXCEPT here.
    3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here.
    The above query isn't perfect - I'm not properly parsing the SQL text to ignore comments, for example - but for the quick analysis I needed to perform, it was just the job…
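    For anyone new to EXCEPT, here is a tiny self-contained illustration (using invented temp tables) of the set-difference behaviour the query relies on: EXCEPT returns the distinct rows from the first result set that do not appear in the second.

        -- Hypothetical example: procedures that prepare an XML document
        -- minus those that also remove it.
        CREATE TABLE #prepared (proc_name sysname);
        CREATE TABLE #removed  (proc_name sysname);

        INSERT #prepared VALUES ('usp_LoadOrders'), ('usp_LoadCustomers'), ('usp_LoadInvoices');
        INSERT #removed  VALUES ('usp_LoadOrders');

        -- Returns usp_LoadCustomers and usp_LoadInvoices: in the first set but not the second.
        SELECT proc_name FROM #prepared
        EXCEPT
        SELECT proc_name FROM #removed;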

  • mysql.sock problem on Mac OS X, all Zend products

    - by Michael Stelly
    Hi folks. I posted this on the Zend forum, but I'm hoping I can get a speedier reply here. I've tried every solution provided on this forum with no luck. When I restart MySQL, everything appears OK:

        sudo /usr/local/zend/bin/zendctl.sh restart
        Password:
        /usr/local/zend/bin/apachectl stop [OK]
        /usr/local/zend/bin/apachectl start [OK]
        Stopping Zend Server GUI [Lighttpd] [OK]
        spawn-fcgi: child spawned successfully: PID: 7943
        Starting Zend Server GUI [Lighttpd] [OK]
        Stopping Java bridge [OK]
        Starting Java bridge [OK]
        Shutting down MySQL . SUCCESS!
        Starting MySQL . SUCCESS!

    Pinging localhost is also OK and resolves DNS to IP:

        ping localhost
        PING localhost (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.048 ms
        64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms
        64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
        64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.076 ms
        64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.064 ms

    But when I attempt to access the local URL for my app, I get the dreaded:

        Message: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    This is a show-stopper for me. I appreciate any assistance. Thank you.

  • Strategy for clients to retrieve real-time log from HTTP server

    - by Jerry Dodge
    I have an HTTP Server Service application which has its own logging mechanism. It's written in Delphi. I would like to provide a way for multiple clients to connect to this service and get a real-time update of the log. The log in the service moves rather fast; there are a lot of things to log. There may be up to 50 messages within 1 second at times. The existing log, which is already implemented, is not saved; it's only kept in the memory of the server service, from where I will need to distribute it to any client which needs it. Once all clients have a log message, it should be deleted. I intend to use HTTP to "ask" the server for the log, and respond with an XML packet. The connections are not keep-alive. The only problem is that the server should only send the client those log records which it needs, not everything. I have no way for the server to push the log to the clients in real time, so each client needs to repeatedly ask the server for the latest log records. This HTTP server is very lightweight, and there is no session management; there isn't even any type of authentication. The only way I see is for a client to register itself on the server, and whenever a log is issued on the server, it creates a copy of the log for each client, where each client has a log queue (string list). However, suppose there are 100 clients connected and expecting to receive this log. That means the server must create 100 copies of each log, add the log to the end of each client's log queue, and wait for the client to request it. At that point, when the server replies with the XML log, it should flush (delete) whatever's in the queue. I'm worried, however, that this could cause memory issues. Each client's log queue might get 100 log messages before the client requests the latest logs. How should I go about doing this in the fastest way possible without hindering the performance of the server? I'm trying to avoid having to create a copy of each log for each client.

  • How many copies of files are needed by video server?

    - by Trilok
    A quick question: how many copies of the same movie are kept on a video server (a video streaming server)? Suppose a particular video is requested by at most 1000 users at the same instant; how many copies would be sufficient so that parallel streams can be provided to each user? Ideally one copy would be enough, but what is the optimum number, keeping bandwidth and simultaneous access in mind?

  • Can I customize the Summary Network Report in Windows Server 2008?

    - by Xavier
    Hi Guys, We get a weekly Summary Network Report from our SBS 2008 server, delivered by email. The report contains many alerts. We want to ignore some of them so that the report is all green and any real alert will stand out. For example, we want to ignore the alert regarding the firewall being off on the server. Is there a place where I can select which points to check and set the level of some alerts (such as low remaining disk space)?

  • What benefit do I get from using a 64-bit server?

    - by blockhead
    I bought a small 256MB slice from Slicehost and installed Ubuntu 10.04 64-bit and WordPress on it. Performance was dismal as Apache was eating up all my memory. Once I did some taming of Apache and switched to FastCGI, things ran fine. Next I rebuilt it as a 32-bit server, and performance was much better. What benefit would I get from a 64-bit server? Is it all about the memory?

  • Windows Server 2008 R2 - 180 day evaluation vs. 10 days + 5 re-arms

    - by Rob
    The content on the Server 2008 R2 Trial Software page states that it can be evaluated for up to 180 days; however, on a test machine we installed last week it's requesting "re-arming" every 10 days, which it seems can only be done a maximum of 5 times. How do we get it to last more than 50 days, as it'd be a pain to have to rebuild the server concerned!
