Search Results

Search found 126374 results on 5055 pages for 'windows server 2003 r2'.


  • Windows 2008 R2: can't extend C drive, mystery partitions

    - by wfaulk
    I have a Windows 2008 R2 server running under VMware ESX 4.0.0. I have reallocated disk space to it in order to extend the C drive, but Disk Management has "Extend Volume" greyed out. DISKPART shows more partitions than Disk Management does, including one after the volume I'm trying to extend, which would explain why Disk Management isn't allowing the extension.
    Disk Management shows:
    - System Reserved, 100 MB NTFS, Healthy (System)
    - (C:), 39.39 GB NTFS, Healthy (Boot, Page File, Crash Dump)
    - 10.00 GB, Unallocated
    DISKPART shows:
    - Partition 1, Dynamic Data, 992 KB, offset 31 KB
    - Partition 2, Dynamic Data, 100 MB, offset 1024 KB
    - Partition 3, Dynamic Data, 39 GB, offset 101 MB
    - Partition 4, Dynamic Data, 1024 KB, offset 39 GB
    My question at this point is: what the heck are partitions 1 and 4, where did they come from, why doesn't Disk Management show them, and, most importantly, can I delete partition 4 in order to extend partition 3?
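    A minimal DISKPART sketch of what could be tried here (disk and volume numbers are assumptions; verify them with "list disk" and "list volume" first). Note that on a dynamic disk the roughly 1 MB partitions at the start and end are usually LDM metadata and should not be deleted; DISKPART's extend command sometimes works where Disk Management greys the option out, so it is worth trying before deleting anything:

        # Sketch only: write a small DISKPART script and run it. Adjust the numbers
        # to match your own "list disk" / "list volume" output before running.
        $cmds = 'select disk 0',
                'list partition',
                'select volume C',
                'extend'
        $cmds | Set-Content "$env:TEMP\extend.txt" -Encoding ASCII
        diskpart /s "$env:TEMP\extend.txt"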

    Read the article

  • Windows Server 2008 R2 and Keyboards

    - by Brian
    Hello, I have a machine with Windows Server 2008 R2 installed. When the machine boots up, it says keyboard failure. The keyboard I had was from an older machine and was PS/2, so I got a PS/2-to-USB converter for it, but it still says keyboard failure and doesn't work. Is it simply because the keyboard is old, or does it have to be USB? I'm going to look into a new one, but want to make sure I don't run into this issue again... Thanks.

    Read the article

  • Hyper-V 2008 R2 Install Question

    - by Bill
    I have a 500 GB HDD that I installed in the server. If I am going to load Hyper-V R2 on the bare system, do I set the partition to use all this space, or is there a recommended smaller partition size I should set for Hyper-V to run within? This is my first time loading Hyper-V bare on the system. I feel like I should be able to create a small partition of about 40 GB for Hyper-V to run within, then create a second, larger partition to store my VM images. Any thoughts or guidance on this?

    Read the article

  • Slow Web Performance on two Windows 2008 R2 Terminal Servers

    - by Frank Owen
    We have two Windows 2008 R2 servers that we use for agents to log into to access our customers' systems. Saturday morning we received complaints that on both servers the web is running horribly slow. This happens on all websites, and the majority of the time the site times out while trying to load. Other users located at the same site but using their desktop machines do not see any issue. We have rebooted the boxes and checked settings and cannot find the cause. The CPU/memory/network/disk space usage on the servers is very low. I thought it might have been an MS update causing the issue, but it appears the last update was applied in January. We have rebooted both boxes and I am in the process of trying a different browser. Any ideas what could be causing this?
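    One quick way (a sketch, not a diagnosis) to separate name resolution from transfer time on an affected terminal server, and to compare proxy settings against the desktops that work; the URL is a placeholder:

        $url  = 'http://www.example.com/'
        $dns  = Measure-Command { [System.Net.Dns]::GetHostAddresses('www.example.com') }
        $http = Measure-Command { (New-Object System.Net.WebClient).DownloadString($url) | Out-Null }
        "DNS: {0} ms   HTTP: {1} ms" -f [int]$dns.TotalMilliseconds, [int]$http.TotalMilliseconds
        # also compare the system-wide proxy configuration with a working desktop:
        netsh winhttp show proxy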

    Read the article

  • How to enable an active/active file server cluster in windows 2008 r2 Enterprise

    - by Phygg
    I've just created a cluster for my file servers in a Windows 2008 R2 Enterprise SP1 environment. The goal: an active/active cluster for web server data. How do I go about telling the cluster to be active on both nodes? Do I have to tell the cluster to be active/active? Here is a link to the instructions I followed when configuring the failover cluster: http://technet.microsoft.com/en-us/library/ff182326(WS.10).aspx So if anyone can help me grasp the concept (or maybe I'm way off and I need a node that is not active along with 2 active nodes to do this), I would appreciate it.
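    For what it's worth, a clustered file server role runs on one node at a time, so "active/active" usually means creating two (or more) file server roles and spreading them across the nodes. A hedged sketch using the FailoverClusters module; role names, disk names and addresses below are placeholders:

        Import-Module FailoverClusters
        # two clustered file server roles, each preferring a different node
        Add-ClusterFileServerRole -Name FS1 -Storage "Cluster Disk 1" -StaticAddress 10.0.0.21
        Add-ClusterFileServerRole -Name FS2 -Storage "Cluster Disk 2" -StaticAddress 10.0.0.22
        Set-ClusterOwnerNode -Group FS1 -Owners NODE1,NODE2
        Set-ClusterOwnerNode -Group FS2 -Owners NODE2,NODE1
        Move-ClusterGroup -Name FS1 -Node NODE1
        Move-ClusterGroup -Name FS2 -Node NODE2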

    Read the article

  • DNS Pointer to old server name

    - by TechKnow Dude
    We have an SBS 2003 server that was migrated to a new hardware platform; the computer name has changed but the domain is the same. The desktops are trying to do offline files against the old server name. nslookup still returns an entry for the old server name, and there is a DNS entry for the old server. How do we safely remove the old DNS entry without breaking the computers' local offline file storage? Can we change the offline file storage to point to the new server name?
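    One common approach (a sketch, not specific advice for this environment) is to keep the old name resolvable as an alias of the new server so existing offline-files paths keep working while they are repointed. Zone, server and host names below are placeholders; dnscmd and reg are stock command-line tools, so these lines can be run from a command prompt or PowerShell:

        dnscmd dc01 /RecordDelete corp.local oldserver A /f
        dnscmd dc01 /RecordAdd    corp.local oldserver CNAME newserver.corp.local
        # on the new server, allow SMB to answer to the alias as well, then restart the Server service:
        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v DisableStrictNameChecking /t REG_DWORD /d 1 /f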

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server and, since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say: barely. The reason for this is that SQL Server is always a part of a bigger system; there are always other players in the game, whether it is a client application, a web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about TDS (tabular data stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let's dive into an example: let's say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slow."

    Now, let's move on to a bit more exciting example: imagine that there is a similar setup as the example above, one web server and one database server, and the application is not using any stored procedure calls; instead, for every user request the application sends an 80 KB query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let's say that the 80 KB query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to the network layer, the packets will get assembled, and eventually SQL Server will start processing the query: parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead just waiting for the packets to reach our Database Engine. There will certainly be some processing overhead as well, until the database engine deals with the 80 KB query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let's say that our query is processed and it finally returns 15,000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics.
    - Number of server roundtrips: a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the 'GO' command, there will be three different roundtrips.
    - TDS packets sent from the client: TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server they need to pack the requests in TDS packets. This is the number of packets sent from the client; if the request is large, it may need more buffers and eventually might even need more server roundtrips.
    - TDS packets received from server: the TDS packets sent by the server to the client during the query execution.
    - Bytes sent from client: the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the procedure name plus parameters, and this will minimize the network pressure.
    - Bytes received from server: the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    - Client processing time: the amount of time in milliseconds between the first received response packet and the last received response packet by the client.
    - Wait time on server replies: the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    - Total execution time: the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means that the query was able to return the first row fast. Likewise, a long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client, the server and the client again.

    Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500 KB each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture.

    And finally, here are some guidelines for monitoring the network performance and improving it:
    - Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
    - Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on (a short Get-Counter sketch follows below).
    - Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up: are they standalone, are they 'teamed', what are the settings (full duplex and so on).
    - Find some time to read a bit about networking.
    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
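    To illustrate the Perfmon guideline above, here is a minimal PowerShell sketch (it assumes PowerShell 2.0 or later for Get-Counter; the sample interval and count are arbitrary):

        $counters = '\Network Interface(*)\Output Queue Length',
                    '\Redirector\Network Errors/sec',
                    '\TCPv4\Segments Retransmitted/sec'
        Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12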
    As further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: SQL Server 2008 Enterprise with a merge publication, and SQL Server 2005 Express with a pull subscription. There is no web or FTP setup; this is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried with both the 2008 RMO DLLs (version 10) and the 2005 RMO DLLs (version 9). When trying to use replmerge.exe, I get the following:

    2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
    2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
    2010-04-10 04:12:05.294
    2010-04-10 04:12:05.294 The timestamps prepended to the output lines are expressed in terms of UTC time.
    2010-04-10 04:12:05.294 User-specified agent parameter values: -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1 -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1 -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1 -SubscriberDB PRIMROSE -SubscriptionType 1 -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0 -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0 -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
    2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
    2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
    2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).
    2010-04-10 04:12:05.513 Category:NULL Source: Merge Process Number: -2147200979 Message: The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).

    Any ideas?

    Read the article

  • error to start Windows Media Encoder

    - by George2
    Hello everyone, I am running the following code snippet on Windows Server 2003 x64 edition with Windows Media Encoder 9, and I get the following error when invoking the encoder.Start method: System.Runtime.InteropServices.COMException 0xC00D1B67. My code snippet is below; does anyone have any idea what is wrong?

        IWMEncSourceGroup SrcGrp;
        IWMEncSourceGroupCollection SrcGrpColl;
        SrcGrpColl = encoder.SourceGroupCollection;
        SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");
        IWMEncVideoSource2 SrcVid;
        IWMEncSource SrcAud;
        SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
        SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
        SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
        SrcAud.SetInput("Device://Default_Audio_Device", "", "");

        // Specify a file object in which to save encoded content.
        IWMEncFile File = encoder.File;
        string CurrentFileName = Guid.NewGuid().ToString();
        File.LocalFileName = CurrentFileName;
        CurrentFileName = File.LocalFileName;

        // Choose a profile from the collection.
        IWMEncProfileCollection ProColl = encoder.ProfileCollection;
        IWMEncProfile Pro;
        for (int i = 0; i < ProColl.Count; i++)
        {
            Pro = ProColl.Item(i);
            if (Pro.Name == "Screen Video/Audio High (CBR)")
            {
                SrcGrp.set_Profile(Pro);
                break;
            }
        }

        encoder.Start();

    Thanks in advance, George

    Read the article

  • Powershell 4 compatibility with Windows 2008 r2

    - by Acerbity
    In my environment I have a single server that has access to pretty much my entire network. That server is running Windows 2008 R2, and I have upgraded PowerShell to version 4.0. The question I have is this: can I run cmdlets from that machine against other machines when those cmdlets are version specific? For instance, when I am using PowerShell, even though it is version 4, it doesn't give me intellisense autocompletion for "Get-Volume" like it would on a 2012 R2 machine. I understand that it won't run on that machine because the infrastructure won't allow for it, but what about running it against a 2012 R2 machine remotely? I am looking to run batch scripts from there for various purposes.
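    One common pattern (a sketch) is to run the version-specific cmdlet on the remote box through PowerShell remoting rather than locally; 'fs01' is a placeholder and WinRM/remoting is assumed to be enabled on the target:

        $s = New-PSSession -ComputerName fs01
        Invoke-Command -Session $s -ScriptBlock { Get-Volume }
        # or surface the remote cmdlets in the local session via implicit remoting,
        # which also explains why there is no local intellisense for Get-Volume:
        Import-PSSession -Session $s -Module Storage -Prefix Fs01
        Get-Fs01Volume
        Remove-PSSession $s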

    Read the article

  • Compiling a C++ application on Windows 7, but execute it on Win2003 Server

    - by dabs
    I have a C++ application (quite complex, multiple projects) in Visual Studio 2008 that produces a single DLL. Recently I switched to Windows 7, having previously been compiling under Windows XP. Suddenly the DLL in question cannot be loaded by another application on a machine running Windows Server 2003. I've been trying various things:
    - I've installed the VC 9.0 redistributable package on the server
    - I've also copied various DLLs from that package to the application folder
    - The project is, of course, compiled in release mode
    When I run depends.exe on the client machine, I get the following error: "Error: The Side-by-Side configuration information for "my_dll.dll" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001). Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." and the icon for shlwapi.dll has a red overlay. This didn't happen when I was compiling under Windows XP, so I'm guessing that there really is no problem with the DLLs on the client machine, but that somewhere there is a reference to a particular version of some DLL. Does anyone know the best way to resolve this? Regards, Daníel
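    One diagnostic worth trying (a sketch; run it on the build machine where the Windows SDK's mt.exe is available): extract the DLL's embedded manifest to see exactly which CRT/MFC assembly versions it requests, then compare that against what the VC 9.0 redistributable actually installed on the 2003 box:

        & mt.exe -inputresource:"my_dll.dll;#2" -out:"my_dll.manifest"
        Get-Content .\my_dll.manifest   # look at the dependentAssembly version numbers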

    Read the article

  • Install Intel USB 3.0 eXtensible Host Controller Driver for Windows Server 2008 R2 x64

    - by ffrugone
    According to Intel and Dell, my board is technically a 'desktop' board, and they therefore do not support Intel USB 3.0 eXtensible Host Controller drivers for Windows Server 2008 R2 x64. I'm trying to find a workaround. I found a post by someone who tried to tackle this, but I can't make his fix work for me. Below I have copied both his post and my reply. I'm a loyal stackoverflow user, and hopefully the people here at serverfault can help me:

    anyforumuser, Re: GA-Z77X-UD5H USB3 Drivers not installing? (Reply #6, July 05, 2012): Thanks to JoeMiner, his process for the network drivers gave me the clues to figure out how to get the USB3 drivers working. I have got the Intel USB3 drivers working at full speed in Win Server 2008 R2. You have to edit the following files:
    1. In mup.xml, change "Windows7" to "W2K8".
    2. In setup.if2, under [groups], in the line starting with "HSCSDRIVER", change the "IsOS( ... )" entry to "IsOS(WIN2008_R2,WIN2008_R2_MAXSP)".
    3. In the inf files (for all of them), copy the content of the [Intel.NTAMD64.6.1] group to the [Intel.NTAMD64.6.2] group. (Here I am not entirely sure which is correct, so there are some doubled-up entries.)
    4. In the drivers folder, copy the "Win7" folder to "win2008", "win2008_r2" and "x64"; i.e. your drivers folder should now contain "win2008", "win2008_r2" and "x64" folders, and they contain the contents of the Win7 folder (the inf files should already have been fixed).
    Run the install; it should install properly and work now. You will have to reboot. If it doesn't work, remove the Intel USB3 controllers from Device Manager and get it to "scan for hardware changes". Good luck!!!

    benevida, Re: GA-Z77X-UD5H Intel Network Drivers not installing? (Reply #7, August 13, 2012): Thank you anyforumuser! A process for getting this driver installed was exactly what I needed. However, I've hit a snag. I believe I've followed every step exactly as written, but I'm getting an error during installation: "One or more files that are required for installation are either missing or corrupted. Setup will exit." Behind the error, the 'Setup Progress' shows the current step as "Copying File: C:\Program Files (x86)\Intel\Intel(R) USB 3.0 eXtensible Host Controller Driver\Drivers\iusb3xhc.man". I've checked the installation files, and iusb3xhc.man seems to be a viable file in all of the Windows 2008 sub-directories of the Drivers folder. Therefore I don't see how the file could be missing, and I doubt that it is corrupted (although it does NOT exist in the \Drivers\HCSwitch folder). I opened Setup.if2, and two aspects of the step that copies iusb3xhc.man caught my eye. First, the steps immediately preceding it are set to 'error=ignore'. If they hadn't completed successfully, this is the first step where we'd hear about it. Second, this is the first step where the relative path '%source%\drivers\%_os%\%_ia%\' is used. If I haven't named the Windows 2008 sub-directories correctly, I could see where things are fouling up. In any event, if someone could take a look and make suggestions I'd appreciate it. Thank you.
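    For the folder-copy step described in reply #6 above, a small sketch (the extracted driver package path is a placeholder):

        $src = 'C:\IntelUSB3\Drivers\Win7'
        foreach ($dst in 'win2008', 'win2008_r2', 'x64') {
            Copy-Item -Path "$src\*" -Destination "C:\IntelUSB3\Drivers\$dst" -Recurse -Force
        }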

    Read the article

  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (e.g. by SCCM) or by a third-party application? For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application. I only want to allow standard Microsoft tasks and disable all other non-standard tasks. I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper()
        {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object { !$_.PSIsContainer };
            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach ($file in $files)
            {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name
                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (get-authenticodesignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
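    A sketch of one possible check, meant to sit inside the foreach loop above: pull the executable out of each task's <Exec><Command> node and inspect its Authenticode signature with the Get-AuthenticodeSignature cmdlet hinted at in the commented link. It assumes the command is a plain path (environment variables are expanded first), and a missing or non-Microsoft signature is only a flag for manual review, not proof of anything:

        $ns  = @{ t = 'http://schemas.microsoft.com/windows/2004/02/mit/task' }
        $cmd = Select-Xml -Xml $xmlFile -Namespace $ns -XPath '//t:Exec/t:Command' |
               Select-Object -First 1
        if ($cmd) {
            $exe = [Environment]::ExpandEnvironmentVariables($cmd.Node.InnerText.Trim('"'))
            $sig = Get-AuthenticodeSignature -FilePath $exe
            # report the task, its signature status and the signer for review
            "{0}`t{1}`t{2}" -f $file.Name, $sig.Status, $sig.SignerCertificate.Subject
        }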

    Read the article

  • SQL Server Master class winner

    - by Testas
    The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine, is Steve Hindmarsh. There is still time to register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years' combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Please note: the agenda may be subject to change.

    Sessions Abstracts

    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.

    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server; after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered, demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!
    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen: it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!

    SESSION FOUR: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years' combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include backups, shrinks, fragmentation, statistics, and much more! The focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.

    Speaker Biographies
    Paul S. Randal and Kimberly L. Tripp are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Speaker Testimonials
    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process." "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional." "When it comes to the instructors themselves, Kimberly and Paul simply have no equal.
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server, because you can continue to log ship from 2005 to 2008 R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        restore database [dw] with norecovery;

    and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
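    For the "rolled your own" case, the restore loop after switching to NORECOVERY might look roughly like this (a sketch only; the server, database and share names are placeholders, and sqlcmd is assumed to be on the path):

        $logs = Get-ChildItem '\\primary\logship\dw' -Filter *.trn | Sort-Object Name
        foreach ($log in $logs) {
            sqlcmd -S SECONDARY\INST -E -Q "restore log [dw] from disk = N'$($log.FullName)' with norecovery;"
        }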

    Read the article

  • Create Windows Bootloader/Boot into Windows from Ubuntu

    - by Kincaid
    I have a computer that dual-boots (or tri-boots) Windows 8 Release Preview, Windows 7, and Ubuntu 12.04. Grub boots between Windows 8 and Ubuntu, which is what I primarily use to boot. Recently I decided I wanted to remove Ubuntu, as I hardly used it. As a stupid mistake, I deleted the Ubuntu partition before changing the bootloader to replace Grub. Whenever I now boot the machine, it gives me the "grub-rescue" prompt, and I am unable to boot into either Windows (8 or 7) or Ubuntu (except via USB, of course). I do not have any Windows 7/8 recovery media, so that isn't an option. Please note that after I deleted the Ubuntu partition, I put the PC into hibernate and then turned it on. This means the C:\ [Windows 8] drive cannot be mounted. I don't know if that is bad, but it definitely doesn't make things better. I am currently booting Ubuntu via USB in an effort to restore the Windows bootloader. I have looked into using Boot-Repair to solve the problem using the instructions here, although after attempting to apply the changes, it gave the error: "Please install the [mbr] packages. Then try again." I don't know why I'm getting this error; is there a way to install the 'mbr packages'? I honestly don't know what exactly they are, nor how to install them. Are there any options I have not yet exhausted to be able to boot back into Windows, in case there is a better way? In the end, I want to set the bootloader to boot into Windows 8, but booting into either Windows 7 or 8 is fine; I can use EasyBCD from there. Is there a simple solution to this? I've checked the BIOS, and I haven't been able to find a way to boot into Windows. Any help would be greatly appreciated.

    Read the article

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application:
    - We will be using 4 machines running Microsoft Windows Server (most probably 2003).
    - All four will always run our application, which is essentially a web service.
    - Load balancing is "outsourced": somebody else handles the distribution of the web requests among the servers.
    - Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive.
    - The DB data is stored on shared storage. No copying data between servers.
    - Reads are done very frequently by many end users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data.
    Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all?
    Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data on a couple of DB servers. There are two issues with this option:
    1. Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch?
    2. Switching back: suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?

    Read the article

  • unable to destroy windows 2008 r2 failover cluster after SAN rebuild

    - by Zack
    I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource. Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working. But I can't even destroy the cluster and start again. I've tried from the Windows clustering tool as well as the command line. I was able to get the cluster service to start using the "/fixquorum" parameter. After doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line. It will either freeze for several minutes and crash the program, or, once, it even BSOD'd the server. Can someone advise on how to destroy this cluster so I can start over?
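    For reference, the usual last-resort cleanup once a cluster can no longer be destroyed cleanly is to scrub the cluster configuration from each remaining node and rebuild from scratch. A sketch (the cluster.exe equivalent is "cluster node <name> /forcecleanup"); only do this once you have given up on repairing the existing cluster:

        Import-Module FailoverClusters
        Clear-ClusterNode -Force    # wipes the local node's cluster configuration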

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem; however, I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom program folder for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address; however, it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • Reinstall Acer OEM Windows 8, Windows 8 Recovery for Acer Aspire V5 122p

    - by stwindr
    My Acer Aspire V5-122P-61456G50NSS, model MS2377, has crashed altogether. It came preloaded with Windows 8, and I upgraded to Windows 8.1 three or four days before the crash. Unfortunately I did not make any recovery media before the crash. When I access eRecovery on the Acer store with my PC's serial number, it says no RCD is available for it. I tried recovery by loading the recovery manager (Left Alt + F10). Various other advanced startup options (like holding the Shift key while turning on, or pressing the F8 key) return nothing; no luck. However, I am able to enter the BIOS. After researching the above condition on various PC forums, now my questions: I read that a 'Windows Recovery Drive' can be made on any PC running Windows 8 and could be used to repair another PC. Does anybody in the SuperUser community have one (or a link to download the same from somewhere?), as I'm unable to find anybody running Windows 8 among my friends. I downloaded a Windows 8 Pro ISO and made a bootable USB. I was able to go to the 'Repair Your Computer' option, and after going to the 'Reset your PC' option found that my recovery partition has gone missing. I tried all the options available, but no luck. Then I tried to install with that Windows 8 Pro ISO but got the message: "The product key entered does not match any of the Windows images available for installation. Enter a different product key." Before this message I did not get any form in which to enter a product key! Does this mean that the installer was picking up the key from the BIOS (OEM key)? And maybe the installation did not succeed because the OEM Windows version was Windows 8 and I was trying to install Windows 8 Pro? If that is the case, could somebody please send me a link to download a Windows 8 ISO? I am helpless and couldn't find one anywhere on the internet (without having to pay for a new key, but I should not have to pay as the installer will use the OEM key).

    Read the article

  • Server with IIS and Apache - how to SSL encrypt Apache with IIS

    - by GAThrawn
    I have a Windows Server 2003 box already set up and working with IIS 6. IIS is set to serve a site over both HTTP and HTTPS connections using the default ports. For various reasons I need to set up Apache on the same server, and it needs to serve its pages to end users as SSL-encrypted HTTPS pages. Neither IIS nor Apache is (or is ever likely to be) particularly high-traffic or high-usage. The way I see it, there are two possible ways this could be done: either export the SSL cert from IIS, set it up in Apache, and have Apache serve the HTTPS connections itself over a non-default port; or use IIS to proxy Apache in some way over its existing SSL security. Which is going to be easiest to set up, configure, maintain and run? Which is going to work best? Has anyone done this sort of thing before? Any tips or things to look out for?
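    If the export route is chosen, a sketch of the certificate conversion (it assumes the site certificate plus private key have already been exported to site.pfx from the Certificates MMC snap-in, and that an openssl.exe ships with the Apache build; file names are placeholders):

        & openssl pkcs12 -in site.pfx -out site.pem -nodes    # certificate + key, unencrypted
        & openssl x509 -in site.pem -out site.crt             # certificate only
        & openssl rsa  -in site.pem -out site.key             # private key only
        # point SSLCertificateFile / SSLCertificateKeyFile at site.crt / site.key in an
        # httpd.conf <VirtualHost *:8443> block, since 443 is already taken by IIS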

    Read the article

  • Automatic Windows Defender Updates with Manual Windows/Microsoft Updates

    - by wag2639
    I've got Windows/Microsoft Update on my Windows 7 laptop set to notify me when updates are available but not to do anything automatically. I also have Windows Defender running, and it seems to have daily or semi-daily updates for its signature database, but it uses the Windows Update utility to get and install these updates. Is there a way to automatically download and install the Windows Defender signature updates but leave the rest of Windows Update set to manual?
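    Windows 7's Defender ships a small command-line updater, so one option (a sketch; the path is the usual default, verify it on your machine) is to schedule that command as a daily task while Windows Update itself stays on "notify only":

        & "$env:ProgramFiles\Windows Defender\MpCmdRun.exe" -SignatureUpdate
        # schedule this one line as a daily task (Task Scheduler or schtasks /Create)
        # to get automatic definition updates while everything else stays manual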

    Read the article

  • SMTP server (IIS) is running but can't test it with telnet

    - by NitroxDM
    I have a Windows 2003 Web Edition server on which I can't seem to get the SMTP relay working. BT4 shows port 25 open. When I try to use telnet to test it from my desktop I get: "Connecting To XXX.XXX.XXX.XXX...Could not open connection to the host, on port 25: Connect failed." From the server I get:

    Microsoft Telnet> o 127.0.0.1 25
    Connecting To 127.0.0.1...
    Connection to host lost.

    There isn't anything useful in the logs. Any ideas?
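    A couple of quick checks worth running on the server itself (a sketch) to confirm whether the IIS SMTP service is running and actually listening on port 25 before chasing firewall rules:

        netstat -ano | findstr ":25"    # is anything listening on 25, and which PID?
        sc.exe query smtpsvc            # is the Simple Mail Transfer Protocol service running?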

    Read the article

  • Multiple VLANs, multiple subnets, single DHCP server?

    - by EightQuarterBit
    Hey guys! At my job we are prepping to transition from multiple LANs connected over slow VPN connections to a single MAN connected over fiber, and I've got a few questions. First of all, we are planning on making each physical site its own VLAN, but we would like to have a single DHCP server at the data center hand out IPs to each VLAN. We've pretty much got the VLAN tagging structure all worked out, but we would like to have our single DHCP server assign different subnets of IPs to each VLAN. For instance, VLAN 2 gets 10.0.2.x through 10.0.4.x, VLAN 3 gets 10.0.5.x through 10.0.7.x etc. We are an Active Directory based shop and we have a Server 2003 box handling DHCP (though we aren't averse to upgrading it to server 2008.) Is this feasible, or am I pipe-dreaming?
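    In broad strokes this is the standard setup: one scope (or superscope) per VLAN subnet on the single DHCP box, with a DHCP relay (ip helper-address) on each VLAN's router interface pointing at the server, so requests arrive stamped with the right gateway address and match the right scope. A netsh sketch for one VLAN (names, ranges and the router address are placeholders):

        netsh dhcp server add scope 10.0.2.0 255.255.255.0 "VLAN2" "Site 2"
        netsh dhcp server scope 10.0.2.0 add iprange 10.0.2.10 10.0.2.250
        netsh dhcp server scope 10.0.2.0 set optionvalue 003 IPADDRESS 10.0.2.1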

    Read the article

  • Bind DHCP Server to Network Bridge

    - by Luke
    My wireless router died, so I decided to route everything through my server. I installed a second NIC and a wireless card to build my new network: one NIC to the modem, one NIC to the switch, and the wireless to... well, wireless. Anyway, I got far enough to get DHCP to work on just ONE adapter when I used Internet Connection Sharing (I couldn't get RRAS set up for the life of me), then I decided to try bridging the wireless and the second NIC. Now the DHCP server won't bind to the bridge, but I can enter manual IPs on my clients and they'll connect to the Internet. I also tried changing my wireless adapter's IP to 192.168.0.2, and to 192.168.1.1, to try to set up a separate scope, but to no avail. Running Windows Server 2003.
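    One thing worth checking (a sketch; adjust the connection name and addressing to suit): the DHCP Server service only offers to bind to connections that carry a static IPv4 address, so the bridge itself needs one before it will show up in the bindings list:

        netsh interface ip set address "Network Bridge" static 192.168.0.1 255.255.255.0
        # then re-check the bindings in the DHCP console (server properties > Advanced > Bindings)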

    Read the article
