Search Results

Search found 14739 results on 590 pages for 'mssql 2008'.


  • Using wbadmin to back up and recover

    - by g7rpo
    Hi, I am using wbadmin to back up a specific folder, primarily my VHD files. The backups work fine, but when I tried to recover the files today from a different machine than the one that created the backup, the recovering machine couldn't 'see' the backups. Is there a way to do this? My worry is that if the host performing the backups fails, I need to be able to install Hyper-V on another host and recover the backed-up VMs there until I can rebuild the original host. It appears this isn't possible; I am hoping I am missing something. Any help would be greatly appreciated.
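
    A minimal sketch of what usually unlocks this, assuming the backups sit on an attached drive E: and the original host was named HOST1 (both names are examples): wbadmin can list and restore another machine's backup sets when told explicitly which machine created them.

        rem List backup versions written by another machine (names are examples)
        wbadmin get versions -backupTarget:E: -machine:HOST1

        rem Restore a folder from one of the listed versions onto this host
        wbadmin start recovery -version:01/31/2011-09:00 -itemType:File ^
            -items:D:\VHDs -backupTarget:E: -machine:HOST1 -recoveryTarget:D:\Restore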

    Read the article

  • RemoteApp - linked exe

    - by StaWho
    I'm experiencing the following problem. I'm trying to run Microsoft NAV as a RemoteApp. There are two .exe files involved: finsql.exe, the main executable, and finhlink.exe. The latter is used to open a specific 'window' within NAV directly (it takes a link as a parameter); this functionality is not present in finsql.exe. After configuring and running finhlink.exe as a RemoteApp I get an error: "...finsql.exe can't be executed...". I believe this is because finhlink.exe is in fact invoking finsql.exe. Is there a way of allowing the invocation of a linked executable via RemoteApp?
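
    One avenue worth testing (an assumption, not a confirmed fix): RemoteApp refuses to launch programs that are not in its allow list, so either publish finsql.exe as a RemoteApp as well, or relax the allow list on the session host:

        rem Sketch: lets any executable start inside a RemoteApp session by
        rem disabling the allow list - this loosens security, so test first
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\TSAppAllowList" ^
            /v fDisabledAllowList /t REG_DWORD /d 1 /f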

    Read the article

  • DBCC CHECKDB fails and quits the job with an ambiguous error message

    - by ddono25
    I received a notice that DBCC CHECKDB for all databases on one of our servers has failed the past four times it has been run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server.
        Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran DBCC CHECKDB using sp_MSForEachDB to get more accurate results and got the same error on the same database, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server.
        Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue, which only happens on this one database out of the dozens on this server. Any help is appreciated!
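
    A narrowing step worth trying (a sketch; the database name is a placeholder for the one that fails): run CHECKDB against just that database with the informational chatter suppressed, so any genuine corruption message is the only output, and fall back to a physical-only pass if the session still drops.

        DBCC CHECKDB (N'SuspectDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- Cheaper pass that often completes where a full logical check does not
        DBCC CHECKDB (N'SuspectDB') WITH PHYSICAL_ONLY;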

    Read the article

  • Cross join query problem

    - by user66121
    I have the following table structure:

        HUB_DETAILS (master)
            Branch_ID, Branch_Name

        VTRCheckList (master)
            CLid, CLName

        VTRCheckListDetails (detail)
            CLid, Branch_ID, VTRValue, vtrRespDate

    When I run the query below it does return every checklist name alongside every branch name, but it shows the same value for every branch, even though only one branch actually has data in the given date range. It should show 0 when a branch has no data for a checklist.

        SELECT VTRCheckList.CLName, Hub_Details.BranchName,
               SUM(CAST(VTRCheckListDetails.VtrValue AS int)) AS 'Total'
        FROM VTRCheckListDetails
        INNER JOIN VTRCheckList ON VTRCheckListDetails.CLid = VTRCheckList.CLid
        CROSS JOIN Hub_Details
        WHERE CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
          AND CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY VTRCheckList.CLName, Hub_Details.BranchName
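
    A sketch of one likely fix (untested, and using the column names from the schema listing above rather than the query's BranchName spelling): the CROSS JOIN pairs every branch with every detail row regardless of Branch_ID, so the same totals repeat. Building the branch-by-checklist grid first and LEFT JOINing the details onto it keeps the zero rows:

        SELECT cl.CLName, hd.Branch_Name,
               ISNULL(SUM(CAST(d.VTRValue AS int)), 0) AS Total
        FROM VTRCheckList AS cl
        CROSS JOIN HUB_DETAILS AS hd
        LEFT JOIN VTRCheckListDetails AS d
               ON  d.CLid = cl.CLid
               AND d.Branch_ID = hd.Branch_ID
               AND CONVERT(date, d.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
               AND CONVERT(date, d.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY cl.CLName, hd.Branch_Name;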

    Read the article

  • File server share access intermittent/slow/machine unstable: Win2k8R2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380 G4. It has nothing set up on it other than file sharing, with all drivers/firmware/updates installed. The server is used as a dump for a bunch of test machines, so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: shares became very slow, intermittent, or could not be accessed at all. Logged on to the server you could use it like normal at first, but Windows would start freezing and eventually you had to hard-reboot it because nothing was responsive. After rebooting it would work fine for 20 minutes to 2 hours and then degrade into this broken state again.

    Some info after investigation: the HP RAID config utility shows the array (RAID 5) as functioning properly. The event log shows a bunch of DoS-style messages about the test machines, saying it has disconnected the connection. As far as I know (it's not part of my job), the test machines haven't changed the way they log information to this server, and their number hasn't increased. Nothing is infected; this server was scanned fully, and the test machines are re-imaged almost daily. Nothing in Performance Monitor shows as pegged at maximum (CPU/HD/network/RAM). I installed MS Network Monitor and it is showing a lot of traffic. The server was using one gigabit Ethernet connection; I connected the second one as well, with the same results. Forgot to add: one of the commonly written-to directories on the share has over 16k subdirectories in it, with a huge number of small files within those. Some of the OS instability was slow access to the drive holding this directory, though perfmon doesn't show much activity on the disk, so I'm not sure if this crowded directory is the cause.

    One important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine, so I swapped them out (I thought it was related to the hardware), and now I have the same issue. Also, the computer is stable if I turn off file sharing.

    So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do.) Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated.

    Update: another thought - the drive with the share is RAID 5 across six SCSI-320 300GB disks, near full capacity with about 100GB of 1TB left. Could the number of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past, but I'm no expert on RAID.
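
    One concrete thing to check for that 16k-subdirectory folder (a sketch, not a diagnosis): NTFS generates 8.3 short names for every new entry, and in folders with tens of thousands of entries that generation gets progressively slower and can stall small-file workloads. Assuming no legacy application on the server depends on short names:

        rem Show whether 8.3 short-name creation is currently enabled
        fsutil behavior query disable8dot3

        rem Disable it for newly created files (existing short names remain)
        fsutil behavior set disable8dot3 1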

    Read the article

  • How can I use one domain controller to manage 3 separate small firms

    - by Plamen Jordanov
    Currently we have one domain controller that serves 15 users and a couple of services (hMailServer, IIS, DNS, Active Directory). Now the owners of the firm have created two new firms whose computers and networks are my responsibility, and I wonder how exactly to join their users to the existing domain. Do you think it is a good idea to just include all computers and users from all firms under one domain, or is there another solution? Have any of you run into this kind of situation, and what did you do?

    ---Edit---
    Brent, Dan, thanks for the info, guys. For now I will follow Brent's advice until we get the new server, which we will virtualize; the old server will then become our second DC at a different location. We might even consider a pay-as-you-go VPS solution for DC redundancy.

    Read the article

  • WSUS moved - how do I reset the database and check the folders?

    - by Matthew Zielonka.co.uk
    I have a bit of a problem with a WSUS box. I backed up the WsusContent folder (about 180 GB) to a second machine and wiped the first machine, then realized I hadn't kept the database folder! I cannot find any articles or guidance on this. I have reinstalled WSUS, but it does not find the files that were previously downloaded (I guess because the database is gone). If I do a reset, I guess it will force WSUS to download everything it already has again, rather than check what is still to be downloaded? Sigh... I love/hate WSUS with a passion at the moment :) Thanks for any help.
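
    For what it's worth, a sketch of the usual recovery path (the path assumes a default install location): wsusutil reset walks every update in the rebuilt database and verifies its files against what is already in WsusContent, re-downloading only the pieces that are actually missing rather than the whole store.

        cd /d "C:\Program Files\Update Services\Tools"
        wsusutil.exe reset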

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 box with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; the 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2,504,188 pages) is 20,033,504 KB, which is roughly 20,000 MB, or about 20 GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because the cache keeps fattening, and timeouts occur once SQL Server can no longer bring pages into the buffer pool freely; costs go up. Am I correct in my understanding?

    I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously cutting down the I/O would make SQL Server use less RAM; or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.)? Currently, our only option is to restart SQL Server once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indexes here and there, is there something else I can do? Any advice beyond what I already know (not much yet) would be much appreciated. Leo.
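
    A complementary check that avoids running Profiler at all (a sketch using the standard SQL Server 2008 DMVs): the plan cache already keeps cumulative read counts per statement since the last restart, so the worst offenders can be pulled straight from it.

        SELECT TOP 10
               qs.total_logical_reads,
               qs.execution_count,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                   (CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;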

    Read the article

  • Problem communicating with one machine in my domain

    - by pmaroun
    Context: three Hyper-V guest images (DC, SQL, MOSS) on one internal network, in one domain (PJM.COM):

        DC:   192.168.0.192
        SQL:  192.168.0.153
        MOSS: 192.168.0.160

    I am having communication problems to and from the MOSS machine from the other two. I removed the MOSS machine from the domain and cannot rejoin it. When I ping the MOSS machine from DC, I get the following response:

        Pinging MOSS [192.168.0.152]
        Reply from 192.168.0.192: Destination host unreachable (4 times)

    When I ping the MOSS machine from SQL, I get the following response:

        Pinging MOSS [192.168.0.152]
        Reply from 192.168.0.153: Destination host unreachable (4 times)

    From the MOSS machine I can ping the other servers by name, but I cannot ping their FQDNs. When I ping from the DC and SQL machines, I get IPv4 addresses; when I ping from the MOSS machine, I get IPv6 addresses. I'm a developer and don't know what steps to take to resolve this issue. Please help!
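
    One detail above suggests a first check (an assumption on my part, not a confirmed cause): the name MOSS resolves to 192.168.0.152 even though the machine is listed as 192.168.0.160, which points at a stale DNS record. Worth trying from each box:

        rem What does the domain's DNS actually hold for the name?
        nslookup moss.pjm.com

        rem Clear locally cached answers on DC and SQL
        ipconfig /flushdns

        rem On MOSS itself, re-register its current address with DNS
        ipconfig /registerdns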

    Read the article

  • Windows server backup fails at 40%

    - by Abraham Borbujo
    I'm configuring Windows Server Backup as a full system backup. It starts fine, but when it is backing up the system drive (C:) it stops at 40% every time I try; it only backs up 7.28 GB of the total 18.19 GB. I tried changing the destination drive and also checking the C: filesystem for problems, but it seems to be OK and the problem stays the same. I get a message telling me that the backup completed with warnings, and the warning details say it didn't complete because of an input/output error on the source or destination. Thanks for your help.
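
    Two follow-up checks that usually put a name to the failing file or sector (a sketch; the event channel below is the one Server 2008 R2 uses for backup):

        rem Most recent detailed backup events, newest first
        wevtutil qe Microsoft-Windows-Backup /c:10 /rd:true /f:text

        rem Surface scan of the source volume for unreadable sectors
        chkdsk C: /r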

    Read the article

  • How could I determine which SMB client/session has a specific file open on a Server 2008R2 Windows file server?

    - by Rasmir
    What I need is a way to associate a client name or IP address with an open file, so that I can cleanly close the file for maintenance. NET SESSION doesn't show the names of open files, and NET FILE doesn't show the client which has the file open. I had hoped that I could cross-reference the data from these two commands, but that doesn't seem doable. Everything else I've seen provides the same data as these commands, with no apparent way to determine which client machine has the file open.
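
    A partial workaround, under one assumption (that each user connects from a single machine): openfiles' verbose output adds an 'Accessed By' column, which can then be matched against NET SESSION's user-to-computer listing.

        rem Open files, with the user who has each one open
        openfiles /query /fo csv /v

        rem Sessions, mapping each user to a client computer/IP
        net session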

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS replication and namespace group that we use to serve the company's files. This has been operating fine for some time now, and continues to do so; however, a situation arose yesterday afternoon that has left us stumped. Our namespace is presented as:

        \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]

    We had a problem this morning that meant a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found they had read/write access to it. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there where basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy.

    However, I would now like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the Security tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk, but after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
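
    A pointer worth verifying (the path below is the default layout, so treat it as an assumption): the root of a domain-based namespace is backed by a real NTFS folder on every namespace server, normally under C:\DFSRoots, and that is where its ACL lives. For example:

        rem Inspect the current ACL on the namespace root (repeat on every
        rem namespace server so they stay consistent)
        icacls C:\DFSRoots\public

        rem Example only: replace ordinary users' explicit rights on the
        rem root itself with read/execute
        icacls C:\DFSRoots\public /grant:r "DOMAIN\Domain Users:(RX)"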

    Read the article

  • How to change the computer name on a server configured by Puppet

    - by David Sulpy
    I am new to Puppet and I'm trying to get it to configure my EC2 instances after they're started from a CloudFormation template in AWS. The problem is that all the nodes started from the template have the same name (the name from the AMI that the new nodes derive from). I would love to find a way to have Puppet rename the nodes when they start up (although, as far as I know, a computer name change requires a reboot, which is a separate issue...). If you can point me to some documentation that can help me figure this out, or if you have any ideas, that would be great. My ultimate goal is to have each EC2 instance start with a unique name so that I can use New Relic server monitoring to report on the different servers.
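
    An untested sketch of one approach: an exec resource that calls WMIC to rename the machine, guarded so it only fires while the name is wrong. $desired_name is hypothetical here; it would come from EC2 user-data or a custom fact, and the rename still needs a reboot before it takes effect.

        # $desired_name is an assumed variable, e.g. derived from the
        # instance ID via a custom fact
        exec { 'rename_computer':
          command => "cmd.exe /c wmic computersystem where name='%COMPUTERNAME%' call rename name='${desired_name}'",
          unless  => "cmd.exe /c if \"%COMPUTERNAME%\"==\"${desired_name}\" (exit 0) else (exit 1)",
          path    => 'C:\\Windows\\System32;C:\\Windows\\System32\\wbem',
        }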

    Read the article

  • SQL Server Reporting authentication not working

    - by Keith
    I'm not exactly sure what went wrong, but our SQL Server Reporting Services authentication is no longer working correctly. When I try to load the site, it asks for a username and password, and mine doesn't work. I checked the service and it logs on as NT AUTHORITY\NetworkService. Since it uses NetworkService, I read on Microsoft's site that I need these settings in the RSReportServer.config file:

        <AuthenticationTypes>
            <RSWindowsNegotiate />
        </AuthenticationTypes>
        <EnableAuthPersistence>true</EnableAuthPersistence>

    That is what I have set, yet it still asks for the password. When I set the authentication to RSWindowsNTLM, it does log me in, but every time I click on a link it asks for a password again (the password prompt doesn't seem to prevent anything from loading). Does anyone know what is going on here? I'm not an expert in SQL Server, so I may be missing something.
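
    One configuration often suggested for this symptom (a sketch to test, not a guaranteed fix): with the service running as NetworkService and no HTTP SPN registered, Negotiate can fail where NTLM would succeed, so the documented fallback is to list both, letting Kerberos work where possible and NTLM elsewhere:

        <AuthenticationTypes>
            <RSWindowsNegotiate />
            <RSWindowsNTLM />
        </AuthenticationTypes>
        <EnableAuthPersistence>true</EnableAuthPersistence>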

    Read the article

  • Converting existing tables with binary data to use FILESTREAM

    - by user1098487
    I've got a few tables for which I want to use FILESTREAM storage. These tables already contain binary data and have rowguids; however, at the time they were created, the tables were not added to a FILESTREAM-enabled filegroup. What is the best way to have these tables use FILESTREAM at this point? Do I need to drop and recreate the tables and migrate the data, or is there an easier way? The database already has FILESTREAM enabled, and there are other tables which are using it.
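
    A sketch of the usual in-place migration (table and column names are examples): an existing varbinary(max) column can't simply be altered to FILESTREAM, so the data moves through a new column rather than through a full drop-and-recreate of the table.

        -- Add a FILESTREAM column alongside the existing binary column
        ALTER TABLE dbo.Documents
            ADD Content_fs varbinary(max) FILESTREAM NULL;

        -- Copy the data across (batch this for large tables)
        UPDATE dbo.Documents SET Content_fs = Content;

        -- Swap the columns
        ALTER TABLE dbo.Documents DROP COLUMN Content;
        EXEC sp_rename 'dbo.Documents.Content_fs', 'Content', 'COLUMN';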

    Read the article

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system on G:\ and my backup media on C:\. Since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like:

        robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...
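
    A sketch of the syntax that should work here (paths are examples): /XD takes its directories space-separated rather than comma-separated, and note that /CREATE writes only zero-length files, so /E is what actually copies the data with the tree intact.

        robocopy G:\ C:\backup /E /XD "G:\Windows" "G:\Program Files"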

    Read the article

  • Caching a domain user's login on a local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who use a VPN (Check Point) to connect to our domain. One of the users in the USA has a laptop which he has never logged on to the domain from before, so it has no cached login details for him. Does anyone know how to cache his login information on this laptop? I have tried netdom trust to add this user to the laptop, but I was not able to. At the moment the user logs in with a local administrator account and then uses the VPN to connect to our domain, but when it comes to accessing files on the domain he gets access denied, and when he tries to log in as himself he gets:

        There are currently no logon servers available to service the logon request

    Does anyone know how to add the user?
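
    For background (a sketch; this is standard Windows logon-caching behaviour): Windows only populates cached credentials when the user completes an interactive logon while a DC is reachable, for example over a VPN session established before logon, or from inside the office. How many logons are kept is controlled by one registry value (default 10):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount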

    Read the article

  • WSUS - Auto-approve only "Needed" updates

    - by Jonathan Rioux
    I've looked through all the settings in the Automatic Approval menu, but I could not find anything about automatically approving only the needed updates. If I check, for instance, auto-approve for "Definition Updates", it will approve all definition updates, whether or not any of my workstations need them. I don't want my WSUS server to download and store updates that none of my workstations need. Also, we are a lazy SMB and don't want to waste time manually approving updates. Is this even possible?
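
    An untested sketch of one way to script it on the WSUS server itself, via the WSUS administration API: pull only the updates that at least one client reports as not installed, and approve those. The target group and install action names are illustrative.

        [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.UpdateServices.Administration')
        $wsus  = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
        $group = $wsus.GetComputerTargetGroups() | Where-Object { $_.Name -eq 'All Computers' }

        # Scope the query to updates some client actually needs
        $scope = New-Object Microsoft.UpdateServices.Administration.UpdateScope
        $scope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::NotInstalled

        foreach ($u in $wsus.GetUpdates($scope)) {
            if (-not $u.IsApproved) {
                $u.Approve([Microsoft.UpdateServices.Administration.UpdateApprovalAction]::Install, $group)
            }
        }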

    Read the article

  • How to make sure your server NIC performance is at its best on Windows?

    - by Bobb
    I realised that I have been following some obscure paper on configuring NICs on Windows for too long; it may be outdated given the hardware released in the past couple of years and W2008R2. I read a bit about offloading and RSS settings on Windows and realised that it is very circumstantial: no one can really say "enable that and disable this", etc. So what I really want, for my next server, is to set up a testing environment and measure how my particular application behaves with different settings. The target is primarily TCP latency, and please note I am talking about latency inside the box. Are there precision tools for Windows that can measure latency down to microseconds? P.S. I know this is not an easy question; Windows time drift is an awful problem for any precision test, but I am sure I am not the first person to need this... Please share your experience.
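
    One tool worth a look (a sketch; addresses and ports are examples): Sysinternals psping measures TCP round-trip latency with sub-millisecond resolution and prints a histogram, and it works against loopback for the inside-the-box case.

        rem On the receiving side (can be the same box, via loopback)
        psping -s 127.0.0.1:5000

        rem On the sending side: 10000 small TCP round-trips, 100-bucket histogram
        psping -l 16 -n 10000 -h 100 127.0.0.1:5000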

    Read the article

  • Adding internal DNS server entries to the hosts file

    - by Param
    I have pointed one of my desktops at a public DNS server's IP address (please see the network configuration screenshot), and after that I added both of my domain controllers' IP addresses to the hosts file, and it is working fine (please see the screenshot below for reference). Can you please advise what problems I could face if I keep the configuration this way? I am wondering whether this setting can create a problem, because the computer will be able to reach corp.abc.com easily with the help of the hosts file.
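
    For reference, a sketch of what such a hosts file covers and what it cannot (addresses are made up): hosts entries only answer simple name-to-address lookups, so anything relying on DNS SRV records, which is how domain members locate DCs for Kerberos, LDAP and Group Policy, still needs the internal DNS servers.

        # Example entries only - static A-record equivalents
        10.0.0.10    corp.abc.com
        10.0.0.11    dc1.corp.abc.com
        10.0.0.12    dc2.corp.abc.com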

    Read the article

  • Automate new AD user's home folder creation and permission setup

    - by vn.
    I know that if we set up a base folder or a profile path in the Profile tab of an AD user, we can copy that user and the folder creation and permission setup will be automated. My problem is that not all my users have a roaming profile, and the home folder linking is done through GPO. When I copy from these users, the home folder isn't created automatically and I have to create it manually and change the permissions and ownership on that folder on the file server. What should I do? A script may be nice, but it would have to run every time a new user is created, and I don't think we can tie a script to AD user creation? I'd like to avoid any manual steps and keep my GPO the way it is. Using a W2008R2 DC with Windows 7 client boxes. Thanks.
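
    An untested sketch of the scheduled-task approach (the share path, domain name and rights are examples, and it assumes the AD PowerShell module from RSAT is available): run it periodically on the file server to create any missing home folder for recently created users and grant each user Modify on their own folder.

        Import-Module ActiveDirectory

        $base = '\\fileserver\home$'        # example share
        Get-ADUser -Filter * -Properties whenCreated |
            Where-Object { $_.whenCreated -gt (Get-Date).AddDays(-1) } |
            ForEach-Object {
                $path = Join-Path $base $_.SamAccountName
                if (-not (Test-Path $path)) {
                    New-Item -ItemType Directory -Path $path | Out-Null
                    # Grant the user Modify, inherited by files and subfolders
                    icacls $path /grant "DOMAIN\$($_.SamAccountName):(OI)(CI)M" | Out-Null
                }
            }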

    Read the article

  • Hyper-V VMs cannot access host resources, and vice versa

    - by Agent
    I have several Hyper-V VMs running on this Win2008 R2 server box, and up until a reboot of the host server, all the VMs were able to access shared folders on the host. Now they can't even ping the host server. From what I've seen, I need to set up an internal-only network through Virtual Network Manager in Hyper-V. I set this up, then tried to enable the Microsoft Virtual Network Switch Protocol option on this internal-only NIC, but I get pop-ups saying:

        Your current selection will also disable the following features:
        Microsoft virtual network switch protocol

    Which is absurd, considering that protocol is exactly what I'm ticking the checkbox to enable! As of now, on the host, I have two NICs: the physical NIC, which does have the MVNS protocol enabled, and a virtual network adapter created through Hyper-V Virtual Network Manager as an External network. Trying to enable MVNS on that NIC also produces the error above. I've tried enabling Client for Microsoft Networks on the physical NIC for IPv6, but every time I do that, all the VMs lose Internet connectivity and I cannot RDP into them. Anything else I can try?

    Read the article

  • PowerShell vs. GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/MSI.

    Here's the situation: because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on very short notice and a tight delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate', 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then went on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn the Microsoft stack anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practice for deploying, configuring and updating Microsoft environments uses elements of GPOs and ADMX templates, along with maybe some third-party tools like PolicyPak. Are there solid reasons I haven't found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background. Thoughts? Weblinks? Thanks!

    Read the article

  • Easy way to reload multiple applications under a single IIS Website after AppPool Recycle?

    - by MadBurn
    I'm not sure where to begin, or even if my thinking is in the right direction; hopefully someone here can tell me what to do, or at least point me in a direction. I work on an intranet website that contains multiple MVC3 and ColdFusion applications, and I have set the app pool to recycle every morning at 2:00 AM. I would now like to create a scheduled task to reload every application contained under that IIS website, so that when the first user comes in in the morning they don't have to wait 30 seconds to 2 minutes for their application to load into the app pool. Is there an easy way to do this? As I see it, my only options are writing a script that loads each website manually, or writing a program that finds every application and loads them. If those are my only options, is there a .NET library I can tap into that would let me easily find the MVC3 applications under IIS?
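
    An untested sketch along the lines of the second option, using the Microsoft.Web.Administration API that ships with IIS 7+: enumerate every application under the site and request each root URL so it spins up before users arrive. Site and host names are examples; schedule it a few minutes after the 2:00 AM recycle.

        [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.Web.Administration')
        $iis  = New-Object Microsoft.Web.Administration.ServerManager
        $site = $iis.Sites['Intranet']                  # example site name

        foreach ($app in $site.Applications) {
            $url = 'http://intranet.local' + $app.Path  # example host name
            try   { (New-Object Net.WebClient).DownloadString($url) | Out-Null }
            catch { Write-Warning "Warm-up failed for ${url}: $_" }
        }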

    Read the article
