Search Results

Search found 93962 results on 3759 pages for 'server configuration'.


  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors:

        Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
        Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume

    So I backed up all the registry entries containing {f612849e-7125-11e0-8772-806e6f6e6963} and deleted them, based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS backup; even that errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured that if MS backup can't back something up, any other program is likely to have problems too. Also, now that I realized the servers had FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS", but on one of the computers it is given a drive letter, and on the other not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. The question is this: can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is at least 95% "it's safe", but they are production servers, so I wanted to get some second opinions.
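
    Windows has a built-in, non-destructive FAT32-to-NTFS conversion utility, so a reformat shouldn't be necessary; a minimal sketch, assuming the unlettered "OS" partition is first given a letter (D: here is hypothetical) via Disk Management or diskpart:

        rem run from an elevated prompt; conversion is one-way
        convert D: /FS:NTFS

    Note that convert preserves the data but cannot be undone short of reformatting, so a verified backup first is still prudent on production boxes.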

  • Trying to back up system state on Server 2003 SP2, getting "Faulting application vssvc.exe - system state backup failed" in application log

    - by IT_Fixr
    Trying to back up the system state on Windows Server 2003 (SP2), and getting "Faulting application vssvc.exe - system state backup failed" in the application log. Volume shadow copy creation: Attempt 1:

        "MSDEWriter" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Event Log Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Registry Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "COM+ REGDB Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Removable Storage Manager" has reported an error 0x0. This is part of System State. The backup cannot continue.
        Error returned while creating the volume shadow copy: 800423f2
        Aborting Backup.
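
    For what it's worth, 0x800423f2 generally maps to VSS_E_WRITERERROR_TIMEOUT, i.e. the writers timed out rather than failed outright. A first diagnostic pass is usually to check writer state and bounce the Volume Shadow Copy service (standard commands on Server 2003):

        vssadmin list writers
        net stop vss
        net start vss

    If MSDEWriter is the one consistently unhappy, the MSDE/SQL-related services are worth a restart too.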

  • Does Visual SourceSafe take .cvsignore as configuration?

    - by superuser
    "An easy way to tell CVS to ignore these directories is to create a file named .cvsignore (note the leading period) in your top-level source directory." Has anyone verified this with VSS? Plus, does VSS have command lines similar to these (a rough mapping is sketched below)?

    * To refresh the state of your source code to that stored in the source repository, go to your project source directory and execute cvs update -dP.
    * When you create a new subdirectory in the source code hierarchy, register it in CVS with a command like cvs add {subdirname}.
    * When you first create a new source code file, navigate to the directory that contains it, and register the new file with a command like cvs add {filename}.
    * If you no longer need a particular source code file, navigate to the containing directory and remove the file. Then, deregister it in CVS with a command like cvs remove {filename}.
    * While you are creating, modifying, and deleting source files, your changes are not yet reflected in the server repository. To save your changes in their current state, go to the project source directory and execute cvs commit. You will be asked to write a brief description of the changes you have just completed, which will be stored with the new version of any updated source file.
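
    For what it's worth, VSS does not read .cvsignore; ignoring files has to be handled on the VSS side. It does ship a command-line client, ss.exe, and a rough mapping of the CVS commands above looks like this ($/MyProject is a placeholder project path):

        ss Get $/MyProject -R      (refresh the working folder; roughly cvs update)
        ss Add {filename}          (register a new file; cvs add)
        ss Delete {filename}       (remove a file; cvs remove)
        ss Checkout {filename}     (VSS is lock-modify-unlock, so edits need a checkout first)
        ss Checkin {filename}      (store changes; the nearest thing to cvs commit, per file)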

  • Accessing resources on localhost using domain credentials

    - by jas
    I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008 R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I am being as descriptive as possible in hopes that I'm making sense.

    The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so buildserver.mydomain.com points to the build service box, which contains all of the services listed at the top of this post, and specific services are defined/located by port number. This is working great on every machine inside and out, except from the build server itself. All services must be able to work using external URLs.

    If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It will prompt three times regardless of which credentials are used (I have rights as a domain admin), and after the third prompt it directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works; i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication. Meaning, from the box itself, SharePoint Central Admin, SharePoint WebApp, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but succeed if called using the internal URL.

    So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and /reportserver respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for a team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings and set up Report Server to listen for external URLs; the external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
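
    This symptom (exactly three prompts, then a blank page, but only when the FQDN is used from the server itself) matches Windows' loopback check, which rejects NTLM authentication to a local site addressed by a name other than the machine name. The usual workaround is to register the external names rather than disable the check wholesale; a sketch assuming the host names above (a registry change, so restart IIS or reboot afterwards):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "buildserver.mydomain.com" /f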

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' and running Win2008R2 and they are setup as web servers, I have a number of Umbraco CMS installs on them and they have been running fine for over a year. On Saturday on BOTH servers, a very strange thing happened - As soon as I login to the CMS/Umbraco admin I am logged out with about 5 seconds? It's as if my session expires the moment I login? I have checked everything I can as I'm not really a server admin, and everything seems to be exactly as it was last week? Like I say this has happened EXACTLY the same time (Saturday) on TWO different servers? I'm just looking for ideas of what I should be looking for? Also the front end of the sites seem fine... Its only the backend when I login. I have gone to 1&1 about this, and as usual they have washed their hands saying its nothing to do with them - When I am certain it is. How can this happen on two different servers, and affect the same sites in exactly the same way? Any help, tips, things to try would be greatly appreciated.

  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through their directions. After telnetting to localhost 4949, I can see a list of plugins and see a node at archstl.archstl.org, but can't fetch anything. The output is as follows:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this:

        If the plugin does run with munin-run but not through telnet, you probably have a PATH problem. Tip: Set env.PATH for the plugin in the plugin's environment file.
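
    On most installs, the environment file that wiki tip refers to lives under /etc/munin/plugin-conf.d/; a minimal sketch (the PATH value is illustrative):

        # /etc/munin/plugin-conf.d/munin-node
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    followed by a restart of munin-node so the node picks the setting up.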

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
    I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying, "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram and get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them; but I can't, because it's a binary definition. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were, or is this information lost, at the mercy of an SSMS-proprietary database diagram format and viewer that refuses to cooperate with me?
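
    The layout itself is a binary blob, but the table list is easy to diff against; a sketch in T-SQL (run in the affected database):

        -- which diagrams exist, and how big their definitions are
        SELECT name, diagram_id, DATALENGTH(definition) AS definition_bytes
        FROM dbo.sysdiagrams;

        -- snapshot the current table list; diff it against the same query run
        -- on a restored backup to spot the dropped or renamed tables
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_TYPE = 'BASE TABLE'
        ORDER BY TABLE_NAME;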

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem; it came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen; so far, so good. I logged in. The system got as far as "Applying Group Policy", then spent almost an hour applying drive mappings. It finally finished that, and has now spent 30 minutes waiting on the Event Notification Service. I still haven't been able to log in. The Remote Desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
        A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?
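
    Since the box is still writing to the Security log remotely, the service control manager is probably reachable too; service state can be polled, and a remote restart issued, from another machine (the hostname is a placeholder; remote admin rights assumed):

        sc \\servername query TermService
        sc \\servername query wuauserv
        shutdown /m \\servername /r /t 0 /f

    A forced remote restart is no gentler than a power cycle if updates are still finalizing, so giving the hung services time to clear first is the safer order.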

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"
        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.msysgit.1.
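
    The failing URL points at files.aspx, the Remote Web Access file browser, which is not a git endpoint; git appends /info/refs to whatever URL it is given, and the file browser has no way to answer that request. One alternative that stays close to the current setup, sketched under the assumption the share is reachable (e.g. over a VPN into the home network): clone via the UNC path, which git on Windows accepts with forward slashes:

        git clone //mydevicename/myreposfolder/myrepo.git D:\repo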

  • Update transaction in SQL Server 2008 R2 from ASP.Net not working

    - by Amarus
    Hello! Even though I've been a stalker here for ages, this is the first post I'm making. Hopefully it won't end here, and more optimistically, future posts might actually be me trying to give a hand to someone else; I do owe this community that much and more. Now, what I'm trying to do is simple, and most probably the reason it's not working is my own stupidity. However, I'm stumped. I'm working on an ASP.Net website that interacts with a SQL Server 2008 R2 database. So far everything has been going okay, but updating a row (or more) just won't work. I even tried copying and pasting code from this site and others, but it's always the same thing. In short: no exceptions or errors are shown when the update command executes (it even gives the correct count of affected rows), but no changes are actually made in the database. Here's a simplified version of my code (the original had more commands with tons of parameters each, but even like this it doesn't work):

        protected void btSubmit_Click(object sender, EventArgs e)
        {
            using (SqlConnection connection = new SqlConnection(
                System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString))
            {
                string commandString = "UPDATE [impoundLotAlpha].[dbo].[Vehicle]" +
                    " SET [VehicleMake] = @VehicleMake" +
                    " WHERE [ComplaintID] = @ComplaintID";
                using (SqlCommand command = new SqlCommand(commandString, connection))
                {
                    SqlTransaction transaction = null;
                    try
                    {
                        command.Connection.Open();
                        transaction = connection.BeginTransaction(IsolationLevel.Serializable);
                        command.Transaction = transaction;

                        SqlParameter complaintID = new SqlParameter("@complaintID", SqlDbType.Int);
                        complaintID.Value = HttpContext.Current.Request.QueryString["complaintID"];
                        command.Parameters.Add(complaintID);

                        SqlParameter VehicleMake = new SqlParameter("@VehicleMake", SqlDbType.VarChar, 20);
                        VehicleMake.Value = tbVehicleMake.Text;
                        command.Parameters.Add(VehicleMake);

                        command.ExecuteNonQuery();
                        transaction.Commit();
                    }
                    catch
                    {
                        transaction.Rollback();
                        throw;
                    }
                    finally
                    {
                        connection.Close();
                    }
                }
            }
        }

    I've tried this with the SqlTransaction stuff and without it, and nothing changes. Also, since I'm doing multiple updates at once, I want them to act as a single transaction. I've found that this can be done either like this or by using the classes in the System.Transactions namespace (CommittableTransaction, TransactionScope...). I tried everything I could find but didn't get any different results. The connection string in web.config is as follows:

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="Data Source=localhost;Initial Catalog=ImpoundLotAlpha;Integrated Security=True"
               providerName="System.Data.SqlClient"/>
        </connectionStrings>

    So, tl;dr version: (1) What is the mistake I made with that record update attempt? (Figured it out; check below if you're having a similar issue.) (2) What is the best method to gather multiple update commands into a single transaction? Thanks in advance for any kind of help and/or suggestions!

    Edit: It seems I was lacking some sleep yesterday, because this time it only took me 5 minutes to figure out my mistake. Apparently the update was working properly, but I failed to notice that the textbox values were being overwritten in Page_Load. For some reason I had this part commented out:

        if (IsPostBack)
            return;

    The second part of the question still stands, but should I post this as an answer to my own question or keep it like this?
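
    For the second question, a common pattern is System.Transactions.TransactionScope: every connection opened inside the scope is enlisted in one ambient transaction, and nothing commits unless Complete() is called. A minimal sketch against the same connection string (parameter values are placeholders; requires a reference to the System.Transactions assembly):

        using System.Configuration;
        using System.Data.SqlClient;
        using System.Transactions;

        private void UpdateVehicleRecords()
        {
            using (TransactionScope scope = new TransactionScope())
            using (SqlConnection connection = new SqlConnection(
                ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString))
            {
                connection.Open(); // enlists in the ambient transaction

                using (SqlCommand update = new SqlCommand(
                    "UPDATE [dbo].[Vehicle] SET [VehicleMake] = @VehicleMake WHERE [ComplaintID] = @ComplaintID",
                    connection))
                {
                    update.Parameters.AddWithValue("@VehicleMake", "Ford"); // placeholder
                    update.Parameters.AddWithValue("@ComplaintID", 42);     // placeholder
                    update.ExecuteNonQuery();
                }

                // ...additional commands here run in the same transaction...

                scope.Complete(); // omit this and everything rolls back on Dispose
            }
        }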

  • SQL Error (2003): Can't connect to MySQL server on 'X.X.X.X.' (10051) - What does this error mean?

    - by BeeS
    I get the following error when I try to connect via HeidiSQL to my database server (local network):

        SQL Error (2003): Can't connect to MySQL server on 'X.X.X.X.' (10051)

    An SSH connection via PuTTY works fine. I checked the my.cnf file on the server (Ubuntu), and settings like bind_address are correct. Is it possible that my wireless modem (SpeedTouch) is causing this trouble (because my provider changed the download speed)? Thank you very much for your help!
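
    Winsock error 10051 is WSAENETUNREACH ("network is unreachable"), so the TCP connection fails before MySQL ever sees it. Since SSH works, the quickest checks run over that session on the server (standard commands; 3306 assumed as the MySQL port):

        netstat -ltn | grep 3306       # bound to 0.0.0.0, or only 127.0.0.1?
        sudo iptables -L -n            # anything filtering 3306?
        mysql -h 127.0.0.1 -u root -p  # does a local TCP connection work at all?

    If mysqld shows only 127.0.0.1:3306, the effective bind_address is still local despite what my.cnf appears to say (e.g. a second config file overriding it).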

  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy:

        server {
            server_name svn.ourdomain.tld;
            location / {
                proxy_pass http://localhost:8080;
            }
        }

    Apache is set up as follows:

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthType Basic
            AuthName "Authentication Required"
            AuthUserFile /var/svn/.auth
            Require valid-user
        </Location>

    ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself; however, whenever we do so, it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried:

        svn co file:///path/to/repo
        svn co svn://localhost/repo
        svn co svn://svn.ourdomain.tld/repo
        svn co svn+ssh://localhost/repo
        svn co svn+ssh://svn.ourdomain.tld/repo
        svn co http://localhost/repo
        svn co http://svn.ourdomain.tld/repo

    We also tried bypassing nginx, and get the same error:

        svn co http://localhost:8080/repo
        svn co http://svn.ourdomain.tld:8080/repo

    Checking out from a different machine works as expected until we attempt to check out on the server; after that, it refuses with the same 301 error. What is more confusing is that this repository server also hosts our Hudson CI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing and then re-creating the repo using svnadmin doesn't reset the error; the repo is still unavailable even though it's "new"! Restarting apache and subversion (svnserve) has no effect on this, or on the original error.

    Version information:

        OS: 64-bit CentOS 4.2, 2.6.27 kernel
        svn client: 1.4.2 (same for both server and remote clients)
        svn server: 1.4.2
        httpd: 2.2.3

    UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error:

        [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo}
        [remote-client]# svn checkout http://svn.ourdomain.tld/repo
        [remote-client]# svn add file; svn ci -m ''
        [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject
        [remote-client]# svn update   (fails with the 301 error)

    I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts, it still breaks. We thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?
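
    As a side note, if the reason for the local checkout/export is just to read files out of the repository on the server, svnlook reads the repository directly, with no client, working copy, or HTTP round trip involved (paths hypothetical):

        svnlook tree /var/svn/repo
        svnlook cat /var/svn/repo trunk/somefile > /var/www/ourproject/somefile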

  • Can't get PXE boot working on WDS + Linux DHCP server

    - by askvictor
    Hi, I'm trying to get WDS to PXE boot some laptops. The university is running a (Linux) DHCP server for the entire network; we can't run our own. We can (and have) set options in that DHCP server to point to a TFTP server and file (pxeboot.n12 on the WDS server we run). The client seems to get the pxeboot.n12 file (judging by the server logs), but then comes back with 'TFTP download failed. Press any key to reboot'. I've tried running a third-party TFTP service on the WDS server, and found that the client pulls the first stage (pxeboot.n12) correctly, but then looks for the second stage (bootmgr.exe) in the root of the TFTP folder, whereas WDS places it in \boot\x86\ (or \boot\x64). Similarly, it looks for a file called BCD, which doesn't exist (rather, there are a few files scattered around with the extension .bcd). I'm confused as to whether the WDS TFTP server does some magic returning certain files, or if I haven't configured it correctly. The server is 2008 R2. Cheers, Victor

  • Why can I not get a WDS-originated PXE boot to progress past the first file download?

    - by Jeff Shattock
    I'm trying to work out an automated Windows install process, and thought I'd give WDS a look. After some promising initial progress, I seem to have hit a wall. I imported the boot and install WIMs, and created the capture WIM successfully. However, whenever I try to PXE boot the reference machine against the WDS server, it kinda craps out. It finds the server and downloads WDSNBP.COM successfully, and then gives the message "TFTP download failed." According to Wireshark, the only communication between the WDS box and the client box is the successful TFTP request and download of boot\x86\WDSNBP.COM. No further requests are sent. The WDS log on the server shows the same thing: one successful download and no more activity. I've tried every combination of the following, with exactly zero change in behaviour:

    * Win Server 2008R2 vs 2012 vs 2012R2
    * WDS virtualized on KVM, ESXi, VirtualBox, VMware Workstation
    * Client virtualized on KVM, ESXi, VirtualBox, VMware Workstation
    * Every network adaptor type offered by the virtualization platforms
    * "Actual" network vs isolated, virtual network
    * MS DHCP server vs Linux isc-dhcp-server
    * Joined to a domain vs stand-alone

    I tried changing the boot filename in DHCP to pxeboot.com instead, and it has no problem downloading that file, but it then crabs about Boot\BCD being corrupted. Also, with 2012 it doesn't appear that WDSNBP.COM does the architecture detection, or at least doesn't report that it did; 2008 reports that it found x64, and then errors. I find myself out of things to check, and I don't see anything immediately wrong. Where do I go from here? The WDS server is at 192.168.1.50, DHCP/DNS at 192.168.1.7. Console of the client computer after the boot:

        MAC: 52:54:00:28:94:0E
        UUID: blah blah
        Searching for server (DHCP).....
        Me: 192.168.1.155, DHCP: 192.168.1.7, Gateway 192.168.1.1
        Loading 192.168.1.50:boot\x86\wdsnbp.com ...(PXE).................done
        Downloaded WDSNCP...
        TFPT download failed

    Interesting parts of /etc/dhcp/dhcpd.conf on the Linux DHCP server:

        allow booting;
        allow bootp;
        option option-60 code 60 = string;
        option option-66 code 66 = string;
        option option-67 code 67 = string;
        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.110 192.168.1.253;
            next-server 192.168.1.50;
            option tftp-server-name "192.168.1.50";
            option option-60 "PXEClient";
            filename "boot\\x86\\wdsnbp.com";
            option bootfile-name "boot\\x86\\wdsnbp.com";
        }
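
    One hedged observation on the config above: Microsoft's guidance is to set DHCP option 60 (PXEClient) only when DHCP and WDS run on the same host. With DHCP at 192.168.1.7 and WDS at 192.168.1.50, that option tells the client to come back to the DHCP box for its boot parameters, which would fit the observed stall right after WDSNBP.COM loads. A variant worth trying (same subnet block, option 60 dropped):

        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.110 192.168.1.253;
            next-server 192.168.1.50;
            filename "boot\\x86\\wdsnbp.com";
        }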

  • VPN/tunneling to hide the 'real' server's IP behind my proxy server while still showing the client IP on the 'real' server side

    - by mickula
    I would like to hide my 'main server' behind a load balancer; call it the 'proxy server'. I use some closed-source software on the 'main server' that needs the client IP address to operate well. When I set up a VPN connection, that software displays the IP address of my 'proxy server'. Is there any option to set up tunneling or a VPN so as to: (1) not reveal the IP of the 'main server', and (2) show the IP of the 'client' to the 'application' on the 'main server'? I will be grateful for all your replies and ideas.
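
    If the proxy box is Linux, one way to get both properties is plain DNAT: the proxy forwards packets to the main server without rewriting the source address, so the application sees the real client IP, while clients only ever see the proxy. A sketch with iptables (addresses and the OpenVPN port 1194 are assumptions; the main server must route its replies back through the proxy, e.g. by using it as its default gateway):

        # on the proxy
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -d <proxy-ip> -p udp --dport 1194 \
            -j DNAT --to-destination <main-server-ip>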

  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:

        server:~$ sudo /usr/sbin/rpcinfo -p localhost
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100024    1   udp    910  status
            100024    1   tcp    913  status
            100021    1   udp  53391  nlockmgr
            100021    3   udp  53391  nlockmgr
            100021    4   udp  53391  nlockmgr
            100021    1   tcp  32774  nlockmgr
            100021    3   tcp  32774  nlockmgr
            100021    4   tcp  32774  nlockmgr
            100007    2   udp    830  ypbind
            100007    1   udp    830  ypbind
            100007    2   tcp    833  ypbind
            100007    1   tcp    833  ypbind
            100011    1   udp    999  rquotad
            100011    2   udp    999  rquotad
            100011    1   tcp   1002  rquotad
            100011    2   tcp   1002  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100005    1   udp   1013  mountd
            100005    1   tcp   1016  mountd
            100005    2   udp   1013  mountd
            100005    2   tcp   1016  mountd
            100005    3   udp   1013  mountd
            100005    3   tcp   1016  mountd

        server$ cat /etc/exports
        /dir *.my.domain.com(ro)

        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0

    All seems well, but when I try to mount, I see the following:

        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir

    And on the server I see:

        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir)

    What am I missing here? How should I be debugging this?
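
    An "authenticated mount request" on the server paired with "access denied" on the client usually means mountd accepted the connection but the export list did not match the client's resolved name. Two quick server-side checks, plus a reload (standard nfs-utils commands):

        showmount -e localhost        # what is exported, and to which hosts?
        exportfs -ra                  # re-export after any /etc/exports change
        host client.my.domain.com     # forward lookup; compare with the client IP's reverse DNS

    Wildcards like *.my.domain.com are matched against the reverse lookup of the client's IP, so a PTR record that doesn't resolve back into my.domain.com would produce exactly this mismatch.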

  • Exchange Server 2010: move mailboxes from recovered and mounted EDB to users' mailboxes

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise"; the Exchange 2010 server named "2008Enterprise" crashed. I created a new Exchange 2010 server named "Providence". I ran this command on Providence:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran:

        eseutil /p E00

    executed from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147. I then mounted JBCMail with the mount command (note: I do not have my full typed command). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence. I can see the crashed Exchange server named 2008Exchange; in the EMC, its Copy Status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? How do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server:

        Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail
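
    With Restore-Mailbox, -Identity must point at the live target mailbox, while -RecoveryMailbox names the source inside the recovery database; if the restore keeps heading for apex, the identity used likely still resolves to the old mailbox there. A sketch, assuming the user is first given an active mailbox on Providence (names taken from the question):

        Restore-Mailbox -Identity 'Jason Young' -RecoveryDatabase JBCMail -RecoveryMailbox 'Jason Young' -TargetFolder 'Recovered Items'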

  • Can you run Snow Leopard Server on a VM?

    - by Crash893
    I'm thinking of buying a Mac mini server for my email needs (a small office server for 999 with all the features I want built in). I wanted to give it a test drive before I commit, so I was thinking "hey, this sounds like the perfect job for a VM". However, I haven't been able to find any examples of running Snow Leopard Server in a VM. Does anyone know if it's possible, and/or whether there is a premade VM package to test it out with?

  • Flush all messages in mailbox from Zimbra to another server

    - by Giovanni Lovato
    I have a primary Dovecot + Postfix mail server and a secondary Zimbra 8.0.1 server. The primary server went down for a week, and all the incoming messages were delivered to the secondary server, which has a "catch-all" account configured. Now that the primary server is back online, I'd like to flush all messages in the "catch-all" mailbox to the primary server for appropriate delivery to the corresponding user mailboxes (and their own rules). Is that possible?
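
    Since the goal is for each message to pass through the primary's normal delivery (so per-user rules still fire), one approach is to pull the catch-all over IMAP and re-inject via SMTP rather than copy mailboxes directly. A sketch using fetchmail (hostnames and the recipient header are assumptions; this only works if the secondary recorded each message's original recipient in a header such as X-Envelope-To):

        # ~/.fetchmailrc  (chmod 600)
        poll zimbra.example.com proto imap envelope "X-Envelope-To"
            user "catchall" pass "secret" smtphost primary.example.com no keep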

  • Cannot connect to server via SSH

    - by Rayne
    I'm running RHEL 6.0, and I accidentally moved /bin, /boot, /cgroup, console.txt, /data, /dev, /etc to another folder. I think I managed to move these folders back, but now I'm having trouble connecting to the server using SSH, though I am able to access it via VNC. When I try to connect using a terminal from another server, I get the error:

        ssh_exchange_identification: Connection closed by remote host

    I'm currently still connected via SSH to the server (I haven't closed the window yet) and am still able to access it normally. But if I try to open a new SSH terminal from my current session, I see:

        /bin/bash: Permission denied

    If I try to open a new SSH File Transfer window from my current session, I get the error:

        File transfer server could not be started or it exited unexpectedly. Exit value 0 was returned. Most likely the sftp-server is not in the path of the user on the server-side

    I checked, and I have

        Subsystem sftp /usr/libexec/openssh/sftp-server

    which is the same path as the output of locate sftp-server. Also, when I try to restart sshd, I get the error:

        Couldn't open /dev/null: Permission denied

    But my /dev/null has the permissions crw-rw-rw- for root:root. How can I resolve this?

    ETA: Thanks for all your help! I was able to start ssh by running the application directly (/usr/sbin/sshd), even though the status of the openssh-daemon is still "stopped".
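
    On RHEL, mv preserves SELinux contexts, so directories moved out of / and back typically come home with the wrong labels, which yields exactly these Permission denied errors while the visible rwx permissions look fine. A hedged recovery sketch:

        ls -Z /bin /dev/null             # inspect current SELinux contexts
        restorecon -Rv /bin /dev /etc    # relabel the moved trees in place
        touch /.autorelabel; reboot      # or: full filesystem relabel at next boot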

  • Steps after installing vCenter Server?

    - by goober
    I'm working with:

    * Two new ESX servers that I'm configuring
    * A new Server 2008 R2 machine that I'm using for vCenter

    I took the following steps:

    * Installed the hypervisor on the 2 ESX machines
    * Checked their setup/connectivity (appears to be fine; can ping, etc.)
    * Installed vCenter Server on the Win2k8R2 box. This included the install of a SQL Express database (we're a small shop). FYI, I changed some of the ports (443 -> 8443, 80 -> 8080, etc.)
    * Installed vCenter Web Client Server on the Win2k8R2 box

    Problems:

    * My vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one during the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))". I have also tried local machine admin credentials, including the format machinename\localuseracct, and my domain credentials, which are an admin for that box. I have checked that the service is running, and I also tried connecting via a vSphere Client installed locally on the server; it translates "localhost" to the correct name but gives the same error.
    * I cannot register the vCenter Server from the vCenter Web Client Server. I'm not sure this is necessary, as they're both on the same machine, but it seems like the logical next step, and I receive a "failed to connect" error here as well.

    FYI, both the vCenter Server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?
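
    Since 443 was remapped, the clients won't find vCenter unless the port is stated explicitly; the vSphere Client accepts host:port in its server field. It's worth first confirming that something is listening where expected (8443, per the install choices above):

        netstat -ano | findstr 8443

    and then connecting to machinename:8443 (and registering vCenter with the Web Client as machinename:8443 as well) rather than the bare host name.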

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over 4 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is the most likely cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try and identify where my bottleneck might be?

    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2. So that's a 300 series Atom processor with the Intel® 945GC Express Chipset.

    [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server, I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
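
    A minimal sketch for benchmarking each leg in isolation (iperf must be installed on both ends; device and file paths are examples):

        # network: run the server side on the Ubuntu box, then connect from Windows
        iperf -s                        # on the server
        iperf -c mediaserver -t 30      # on the desktop, 30-second test

        # disk, on the server: raw array read, then a buffered 1 GB write
        sudo hdparm -t /dev/md0
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync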
