Search Results

Search found 9715 results on 389 pages for 'servers'.


  • IPv6 Addresses causing Exchange Relay whitelists to fail

    - by makerofthings7
    Several of our new Exchange servers are failing to relay messages because they are communicating over IPv6 and not matching any receive connector I previously set up. I'm not sure how we are using IPv6, since we only have an IPv4 network and we are routing across subnets. I discovered this by issuing a HELO from the source to the server that is confused by my IPv6 address; I saw the IPv6 address and the custom banner I gave this receive connector (connectors with more permissions have a different HELO):

        220 HUB01 client helo asdf
        250 HUB01.nfp.com Hello [fe80::cd8:6087:7b1e:99d4%11]

    More about my environment: I have two dedicated Exchange forests, each with a distinct purpose. They have no trust and communicate only by SMTP. They both share the same DNS infrastructure via stub zones. What are my options? These are my guesses, but I'm no IPv6 expert, so I don't know which one is best:

        - Disable IPv6
        - Add the IPv6 address to the whitelist (isn't that IP dynamic?)
        - Tell Exchange to use IPv4 instead
        - Figure out why we are using IPv6 instead of IPv4
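
    If whitelisting turns out to be the route taken, note that link-local IPv6 addresses vary per interface but always fall inside fe80::/64, so the range rather than a single (changing) address can be whitelisted. A hedged Exchange Management Shell sketch, with a hypothetical connector name:

        # hypothetical connector name; appends the link-local range to the whitelist
        $conn = Get-ReceiveConnector "HUB01\Relay Connector"
        $conn.RemoteIPRanges += "fe80::/64"
        Set-ReceiveConnector "HUB01\Relay Connector" -RemoteIPRanges $conn.RemoteIPRanges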

    Read the article

  • IIS not responding with SSL Server Hello

    - by Damien_The_Unbeliever
    I'm having difficulty getting a particular IIS machine to "do" SSL. This is a test environment (one of many) which we've set up "the same" as we have many times previously, but it just doesn't seem to be working. The server is Windows Server 2003 (Version 5.2 (Build 3790.srv03_sp2_gdr.100216-1301 : Service Pack 2)). IIS is hosting 4 sites (including the default site), but only one site is configured for SSL. We're using the same SSL certificate we use on all of our other servers (it's a wildcard cert). (Don't think this is relevant, but including anyway:) we've configured the site to require SSL; it has a subdirectory that doesn't require SSL but has an asp page that redirects into SSL. The 403;4 error page for the site is hooked up to this asp page (this is how we do our non-HTTPS into HTTPS transition). Using Microsoft Network Monitor (3.3), I've just run a session against a server where SSL is working. It can pull apart the SSL exchange as the following messages:

        SSL: Client Hello
        SSL: Server Hello. Certificate. Server Hello Done
        SSL: Client Key Exchange. Change Cipher Spec. Encrypted Handshake Message.
        SSL: Change Cipher Spec. Encrypted Handshake Message

    However, on our problem server, I only see:

        SSL: Client Hello

    The next packet from the server (from port 443, so it's listening and responding there) contains only 60 bytes, and just seems to have the TCP headers and not much else (but I'm by no means an expert at deciphering these things). So, where do I look next? Or what additional information do I need to add to this question, and where do I find it?
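
    For a second opinion independent of Network Monitor, the handshake can be driven by hand from any machine with OpenSSL (hostname is a placeholder); a healthy listener prints a certificate chain, while this problem server should show the connection dropping right after the hello:

        openssl s_client -connect problemserver.example.com:443

    If the reset is immediate, the usual suspects on Server 2003 are a certificate whose private key is missing or unreadable, or a damaged SSL binding for the site.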

    Read the article

  • Cannot Connect to VSFTP outside of network

    - by jnolte
    I am having a hair-pulling issue with my VSFTPD. I am not sure where to turn; I have gone through everything to make sure it is working properly, and when connecting with ftp localhost I am able to log in with the username and password I have specified. When I try to connect from outside I get the prompt Connected to domainname.com. but no prompt for user and password; in addition, when using an FTP client it hangs and never connects. The server is running Ubuntu 12.04 LTS and VSFTPD 2.3.5. Here is the output of running iptables -L: http://pastie.org/4892233 Here is the output when running ps -FC vsftpd:

        root 14343 1 0 1168 984 3 16:55 ? 00:00:00 /usr/sbin/vsftpd

    Here is the output of running netstat -tlpn | grep vsftpd:

        tcp6 0 0 :::21 :::* LISTEN 14343/vsftpd

    I have uninstalled and reinstalled many times and tried several different configurations, and am at a complete loss as to why this may not be working. We very often use the same configuration on the same type of servers with no issues. Thank you in advance for your help.
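
    One hedged thought: a banner that arrives but a session that never completes from outside often points at the firewall and passive-mode data ports rather than vsftpd itself. A sketch of pinning the passive range (values are placeholders) and opening it:

        # /etc/vsftpd.conf
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100
        pasv_address=203.0.113.10
        # pasv_address is the server's public IP (placeholder here)

        # matching iptables openings
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT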

    Read the article

  • Setting up virtualbox for outside access

    - by Morgan Green
    I have a computer running a server that my subdomain on my shared hosting account points to, i.e. subdomain.mydomain.org goes to my home server. What I'm wanting to do is access my VirtualBox servers through that subdomain and a different port. E.g.:

        Ubuntu VirtualBox Server 1 -- Username: Ubuntuhost1, Password: MyUbuntuHost1, Port: 4000, Internal IP: 192.168.1.60, External IP: 24.29.138.45
        Ubuntu VirtualBox Server 2 -- Username: UbuntuHost2, Password: MyUbuntuHost2, Port: 4001, Internal IP: 192.168.1.61, External IP: 24.29.138.45

    I want to be able to reach RDP on server 1 through port 4000, while port 4001 connects to server 2, both using the same subdomain. The next issue is that even though I know from ifconfig what the internal IP addresses of the VirtualBox hosts are, they don't show up on the router. If anyone knows how to configure this to work, please help me out, because I've been racking my brain to the highest extent I can.

    Edit, to clarify: my router forwards port 4000 to internal IP 192.168.1.63 (my Ubuntu host's internal IP address). When I go to my router's home page, the VirtualBox internal IP address doesn't show in the attached-device listing, but I set up port forwarding to the VirtualBox internal IP anyway. My end goal is that when I connect to mydomain.org through port 3389 it takes me to my host computer's server, but if I connect to mydomain.org through port 4000 it redirects to my VirtualBox server. Is this even possible? Sorry, I'm trying to clarify as best I can; I just don't know how else to explain my issue.
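
    The guests not appearing on the router suggests they are on VirtualBox NAT networking, where the router can never see or forward to them. Two hedged options, with a hypothetical VM name:

        # Option A: bridged networking -- the guest gets its own LAN address
        VBoxManage modifyvm "UbuntuServer1" --nic1 bridged --bridgeadapter1 eth0

        # Option B: stay on NAT and let VirtualBox forward host port 4000 to guest RDP
        VBoxManage modifyvm "UbuntuServer1" --natpf1 "rdp1,tcp,,4000,,3389"

    With option B, the router only needs to forward port 4000 to the host itself (192.168.1.63), which matches the forwarding already set up above.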

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.0.6 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

        define command {
            command_name notify-host-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
        }

        define command {
            command_name notify-service-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
        }

    The Python script just sends a mail. It works if I execute it from the command line, but no email is sent from Nagios. What am I doing wrong? UPDATE: The contact data is:

        define contact {
            contact_name                  root
            alias                         Root
            service_notification_period   24x7
            host_notification_period      24x7
            service_notification_options  w,u,c,r
            host_notification_options     d,r
            service_notification_commands notify-service-by-email
            host_notification_commands    notify-host-by-email
            email                         [email protected]
        }

        define contactgroup {
            contactgroup_name admins
            alias             Nagios Administrators
            members           root
        }
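
    Since the contact and command definitions look plausible, the host objects themselves are worth checking: host notifications are also gated per host. A hedged sketch of the directives that need to line up on each host definition (host name hypothetical):

        define host {
            host_name             web01
            notifications_enabled 1
            notification_period   24x7
            notification_options  d,u,r    ; DOWN, UNREACHABLE, RECOVERY
            notification_interval 30
            contact_groups        admins
        }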

    Read the article

  • CentOS: Init scripts failing to start for some unknown reason

    - by user705142
    I'm running CentOS 6.2. I've just migrated some applications over to a failover server and copied their init scripts into /etc/init.d. I've made them executable, added them to chkconfig with chkconfig --add, set their levels, made sure they're residing in /etc/rc.d/, and made sure I can execute them from rc2.d etc. The permissions are the same on both servers, and they're running in the same order as on the primary server. Yet on reboot they don't start. Any ideas? The offenders are these:

        jetty  0:off 1:off 2:on 3:on 4:on 5:on 6:off
        smart  0:off 1:off 2:on 3:on 4:on 5:on 6:off

    /etc/init.d:

        -rwxr-xr-x. 1 root root 14456 Mar 13 20:21 jetty
        -rwxrwxrwx. 1 root root  5829 Mar 29 09:58 smart

    /etc/rc.d/rc3.d:

        lrwxrwxrwx. 1 root root 15 Mar 29 19:21 S99jetty -> ../init.d/jetty
        lrwxrwxrwx. 1 root root 11 Mar 26 17:12 S99local -> ../rc.local
        lrwxrwxrwx. 1 root root 15 Mar 29 19:21 S99smart -> ../init.d/smart

    I've checked, and I'm in runlevel 3. I've checked their logs, and there's no indication that they've been started. I can start them manually without trouble, and other services are starting normally. I'm completely out of ideas, really.
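
    The trailing dot in the ls output above means SELinux contexts are in play, and a context is one thing a copied file does not inherit correctly even when permissions match; init may then refuse to execute a script that runs fine by hand. A hedged check and fix:

        ls -Z /etc/init.d/jetty            # compare the context with a known-good script
        restorecon -v /etc/init.d/jetty    # restore the default context
        restorecon -v /etc/init.d/smart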

    Read the article

  • Run a script on user connection on the VM host

    - by Scott Chamberlain
    I have a server running a Virtual Desktop managed pool. What I would like to do is, when a user logs in, have a script check the number of available VMs and, if below a threshold, add additional VMs to the pool. The script to check the load and add to the pool is not the problem; I have that already figured out:

        $collectionName = "Test1";
        $rdvh = "vmHost.example.com";
        $minAvailableVMs = 2;
        Import-Module RemoteDesktop;
        $pool = Get-VirtualDesktopCollection -CollectionName $collectionName;
        $availableVMs = $pool.Size - ($pool.Size * $pool.PercentInUse / 100);
        $status = Get-VirtualDesktopCollectionJobStatus $collectionName
        # only add new servers if we are below the threshold and in the JOB_COMPLETED state
        if ($availableVMs -lt $minAvailableVMs -and $status.Status -eq [Microsoft.RemoteDesktopServices.Management.VirtualDesktopCollectionJobStatus]::JOB_COMPLETED) {
            Add-RDVirtualDesktopToCollection -CollectionName $collectionName -VirtualDesktopAllocation @{"$rdvh" = 1}
        }

    The problem I am having is: how do I run the above script on the virtualization host/connection broker/some other server when a user connects? I don't think it would be appropriate to run this as a logon script inside the VM. I think there is a way to do this on the management side, but I don't know the new scripting interface in Server 2012 R2 well enough to know which cmdlets I should look for to schedule this. EDIT: I know System Center is perfect for this, but I do not have a license and was denied when I asked for it to be added to the budget.
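
    One hedged broker-side approach: hang a scheduled task off the session-logon event (event ID 21 in the TerminalServices-LocalSessionManager operational log) so the script above fires on each connection. A sketch, with a hypothetical script path:

        schtasks /Create /TN "ScalePool" /RU SYSTEM ^
            /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\scale-pool.ps1" ^
            /SC ONEVENT /EC Microsoft-Windows-TerminalServices-LocalSessionManager/Operational ^
            /MO "*[System[EventID=21]]"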

    Read the article

  • How to know whether mongodb is running in 64-bit or 32-bit mode

    - by Jim Thio
    My programmer installed MongoDB. Then somehow it doesn't work. I run:

        C:\mongod\bin>mongod
        mongod --help for help and startup options
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 [initandlisten] MongoDB starting : pid=3800 port=27017 dbpath=/data/db 32-bit host=haryantoi5
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sat Aug 11 22:57:50 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sat Aug 11 22:57:50 [initandlisten] **       with --journal, the limit is lower
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] db version v2.0.7-rc1, pdfile version 4.5
        Sat Aug 11 22:57:50 [initandlisten] git version: 9efe4cce272373b52b96de1309c1fbf0c984305f
        Sat Aug 11 22:57:50 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_42
        Sat Aug 11 22:57:50 [initandlisten] options: {}
        **************
        Unclean shutdown detected.
        Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
        *************
        Sat Aug 11 22:57:50 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
        Sat Aug 11 22:57:50 dbexit:
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close listening sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to flush diaglog...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: waiting for fs preallocator...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: closing all files...
        Sat Aug 11 22:57:50 [initandlisten] closeAllFiles() finished
        Sat Aug 11 22:57:50 dbexit: really exiting now

    It seems that mongod is running in 32-bit mode. I have a 64-bit computer and I want to run MongoDB in a 64-bit environment. How do I do so?
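
    The startup banner above already answers the question ("32-bit host", plus the 2 GB warning); a running instance can also be asked from the mongo shell, as sketched below, where a 64-bit build reports 64. Getting to 64-bit is not a switch on mongod: it means installing the separate 64-bit Windows download (and here, also clearing the stale lock file per the repair instructions in the log).

        > db.serverBuildInfo().bits
        32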

    Read the article

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc., so I can keep my sanity. I guess that the actual IP address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you just have to hardcode a large table of MAC addresses to IP addresses, and then always provision new guests with the correct MAC address on the virtual ethernet adapter? What we currently do is have an image of an "app server" that we boot up a new instance of, and then finalize it with a script that (among other things) modifies the /etc/network/interfaces file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
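
    For the DHCP route, the MAC table does not have to be painful if MACs are handed out deterministically at provision time (00:16:3e is the Xen-reserved OUI). A hedged ISC dhcpd sketch with made-up names and addresses:

        host app-01 {
            hardware ethernet 00:16:3e:00:00:01;
            fixed-address 10.20.0.1;
        }
        host app-02 {
            hardware ethernet 00:16:3e:00:00:02;
            fixed-address 10.20.0.2;
        }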

    Read the article

  • What methods are available for updating a non-Internet-connected VMWare ESXi host?

    - by romandas
    I have a stand-alone installation of VMware vSphere Essentials, with a vCenter Server and 3 ESXi 4.0 host servers. The environment is intended to remain a stand-alone network, with the exception that I can "float" a workstation or server between the 'Net and the VMware network for patches and maintenance. With other installations, where the Internet is available, I've used the vSphere Host Update utility to connect to VMware and then apply the patches to the ESXi hosts. My problem is that this utility does not seem to function if it cannot connect to both VMware and the ESXi host at the same time, as the scan-for-patches function will not scan the server without connecting to VMware's site to sync its repository first. Even if I sync it, disconnect from the 'Net and connect to the VMware network, it still won't scan hosts for required patches: it will prompt for syncing with VMware, and if you click No to syncing, the scan does not occur. Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something a bit more scalable.
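
    For ESXi 4.0, one hedged offline option is to download the patch bundles on the connected side, carry them across, and push them with the vSphere CLI (bundle name hypothetical; the host should be in maintenance mode first):

        vicfg-hostops --server esx01 --operation enter
        vihostupdate --server esx01 --install --bundle ESXi400-201110001.zip
        vicfg-hostops --server esx01 --operation exit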

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file that are outside our firewall and which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb? In my sample code, below, I'm downloading a redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.

        ruby_block "setenv-http_proxy" do
          block do
            Chef::Config.http_proxy = node['redmine']['http_proxy']
            ENV['http_proxy'] = node['redmine']['http_proxy']
            ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
          notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
        end

        remote_file "redmine-bundle.zip" do
          path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
          source attrs['download_url']
          mode "0644"
          action :create_if_missing
          notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
          notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
        end

        ruby_block "unsetenv-http_proxy" do
          block do
            Chef::Config.http_proxy = nil
            ENV['http_proxy'] = nil
            ENV['HTTP_PROXY'] = nil
          end
          action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
        end
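
    A possibly cleaner route: client.rb also honors a no_proxy list, so the proxy could stay global while the internal hosts are exempted. A hedged sketch with hypothetical names:

        # /etc/chef/client.rb
        http_proxy 'http://proxy.corp.example.com:3128'
        no_proxy   'localhost,127.0.0.1,*.internal.example.com'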

    Read the article

  • Disk usage on IIS, PHP5, performance problems

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem that I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the speed of loading a Joomla site. The setup is IIS6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I've tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM. Everything normal: 2 GB of RAM free, 3%-8% of CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the 'PhysicalDisk' counters quite high (above 10), especially the read queues. I've used Process Explorer to see which of those processes has the highest deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I've checked specific IIS counters: actual connections, number of CGI requests, and number of ISAPI requests:

        CGI requests   -> 3 to 7
        ISAPI requests -> 5 to 9
        Connections    -> 90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know if you have a specific methodology for facing this kind of problem. Thanks a lot, as always.
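
    To put numbers behind the disk suspicion, the queue and latency counters can be logged over a busy window with the built-in typeperf (interval and count are arbitrary here):

        typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" -si 5 -sc 120

    Sustained queue lengths well above the number of spindles, or reads consistently slower than about 20 ms, would point at the disks rather than at PHP.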

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize a SBS 2003 server in my work environment. I need some tips on what people think is the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role. AD/DNS is replicating here correctly. This was mainly done to provide fault tolerance to the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1.) P2V the primary DC

        - Power off primary DC
        - Power off secondary DC (to prevent USN rollback in case P2V has an issue)
        - P2V (cold clone) primary DC
        - Boot new PDC VM
        - Allow new hardware to detect
        - Remove old NIC hardware from device manager
        - Assign old IPs to new virtual NICs
        - Reboot PDC VM, confirm connectivity and no major issues
        - Power on secondary DC, confirm replication

    Option 2.) Create new VM, transfer roles, remove original DC from domain

        - Create new VM, install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions this.)
        - Add VM to domain, promote to DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
        - Set up RRAS on new VM
        - Set up IIS/FTP on new VM
        - Move file shares to new VM
        - Transfer FSMO roles to new VM DC
        - dcpromo original primary DC out of domain
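
    If option 2 is taken, the role transfer at the end is typically done with ntdsutil and can be verified from any DC afterwards; a hedged check:

        rem list which DC currently holds each of the five FSMO roles
        netdom query fsmo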

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine, which is now acting as the master. I used the following procedure to create it:

        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (in a different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article where it says:

        The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database.

    Thank you
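
    If the tables are InnoDB, a consistent point-in-time dump of all databases without stopping the master is exactly what --single-transaction provides; a hedged variant of the same command:

        mysqldump --opt -Q --single-transaction --master-data=2 --all-databases > dump.sql

    This takes one snapshot covering all InnoDB data; it does not help MyISAM tables, which would need --lock-all-tables instead.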

    Read the article

  • Three server processes consume no more than 50% of Dual Core CPU

    - by thor
    I have three processes running on an Intel Core 2 Duo CPU. From watching the output of 'top' and graphs of CPU load (drawn by MRTG, data collection via SNMP) I can see that CPU load is never more than 50%, and, for most of the day, when those processes are busy, CPU load has a ceiling at 50%. I mean, CPU load grows to 50% in the morning and stays there until late evening. My first thought was that only one core was being used at 100%, thus giving 50% across both CPUs. But, as there are three processes running and 'top' shows that both cores are being loaded, this is not the case. schedtool shows that CPU affinity for those three processes is at the default, 0x03, allowing them to use both cores. If I force one process to one core (schedtool -a 0x01) and the two others to the second (schedtool -a 0x02), cumulative usage grows beyond 50%. Why do three processes seem to consume only 50% of two cores? Why does forcing them onto different CPUs allow usage to grow higher? Any hints? P.S. The processes in question are Counter-Strike servers.
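
    Aggregate graphs hide exactly this kind of ceiling; per-core numbers make it visible immediately. Two quick checks:

        mpstat -P ALL 1    # from the sysstat package: one row per core, every second
        top                # then press '1' to toggle the per-CPU rows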

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano, and I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install

        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so I'm not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
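
    One hedged reading of the output: it is not the workstation-to-server SSH that is failing, but the git clone the deploy server performs back to the repository host. If so, forwarding the local SSH agent is the usual Capistrano 2 fix, letting the deploy host reuse the workstation's key:

        # in deploy.rb; requires the key to be loaded locally (ssh-add)
        set :ssh_options, { :forward_agent => true }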

    Read the article

  • Accessing a shared folder in Windows Server 2008 R2

    - by Triztian
    Hello all. It seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share. The folder is in the public user folder, and its permissions should be (and I believe are) read/write. The problem is that I can't connect from a remote machine; I mean, I don't know how it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects anything. The folder will be used as the keeper of the QuickBooks company files and has the database server manager installed. I've tried setting up a VPN connection to it, but with no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. Also, the share has a location displayed when I right-click and choose Properties. Here's what I've tried:

        - Setting up a VPN connection (Windows Vista and 7): I got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "Connection failed, error 800". I suppose this is because in the domain field I entered the server's workgroup.
        - Right-click, add network connection (Windows 7): I went through the wizard until I reached the point of entering the location, and tried many things: the name in the share's properties (\\SOMETHING\Share), the http://www.example.com address, the IP address.

    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.
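
    For a first test from a remote Windows machine, mapping the share by IP with explicit credentials separates name-resolution problems from permission problems; a hedged sketch (IP and names are placeholders, and note that SMB port 445 is often blocked across the open Internet, which is why the VPN route is usually preferred):

        net use Z: \\203.0.113.10\ShareName /user:SERVERNAME\shareuser *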

    Read the article

  • vSphere - datastore falling off a host

    - by Chadddada
    Recently we have been running the vCheck PowerShell script daily to help in monitoring our vSphere ESX 4.0 environment. One of the oddities we have been seeing is that some of the datastores on the SAN don't always show up on every host. Our hosts are connected redundantly, via FC, to some Brocade FC switches, which then connect via fiber to our EMC AX4 SAN. While all the datastores are presented to each host we have, and the hosts see them initially, they sometimes seem to fall off and are no longer visible. It's easy enough to rescan for datastores and add them back to the hosts, but this seems to be an error. Has anyone else seen this or know why it may be happening? Responses to questions:

        1. Is it always the same ESX servers that lose their connection? -- Scott Warren

    No, this happens randomly on random hosts. If a VM is running on a particular host with its disks on a SAN datastore, then that datastore won't disappear. It seems to happen if a host doesn't touch a datastore for a while; it just forgets about it.
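
    When it does happen, the rescan can at least be automated across all hosts instead of done one by one; a hedged PowerCLI one-liner:

        Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs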

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP PRO 2.1.1 on OSX 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80. When I switch to port 8888 everything runs fine. Mountain Lion's Apache has been disabled and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find" - the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding) I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.
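
    Since this bites only on port 80, it is worth confirming nothing else (including OS X's own httpd) is still bound there; a quick hedged check from Terminal:

        sudo lsof -nP -iTCP:80 -sTCP:LISTEN    # list whatever owns port 80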

    Read the article

  • Failover Cluster Quorum Failing

    - by oruchreis
    Hi, I have two nodes which boot from iSCSI, to implement a Windows 2008 cluster, and I'm using the disk majority option as quorum, over iSCSI. But when the quorum's iSCSI connection fails (maybe a SAN server reset), the failover cluster fails too. If I reset one of the nodes it can boot, but its system disk goes offline, and I can't change its status to online, because it says it's reserved by the failover cluster (the disk is on iSCSI, because of iSCSI boot). This disk then works as read-only: anything on it can't be deleted or written. So I can't rejoin the node to the cluster again; I have to reinstall Windows. So what I'm asking is: how can I implement more quorum backup? I mean, can I use both disk majority and file share majority at the same time? AFAIK, every node also keeps a copy of the quorum. But sometimes the SAN servers go offline, and the quorum's iSCSI connection and the nodes' iSCSI connections get lost. So neither the quorum copy kept on the nodes nor the quorum iSCSI disk is enough to start the cluster again. I want to use both disk majority and file share majority at the same time. Can I do this? Have you any other suggestions? Regards.
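
    For reference, the 2008-era quorum modes (Node Majority, Node and Disk Majority, Node and File Share Majority, No Majority: Disk Only) are mutually exclusive, so disk and file-share witnesses cannot be combined. A hedged sketch of moving a two-node cluster to a witness that does not depend on the SAN (2008 R2 PowerShell shown; share path hypothetical):

        Import-Module FailoverClusters
        Set-ClusterQuorum -NodeAndFileShareMajority "\\fileserver\ClusterWitness"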

    Read the article

  • Create account for service

    - by Andy
    I am configuring a new server. The server runs Hudson, which is going to copy some files from this server to another. The other server is a virtual machine; both run Windows Server 2012. Hudson is started on server A with the "Local System" log on. When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works; however, I guess this is bad. I do not have much experience with user management. I tried to create my own hudson account on both servers A and B, and tried to make the service log on as the hudson account in service management, but it doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone, and the guest account is enabled.
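
    A hedged sketch of the matching-accounts approach (names and password are placeholders): create the same local user on both machines, then bind the service on server A to it. Granting the account the "Log on as a service" right (Local Security Policy) is what typically fixes a service that refuses to start under a new account.

        rem run on BOTH servers with the same password
        net user hudson S0mePassw0rd! /add

        rem on server A: bind the service to the account (service name hypothetical)
        sc config hudson obj= .\hudson password= S0mePassw0rd!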

    Read the article

  • mount.nfs: access denied by server while mounting (Kerberos authentication)

    - by Nick
    There are plenty of references to this error on Google, and even a question here with the same title, but it seems that "access denied by server while mounting" is a catch-all error. I've tried suggestions that others have used to fix this problem, but they did not work in my case. I'm trying to set up a Kerberos-based NFS file server with shared homes for a Linux network, using Ubuntu 11.04 for both servers and clients. When trying to mount a share using:

        mount 192.168.1.115:/export/home/ /media/tmp

    I get:

        mount.nfs: access denied by server while mounting 192.168.1.115:/export/home/

    This is the same whether I mount it from a client machine or from the server itself. On the server, in /var/log/syslog I get:

        Aug 25 06:22:37 nfs mountd[1580]: authenticated mount request from 192.168.1.115:835 for /export/home (/export/home)
        Aug 25 06:22:37 nfs mountd[1580]: authenticated unmount request from 192.168.1.115:766 for /export/home (/export/home)

    Which is odd, since it says it authenticated the request, not that it denied it. /etc/exports:

        /export      *(rw,fsid=0,crossmnt,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)
        /export/home *(rw,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)

    On the client:

        me@dt1:/$ rpcinfo -p 192.168.1.115
           program vers proto   port
            100000    2   tcp    111  portmapper
            100024    1   udp  37320  status
            100024    1   tcp  48460  status
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049
            100227    3   tcp   2049
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049
            100227    3   udp   2049
            100021    1   udp  58625  nlockmgr
            100021    3   udp  58625  nlockmgr
            100021    4   udp  58625  nlockmgr
            100021    1   tcp  49616  nlockmgr
            100021    3   tcp  49616  nlockmgr
            100021    4   tcp  49616  nlockmgr
            100005    1   udp  45627  mountd
            100005    1   tcp  60265  mountd
            100005    2   udp  45627  mountd
            100005    2   tcp  60265  mountd
            100005    3   udp  45627  mountd
            100005    3   tcp  60265  mountd

    Any suggestions I could try?
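
    One hedged observation on the mount command itself: with only sec=krb5* in the exports, a plain mount negotiates sec=sys and is refused, and with fsid=0 an NFSv4 client mounts relative to the pseudo-root, so the path drops the /export prefix. Something along these lines may behave differently:

        mount -t nfs4 -o sec=krb5 192.168.1.115:/home /media/tmp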

    Read the article

  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have 2 routers announcing a /22 subnet to different providers (one provider connects to each of the 2 routers). I have split the /22 into two /23s and announce one /23 on each of the routers, plus the /22 (the providers will take the more-specific route). This allows me to fail over while keeping traffic inside each /23 going in and out via the same provider. What are other ways in which I could announce just the /22 on both routers and still have packets from servers on the network behind the routers go back out the same router they came in through? EDIT: The main problem I come across, which end users and clients complain about the most, is that the least-hop route is sometimes not the "optimal" route. In my case, I know that Provider B may have better latency to nation X. But when packets come in from Provider B, they may go out Provider A or Provider B. The reverse is also true: if I send a packet to nation X out Provider A, even though it may have more hops back, the packet will likely come in from Provider B (which may have higher latency, packet loss, etc. to this nation).
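
    If only the /22 were announced from both edges, inbound preference per provider can still be nudged with AS-path prepending on the session toward the provider that should carry less inbound traffic, instead of using more-specifics; a hedged IOS-style sketch (ASN and neighbor are made up). Unlike the /23 trick, this biases rather than pins the return path:

        route-map TO-PROVIDER-B permit 10
         set as-path prepend 65001 65001
        !
        router bgp 65001
         neighbor 198.51.100.1 route-map TO-PROVIDER-B out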

    Read the article

  • VMWare web UI intermittent access on CentOS

    - by PeteWilliams
    Hiya, I've got a CentOS 5.2 server that I'm trying to get set up as a development environment. As part of this, I planned to install VMware Server 2 and set up several virtual development servers. I've got as far as installing VMware Server 2, but access to the remote control panel is only working intermittently. If I access it through Firefox at https://127.0.0.1:8333/ui/# it usually says either:

        "Connection interrupted: connection was reset before the page loaded"

    or:

        "Firefox can't establish a connection to the server at 127.0.0.1"

    But every now and then it lets me in, and I'll manage a few clicks in the web UI before it kicks me out with the following error:

        "The server could not complete a request (HTTP 0). The server encountered an unexpected condition that prevented it from fulfilling the request. If this problem persists, please contact your system administrator."

    I've done all the updates available in CentOS except one OpenOffice one that is causing a conflict, and I re-ran vmware-config.pl after updating the kernel, though I went with all the defaults as I don't really know what I'm doing! I've since rebooted and nothing changed. I've also tried accessing the control panel remotely from another machine on the network and the results are the same. Does anyone have any ideas what might be causing this and how I can resolve it? I'm afraid I'm a developer playing at sys-admin, so I may be missing something obvious! Many thanks, Pete

    Update: I have now reinstalled both the operating system and VMware and I'm still getting the same issue. I wonder if it's a result of the settings I'm putting in on the config.pl script..?
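
    A few hedged checks on the host itself, to see whether the management service behind port 8333 is dying under load (the log path may vary by build):

        netstat -tlnp | grep 8333            # confirm something is listening, and which process
        tail -f /var/log/vmware/hostd.log    # watch for errors while hitting the UI
        /etc/init.d/vmware restart           # restart VMware Server's services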

    Read the article

  • How to set up port forwarding from my webserver (Apache) to my database server (MySQL)

    - by karman888
    Hello again guys, and thank you for your help so far. Here is my problem: I have two remote dedicated servers, one webserver that runs Apache and one DB server that runs MySQL. The Apache server is visible on the Internet, of course, but the second server is only visible to the Apache server, because they are connected via LAN. I need to connect to the remote MySQL server over the Internet from my home PC, but only the Apache server is visible to my home PC. How can I set up port forwarding from my Apache server to the MySQL server so I will be able to "see" the MySQL server from my home PC? This question is a follow-up to my first question, "Connect to remote mysql server from my application. Problem is that Mysql server is on LAN," in which you answered me and helped me a lot by telling me to do port forwarding. I looked over the Internet and can't find a good how-to for port forwarding. I'm an experienced programmer, but have little experience with hardware and networks. I can understand, though, what must be done, so I just need a little help to sort things out :) I hope you can help me, guys. Thank you in advance. P.S. The machine Apache runs on is CentOS; the MySQL server is also CentOS. P.S.2: The webserver runs WebHostManager; I don't know if that makes any difference or if it can be done easily through this, I just mention it :)
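
    Two hedged ways to do this, with placeholder addresses. The SSH tunnel needs no server-side changes at all; the iptables rules make the forward permanent on the Apache box:

        # Option A: from the home PC -- then point the MySQL client at localhost:3306
        ssh -L 3306:192.168.0.2:3306 root@apache.example.com

        # Option B: on the Apache box -- forward its port 3307 to the internal MySQL
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -p tcp --dport 3307 -j DNAT --to-destination 192.168.0.2:3306
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 3306 -j MASQUERADE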

    Read the article
