Search Results

Search found 4073 results on 163 pages for 'hosts deny'.

Page 137 of 163

  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu machine on our network running release 10.04.1 LTS (Lucid). It runs Apache, PostgreSQL and Django. For some application development I had to install PGP and php-curl. Because the machine sits on our internal network, I exported it as a VMware machine to the Internet, upgraded the system first, and then installed the php5 packages on it. I don't know whether this is a Django or an Apache configuration issue; maybe some Apache settings changed. If so, where in Apache should I look? After moving the machine back to its old place, I see that queries on the new system are slow compared to the old one.
        Old system query time: 140 ms
        New system query time: 9.11 s
    I have checked /etc/network/interfaces and there seems to be no problem. I have checked /etc/resolv.conf and it looks OK. I have checked /etc/nsswitch.conf and only the hosts section differs from the old one, which had:
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    Then I checked time host -t A services.myapp.com and got:
        real 0m0.355s user 0m0.010s sys 0m0.020s
    I have also checked apache2 HostnameLookups:
        find /etc/apache2/ -type f | xargs grep -i HostnameLookups
    It returned:
        /etc/apache2/apache2.conf:# HostnameLookups: Log the names of clients or just their IP addresses
        /etc/apache2/apache2.conf:HostnameLookups Off
    What else should I check to bring the system back up to its previous speed?
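    A minimal diagnostic sketch along those lines, reusing the hostname and paths quoted above (it compares resolution through the nsswitch order against plain DNS, and re-confirms that Apache reverse lookups stay off):
        time getent hosts services.myapp.com      # resolution via the /etc/nsswitch.conf order
        time host -t A services.myapp.com         # resolution via DNS only
        grep -Ri hostnamelookups /etc/apache2/    # should still show: HostnameLookups Off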

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:
        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>
    And this in my hosts file:
        # development
        127.0.0.1 localhost
        127.0.0.1 project.org
        127.0.0.1 sub.project.org
    When I go to project.com in my browser, the project loads successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error:
        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
    The error log shows this:
        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/
        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/
    Any idea which config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
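    The redirect-loop error above usually points at a rewrite rule rather than at the vhosts themselves. A hedged .htaccess sketch for a front-controller setup under C:/xampp/htdocs/project/ that cannot rewrite into itself (the index.php target is illustrative):
        RewriteEngine On
        RewriteBase /
        # stop rewriting once the request maps to a real file or directory
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]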

    Read the article

  • Windows 2008 R2 RDS - Double Login

    - by colo_joe
    Issue: double logins when connecting to RemoteApps or the Remote Desktop environment.
    Environment:
        Gateway = 1 server, 2008 R2 - roles = Gateway, Session Broker, Connection Mgr, Session Host Configuration server
        Session hosts = 2 servers, 2008 R2 - roles = App Manager and Session Host Configuration
    Testing: I can get to the URL http://RDS.domain.com/rdweb - I get prompted for authentication (1). I pass authentication and get the list of RemoteApps. I click on a RemoteApp or Remote Desktop and get prompted for authentication again (2). I pass authentication and get access to the app or RDP session.
    What I've done so far, on the session hosts:
        Signed the RDP files with a certificate.
        Added the following to the custom RDP settings:
            Authenticaton level:i:0 = If server authentication fails, connect to the computer without warning (Connect and don't warn me).
            prompt for credentials on client:i:1 = RDC will prompt for credentials when connecting to a server that does not support server authentication.
            enablecredsspsupport:i:1 = RDP will use CredSSP, if the operating system supports CredSSP.
        Edited the JavaScript file as described in http://support.microsoft.com/kb/977507
        Added the Connection ID, added the Web Access server to the TS Web Access Computers group on the Session Host servers, and signed the apps as described in hxxp://blogs.msdn.com/b/rds/archive/2009/08/11/introducing-web-single-sign-on-for-remoteapp-and-desktop-connections.aspx
    Note: this double login happens both internally and externally.

    Read the article

  • User http does not have write permissions on directory?

    - by dwieeb
    I have a bit of an odd setup, I think. I have groups for each domain my server hosts, and I add the user http to each domain group along with the users that should have access to that group's domain. In a php script running from a directory 'public_html', I try creating a file:
        <?php
        $output = "";
        print exec('touch test 2>&1', $output);
    But I get
        touch: cannot touch `test': Permission denied
    and the file is not created. Yet here, clearly stated, the group has all permissions on the directory:
        drwxrwxr-x 5 dwieeb example.com 1024 Feb 4 05:19 public_html
    And here are the permissions on the php file in public_html that is trying to use the exec function:
        -rw-rw-r-- 1 dwieeb example.com 59 Feb 4 05:19 test.php
    How is this possible if http is part of the example.com group (as seen from a cat on /etc/group) and the directory has full permissions for the group?
        ...
        example.com:x:1000:dwieeb,http
    I'm stumped.
    EDIT (since apparently I'm not cool enough to answer my own questions yet): Ah, I found the problem. Yes, I restarted Nginx, but the php-fpm daemon must be restarted as well when http is added to the group for my domain. On Arch Linux:
        rc.d restart php-fpm
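    A quick sketch of the check that catches this, assuming the http user and php-fpm daemon named above: a long-running daemon keeps the group list it started with, so /etc/group alone is not enough.
        id http                                       # groups a freshly started process would get
        grep Groups /proc/$(pgrep -o php-fpm)/status  # groups the live php-fpm master actually has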

    Read the article

  • Sendmail doesn't work with iptables, even though smtp and dns are allowed

    - by tom
    I have sendmail installed on Ubuntu 10.04, solely for use by the PHP mail() function. This works fine unless iptables is running (I've been using sendmail [email protected] to test this). I think I have allowed SMTP and DNS (the script I am using to set the iptables rules is below; in my version the nameserver placeholders are the actual IPs of my host's nameservers), but to no avail!
        iptables --flush
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # Postgres
        iptables -A INPUT -p tcp --dport 5432 -j ACCEPT
        # Webmin
        iptables -A INPUT -p tcp --dport 10000 -j ACCEPT
        # Ping
        iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
        iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
        # sendmail
        iptables -A INPUT -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
        # DNS
        iptables -A INPUT -p udp --sport 53 -s <nameserver1> -j ACCEPT
        iptables -A INPUT -p udp --sport 53 -s <nameserver2> -j ACCEPT
        iptables -A INPUT -p tcp --sport 53 -s <nameserver1> -j ACCEPT
        iptables -A INPUT -p tcp --sport 53 -s <nameserver2> -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -d <nameserver1> -j ACCEPT
        iptables -A OUTPUT -p udp --dport 53 -d <nameserver2> -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -d <nameserver1> -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -d <nameserver2> -j ACCEPT
        iptables -A INPUT -j DROP
        # Add loopback
        iptables -I INPUT 1 -i lo -j ACCEPT
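    For what it's worth, a hedged sketch of the rule that is most often missing from a ruleset like this one: outbound deliveries (and therefore the php mail() test) need the replies coming back from remote servers' port 25 to be accepted before the final INPUT DROP, which nothing above currently matches.
        # accept replies to connections this host initiated (covers outbound SMTP and DNS)
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT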

    Read the article

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to give certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they log on, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local. Everything works well except for accessing a certain folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (but not nearly as often as on the clients). I've searched high and low and cannot figure it out, and it's driving me crazy. Here are all the things I've tried, to no avail:
        Disabled the firewall (XP) and anti-virus (ESET NOD32)
        Deleted any desktop.ini file I can find in the share
        Disabled "automatically search for network folders and printers"
        Disabled "remember each folder's view settings"
        Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
        Tried with mapped drives and with UNC shortcuts
        Ran CHKDSK
        Removed the Read-Only attribute from all folders (well, tried to remove; it always came back with a half check)
        Added the server's static IP to the hosts file on the clients
    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory), but not always. Other than that, everything else seems normal. The anti-virus would seem to be the most likely cause to me, considering the batch files and whatnot, but it still hangs when it is completely disabled. I'm at a loss, and if anyone can help me with this I'd greatly appreciate it!

    Read the article

  • Webmin's BIND & nameserver - NS doesn't resolve

    - by user127518
    Got a problem setting up a nameserver. Here are the details: the domain is http://stagingtestserver.com.au/ (I did some updates and now www.stagingtestserver.com.au won't resolve). I get some errors at www.intodns.com/stagingtestserver.com.au as well, and I could not ping ns1. or ns2. either. This is the record file under /var/named/stagingtestserver.com.au.hosts:
        $ttl 38400
        stagingtestserver.com.au. IN SOA ns1.stagingtestserver.com.au myemail\.here.gmail.com. (
                1341370630 10800 3600 604800 38400 )
        stagingtestserver.com.au. IN A 202.4.229.161
        www.stagingtestserver.com.au. IN A 202.4.229.161
        ftp.stagingtestserver.com.au. IN A 202.4.229.161
        m.stagingtestserver.com.au. IN A 202.4.229.161
        localhost.stagingtestserver.com.au. IN A 127.0.0.1
        webmail.stagingtestserver.com.au. IN A 202.4.229.161
        admin.stagingtestserver.com.au. IN A 202.4.229.161
        mail.stagingtestserver.com.au. IN A 202.4.229.161
        stagingtestserver.com.au. IN MX 5 mail.stagingtestserver.com.au.
        ns1.stagingtestserver.com.au. IN A 202.4.229.161
        ns2.stagingtestserver.com.au. IN A 202.4.229.172
        stagingtestserver.com.au. IN NS ns1.stagingtestserver.com.au.
        stagingtestserver.com.au. IN NS ns2.stagingtestserver.com.au.
    Any thoughts, guys? Thanks - I appreciate all your thoughts/help/(ahem, violent) reactions :)
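    A minimal validation sketch, reusing the zone file path and addresses quoted above (named-checkzone ships with BIND; the dig queries ask the listed servers directly instead of going through a resolver):
        named-checkzone stagingtestserver.com.au /var/named/stagingtestserver.com.au.hosts
        dig @202.4.229.161 www.stagingtestserver.com.au A +norecurse
        dig @202.4.229.172 stagingtestserver.com.au NS +norecurse
        dig +trace www.stagingtestserver.com.au    # follows the delegation down from the root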

    Read the article

  • I've just set up FreeBSD 8.0 and can't log in with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works; I simply get "connection refused" from PuTTY on another machine. Any ideas? I will try to get a copy of the sshd_server.conf file as soon as I can find a flash disk to copy it to, but I thought someone might know what you need to set initially to permit login.
    EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server I'm seeing:
        MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password:
    Doh. OK, so the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again.
    OK, sorted now. It was an IP address conflict. Windows DHCP isn't smart enough to check whether something is already listening on an address before assigning it.
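    A quick hedged sketch of checks that surface exactly this kind of conflict from the FreeBSD side (nothing here is specific to the original setup beyond sshd itself):
        sockstat -4 -l | grep :22        # confirm sshd really is listening
        dmesg | grep -i 'using my IP'    # FreeBSD logs duplicate-address (ARP) complaints here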

    Read the article

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and mp3 downloads (some of which are fairly large - up to 200 MB). I am running lighttpd on the servers on Linux (Ubuntu 64-bit). Everything is fine, but under high load the server is not accessible (or very slow - even sshing in takes a while), and I am guessing this is due to a huge number of mp3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to keep running (and the web pages - PHP etc. - to always work, but downloads don't always have to work). Should I just have two web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:
        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"
        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )
        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }
        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
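    One hedged option to try before splitting the site onto separate servers: lighttpd 1.4's traffic-shaping directives can cap the download vhost without touching the PHP site. A sketch with illustrative numbers, placed inside the existing download conditional:
        $HTTP["host"] =~ "(download|stream).foo.com" {
            connection.kbytes-per-second = 256    # per-connection cap
            server.kbytes-per-second     = 8192   # cap for this vhost as a whole
        }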

    Read the article

  • Migrating Linux user data to Windows profiles automatically

    - by scott ryan
    I have what seems to be an incredibly simple problem with a very simple solution, but I'm having some trouble connecting the dots. I have an aging server running Ubuntu Server which hosts roaming profiles. I am switching to a Windows Server 2012 DS shortly. Users used to be named firstinitial.lastname and we are switching to firstname.lastname. I need to transfer things like favorites, documents, etc. from the roaming Linux profile to the user's local Windows profile. So, the way I think it'd work is with a login script: I'd use the script to mount the Linux server's /home for each user, then copy to the various paths (documents, pictures, etc.). But how do I automate this for each user that logs in? I'm working with a nonprofit, so doing this by hand would probably be out of their budget. I'm open to suggestions, though. What I want is basically Windows Easy Migration, but I'm fairly certain that won't work under Wine... (Kidding, I promise). Thanks!

    Read the article

  • IIS 7.5 fails to open database after the machine hosting the database server restarts

    - by Jenea
    Hi. I decided to post this question here as well, in case the issue we have is related to SQL Server. There is a problem that has bothered me for some time. I have an ASP.NET MVC app that uses NHibernate for modeling the database. The infrastructure is the following:
        Windows 2008 R2 for all virtual machines.
        IIS 7.5 is running on one virtual machine.
        SQL Server 2008 is running on another virtual machine.
    We have a couple of databases: two that store application data and one that registers all unhandled exceptions. Sometimes the virtual machine that hosts the database server restarts (in the middle of the night; not quite sure about the reason), and after that the connection to the databases that store application data stops working. As a result, thousands of unhandled exceptions get registered in the third database. It is important to mention that the databases are accessible from Management Studio. The problem is solved by resetting IIS. Connections are handled via an NHibernateUtil class which opens and closes a session at each request.

    Read the article

  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Almost all the search results for this problem take me to the built-in Trac documentation page that mentions it as an option. I have several VirtualHosts already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my Trac server accessible on this machine without exposing an additional port through the firewall. Making things even more complicated is that I have multiple Trac repositories being served by the same instance of tracd, so I want to set it up so that http://trac.abc.com is proxied to localhost:8000/projects/abcproject, and http://trac.def.com is proxied to localhost:8000/projects/defproject. Currently, the setup I have below results in 100% 403 errors. The server is running as www-data and the directory where all Trac files are stored is owned by www-data, AND tracd (as shown below) is running as www-data, so I'm not sure where it's getting hung up.
    The relevant configuration in /var/apache2/sites-enabled/trac.abc.com:
        ProxyPass / http://localhost:8000/abcproject
        ProxyPassReverse / http://localhost:8000/abcproject
    The relevant configuration in /var/apache2/sites-enabled/trac.def.com:
        ProxyPass / http://localhost:8000/defproject
        ProxyPassReverse / http://localhost:8000/defproject
    The command used to start tracd:
        tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject \
              -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject \
              -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects
    If I access the site at http://localhost:8000/ everything works fine, but if I try to access it via any of the proxied hosts I end up with a 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as CouchDB, so maybe this has to do with the headers sent by tracd?
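    A hedged sketch of the vhost additions that most often clear blanket 403s with Apache 2.2's mod_proxy - an explicit <Proxy> grant plus trailing slashes so paths map cleanly (shown for the abc vhost; the def vhost would mirror it):
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass        / http://localhost:8000/abcproject/
        ProxyPassReverse / http://localhost:8000/abcproject/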

    Read the article

  • Lots of strange IP addresses in my Windows Firewall logs. Concern?

    - by gmoore
    I was trying to debug a Samba sharing issue with Mac OS X, so I turned on logging for my Windows Firewall. I didn't expect a lot of connections, but the log filled up quickly. Here's a sample:
        2009-12-21 08:49:32 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56335 139 - - - - - - - - -
        2009-12-21 08:49:33 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56337 139 - - - - - - - - -
        2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.73.242 1389 53 - - - - - - - - -
        2009-12-21 08:50:02 CLOSE TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
        2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.71.226 60290 53 - - - - - - - - -
        2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
        2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
        2009-12-21 08:50:04 CLOSE TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
        2009-12-21 08:50:41 CLOSE UDP 192.168.0.3 192.168.0.4 137 50300 - - - - - - - - -
    I can pick out the local IP addresses (192.168.0.3 is my Windows XP machine, 192.168.0.4 is Mac OS X) as I debug the Samba issue. But some of the others resolve to Comcast (my ISP) and others resolve to weird hosts like van-dns.com and navisite.net. It doesn't look like any connection sent or received any bytes. I used the reference here: http://technet.microsoft.com/en-us/library/cc758040%28WS.10%29.aspx. Is this a cause for concern?

    Read the article

  • Rails app returns HTTP 422 for new ServerAlias - Internet Explorer only

    - by Snips
    I have a long-standing Rails app running on Mac OS X (apache2). The setup uses Apache virtual hosts and Passenger, and the Rails app also uses HTTP Basic Authentication. I need to migrate the app from one domain to another, with a period of overlap during which both domain names are accessible simultaneously. To do this, I've added the new domain name as a ServerAlias of the existing domain name in the Passenger virtual host config. I can now browse the Rails app using both the legacy URL and the new URL from any of Safari, Chrome, Firefox, or Internet Explorer. I can also post updates to the Rails app using Safari, Chrome, or Firefox. All good. Except that attempts to post updates from Internet Explorer result in the Rails app rejecting the update; the Rails log contains the message:
        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):
    I have other domains and aliases working just fine on this same machine. Any suggestions as to what is causing the Rails app to reject posts from IE would be appreciated.

    Read the article

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve common HTML files perfectly... I've been at this for quite a while now, and after all the googling and troubleshooting I thought it would be a good time to ask you guys. Here's what I've got:
        The php5 and libapache2-mod-php5 packages are installed
        /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory
        The /etc/php5/ directory is left untouched since the installation
    Here's the contents of /etc/apache2/mods-available/php.load:
        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so
    And /etc/apache2/mods-available/php.conf:
        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            <IfModule mod_userdir.c>
                <Directory /home/*/public_html>
                    php_admin_value engine Off
                </Directory>
            </IfModule>
        </IfModule>
    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings that cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
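    A minimal first-check sketch for a Squeeze box like this, assuming the stock Debian packages named above (it verifies the module is actually loaded rather than merely symlinked):
        apache2ctl -M | grep php5          # php5_module should appear in the list
        a2enmod php5                       # recreate the symlinks if it does not
        apache2ctl configtest && /etc/init.d/apache2 restart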

    Read the article

  • virtualized windows 2003 domain with CentOS 5.3 and poor connectivity

    - by Chris Gow
    Hi: I have a test lab set up running a virtualized Windows 2003 domain on a CentOS 5.3 (Xen) host, and I am experiencing connectivity problems with guests running on other hosts that are part of the same domain. Here's the setup: On Computer A I have CentOS 5.3 running as the host, with virtualized Windows 2003 servers for a primary domain controller, a backup domain controller and an Exchange server. The primary domain controller also acts as a WINS and DNS server. The Windows domain is on a separate subnet from my company's corporate network. Connectivity to any of the virtualized guests on Computer A is fine (remote desktop, ping, what have you). I have another host computer (Computer B) that also has a virtualized Windows 2003 server guest that is part of the same domain. However, connectivity to that guest is flaky at best. I continuously get at least 60% packet loss when I try to ping the guest, and due to that flakiness I cannot access any of the services it runs (remote desktop, web). Now here's the interesting part: it seems to affect only machines in the same domain that run on a different computer than the domain controller. On Computer B there is another Windows 2003 guest that is not part of the test domain and is on my corporate network; there are no connectivity issues with that guest. The problem does not seem to be specific to Computer B either. I created a test VM on my local computer within the test domain and it exhibits the same behaviour as the guest on Computer B. A couple of items to note:
        The host OS on both Computer A and B is the same CentOS 5.3 64-bit
        The guest OS is Windows 2003 64-bit and 32-bit (the guest on Computer B is 32-bit)
        The guest OSes are all up to date (as of Monday)
        The host OS on Computer A was upgraded from CentOS 5.2 to 5.3
    Update: Sorry I did not follow up with the comments below. Computers A and B have been moved to their own dedicated switch and the problem has gone away. I'm not sure what the underlying problem(s) were, though.

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server, configured like this:
        NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
        NIC03: connected to iSCSI VLAN #1
        NIC04: connected to iSCSI VLAN #2
        NIC05: dedicated to one virtual switch for VMs
        NIC06: dedicated to another virtual switch for VMs
    The iSCSI NICs are used, obviously, for the storage that hosts the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked, and we then tag the NIC on the VM for the appropriate VLAN. There is no clustering on this host. Now I wish to assign some iSCSI storage directly to a VM. As I see it I have two options:
        Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs
        Create two additional virtual switches on the host, assign one to NIC03 and one to NIC04, add two NICs to the VM that needs iSCSI storage, and let them share that path to the SAN with the host
    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

    Read the article

  • How do I push my initial snapshot to a subscriber server in SQL Server 2000?

    - by Kev
    I'm configuring transactional replication using the push model. The scenario is:
        The SQL Servers: SQL01 (publisher) and SQL02 (subscriber) - both running SQL 2000 SP4
        Both servers are standalone (i.e. not domain members)
        Both servers have their FQDN and NETBIOS names in their HOSTS files
    I've managed to configure SQL01 to publish my database and configured a push subscription for SQL02 using the Push New Subscription wizard, with the Distribution Agent set to update the subscription continuously. On the Push Subscription wizard's "Initialise Subscription" page I selected "Yes, initialise the schema and data" and ticked the "Start the Snapshot Agent to begin the initialisation process immediately" option. All the required services are running (SQL Agent). When I complete the wizard and browse the Replication - Publications folder I can see my publication (blue book with arrow). The publication shows the push subscription and its status is Pending. If I look in the c:\Program Files\Microsoft SQL Server\Mssql\Repldata folder I see a number of T-SQL scripts for each table, e.g. Products.bcp, Products.sch, Products.idx. What should happen now? Should my replicated database now (magically) appear on the subscription server?

    Read the article

  • Cannot Connect To VMWare Guest OS Using Either RDP or VNC

    - by Humanier
    I have a PC (Windows XP SP3) with VMware Workstation 7 installed. The VMware guest is Windows Server 2003 Enterprise Edition R2. RealVNC (4.1.3) is installed on both OSes, and both of them use Hamachi2. The host OS (WinXP) also runs the ZoneAlarm firewall, with the Hamachi network set as trusted. My goal is to allow RDP and VNC connections to the guest OS (Windows Server 2003). Both options work absolutely fine if I connect from the host OS. However, I have problems when other computers on our Hamachi network try to connect to the guest OS (Win2K3).
        RDP connections: the RDP window opens, shows black content, and after 15-20 seconds displays an error.
        RealVNC connections: users are able to connect, but all they see is a black screen inside the VNC window. At the same time their input (keystrokes or mouse moves/clicks) is visible when looking at the console window of the Win2K3 guest.
    I really appreciate any ideas on how to resolve these problems.

    Read the article

  • Can't route specific subnet through VPN in Ubuntu

    - by Disco
    I'm having issues routing traffic through a VPN. Here's my setup: I have 3 hosts, let's call them A, B and Z.
        B and Z have a VPN connection on the 10.10.10.x subnet
        A and B have a direct connection on the 10.10.12.x subnet
    I want to be able to route traffic from A to Z, like this:
        A <= 10.10.12.254 [LAN] 10.10.12.111 => B <= 10.10.10.152 [VPN] 10.10.10.10 => Z
    On host B, I have set up IP forwarding:
        net.ipv4.ip_forward = 1
    and routing on host B:
        [root@hostA: ~]# ip route
        10.10.10.10 dev ppp0 proto kernel scope link src 10.10.10.152
        10.10.12.0/24 dev eth1 proto kernel scope link src 10.10.12.111
        10.10.10.0/24 dev ppp0 scope link
        169.254.0.0/16 dev eth1 scope link
    routing on host A:
        [root@hostA: ~]# ip route
        10.10.10.0 via 10.10.12.111 dev eth1
        10.10.12.0/24 dev eth1 proto kernel scope link src 10.10.12.254
        169.254.0.0/16 dev eth1 scope link
        default via 192.168.1.1 dev eth0
    But I'm still not able to ping 10.10.10.10 from host A. Any idea? I'm pulling my hair out.
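    A hedged sketch of the piece that is usually missing in this topology: Z has no route back to 10.10.12.0/24, so either masquerade the forwarded traffic on B or add the return route on Z (addresses are taken from the diagram above):
        # on host B: hide the 10.10.12.x LAN behind B's VPN address
        iptables -t nat -A POSTROUTING -s 10.10.12.0/24 -o ppp0 -j MASQUERADE
        # or, on host Z: send replies for the LAN back through the tunnel
        ip route add 10.10.12.0/24 via 10.10.10.152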

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts into the rest of the process. I would prefer the process to begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is whether there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
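    One hedged possibility, sketched with a placeholder host name, credentials and script path: winexe, a standalone Linux tool that speaks the same protocol as psexec (no Cygwin involved), can launch PowerShell on the Windows side from the Linux end of the workflow:
        winexe -U 'MYDOMAIN/provision%secret' //winserver01 \
            'powershell.exe -NoProfile -Command "& C:\scripts\new-machine.ps1 -Name web042"'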

    Read the article

  • Apache2: Limit simultaneous requests & throttle bandwidth per IP/client?

    - by xentek
    I want to limit simultaneous requests and throttle bandwidth per IP/client on a single Apache vhost. In other words, I want to ensure that this site, which hosts large media files, doesn't get hammered by someone trying to download everything all at once (which just happened the other night). I'd like to limit the outgoing transfer speed for this site overall, as well as limit the number of connections a single IP can make to the server to a sane default (i.e. within normal browser limits for multiple requests, so page loads aren't affected too much). Bonus points if I can actually scope it to file types (i.e. leave web files alone, but apply these rules just to the media files). We're running Ubuntu 9.04 on all the servers, and have two apache/php servers being load balanced via round robin by a Squid proxy server. MySQL is running on its own box as well. We've got plenty of bandwidth to give them, so I don't really want overall caps; I just want to throttle the amount of memory/CPU it takes to serve this site. There are other sites on these servers that we don't want to apply these rules to; we just want to keep this one from hogging all the resources. Let me know if you need more info! Thanks in advance for your suggestions!
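    A hedged sketch of one way to approach this on Apache 2.2 with two third-party modules: mod_bw (libapache2-mod-bw) for throttling large files and mod_limitipconn for the per-IP connection cap. The vhost name, extensions and limits are illustrative, not a tested recommendation:
        <VirtualHost *:80>
            ServerName media.example.com
            # mod_bw: throttle only large media files, leave small web files alone
            BandWidthModule On
            ForceBandWidthModule On
            LargeFileLimit .mp3 1024 65536    # .mp3 files over ~1 MB limited to 64 KB/s each
            BandWidth all 0                   # no blanket cap on other requests
            # mod_limitipconn: cap simultaneous downloads per client IP
            <Location /media>
                MaxConnPerIP 4
            </Location>
        </VirtualHost>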

    Read the article

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behaviour of browsers when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):
        Will the browser get 2 IPs from the OS, or will it get only one?
        Which IP will the browser try first (random, or always the first one)?
        Now, let's say the browser started with the failed ip1. For how long will the browser try ip1?
        If the user hits "stop" while it waits for ip1, and then clicks refresh, which IP will the browser try?
        What will happen when it times out - will it start trying ip2 or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
        When the user clicks refresh, will any browser attempt a new DNS lookup?
        Now let's assume the browser tried the working ip2 first. For the next page request, will the browser still use ip2, or might it randomly switch IPs?
        For how long do browsers keep IPs in their cache?
        When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch and it may try either of the two?
    Of course this may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of detail. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However, as part of this application we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:
        uses a database (MySQL, SQLite, etc.) for configuration and data storage
        exposes an API for adding/removing hosts and services
    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
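    A hedged sketch of that sar fallback, keyed by EC2 instance ID so the file can later be matched against the internal job database (the s3cmd tooling and bucket are placeholders for whatever durable storage is already in use):
        IID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        sar -o /var/tmp/sa_"$IID" 60 >/dev/null 2>&1 &            # binary samples, once a minute
        # ... the actual job runs here ...
        s3cmd put /var/tmp/sa_"$IID" s3://example-perf-archive/   # ship it before termination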

    Read the article

  • Should an HA failover occur in this scenario?

    - by joeqwerty
    I'm running vSphere 5 in an HA cluster across two hosts (vsphereA and vsphereB). I have the HA cluster configured for host monitoring and datastore heartbeat monitoring with admission control disabled (hopefully I correctly understand that datastore heartbeat monitoring prevents inadvertent and unwanted HA failovers due to management network isolation). Each host has a single connection to a dedicated iSCSI network and iSCSI target (no MPIO). All vmdks for all VMs exist on the iSCSI datastore. As a test of HA I disconnected the iSCSI connection on vsphereB and was surprised to see that the running VMs on vsphereB continued to run there. The powered-off VMs were showing as inaccessible (which I expected, since they weren't running and the connection from vsphereB to the iSCSI target was severed), but the running VMs continued to run and continued to be "owned" by vsphereB. I expected to see an HA failover occur for those VMs, and expected to see them "owned" by vsphereA after the HA failover (which didn't occur). I'm at a loss to understand why an HA failover didn't occur for those VMs. Am I misunderstanding in which cases an HA failover should occur?

    Read the article
