Search Results

Search found 18422 results on 737 pages for 'down town'.


  • Can I set up two computers with the same monitors/keyboard/mouse in a modular way?

    - by CodeJunkie
    I have a desktop computer (running Windows 7) and a laptop (running OS X Mountain Lion, and maybe Ubuntu 12 eventually). When the laptop is at home, I want both the desktop and the laptop to use the same (2+) monitors, the keyboard, and the mouse (or mice, if I add a track pad).

    I know about KVM switches, but I want something more complicated. I like to use Synergy to use both computers with one keyboard and mouse at the same time. Synergy requires that the keyboard and mouse be connected to one computer (the server), which shares them with other computers (clients) over wifi. The issue is that when one computer isn't logged in, Synergy doesn't work on it. Sometimes I want my laptop to be the server (physically connected to the keyboard and mouse), and sometimes I want my desktop to be the server. This means that I need the keyboard/mouse/other USB devices to be able to switch computers without me playing musical plugs.

    To complicate things further, I don't always want the same desktop setup in terms of monitors. Sometimes I want the desktop to have both monitors. Other times I want the laptop to control both monitors. Sometimes I want the desktop to control one monitor and the laptop to control the other. In any case, the keyboard and mouse need to be able to be physically connected to either computer without lots of fussing with plugs. This breaks down to at least this set of possible combinations:

    - Desktop controls both monitors, and has a physical connection to keyboard and mouse.
    - Laptop controls both monitors, and has a physical connection to keyboard and mouse.
    - Desktop and laptop each control a monitor, but the desktop has a physical connection to the keyboard and mouse (which it shares with the laptop via wifi).
    - Desktop and laptop each control a monitor, but the laptop has a physical connection to the keyboard and mouse (which it shares with the desktop via wifi).
    - Some USB devices connected via a USB hub need to be able to switch physical connection between computers, ideally without the keyboard and mouse switching computer connection.

    There may be other combinations, but these are the main ones at the moment. Basically, I need a KVM switch which allows me to switch individual monitors/keyboard/mouse/USB hub between computers independently of each other, or a better solution. How can I set two computers up with the same monitors/mice/keyboard/USB hub without having to switch everything to one computer or the other all at the same time?
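    For reference, the two-way sharing described above is what Synergy expresses in its plain-text configuration. A minimal synergy.conf sketch for one such layout (the screen names "desktop" and "laptop" are placeholders, and whichever machine currently holds the keyboard and mouse would run the server with its own copy):

        section: screens
            desktop:
            laptop:
        end

        section: links
            desktop:
                right = laptop
            laptop:
                left = desktop
        end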

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU running on it is not restarted on the second node.

    The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes: one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        crm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w',
                 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try, as an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So I know it works in that respect. Any ideas? Thanks!
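    Two things stand out in the configuration above: the Xen-Util primitive has start and stop operations but no monitor operation, and neither STONITH resource is pinned away from the node it is supposed to fence. A hedged sketch of the additions commonly used in such setups, in crm configure syntax (resource and node names are taken from the post; this is a typical pattern, not a confirmed fix for this cluster):

        primitive Xen-Util ocf:heartbeat:Xen \
            params xmfile="/etc/xen/vm/xen-util" \
            op start interval="0" timeout="60" \
            op stop interval="0" timeout="120" \
            op monitor interval="10s" timeout="30s"
        # keep each fencing device off the node it fences
        location l-st1 my-stonith -inf: dhcp-166
        location l-st2 my-stonith2 -inf: stage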

    Read the article

  • iis not listening on port 80

    - by user57467
    We have Server 2003 and ISA 2004 with IIS 6 on the same machine. Everything worked well until yesterday, when we tried to make a new rule in ISA... but this is a long story. Unfortunately, something happened to our intranet site. Our site is on port 80, but if we try to open it on the client machines we get an error page (the error page comes from our provider): 403 Forbidden; "Remote host not listening, the remote host is not prepared to accept the connection request." On the server I can open the site on port 80. If I change the port number in IIS and try to open the site on that port, it works well. I tried shutting down IIS and starting Apache with a simple page: on the server it works well, but on the clients the problem is the same, so I think this is not an IIS-related problem. In ISA we have a web publishing rule, with port 80, no auth. I'm pulling out my hair, please help.

    After uninstalling and reinstalling ISA, the sites worked well, until I configured the upstream proxy in the Configuration/Network/Web Chaining menu, and then everything went wrong in the same way again. So something is wrong with the web proxy / upstream function (all my HTTP requests are forwarded to my upstream proxy). That was set up a long time ago... but a few days ago something broke. I think maybe our ISP spoiled something; tomorrow I'll try to figure it out.

    But one more thing: I made a new rule before the default rule in the Configuration/Network/Web Chaining menu, so that every request to our server is handled locally and not redirected, while everything else is redirected to the upstream server. So if the request goes to our server (our site) it is handled locally, and if not, it goes to the upstream proxy, and voilà... I thought. But unfortunately: our website works well, yet internet access is extremely slow. :( Maybe this can't be done with a single adapter? Do I have to handle all requests locally, or do I have to send them all upstream? Can't I filter it?

    Read the article

  • mailman web UI on localhost with apache2

    - by Thufir
    I'm interested only in running mailman on localhost and would like access to the web interface, but am getting a 404:

        root@dur:~# ln -s /etc/mailman/apache.conf /etc/apache2/sites-enabled/mailman -v
        `/etc/apache2/sites-enabled/mailman' -> `/etc/mailman/apache.conf'
        root@dur:~# service apache2 restart
         * Restarting web server apache2 ... waiting    [ OK ]
        root@dur:~# curl http://localhost/mailman/admin/
        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>404 Not Found</title>
        </head><body>
        <h1>Not Found</h1>
        <p>The requested URL /mailman/admin/ was not found on this server.</p>
        <hr>
        <address>Apache/2.2.22 (Ubuntu) Server at localhost Port 80</address>
        </body></html>
        root@dur:~# tail /var/log/apache2/error.log
        [Mon Aug 27 13:08:02 2012] [error] [client 127.0.0.1] File does not exist: /var/www/mailman
        [Mon Aug 27 13:10:16 2012] [error] [client 127.0.0.1] File does not exist: /var/www/mailman
        [Mon Aug 27 13:29:27 2012] [notice] caught SIGTERM, shutting down
        [Mon Aug 27 13:29:27 2012] [error] python_init: Python version mismatch, expected '2.7.2+', found '2.7.3'.
        [Mon Aug 27 13:29:27 2012] [error] python_init: Python executable found '/usr/bin/python'.
        [Mon Aug 27 13:29:27 2012] [error] python_init: Python path being used '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload'.
        [Mon Aug 27 13:29:27 2012] [notice] mod_python: Creating 8 session mutexes based on 6 max processes and 25 max threads.
        [Mon Aug 27 13:29:27 2012] [notice] mod_python: using mutex_directory /tmp
        [Mon Aug 27 13:29:28 2012] [notice] Apache/2.2.22 (Ubuntu) mod_python/3.3.1 Python/2.7.3 mod_ruby/1.2.6 Ruby/1.8.7(2011-06-30) configured -- resuming normal operations
        [Mon Aug 27 13:29:58 2012] [error] [client 127.0.0.1] File does not exist: /var/www/mailman
        root@dur:~#

    Although I did have to tinker a bit with mailman to get that recognized. While I don't need to set up web access using MM list passwords, I would like to set up web admin access to add/remove mailing lists. How do I configure apache or mailman so that I can navigate to http://localhost/mailman/admin/?

    As per installing mailman, I set up aliases like so:

        root@dur:~# cat /etc/aliases
        usenet: root
        ## mailman mailing list
        mailman:             "|/var/lib/mailman/mail/mailman post mailman"
        mailman-admin:       "|/var/lib/mailman/mail/mailman admin mailman"
        mailman-bounces:     "|/var/lib/mailman/mail/mailman bounces mailman"
        mailman-confirm:     "|/var/lib/mailman/mail/mailman confirm mailman"
        mailman-join:        "|/var/lib/mailman/mail/mailman join mailman"
        mailman-leave:       "|/var/lib/mailman/mail/mailman leave mailman"
        mailman-owner:       "|/var/lib/mailman/mail/mailman owner mailman"
        mailman-request:     "|/var/lib/mailman/mail/mailman request mailman"
        mailman-subscribe:   "|/var/lib/mailman/mail/mailman subscribe mailman"
        mailman-unsubscribe: "|/var/lib/mailman/mail/mailman unsubscribe mailman"
        root@dur:~#

    Perhaps these can be used somehow?
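    The "File does not exist: /var/www/mailman" log entries suggest Apache is resolving /mailman/ as a plain path under the default DocumentRoot rather than as mailman's CGI alias. For comparison, a minimal sketch of the kind of stanza Debian/Ubuntu's /etc/mailman/apache.conf is expected to provide (the paths are the usual package defaults and are assumptions; they may differ on a given install):

        ScriptAlias /mailman/ /usr/lib/cgi-bin/mailman/
        <Directory /usr/lib/cgi-bin/mailman/>
            AllowOverride None
            Options ExecCGI
            Order allow,deny
            Allow from all
        </Directory>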

    Read the article

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (server edition) 9.10: one for the public internet, the other for a private LAN.

    During the install, I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these settings for the last few days, and everything's been fine.

    Today, I decided it was time to set up the local adapter. I edited /etc/network/interfaces to add the details for eth1 (see below), and restarted networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using its external IP address fails, but I can ping its local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via its external IP address again (but obviously not its internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working.

    Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked):

        rob@rhea:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary (public) network interface
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask xxx.xxx.xxx.xxx
            network xxx.xxx.xxx.xxx
            broadcast xxx.xxx.xxx.xxx
            gateway xxx.xxx.xxx.xxx

        # The secondary (private) network interface
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255
            gateway 192.168.99.254

    I then do this:

        rob@rhea:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...    [ OK ]
        rob@rhea:~$ sudo ifup eth0
        ifup: interface eth0 already configured
        rob@rhea:~$ sudo ifup eth1
        ifup: interface eth1 already configured

    Then, from another machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    Back on the Ubuntu server in question:

        rob@rhea:~$ sudo ifdown eth1

    ...and again on the other machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So... what am I doing wrong?
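    One detail worth noting in the interfaces file above: both stanzas declare a gateway, which gives the kernel two default routes, and replies to external traffic can then leave via the wrong interface. A minimal sketch of the usual shape of the fix, assuming the private LAN is directly attached so 192.168.99.254 is not actually needed as a default route (an assumption, since the LAN topology isn't given):

        # The secondary (private) network interface: no "gateway" line,
        # so eth0's gateway remains the only default route
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255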

    Read the article

  • How to transition to Comcast with static IP address

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions. I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall.

    First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone but I would like to make sure.

    Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup, the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box?

    My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.)

    Final question: any advice for me? Anything you think might be useful, helpful, or educational. Thanks.
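    For reference, the staged MX cut-over described in the plan is expressed with two MX records of different preference (lower number = tried first). A sketch in zone-file form, with hypothetical host names standing in for the two links:

        ; stage 1: Comcast published as backup
        example.com.    IN  MX  10  mail-dsl.example.com.
        example.com.    IN  MX  20  mail-comcast.example.com.

        ; stage 2: preferences swapped to make Comcast primary
        example.com.    IN  MX  10  mail-comcast.example.com.
        example.com.    IN  MX  20  mail-dsl.example.com.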

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a CGI from apache2. Ruby CGIs that don't try to access the ODBC data source work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these CGI scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail on any aspect if that will help.

    Details:

    - Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
    - Apache 2.2.13
    - ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
    - ruby-odbc 0.9997, dbd-odbc (0.2.5), dbi (0.4.3), mod_ruby 1.3.0
    - Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23
    - ODBC driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
    - ODBC datasource: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'

        cgi = CGI.new("html3")
        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'
        cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
        gamma%

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
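    Since the same scripts work from a shell, one environment difference that is sometimes worth ruling out is the ODBC configuration the driver manager sees: an interactive shell inherits per-user ODBC settings, while the Apache child process does not. A heavily hedged sketch of passing those locations explicitly to CGIs via mod_env (the variable names ODBCINI/ODBCINSTINI are driver-manager conventions, and both the paths and the directory are assumptions, not taken from the post):

        <Directory "/Library/WebServer/CGI-Executables">
            SetEnv ODBCINI     /Library/ODBC/odbc.ini
            SetEnv ODBCINSTINI /Library/ODBC/odbcinst.ini
        </Directory>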

    Read the article

  • Real server, Multiple IP Addresses, HyperV Virtual Server, How to partition IPs across real and Virtual NICs

    - by Steven_W
    This is a slightly difficult problem to explain without some basic background information; I'll try to refine the question later as necessary.

    Originally, I had a single hosted server (Win 2008 R2) with the following range of 8 IP addresses:

        Single NIC
        IP: x.x.128.72 -> x.x.128.79
        Subnet: x.x.255.192
        GW: x.x.128.65

    After installing Hyper-V and setting up a single virtual server on the same box, I then wanted to assign one of the IP addresses to the virtual server, leaving everything else running normally.

    Firstly, I tried using the "External" network, but (even after setting IPs on the "Virtual Adapter" similar to Here) I struggled to get networking running at all. I needed to keep the server running (otherwise I would have spent more time pursuing this approach).

    Q1: Was this a sensible thing to do? Should I have carried on down this route?

    I then decided to try a different approach: set the Hyper-V network to "Internal" (visible to management OS):

        Physical NIC
        IP: x.x.128.72 -> x.x.128.75
        Subnet: x.x.255.192
        GW: x.x.128.65

        Virtual NIC (host)
        IP: x.x.128.78
        Subnet: x.x.255.252
        GW: x.x.128.72  ... (the same as the IP of the physical NIC)

        Virtual OS-NIC (guest)
        IP: x.x.128.77
        Subnet: x.x.255.252
        GW: x.x.128.78  ... (the same as the IP of the host's virtual NIC)

    Surprisingly enough, this approach actually worked, and I was able to connect between all of the following:

    - Internet to/from the physical NIC (x.x.128.72)
    - physical NIC (x.x.128.72) to virtual OS-NIC (x.x.128.77), e.g. testing via ping + FTP
    - Internet to/from the virtual OS-NIC (x.x.128.77)

    The problem I have is that this approach seems to last only a short while (a few hours). After this time, the virtual OS loses the ability to connect to/from the internet (but I can still connect from the host OS to the virtual OS, and from the host OS to the internet). I have re-tested this a couple of times with the same results: I leave the server on for a few hours (e.g. overnight), and when I come back in the morning the virtual OS has lost the ability to route to the internet.

    I'm not quite sure what to look at next (or whether I'm going about this completely the wrong way). One possibly relevant item is that the host OS is also running RRAS (Routing and Remote Access), but this is only to run a simple VPN.

    Q2: What should I be looking at next? (Any good references / recommendations of what to try?) I would appreciate any thoughts or comments (even if you tell me I'm going about this the wrong way).
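    For the "Internal" switch approach above to work at all, the host OS has to forward packets between the physical NIC and the internal virtual network, so when connectivity decays after a few hours, the host's routing function (here provided by RRAS) is one place to look. As a point of reference only: Windows' underlying IP-forwarding switch lives in the registry, and RRAS normally manages it itself, so poking it by hand is an assumption-laden workaround rather than a confirmed fix:

        rem check/enable kernel IP forwarding on the Hyper-V host (reboot required)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f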

    Read the article

  • Apache : Illegal override option FileInfo

    - by Kave
    I have installed a new Ubuntu 12.04 server and set up Apache and MySQL. I am just trying to replicate what I have on my current server and came across one single problem: FileInfo.

    Within these two files:

        /etc/apache2/sites-available/default-ssl
        /etc/apache2/sites-available/default

    I need to add some overrides for the apache server. Original:

        <Directory /var/www/MySite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    New:

        <Directory /var/www/MySite>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride FileInfo, Indexes
            Order allow,deny
            allow from all
        </Directory>

    I have installed the following mods for Apache:

        sudo apt-get install lamp-server^ -y
        sudo apt-get install apache2.2-common apache2-utils openssl openssl-blacklist openssl-blacklist-extra -y
        sudo apt-get install curl libcurl3 libcurl3-dev php5-curl -y
        sudo apt-get install php5-tidy -y
        sudo apt-get install php5-gd -y
        sudo apt-get install php-apc -y
        sudo apt-get install memcached -y
        sudo apt-get install php5-memcache -y
        sudo a2enmod ssl
        sudo a2enmod rewrite
        sudo a2enmod headers
        sudo a2enmod expires
        sudo a2enmod php5

    So when I do a restart with AllowOverride None, it's all OK:

        sudo /etc/init.d/apache2 restart
         * Restarting web server apache2 ... waiting    [OK]

    But as soon as I change the AllowOverride to FileInfo, Indexes:

        Syntax error on line 11 of /etc/apache2/sites-enabled/000-default:
        Illegal override option FileInfo,
        Action 'configtest' failed.
        The Apache error log may have more information.
        ...fail!

    I can't see anything unusual in the error.log:

        [Wed Jun 06 08:23:51 2012] [notice] caught SIGTERM, shutting down
        [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!?
        [Wed Jun 06 08:23:52 2012] [warn] RSA server certificate CommonName (CN) `mySite.com' does NOT match server name!?
        [Wed Jun 06 08:23:52 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.1 with Suhosin-Patch mod_ssl/2.2.22 OpenSSL/1.0.1 configured -- resuming normal operations

    I get that warning because it's a test server; nonetheless I get the same warning with AllowOverride None, and yet it restarts the Apache server correctly. Therefore the warning should be harmless. Have I missed something? Thanks,
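    For comparison: AllowOverride takes a space-separated list of override types, so the comma makes Apache read "FileInfo," as a single (illegal) option name, which matches the configtest error quoted above. A corrected directive would look like:

        AllowOverride FileInfo Indexes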

    Read the article

  • Forbidden access on Apache in Mac Lion

    - by Luis Berrocal
    I'm trying to configure Apache to work with Symfony on my MacBook Pro. I have installed Lion OS X. I uncommented the line Include /private/etc/apache2/extra/httpd-vhosts.conf in /etc/apache2/httpd.conf, then configured Apache by editing /private/etc/apache2/extra/httpd-vhosts.conf and adding the following:

        NameVirtualHost *:80
        <VirtualHost *.80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot "/Users/luiscberrocal/Documents/dev/lion_test/web"
            ServerName lion.localhost
            <Directory "/Users/luiscberrocal/Documents/dev/lion_test/web">
                Options Indexes FollowSymlinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I also added the following to /private/etc/hosts:

        127.0.0.1   lion.localhost

    Now when I access http://localhost/test.php I get the following message:

        Forbidden
        You don't have permission to access /test.php on this server.
        Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch Server at localhost Port 80

    I already tried:

        chmod 777 test.php
        chmod +x test.php

    I get the same message if I try to access http://lion.localhost/. I opened /var/log/apache2/error_log and this is what I found relevant:

        [Sat Dec 31 09:37:49 2011] [notice] Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch configured -- resuming normal operations
        [Sat Dec 31 09:37:53 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:37:55 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:38:13 2011] [notice] caught SIGTERM, shutting down
        [Sat Dec 31 09:38:13 2011] [error] (EAI 8)nodename nor servname provided, or not known: Could not resolve host name *.80 -- ignoring!
        httpd: Could not reliably determine the server's fully qualified domain name, using Luis-Berrocals-MacBook-Pro.local for ServerName
        [Sat Dec 31 09:38:14 2011] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Sat Dec 31 09:38:14 2011] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Sat Dec 31 09:38:14 2011] [notice] Digest: generating secret for digest authentication ...
        [Sat Dec 31 09:38:14 2011] [notice] Digest: done
        [Sat Dec 31 09:38:14 2011] [notice] Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch configured -- resuming normal operations
        [Sat Dec 31 09:38:18 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:38:19 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 10:18:09 2011] [error] [client 127.0.0.1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 10:18:15 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied

    I can't figure out what I'm doing wrong.
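    Two details in the paste above may be worth a second look. The error log's "Could not resolve host name *.80 -- ignoring!" lines up with the first vhost being opened as <VirtualHost *.80> (dot) instead of <VirtualHost *:80> (colon). And (13)Permission denied is the classic symptom of the Apache user lacking execute (traverse) permission on a parent directory, which chmod on test.php itself can't fix. A hedged sketch of both checks (the path components are assumed from the DocumentRoot in the post):

        # corrected vhost opening tag
        <VirtualHost *:80>

        # give the _www user traverse rights down to the web root
        chmod o+x /Users/luiscberrocal /Users/luiscberrocal/Documents \
                  /Users/luiscberrocal/Documents/dev /Users/luiscberrocal/Documents/dev/lion_test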

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one of them was the DHCP server (which I realized when somebody contacted me saying they couldn't connect. Whups). So because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience.

    I edited /etc/network/interfaces and /etc/hosts to reflect my changes, then ran:

        daniel@FOOBAR:~$ sudo /etc/init.d/networking stop
         * deconfiguring network interfaces...

    So yay. Then I try to start, and it gives me the mumbo jumbo about using services (why didn't it do that for the stop?). So instead I run:

        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit:

        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking stop
        stop: Unknown instance:
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "services" is, right?) isn't working with stop.

        daniel@FOOBAR:~$ cat /etc/hosts
        127.0.0.1       localhost
        127.0.1.1       FOOBAR
        198.3.9.2       FOOBAR    #Added entry July 19 2011

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        daniel@FOOBAR:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp
        #    hostname FOOBAR
        auto eth0
        iface eth0 inet static
            address 198.3.9.2
            netmask 255.255.255.0
            network 198.3.9.0
            broadcast 198.3.9.255
            gateway 198.3.9.15

    No, I didn't save backups; it was just a minor change, so I just commented out the old DHCP setting.

    Edit: I set everything back to the original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
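    One possible reading of the output above, offered as a guess rather than a confirmed diagnosis: on Ubuntu 11.04, "networking" is an upstart task rather than a long-running service, so it fires once and returns to the "stop/waiting" state even when the interfaces came up fine, which would explain why pings succeed while the status looks stopped. Individual interfaces are usually cycled with ifdown/ifup instead:

        # take eth0 down and bring it back up with the settings in /etc/network/interfaces
        sudo ifdown eth0 && sudo ifup eth0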

    Read the article

  • Can't connect to DeploymentShare$ from PC attempting to MDT, but can other PCs on the network

    - by Moman10
    I am in the process of setting up MDT and have run across a problem. MDT is installed on a Windows 2012 server, MDT version 6.2.5019.0, using WDS as well. Active Directory domain; the server is up to date and on the network.

    I boot up the PC, it gets an address from DHCP, pulls down the LiteTouchPE_x64.wim image and goes into the MS Solution Accelerators screen. The Processing Bootstrap Settings box comes up and processes for a couple of seconds, then goes away; it sits there for another minute or so and then gives the error:

        A connection to the deployment share (\\Acme-MDT\DeploymentShare$) could not be made.
        Can not reach the DeployRoot. Possible Cause: Network Routing error or Network Configuration Error.

    I can then retry or cancel. I have seen this error online, but so far nothing that helps fix it; it seems to be an issue with the FQDN. I verified that I am getting an IP address and that I can successfully ping the MDT server if I use the FQDN, but not by its A record of Acme-MDT. I tried manually mapping the network share using net use, and it works if I use the FQDN, but it fails with error code 53, "Network path not found", if I just use the A record of Acme-MDT. Here is the net use command I'm using:

        net use * \\Acme-MDT\DeploymentShare$ /u:Domain\Administrator

    It gives the error "System error 53, Network path not found" (and doesn't prompt for a password), but if I use the FQDN of \\Acme-MDT.domain.com\DeploymentShare$ it works fine to map the drive.

    I guess the problem is that when it tries to load the image, it is trying to start from \\Acme-MDT\DeploymentShare$, and I need it to start from \\Acme-MDT.domain.com\DeploymentShare$, but I'm not sure how to get it to do that. I've put the fully qualified path in CustomSettings.ini and Bootstrap.ini, updated the deployment share, regenerated the boot image and replaced the boot WIM in WDS. Or, if someone has an idea as to why it's acting this way and knows a way around it, that works too. The end result is what matters! :)

    I did verify in DNS that Acme-MDT is there, with the proper IP, and I can successfully use the net use command to map this drive from a couple of other computers that are already on the network. I am assuming it has something to do with that computer not already being part of the domain, but I'm honestly at a loss as to how to fix it. Any ideas are appreciated; thanks in advance for your help!

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text:

        hardware malfunction
        call your hardware vendor for support
        *the system has halted*

    The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... but I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it.

    A bit about my hardware: I have a dual-core Intel CPU and an Asus motherboard. The video card is by nVidia, and connects via PCIe. My hard drives are in pairs, and connect via SATA to a RAID controller on the motherboard. They are configured in a RAID 0 configuration.

    What I've tried so far:

    - There is nothing in the Windows Event Log.
    - WhoCrashed was unable to find any crash records.
    - ScanDisk runs to completion (it launches prior to Windows load) and reports no errors.
    - MemTest reports no errors (to 200% coverage).
    - System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to 80 degrees Celsius.
    - I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurs.

    However, this has allowed me to rule out a few components:

    - It is not the video card, because the problem still occurred after replacing the video card with another one I had on hand.
    - It is not the hard drive or anything software-related, because the problem occurred after a fresh installation of Windows on a replacement hard drive.
    - It is not the hard drive cables, because I replaced those with new ones and still had the problem.
    - It is not the power supply, because the problem still occurred after replacing the power supply with another one I had on hand.
    - It is probably not the memory, because I've tried three different memory modules in three different memory slots and was still able to replicate the issue.

    Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I can find more details...

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer, and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users', tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'vol_id --uuid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc defaults 0 0
        # / was on /dev/mapper/minicc-root during installation
        UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 /      ext3  relatime,errors=remount-ro 0 1
        # /boot was on /dev/sda5 during installation
        UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot  ext2  relatime 0 2
        # swap was on /dev/mapper/minicc-swap_1 during installation
        UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none   swap  sw 0 0
        /dev/scd0  /media/cdrom0   udf,iso9660  user,noauto,exec,utf8 0 0
        /dev/scd0  /media/floppy0  auto         rw,user,noauto,exec,utf8 0 0
        /dev/sdb1  /mnt/sdcard     auto         auto,user,rw,exec 0 0

    My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone in the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is:

        $ mount /dev/sdb1 /mnt/sdcard

    /bin/mount is owned by root, is in the root group, and has 4755 permissions. /bin/umount is owned by root, is in the root group, and has 4755 permissions. /mnt/sdcard is owned by me, is in one of my groups, and has 0755 permissions.

    My mount command works fine if I use sudo, but I need to be able to do this without sudo (I need to be able to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much... just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks!

    -Travis
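    One hedged observation on the mount invocation above: with a 'user' entry in fstab, an unprivileged mount is normally requested with a single argument (just the device or just the mount point) so mount can match it against the fstab line; supplying both arguments is treated as a full specification, which stays root-only. A sketch, assuming the fstab entry shown in the post:

        # either of these should exercise the fstab 'user' entry
        $ mount /mnt/sdcard
        $ mount /dev/sdb1

    (Note too that a script run through PHP's shell_exec executes as the web server's user, so that is the user who needs to succeed here.)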

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of Stack Exchange is the most adequate for my question, but it would seem to me that people sharing this kind of concern would converge either here, or possibly on a more unix-specific sub-site. Either way, here goes.

    Background (feel free to skip to The Question, below; this should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise):

    My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and clients out there are very good. I still use, and will always continue to use, IRC for most of my chat needs. But then there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work too well on intermittent connections, such as a mobile device, where you can often walk around places with low signal.

    I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to its knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, I got called many weird things for not using MSN, but folks eventually got over it.

    Next, Google Talk came along, and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I fancied. With the advent of smartphones, I just got myself a gtalk client on the phone, and have had a really decent, integrated, mostly-universal IM solution. Over the last few months, though, all Google services have been feeling flaky: IMs will often arrive anywhere between twenty minutes and one hour after being sent, clients will randomly disconnect, client priorities seem to work only sometimes, and sometimes just a random device of those connected will get an IM. I think the time has come to look for greener grass.

    The Question: It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • Breaking the Outlook 2010 e-mail blue quote line for inline responses

    - by Jez
    This has to be the most infuriating regression from Outlook 2003 to 2007. It also exists, the same, in Outlook 2010, as far as I can tell.

    When you reply to an HTML e-mail message in Outlook, the quoted text has a blue line down the side, and is usually at the bottom of the message. In Outlook 2003, when replying to HTML-formatted messages, you used to be able to reply inline quite easily, by getting to the point in the quoted message you wanted to reply to and pressing the 'decrease indent' button.

    Since Outlook 2007 (and 2010), they replaced the e-mail editor with Microsoft Word. This means the blue line is implemented in a different way; it uses a blue left border. This makes it tougher to break the line up. After much ado, I found a couple of pages that said you could remove all formatting by pressing Ctrl-Q, which would remove the blue line next to the cursor and allow inline replies. OK, not too bad on the face of it. I can live with that.

    But here's the kick in the teeth: try sending that mail. I'll send it to myself. What do I receive? Outlook 2010 reinstated the blue line, where I had removed it, upon my sending the e-mail! For God's sake! The two pages I linked to above don't seem to address Outlook's reinstating of the blue line upon sending.

    So, does anyone know how you can actually reply inline in Outlook 2010 (or Outlook 2007) e-mail without the blue line being reinstated? Before anyone says: I do not want to convert the message to plain text, and I do not want to just indent replies and have to manually build the blue line myself. I want something like the Outlook 2003 behaviour: I reply, Outlook creates the blue line, and I can break it up with inline replies, send it, and my inline formatting stays.

    My hopes aren't high (Microsoft seem to have gone to some trouble to actively prevent inline replies here, for some reason) but I'd appreciate anyone's insights. Cheers!

    Read the article

  • Understanding vhosts settings

    - by Matt
    OK, so I have a server and I want to put a few applications on it, and I am having vhost configuration problems. Here is what I have, and I'd like some direction on what I am doing wrong.

    The first file is /etc/apache2/ports.conf:

        NameVirtualHost 184.106.111.142:80
        Listen 80

        <IfModule mod_ssl.c>
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Then I have /etc/apache2/sites-available/somesite.com:

        <VirtualHost 184.106.111.142:80>
            ServerAdmin [email protected]
            ServerName somesite.com
            ServerAlias www.somesite.com
            DocumentRoot /srv/www/somesite.com/
            ErrorLog /srv/www/somesite.com/logs/error.log
            CustomLog /srv/www/somesite.com/logs/access.log combined
            <Directory "/srv/www/somesite.com/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    When I visit somesite.com everything works great. But when I add another vhost, let's say named anothersite.com, things break. So I have /etc/apache2/sites-available/anothersite.com:

        <VirtualHost 184.106.111.142:80>
            ServerAdmin [email protected]
            ServerName anothersite.com
            ServerAlias www.anothersite.com
            DocumentRoot /srv/www/anothersite.com/
            ErrorLog /srv/www/anothersite.com/logs/error.log
            CustomLog /srv/www/anothersite.com/logs/access.log combined
            <Directory "/srv/www/anothersite.com/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Then I run the following commands:

        >> sudo a2ensite anothersite.com
        Enabling site anothersite.com.
        Run '/etc/init.d/apache2 reload' to activate new configuration!
        >> /etc/init.d/apache2 reload
         * Reloading web server config apache2 ...done.

    But when I visit anothersite.com or somesite.com, they are both down. What is going on with the vhosts? Could it be the NameVirtualHost declaration with the IP or something... maybe my understanding of vhost settings is not clear. What I don't understand is why both sites now all of a sudden don't work at all. I would highly appreciate the clarity. By the way, anothersite.com and somesite.com are the only things I changed to make this more readable.
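    When enabling a new vhost takes every site down, two quick hedged checks are often useful: dump Apache's parsed vhost map, and make sure the new site's log directory exists, since Apache 2.2 refuses to start if it cannot open an ErrorLog path (the directory name below is taken from the config above, and the missing-directory theory is a guess, not a confirmed diagnosis):

        # show how Apache parsed the enabled vhosts
        sudo apache2ctl -S

        # create the log directory the new vhost expects
        sudo mkdir -p /srv/www/anothersite.com/logs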

    Read the article

  • Trac problem: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    - by Janosch
    This problem has already been described multiple times on different mailing lists, but no solution has yet been published. My original setup is as follows (but in the meantime I have a simpler one on Windows 7):

    - Ubuntu server with Apache 2.2 and Python 2.7
    - virtual python environment created with virtualenv
    - babel, genshi and trac installed, in this order, using pip in the virtual environment

    Trac seems to run fine with tracd, but when visiting it through apache, I get the following error on an official trac error page:

        AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    The stacktrace looks like this:

        Traceback (most recent call last):
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 473, in _dispatch_request
            dispatcher.dispatch(req)
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 154, in dispatch
            chosen_handler = self.default_handler
          File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/config.py", line 691, in __get__
            self.section, self.name))
        AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule". Please update the option trac.default_handler in trac.ini.

    I have already tried a lot to get down to the root of the problem. To me it looks as if all native trac components refuse to load. When one explicitly imports these components in the wsgi handler, some of them start to work somehow. Since I suspected the virtual environment, I dropped it, manually copied all dependencies (babel, genshi, trac, ...) into one directory, and added this directory to sys.path in the wsgi handler. I get exactly the same error. Since this setup is now independent of the environment, one can easily try it out on any other machine (Windows or Linux) running Apache 2 and Python 2.7. On my Windows 7 machine, I got exactly the same problem. I zipped together the whole bundle; one can download it from http://www.xterity.de/tmp/trac-installation.zip .

    In the Apache configuration (Windows 7 machine) I use the following settings:

        Alias /trac/chrome/common "D:/workspace/trac-installation/trac-resources/common"
        Alias /trac/chrome/site "D:/workspace/trac-installation/trac-resources/site"
        WSGIScriptAlias /trac "D:/workspace/trac-installation/apache/handler.wsgi"

        <Directory "D:/workspace/trac-installation/trac-resources">
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "D:/workspace/trac-installation">
            Order allow,deny
            Allow from all
        </Directory>
        <Location "/trac">
            Order allow,deny
            Allow from all
        </Location>

    And my handler.wsgi looks like this:

        import os
        import sys

        sys.path.append('D:/workspace/trac-installation/dependencies/')
        os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004'
        os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs'

        import trac.web.main
        application = trac.web.main.dispatch_request

    Has anybody got an idea what could be the problem, or how to find out where it comes from?
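    One detail in the paste above that may or may not matter: the traceback references .../lib/python2.5/site-packages and a -py2.5 egg, while the setup is described as Python 2.7. That is the sort of interpreter/egg mismatch that can stop Trac's components (loaded via setuptools entry points) from registering. If the virtualenv route is revisited, a hedged sketch of pointing the WSGI handler at the environment's packages with site.addsitedir (the path is an assumption based on the trace, and the idea is a diagnostic lead rather than a published fix):

        # in handler.wsgi, before importing trac
        import site
        site.addsitedir('/srv/trac/python-environment/lib/python2.7/site-packages')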

    Read the article

  • MySQL daemon keeps terminating unexpectedly

    - by Yehia A.Salam
    The MySQL daemon on my CentOS server keeps crashing. I got the logs from /var/log/mysqld, but I am still not sure how to fix this:

        121114 16:22:56 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        121114 21:55:11 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121114 21:55:11 [Note] Plugin 'FEDERATED' is disabled.
        121114 21:55:11 InnoDB: The InnoDB memory heap is disabled
        121114 21:55:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121114 21:55:11 InnoDB: Compressed tables use zlib 1.2.3
        121114 21:55:11 InnoDB: Using Linux native AIO
        121114 21:55:11 InnoDB: Initializing buffer pool, size = 128.0M
        121114 21:55:11 InnoDB: Completed initialization of buffer pool
        121114 21:55:11 InnoDB: highest supported file format is Barracuda.
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121114 21:55:11 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121114 21:55:12 InnoDB: Waiting for the background threads to start
        121114 21:55:13 InnoDB: 1.1.6 started; log sequence number 77177262
        121114 21:55:13 [Note] Event Scheduler: Loaded 0 events
        121114 21:55:13 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.5.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL) by Remi
        121115 00:19:44 mysqld_safe Number of processes running now: 0
        121115 00:19:44 mysqld_safe mysqld restarted
        121115  0:19:47 [Note] Plugin 'FEDERATED' is disabled.
        121115  0:19:47 InnoDB: The InnoDB memory heap is disabled
        121115  0:19:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121115  0:19:47 InnoDB: Compressed tables use zlib 1.2.3
        121115  0:19:47 InnoDB: Using Linux native AIO
        121115  0:19:47 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121115  0:19:47 InnoDB: Completed initialization of buffer pool
        121115  0:19:47 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121115  0:19:47 [ERROR] Plugin 'InnoDB' init function returned error.
        121115  0:19:47 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121115  0:19:47 [ERROR] Unknown/unsupported storage engine: InnoDB
        121115  0:19:47 [ERROR] Aborting

    Edit #1:

                     total       used       free     shared    buffers     cached
        Mem:           496        370        126          0         24        110
        -/+ buffers/cache:        234        261
        Swap:         1023          9       1014

    Edit #2: Also, the largest table in my MySQL instance is 20MB, so the memory used should be pretty moderate.

        SELECT CONCAT(table_schema, '.', table_name),
               CONCAT(ROUND(table_rows / 1000000, 2), 'M') rows,
               CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G') DATA,
               CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G') idx,
               CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
               ROUND(index_length / data_length, 2) idxfrac
        FROM   information_schema.TABLES
        ORDER  BY data_length + index_length DESC
        LIMIT  10;
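    The failing line in the log is the 128M InnoDB buffer pool allocation: mmap of ~137MB fails with errno 12 (out of memory) on a box showing ~496MB of RAM with most of it in use. So one low-risk experiment is simply to shrink the pool. A minimal sketch, assuming the stock /etc/my.cnf location on CentOS (the exact size is a guess to be tuned, not a recommendation from the post):

        [mysqld]
        # try a buffer pool that fits comfortably in available memory
        innodb_buffer_pool_size = 64M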

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error; it just fails to acknowledge the shutdown request). Recovery is a case of power-cycling the server(s) from the Hyper-V console.

    These two servers don't serve a large number of users (one serves no more than 6 users, and the other serves around 20 users); they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003, both of which were physical installs, however).

    Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:

        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db:
        A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
        at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s).
        This problem is likely due to faulty hardware. Please contact your hardware vendor
        for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log.

    Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises.

    I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?

    Read the article

  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days. These issues surfaced recently; however, the location was struck by lightning six weeks ago.

    I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity.

    I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

    - change cables between the wall jack and device.
    - change patch cables between the patch panel and switch port(s).
    - try different switch ports within the 2960 stack.
    - change end-user devices with known-good equipment (new phones, different PCs).
    - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
    - pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
    - change power strips on the end-user side.
    - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)
    - test cable runs with a Tripp-Lite cable tester. (clean)
    - run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner.

    What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79  +/- 0  meters  Pair B      Normal
                        Pair B     75  +/- 0  meters  Pair A      Normal
                        Pair C     77  +/- 0  meters  Pair D      Normal
                        Pair D     79  +/- 0  meters  Pair C      Normal
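    For per-port health beyond TDR, the error-counter views are often the quickest read on a flaky port: if the counters keep climbing after being cleared, with known-good cabling and endpoints, that points at the port itself. Two standard IOS commands, offered as a starting point (the interface name is taken from the post):

        show interfaces Gi4/0/14 counters errors
        show controllers ethernet-controller Gi4/0/14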

    Read the article

  • iPhone 3.1 Black Screen

    - by churnd
    Since I've updated my 3G iPhone to 3.1, it will become unresponsive after ~4-5 hours. It looks like it's off, but it's not. None of the buttons do anything that I can see. The only way I can get a screen back is to do a hard reset (Power + Home). Has anyone else had this problem? What can I do to fix it? I've had it happen 3 times already. Very annoying. Apparently, the Discussions forum on Apple.com also shows that lots of people are having the same problem. One guy said he did a restore and that fixed it for him, which I haven't tried yet because many others also tried that and it didn't help. A few others have said turning off WiFi fixes it for them, so I'm going to try that today. Another thing I've noticed is that now, when I plug my iPhone into my MacBook Pro, iTunes 9 does not launch. It used to before the update.

    ** Further update and possible solution ** I left my iPhone plugged into my MacBook Pro yesterday while at work, and didn't use it much throughout the day. I noticed it didn't lock up once. I suspect this was because it was connected to a power source. I continued to use my iPhone with no problem yesterday and today. Still no lockups as of now. I personally think this was Spotlight and/or Genius "doing its thing" after a new install. Kinda like an OS upgrade on your Mac... Spotlight has to rebuild itself, and those first couple of hours can be kinda sluggish. This would be even more so on the iPhone, where processing power is limited, and I'm sure the processor is clocked down a bit while on battery power, which would further amplify the problem. Again, just my guess. My iPhone is working fine now.

    ** Final Solution ** Update to 3.1.2. Problem solved.

    Read the article

  • Python error after installing libboost-all-dev on debian [migrated]

    - by Cameron Metzke
    A friend of mine wanted the Boost libraries installed on our shared computer, so after installing libboost-all-dev 1.49.0.1 (a Debian Wheezy machine), I get an error when running "pydoc modules" on the command line. It spits out the following:

        root@debian:/usr/include/c++/4.7# pydoc modules
        Please wait a moment while I gather a list of all available modules...
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230
        [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can fail
        during orte_init; some of which are due to configuration or environment
        problems. This failure appears to be an internal failure; here's some
        additional information (which may only be relevant to an Open MPI developer):

          orte_ess_set_name failed
          --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        --------------------------------------------------------------------------
        It looks like MPI_INIT failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can fail
        during MPI_INIT; some of which are due to configuration or environment
        problems. This failure appears to be an internal failure; here's some
        additional information (which may only be relevant to an Open MPI developer):

          ompi_mpi_init: orte_init failed
          --> Returned "A system-required executable either could not be found or was not executable by this user" (-127) instead of "Success" (0)
        --------------------------------------------------------------------------
        *** The MPI_Init() function was called before MPI_INIT was invoked.
        *** This is disallowed by the MPI standard.
        *** Your MPI job will now abort.
        [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
        root@debian:/usr/include/c++/4.7#

    I tried looking into the problem and ended up uninstalling the following to get it to work again:

        openmpi-common all 1.4.5-1
        libibverbs-dev amd64 1.1.6-1
        libopenmpi-dev amd64 1.4.5-1
        mpi-default-dev amd64 1.0.1
        libboost-mpi-python1.49.0

    Although pydoc works again, I'm assuming the packages I removed are going to hurt something else down the track? As you guessed, I'm not a C/C++ programmer. So I guess my question is: will this hurt something later? Is there a way to install those packages without hurting Python?
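
    A hedged sketch of a less destructive route: the ORTE complaint about a "system-required executable" usually points at Open MPI's orted helper not being on the PATH when pydoc's module walk triggers MPI initialisation, so installing the runtime package may be enough, without removing the dev packages (openmpi-bin is the standard Debian runtime package; treat this as an assumption to verify rather than a known fix):

        # check whether Open MPI's runtime daemon is visible; singleton init needs it
        which orted || echo "orted not on PATH"
        # on Debian the runtime ships separately from the -dev packages
        apt-get install openmpi-bin
        # re-test: pydoc imports every module it can find, which is what tripped MPI init
        pydoc modules | head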

    Read the article

  • libvirt qemu/kvm migration problem

    - by Panda
    I am using KVM and libvirt on my Dell server. Now I am trying to migrate one virtual machine from one physical server to another; however, I have failed every time. In virsh on physicalServer1, I typed:

        virsh # migrate virtualmachine1 qemu+ssh://username@physicalServer2/system
        error: operation failed: migration to 'tcp:physicalServer2:49163' failed: migration failed

    Then I searched the FAQ section on libvirt.org. It says:

        error: operation failed: migration to '...' failed: migration failed
        This is an error often encountered when trying to migrate with QEMU/KVM. This typically happens with plain migration, when the source VM cannot connect to the destination host. You will want to make sure your hosts are properly configured for migration (see the migration section of this FAQ)

    I managed to ssh to physicalServer2 from a shell on virtualmachine1, so the explanation quoted above (the source cannot connect to the destination) does not account for my failure. I also opened ports on physicalServer2; iptables -L shows the following:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
        ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
        ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
        ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:49152:49215

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
        ACCEPT     all  --  192.168.122.0/24     anywhere
        ACCEPT     all  --  anywhere             anywhere
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
        ACCEPT     all  --  192.168.122.0/24     anywhere
        ACCEPT     all  --  anywhere             anywhere
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
        REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer2:

        2011-05-06 13:37:30.708: starting up
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1 -name openjudge-test -uuid a8c704bc-a4f9-90db-3e57-40e60b00aac1 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/virtualmachine1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c -drive file=/media/nfs/virtualmachine1.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:16:36:8a:22:a0,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:2 -vga cirrus -incoming tcp:0.0.0.0:49163 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/0
        2011-05-06 13:37:30.915: shutting down

    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer1 is empty. Both physical servers are running Ubuntu 11.04. libvirt and KVM were installed with apt-get; the libvirt version is 0.8.8.
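
    A hedged sketch of a possible next step (assumptions: port 49163 is free, and libvirtd on the destination can be stopped and run by hand): since the destination qemu starts with -incoming and shuts down roughly 200 ms later, pinning the migration port and raising libvirtd's log level on physicalServer2 may reveal what aborts it:

        # on physicalServer2: make sure the port qemu will listen on is reachable
        iptables -I INPUT -p tcp --dport 49163 -j ACCEPT

        # on physicalServer2: run libvirtd with debug logging while retrying (stop the normal service first)
        LIBVIRT_DEBUG=1 libvirtd 2> /tmp/libvirtd-debug.log &

        # on physicalServer1: retry with an explicit migration URI so the port is predictable
        virsh migrate virtualmachine1 qemu+ssh://username@physicalServer2/system tcp://physicalServer2:49163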

    Read the article
