Search Results

Search found 12484 results on 500 pages for 'host'.

  • Debian Stable: Can't update kernel, libc won't update.

    - by pascal
    I use Debian Stable (squeeze) on a virtual host where I can't touch the kernel; it's stuck (and will be for some time, as support told me) at:

      Linux 2.6.18-028stab070.3 #1 SMP Wed Jul 21 18:33:27 MSD 2010 x86_64

    So when I try to update, several packages fail with "FATAL: kernel too old", for example:

      Preparing to replace libgcc1 1:4.6.0-11 (using .../libgcc1_1%3a4.6.1-1_amd64.deb) ...
      Unpacking replacement libgcc1 ...
      Setting up libgcc1 (1:4.6.1-1) ...
      FATAL: kernel too old
      Segmentation fault
      dpkg: error processing libgcc1 (--configure):
       subprocess installed post-installation script returned error exit status 139

    and some version chaos ensued:

      The following packages have unmet dependencies:
       libc-dev-bin : Depends: libc6 (> 2.13) but 2.11.2-13 is installed
       libc6        : Depends: libc-bin (= 2.11.2-13) but 2.13-5 is installed
       libc6-dev    : Depends: libc6 (= 2.13-5) but 2.11.2-13 is installed
       libquadmath0 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
       libstdc++6   : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
       locales      : Depends: glibc-2.13-1

    What should I do? I want to keep the system up to date, so I want to pin as few packages as possible, but I also don't want to have to compile anything manually. While trying to pin the status quo I figured out where the error comes from: ldconfig segfaults. Running it with -v doesn't print anything more, so I can't tell what the actual problem is.

      # ldconfig
      FATAL: kernel too old
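
    One way to "pin as few packages as possible" is an apt preferences stanza that holds the glibc/gcc runtime family at its squeeze versions while the rest of the system keeps updating. A minimal sketch, assuming the squeeze archive is still listed in sources.list; the package list and priority here are illustrative, and some apt versions want one stanza per package:

      # /etc/apt/preferences.d/hold-glibc  (sketch)
      Package: libc6 libc6-dev libc-bin libc-dev-bin locales libgcc1 gcc-4.6-base
      Pin: release n=squeeze
      Pin-Priority: 1001

    With that in place, apt-get update && apt-get upgrade should keep those packages at their squeeze versions; anything newer in that family needs a kernel more recent than 2.6.18.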

  • Apache2 server running Ruby on Rails application has GoDaddy cert that works in Chrome/Firefox and IE 9 but not IE 8

    - by ryan
    I have a Rails application on a Linode Ubuntu 11 server running apache2. I have a cert purchased from GoDaddy (where we also bought our domain), and the cert is installed on my server. Part of my virtual host file:

      ServerName my_site.com
      ServerAlias www.my_site.com
      SSLEngine On
      SSLCertificateFile /path/my_site.com.crt
      SSLCertificateKeyFile /path/my_site.com.key
      SSLCertificateChainFile /path/gd_bundle.crt

    The cert works fine in Chrome, Firefox and IE 9+, but in IE 8 and below I get this error:

      There is a problem with this website's security certificate.
      The security certificate presented by this website was issued for a different website's address.

    I'm hosting multiple Rails apps on this same server (4 right now, plus some old PHP sites that don't need SSL). I have tried googling every possible combination of the error/situation that I could think of, but at this point I'm shooting in the dark. The closest I could come up with is that some versions of IE don't support SNI. But that doesn't seem to apply here, because I am getting the warning on Windows 7 machines running IE 8, and the SNI limitation only seemed to apply to IE 8 if the operating system was Windows XP. So why is this cert being accepted by all browsers but giving me a warning in IE 8?

    Edit: Doing a little more digging, I figured out some more. It turns out this is affecting IE 9 as well. However, the problem seems to be that IE is not traversing the SSL chain to get to the right cert. Firefox and Chrome show the correct certificate when I view it, but IE is showing one of our other sites' certificates.

    REAL QUESTION HERE: That being the case, why is IE not getting the right certificate when other browsers are, and how do I fix it?
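
    If IE shows a different site's certificate, one common explanation (an assumption here, since the full set of vhosts isn't shown) is that the request is answered by the first *:443 virtual host rather than the intended one: Apache hands non-SNI clients, and clients whose name it cannot match, the certificate of whichever SSL vhost is defined first. A sketch of what to check, with illustrative layout:

      # Apache 2.2-style sketch: the FIRST *:443 vhost is the certificate
      # that non-SNI clients (and unmatched names) will be shown.
      NameVirtualHost *:443

      <VirtualHost *:443>                  # define the GoDaddy-certified site first
          ServerName my_site.com
          ServerAlias www.my_site.com
          SSLEngine On
          SSLCertificateFile      /path/my_site.com.crt
          SSLCertificateKeyFile   /path/my_site.com.key
          SSLCertificateChainFile /path/gd_bundle.crt
      </VirtualHost>

    Running apachectl -S shows the parsed *:443 vhost order, which makes it easy to see whose certificate acts as the default.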

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again.

    The first script – the one that runs correctly but fails to do what it's supposed to:

      # DiskShadow script file to backup VM from a Hyper-V host
      # First, delete any shadow copies of the drives. System Drives needs to be included.
      Delete Shadows volume C:
      Delete Shadows volume D:
      Delete Shadows volume E:
      # Ensure that shadow copies will persist after DiskShadow has run
      set context persistent
      # make sure the path already exists
      set verbose on
      begin backup
      add volume D: alias VirtualDisk
      add volume C: alias SystemDrive
      # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
      # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
      writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      create
      end backup
      # Backup is exposed as drive X: make sure your drive letter X is not in use
      Expose %VirtualDisk% X:
      Exit

    The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct.

    During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over.

    The problem is that, for some reason, the VHD files I copy over seem to be old copies: they are missing files, users and updates that exist on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
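
    For completeness, a minimal sketch of the copy and clean-up steps described above (share names, paths and flags are made up here, not taken from the original post):

      REM copy the exposed shadow copy to the remote location (sketch)
      robocopy X:\ \\backupserver\hyperv-backup /E /R:1 /W:1 /LOG:C:\backup\robocopy.log

      REM diskshadow script to release the exposed copy afterwards (sketch)
      unexpose X:
      delete shadows exposed X:
      exit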

  • How to turn off Tomcat logging in Eclipse?

    - by kirdie
    I develop a Vaadin project in Eclipse that I start through Tomcat 6, which gets started directly by Eclipse. Tomcat prints an enormous amount of log messages on each start, though, which makes it hard to see the output of my own application. I have already replaced all log levels in tomcat6/conf/logging.properties with WARNING (e.g. java.util.logging.ConsoleHandler.level = WARNING), but I still get many INFO messages. How can I turn this off or restrict the log messages to WARNING?

    An example of the messages:

      Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init
      INFO: Loaded APR based Apache Tomcat Native library 1.1.24.
      Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init
      INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
      Okt 26, 2012 12:16:36 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
      WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:saim' did not find a matching property.
      Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol init
      INFO: Initializing Coyote HTTP/1.1 on http-8080
      Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol init
      INFO: Initializing Coyote AJP/1.3 on ajp-8009
      Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina load
      INFO: Initialization processed in 879 ms
      Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardService start
      INFO: Starting service Catalina
      Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardEngine start
      INFO: Starting Servlet Engine: Apache Tomcat/6.0.32
      Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol start
      INFO: Starting Coyote HTTP/1.1 on http-8080
      Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol start
      INFO: Starting Coyote AJP/1.3 on ajp-8009
      Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina start
      INFO: Server startup in 568 ms
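
    One detail worth checking (stated as an assumption, since Eclipse-managed Tomcat setups vary): Eclipse WTP usually runs Tomcat from a separate configuration directory under the workspace .metadata, so edits to tomcat6/conf/logging.properties may not touch the file the running instance actually reads. The file in effect is whatever -Djava.util.logging.config.file points to in the server's launch configuration ("Open launch configuration" -> Arguments -> VM arguments); raising the levels in that file is what normally quiets the startup chatter. A sketch of the relevant lines:

      # logging.properties actually used by the Eclipse-launched Tomcat (sketch)
      .level = WARNING
      java.util.logging.ConsoleHandler.level = WARNING
      org.apache.catalina.level = WARNING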

  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have one (Java) application running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I used the following configuration in httpd.conf:

      <Directory />
          #Options FollowSymLinks
          Options Indexes FollowSymLinks Includes ExecCGI
          AllowOverride All
          Order deny,allow
          Allow from all
      </Directory>

      Listen 57173

      LoadModule proxy_module modules/mod_proxy.so

      <VirtualHost *:9999>
          ProxyPreserveHost On
          ServerName project.play
          ProxyPass / http://127.0.0.1:9000/Login
          ProxyPassReverse / http://127.0.0.1:9000/Login
          LogLevel debug
      </VirtualHost>

      ServerName localhost:57173

    I changed /private/etc/hosts to:

      ##
      # Host Database
      #
      # localhost is used to configure the loopback interface
      # when the system is booting.  Do not change this entry.
      ##
      127.0.0.1       localhost
      255.255.255.255 broadcasthost
      ::1             localhost
      fe80::1%lo0     localhost
      127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try accessing http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much.

    P/S: Accessing localhost:9999 returns "The server made a boo boo."
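
    One concrete issue stands out: /etc/hosts maps names to addresses only, so a port cannot appear there; the line "127.0.0.1:9999 project.play" is simply ignored and the name never resolves. A sketch of the likely fix (assuming the *:9999 vhost itself is fine; since localhost:9999 already answers, Apache appears to be listening on that port already):

      # /private/etc/hosts: map the name only, never a port
      127.0.0.1    project.play

      # httpd.conf: Apache needs a Listen line for every vhost port
      Listen 57173
      Listen 9999

    After flushing the cache again, http://project.play:9999 should resolve to 127.0.0.1 and be handled by the *:9999 virtual host.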

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab I get around 50 requests per second.

    I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly?

    Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

      user www-data;
      worker_processes 1;

      error_log /var/log/nginx/error.log;
      pid       /var/run/nginx.pid;

      worker_rlimit_nofile 8192;

      events {
          worker_connections 2048;
          use epoll;
      }

      http {
          include /etc/nginx/mime.types;
          sendfile on;
          keepalive_timeout 0;
          tcp_nodelay on;
          gzip on;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

    I've got one virtual host (in sites-available) that redirects everything under / to the 2 backends across a local network.
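
    For reference, one thing worth experimenting with is connection reuse towards the backends, since each proxied request otherwise costs an extra TCP handshake on the local network. A sketch of a proxying vhost with upstream keepalive; the backend addresses are made up, and the keepalive/proxy_http_version directives require a reasonably recent nginx (1.1.4+), so treat this as an assumption rather than a drop-in fix:

      upstream apache_backends {
          server 10.0.0.11:80;   # illustrative backend addresses
          server 10.0.0.12:80;
          keepalive 64;          # reuse connections to the backends
      }

      server {
          listen 80;
          location / {
              proxy_pass http://apache_backends;
              proxy_http_version 1.1;        # needed for upstream keepalive
              proxy_set_header Connection "";
          }
      }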

  • How to use basic auth for single file in otherwise forbidden Apache directory?

    - by mit
    I want to allow access to a single file in a directory that is otherwise forbidden. This did not work:

      <VirtualHost 10.10.10.10:80>
          ServerName example.com
          DocumentRoot /var/www/html
          <Directory /var/www/html>
              Options FollowSymLinks
              AllowOverride None
              order allow,deny
              allow from all
          </Directory>
          # disallow the admin directory:
          <Directory /var/www/html/admin>
              order allow,deny
              deny from all
          </Directory>
          # but allow this single file:
          <Files /var/www/html/admin/allowed.php>
              AuthType basic
              AuthName "private area"
              AuthUserFile /home/webroot/.htusers
              Require user admin1
          </Files>
          ...
      </VirtualHost>

    When I visit http://example.com/admin/allowed.php I get the Forbidden message of the http://example.com/admin/ directory. How can I make an exception for allowed.php? If that's not possible, maybe I could enumerate all forbidden files in another Files directive? Let's say admin/ also contains user.php and admin.php, which should be forbidden in this virtual host.
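
    A sketch of the pattern that is usually needed here (untested, assuming Apache 2.2-style access control as in the config above): <Files> matches on the filename rather than a full path, and the directory-level Deny still applies, so the exception is normally expressed by nesting a <Files> section inside the admin <Directory> and explicitly re-allowing it:

      <Directory /var/www/html/admin>
          Order allow,deny
          Deny from all

          <Files "allowed.php">
              Order allow,deny
              Allow from all
              AuthType Basic
              AuthName "private area"
              AuthUserFile /home/webroot/.htusers
              Require user admin1
          </Files>
      </Directory>

    With this layout user.php and admin.php stay forbidden, while allowed.php is reachable but prompts for the admin1 credentials.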

  • Why does Wireshark not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl, uses the CGI module, and specifies only the most basic headers:

      print $q->header(
          -type           => 'text/plain',
          -Content_length => $length,
      );
      print $stuff;

    There's no apparent issue with functionality, but I'm confused by the fact that Wireshark does not recognize the HTTP response as HTTP: it's marked as TCP. Here is the request and response:

      GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
      Host: 10.6.130.38
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: cs,en-us;q=0.7,en;q=0.3
      Accept-Encoding: gzip, deflate
      Connection: keep-alive

      HTTP/1.1 200 OK
      Date: Thu, 05 Apr 2012 18:52:23 GMT
      Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
      Content-length: 1048616
      Keep-Alive: timeout=5, max=100
      Connection: Keep-Alive
      Content-Type: text/plain; charset=ISO-8859-1

      XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

      No. Time     Source       srcp Destination  dstp  Protocol Info                               tcp.stream abstime
      5   0.112749 10.6.130.38  80   10.6.130.53  48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

      Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
      Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
      Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
      Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now, when I look at this in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with a preview, but the next packet containing the response is not marked as an HTTP response, just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
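
    For what it's worth, the "[TCP segment of a reassembled PDU]" label usually just means the 1 MB response body spans many TCP segments, and Wireshark attributes the HTTP response to the segment that completes the reassembled message rather than to the first one, so http.response matches a later frame instead of frame 5. One way to confirm is to toggle the HTTP body reassembly preference, either in the GUI (Preferences -> Protocols -> HTTP -> "Reassemble HTTP bodies spanning multiple TCP segments") or from tshark; a sketch, assuming the stock preference name:

      # with body desegmentation off, the response headers are dissected as HTTP
      # in the first segment itself instead of at the end of the reassembled body
      tshark -r capture.pcap -o http.desegment_body:FALSE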

  • Remote paging with Nagios when network is down and email won't work -- cellular modems and alternatives

    - by Quinten
    What is the best option for remote paging when network services are down? I'm looking for a solution that can let me know when network services are down during off-hours only, and especially when email/SMTP services are out. It therefore needs to be redundant to our network and power supply.

    I imagine a cellular modem is one option. What's the price range for these? Is anybody using them, and do you feel they are worth the cost? I expect we would end up sending an emergency page roughly once a month at most, so I'd like the pricing to reflect that: I don't mind a high per-page cost as long as the recurring cost is low.

    Another option would be to expose at least one server to remote ping and run a check script on a remote server. Are there paid options for this?

    Currently we run Nagios on a Linux VM on a Windows 2008 Hyper-V host. It would be great if the solution worked in that environment, but I know that's tricky with external devices, and we could move Nagios to a standalone workstation if needed.

  • Configuring network route between two routers on home network

    - by Paul
    I have a home network. The main router connected to the internet (which also has wifi) is a Netopia box; connected to it is a Linksys router. Everything currently works: I can connect via the wireless network and get to the internet, and machines connected to the Linksys can connect with each other and reach the internet. Both routers are configured to serve addresses via DHCP (Netopia: 192.168.1.1 - 192.168.1.99, Linksys: 192.168.0.1 - 192.168.0.100). Here's how they are connected:

      Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)

    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP. I thought this would be straightforward. I configured the Linksys to have a static IP address:

      IP:   192.168.1.100
      Mask: 255.255.255.0
      GW:   192.168.1.254

    Then I configured a static route on the Netopia:

      Network: 192.168.0.0
      Mask:    255.255.255.0
      GW:      192.168.1.100

    So it should now look like this:

      Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)

    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure where. Any ideas on what I'm missing?

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've recently put together a new system, and part of that has been creating a software RAID 5 using 'mdadm' in Ubuntu Server. I successfully got to the point where I created the array using:

      sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

      watch cat /proc/mdstat

    To which the following was returned:

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
            5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]

      unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy about why another is a spare). So, just to be sure that something else isn't amiss, I wanted to try to re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? Also, how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/

    Many thanks!

    p.s. Here is some extra information that may help:

      sudo mdadm --detail /dev/md0
      /dev/md0:
              Version : 1.2
        Creation Time : Mon Jun 18 21:14:21 2012
           Raid Level : raid5
           Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
        Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent

          Update Time : Mon Jun 18 21:50:26 2012
                State : clean, FAILED
       Active Devices : 2
      Working Devices : 3
       Failed Devices : 1
        Spare Devices : 1

               Layout : left-symmetric
           Chunk Size : 512K

                 Name : myraidbox:0  (local to host myraidbox)
                 UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
               Events : 13

          Number   Major   Minor   RaidDevice   State
             0       0        0        0        removed
             1       8       17        1        active sync   /dev/sdb1
             2       8       33        2        active sync   /dev/sdc1
             3       0        0        3        removed

             0       8        1        -        faulty spare   /dev/sda1
             4       8       49        -        spare          /dev/sdd1
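
    In case it is useful, the usual mdadm sequence for re-engaging a member the array has marked faulty is to remove it and add it back, then watch the resync. A sketch only: double-check the device names against the --detail output, and check the drive's health (e.g. with smartctl) before trusting it again, since a drive that genuinely failed will just fail again.

      # drop the faulty member, then re-add it so it resyncs
      sudo mdadm /dev/md0 --remove /dev/sda1
      sudo mdadm /dev/md0 --add /dev/sda1

      # the spare (sdd1 here) is normally pulled in automatically when a slot opens;
      # progress and completion show up as a "recovery" line and percentage in /proc/mdstat
      watch cat /proc/mdstat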

  • Selecting Interface for SSH Port Forwarding

    - by Eric Pruitt
    I have a server that we'll call hub-server.tld with three IP addresses: 100.200.130.121, 100.200.130.122, and 100.200.130.123. I have three different machines that are behind a firewall, but I want to use SSH to port-forward one machine to each IP address. For example: machine-one should listen for SSH on port 22 on 100.200.130.121, while machine-two should do the same on 100.200.130.122, and so on for different services on ports that may be the same across all of the machines.

    The SSH man page lists:

      -R [bind_address:]port:host:hostport

    I have gateway ports enabled, but when using -R with a specific IP address, the server still listens on the port across all interfaces:

      machine-one:
      # ssh -NR 100.200.130.121:22:localhost:22 [email protected]

      hub-server.tld (listens for SSH on port 2222):
      # netstat -tan | grep LISTEN
      tcp        0      0 100.200.130.121:2222    0.0.0.0:*    LISTEN
      tcp        0      0 :::22                   :::*         LISTEN
      tcp        0      0 :::80                   :::*         LISTEN

    Is there a way to make SSH forward only connections on a specific IP address to machine-one, so I can listen on port 22 on the other IP addresses at the same time, or will I have to do something with iptables? Here are all the lines in my ssh config that are not comments/defaults:

      Port 2222
      Protocol 2
      SyslogFacility AUTHPRIV
      PasswordAuthentication yes
      ChallengeResponseAuthentication no
      GSSAPIAuthentication no
      GSSAPICleanupCredentials no
      UsePAM yes
      AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
      AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
      AcceptEnv LC_IDENTIFICATION LC_ALL
      AllowTcpForwarding yes
      GatewayPorts yes
      X11Forwarding yes
      ClientAliveInterval 30
      ClientAliveCountMax 1000000
      UseDNS no
      Subsystem sftp /usr/libexec/openssh/sftp-server
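
    One sshd detail that may matter here (an assumption, since behaviour differs slightly between OpenSSH versions): with GatewayPorts yes, sshd chooses the bind address for remote forwards itself and commonly binds the wildcard; for the bind address given on the client's -R to be honoured, the option has to be set to clientspecified. A sketch, where user@hub-server.tld stands in for the obfuscated login above:

      # sshd_config on hub-server.tld (sketch)
      AllowTcpForwarding yes
      GatewayPorts clientspecified

      # from machine-one: the forward should then bind only to the requested address
      ssh -NR 100.200.130.121:22:localhost:22 user@hub-server.tld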

  • Unable to log in through Varnish cache

    - by ArunS
    I am setting up an Active Collab site on my new server. The setup is like below:

      Internet --- varnish ---- apache

    But I am not able to log in to the site through the Varnish cache, although I can log in through Apache directly. Here is my VCL file:

      backend default {
          .host = "localhost";
          .port = "8080";
      }

      acl purge {
          "localhost";
      }

      sub vcl_recv {
          if (req.request == "PURGE") {
              if (!client.ip ~ purge) {
                  error 405 "Not allowed.";
              }
              return(lookup);
          }
          if (req.url ~ "^/$") {
              unset req.http.cookie;
          }
      }

      sub vcl_hit {
          if (req.request == "PURGE") {
              set obj.ttl = 0s;
              error 200 "Purged.";
          }
      }

      sub vcl_miss {
          if (req.request == "PURGE") {
              error 404 "Not in cache.";
          }
          if (!(req.url ~ "wp-(login|admin)")) {
              unset req.http.cookie;
          }
          if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
              unset req.http.cookie;
              set req.url = regsub(req.url, "\?.$", "");
          }
          if (req.url ~ "^/$") {
              unset req.http.cookie;
          }
      }

      sub vcl_fetch {
          if (req.url ~ "^/$") {
              unset beresp.http.set-cookie;
          }
          if (!(req.url ~ "wp-(login|admin)")) {
              unset beresp.http.set-cookie;
          }
      }

    When I try to log in through Varnish, I get redirected back to the login page. If I enter a wrong password, it does ask me for the correct password.
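
    Worth noting: the wp-(login|admin) patterns above are WordPress paths, so on an Active Collab install the vcl_fetch block strips Set-Cookie from effectively every response, which would explain a login that silently bounces back. A sketch of the kind of exception usually added; the "login" URL pattern is a guess to be matched to Active Collab's actual login path, and the syntax assumes the same Varnish 2.1/3.x dialect as the VCL above:

      sub vcl_recv {
          # let requests that carry a session cookie, or target the login page, bypass the cache
          if (req.http.Cookie || req.url ~ "login") {
              return(pass);
          }
      }

      sub vcl_fetch {
          # keep Set-Cookie on the responses that establish the session
          if (req.url ~ "login") {
              return(deliver);
          }
      }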

  • How to place a virtual machine in DMZ?

    - by Giordano
    I have an Ubuntu 12.04 server running a few virtual machines with KVM. I would like to expose some of these virtual machines on the internet, to make it possible for customers to test the products we're developing and to make other products available for demo purposes. One of the server NICs is configured with a public IP. However, before exposing anything on the web, I would like to be sure that if one of the virtual machines gets compromised, the attacker doesn't reach the rest of the hosts. What I would like to do is put these virtual machines into a DMZ. These are the steps I'm planning to take:

      1. Create a tap interface on the virtualization host (let's say tap1).
      2. Create a bridge using tap1 and give it an IP in a subnet separate from the other hosts. Let's say 10.0.0.1.
      3. Attach the DMZ virtual machines to the bridge and configure their IPs statically (10.0.0.2, 10.0.0.3, etc...).
      4. Using UFW, forbid any traffic from 10.0.0.0/24 to any of the internal hosts, allow traffic from the internal hosts towards 10.0.0.0/24, and expose the virtual machines on the web using port forwarding (see the sketch below).

    Do you think this setup is safe? Can you suggest any improvement or a better/safer approach? Thanks in advance!
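
    As a concrete sketch of step 4: on 12.04, UFW's simple allow/deny rules mainly govern traffic addressed to the host itself, while guest-to-LAN traffic crosses the FORWARD chain, so the isolation is usually expressed as forwarding rules (shown here as raw iptables, which UFW setups often place in /etc/ufw/before.rules). The internal LAN 192.168.1.0/24 and the public address 203.0.113.10 are made-up examples:

      # DMZ guests must not open connections into the internal LAN,
      # but the LAN may open connections into the DMZ
      iptables -A FORWARD -s 10.0.0.0/24 -d 192.168.1.0/24 -m state --state NEW -j DROP
      iptables -A FORWARD -s 192.168.1.0/24 -d 10.0.0.0/24 -j ACCEPT
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

      # expose a guest service on the public IP with DNAT (addresses/ports are placeholders)
      iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.2:80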

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on.

    My default root directory for the Apache web server points to an external hard drive. In my httpd.conf, here is what I have:

      DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

      <Directory />
          Options FollowSymLinks
          AllowOverride All
          Order deny,allow
          Allow from all
      </Directory>

    And then beneath that:

      <Directory "/Storage/Sites">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride All
          Order allow,deny
          Allow from All
      </Directory>

    At the end of this file, I have commented out the user dir include conf file:

      #Include /private/etc/apache2/extra/httpd-userdir.conf

    And uncommented the virtual hosts conf file:

      Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

      <VirtualHost *:80>
          DocumentRoot "/Storage/Sites/mysite"
          ServerName mysite.dev
      </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
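
    A quick way to see whether that vhost is actually being picked up is to dump Apache's parsed virtual-host table; if mysite.dev is missing or not listed under *:80, the Include or the NameVirtualHost line is not taking effect. A sketch; the exact output shape varies by Apache version:

      sudo apachectl configtest
      sudo httpd -S
      # a healthy result looks roughly like:
      #   VirtualHost configuration:
      #   *:80   is a NameVirtualHost
      #          port 80 namevhost mysite.dev (/private/etc/apache2/extra/httpd-vhosts.conf:NN)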

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuke? I can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

      Cost of reputational damage if down: very high
      Probability of a hardware failure with my setup: <<1%
      Probability of a hardware failure with a less paranoid setup: <<1% as well
      Probability of a software failure in our application code: 1%

    (If your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server, which is arguably developed and tested by clever people with a strong methodology, is sometimes down.)

    In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my higher risk. Of course, there are other things to take into consideration, such as:

      - scalability
      - data security
      - the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?

  • VMM 2012 Adding Hosts in Trusted Forest

    - by Steve Evans
    I have two forests with a two-way trust between them. VMM 2012 sits in ForestA, and I can discover hosts in ForestA with no issue. When I try to discover hosts in ForestB I hit one of two issues:

    1. If I go through the GUI, or use PowerShell just like I normally do, I get the following error on the job:

      Error (10407)
      Virtual Machine Manager could not query Active Directory Domain Services.

      Recommended Action
      Verify that the domain name and the credentials, if provided, are correct and then try the operation again.

    It doesn't matter which account I use. I've tried accounts from both forests, with Admin/Domain Admin permissions all over the place, etc.

    2. Going through the GUI (I can't find the switch in PowerShell to duplicate this), I check the box "Skip AD Verification", and it causes the GUI to crash during discovery.

    I found an article (http://technet.microsoft.com/en-us/library/gg610641.aspx) that describes how to add a host in a disjoint namespace (even though that doesn't apply to me), and it says that VMM creates an SPN if one does not exist. So I verified that the correct SPNs exist in ForestB; that did not help the issue. I have a case open with PSS but they are stuck. I have VMM traces if anyone would like to see them. Any suggestions or ideas?

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail coming in on one port (51234) and forward it to the standard port 25. I compiled and installed "delegate", which can handle that easily. It works very well like this:

      delegated SERVER="smtp://anotherSmtpServer:25" -P51234

    The strange thing is, it works on my virtual test machine and on the dedicated server locally, but I can't manage to use it over the internet. I test it like this:

      telnet [mySrv] 51234

    Of course: no firewall, no hosts.deny entries, no inetd/xinetd, and the delegated service is listening on the right port...

    Two clues. First, over the internet nmap reports the port as "51234/tcp open tcpwrapped". Second, have a look at the following tcpdump:

      22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
      22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
      22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
      22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
      22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
      22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
      22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
      22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
      22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46

    The "auth" part seems suspect to me but didn't ring a bell. I could certainly do with some help. Thanks a lot!

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

      Ign http://192.168.1.18 karmic/main Translation-fr
      Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take i18n, which would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n?

    My current mirror.list file is as follows:

      ############# config ##################
      #
      # set base_path    /var/spool/apt-mirror
      #
      # if you change the base path you must create the directories below with write privileges
      #
      # set mirror_path  $base_path/mirror
      # set skel_path    $base_path/skel
      # set var_path     $base_path/var
      # set cleanscript  $var_path/clean.sh
      # set defaultarch  <running host architecture>
      # set postmirror_script $var_path/postmirror.sh
      set run_postmirror 0
      set nthreads     20
      set _tilde 0
      #
      ############# end config ##############

      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

      clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive

  • ESXi 5 Guests will not boot

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    Note: Investigation eventually determined that the problem was with the VMware client on a Lenovo Edge laptop, the only Windows box available in a Linux IT shop. vSphere Client v4 and v5 duplicated the behavior on the Lenovo Edge. As indicated in the comment to the accepted answer, replacing the workstation with one using different video hardware was the "fix" for this particular issue.

    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560 MB and drops down to 40 MB after a few seconds. Initial CPU usage is one full CPU (3000 GHz per the chart) and immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. There are no errors in the Events tab. Switching a guest from BIOS to EFI makes no difference. VMs are listed as version 7, and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious.

    Edit: No firewalls, routers, or VLANs in between the client and server.
    Edit 2: We have tried the "Boot Guest into BIOS screen at Next Boot" checkbox in the guest settings. It was not successful.
    Edit 3: 500 GB datastore with one 40 GB VM on it. Plenty of space.
    Edit 4: Guests copied from my old ESXi 4 server DO NOT boot on the ESXi 5 system. Initially it complains about too little video RAM being configured for the default 2500x1600, but it still doesn't work properly even after I bump the video RAM settings or switch them to Auto-detect.

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host.

    According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the machines involved to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issues locally.

    I enabled Kerberos logging in the registry, and these are the relevant logs that I get on the machine holding the encrypted files, for every certificate that the user possesses (only the Key Name changes):

      Event ID 5058: Audit Success, "Other System Events"

      Key file operation.

      Subject:
        Security ID:    {MyDOMAIN}\{MyID}
        Account Name:   {MyID}
        Account Domain: {MyDOMAIN}
        Logon ID:       0xbXXXXXXX

      Cryptographic Parameters:
        Provider Name:  Microsoft Software Key Storage Provider
        Algorithm Name: Not Available.
        Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
        Key Type:       User key.

      Key File Operation Information:
        File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
        Operation: Read persisted key from file.
        Return Code: 0x0

      Event ID 5061: Audit Failure, "System Integrity"

      Cryptographic operation.

      Subject:
        Security ID:    {MyDOMAIN}\{MyID}
        Account Name:   {MyID}
        Account Domain: {MyDOMAIN}
        Logon ID:       0xbXXXXXXX

      Cryptographic Parameters:
        Provider Name:  Microsoft Software Key Storage Provider
        Algorithm Name: RSA
        Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
        Key Type:       User key.

      Cryptographic Operation:
        Operation:   Open Key.
        Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function?

      NTE_BAD_KEY_STATE 0x8009000BL
      The user password has changed since the private keys were encrypted.

    The problem is that the users I'm using at the moment cannot change their password.

  • Port Forwarding to put my web server on The Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check my external IP address. Regardless of what port I enter, it tells me that the port is blocked.

    I have a Linksys router that basically has the default settings, with the exception that I have WEP encryption set up and I have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to Disable.

    These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the open-port tool cannot see my IP when it checks either port, and when I navigate to http://<my external IP>/ I get "page can't be displayed".

    At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?

  • Use a preferred username but authenticate against Kerberos principal

    - by Jason R. Coombs
    What I want to do should be pretty simple. I have an Ubuntu 10.04 box. It's currently configured to authenticate users against a Kerberos realm (EXAMPLE.ORG). There is only one realm in the krb5.conf file and it is the default realm:

      [libdefaults]
          default_realm = EXAMPLE.ORG

    PAM is configured to use the pam_krb5 module, so if a user account is created on the local machine and that username matches the [email protected] credential, that user may log in by supplying his Kerberos password.

    What I would like to do instead is create a local user account with a different username, but have it always authenticate against the canonical name on the Kerberos server. For example, the Kerberos principal is [email protected]. I would like to create the local account preferred.name and somehow configure Kerberos so that when someone attempts to log in as preferred.name, it uses the principal [email protected].

    I have tried using auth_to_local_names in krb5.conf, but this doesn't seem to do the trick:

      [realms]
          EXAMPLE.ORG = {
              auth_to_local_names = {
                  full.name = preferred.name
              }

    I have tried adding [email protected] to ~preferred.name/.k5login. In all cases, when I attempt to log in as preferred.name@host and enter the password for full.name, I get "Access denied". I even tried using auth_to_local in krb5.conf, but I couldn't get the syntax right.

    Is it possible to have a (distinct) local username that for all purposes behaves exactly like a matching username does? If so, how is this done?
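
    For what it's worth, MIT Kerberos maps principals to local names with auth_to_local rules in the [realms] section; a sketch of the kind of rule usually meant here, using full.name and preferred.name as in the question (the syntax is the MIT krb5.conf RULE format, and whether pam_krb5 honours the mapping for interactive login is a separate question):

      [realms]
          EXAMPLE.ORG = {
              # map the one principal to the preferred local account,
              # fall back to the normal username mapping for everyone else
              auth_to_local = RULE:[1:$1@$0](^full\.name@EXAMPLE\.ORG$)s/.*/preferred.name/
              auth_to_local = DEFAULT
          }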

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was told I should ask it here.

    We have a self-hosted Git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). The same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects.

    Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we get the frequent error message:

      ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
      fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access other files hosted on the server over HTTP, that works fine.

    I found a couple of questions on Stack Overflow and on other sites regarding this, but most of the people point towards SSH key setup or conflicts between msysGit and Cygwin SSH. However, I don't think this is the problem in our case, as we get this behavior on Windows (using msysGit only) as well as on Mac machines. Also, if it were an SSH configuration issue it shouldn't work at all, but in our case it works after 10-15 minutes.

    I think in our case it might be too many simultaneous connections to the same server (or the same repo) or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
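
    One setting that matches the "many people at once" pattern (offered as an assumption, not a diagnosis) is sshd's MaxStartups, which caps the number of concurrent unauthenticated SSH connections and drops the rest; with 30-40 people hitting one small VPS, bursts can exceed the default. A sketch of loosening it on the server:

      # /etc/ssh/sshd_config on the VPS (sketch; values are illustrative)
      MaxStartups 30:50:100    # start:rate:full random early drop
      MaxSessions 30           # if supported by the installed OpenSSH

      # then reload sshd, e.g.
      sudo /etc/init.d/ssh reload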

  • Webserver python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks, based on real stock quotes, with virtual money. The stock quotes are in a MySQL database and are updated using a Python script which runs every minute or so.

    Now, this works fine on my local machine with XAMPP, but what about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script does some other SQL tasks too, so it can't be replaced by PHP or the like.

    So, are there any good web hosting services out there that give me the possibility of running a script in the background and hosting a website in the foreground? What server specifications do I have to look for?

    Thanks in advance!

    PS: I've done some research, and I found a Python-supporting web host WITH SSH support. Is that what I need? Or is SSH not allowed to start processes?
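
    If the host allows SSH and cron (an assumption; shared hosts differ widely on this), the usual way to run an updater "every minute or so" without a dedicated server is a crontab entry rather than a long-running background process started over SSH. A sketch with made-up paths:

      # crontab -e on the web host (paths are illustrative)
      * * * * * /usr/bin/python /home/youruser/stocks/update_quotes.py >> /home/youruser/stocks/update.log 2>&1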
