Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • XenServer and ZFS via NFS

    - by Jeroen Jacobs
    I'm trying to connect an NFS share to XenCenter. The NFS server is a ZFSguru distro (FreeBSD-based). The ZFS volume was exported like this:

        /sbin/zfs set sharenfs="on" temppool/share

    According to showmount, it's available:

        showmount -e
        /temppool/share   Everyone

    However, when I try to connect to it from XenServer (so it can be used as storage for VHDs), I get the following error:

        Internal error: Failure("Storage_access failed with: SR_BACKEND_FAILURE_73: [; NFS mount error [opterr=mount failed with return code 32]; ]")

    Anyone got an idea? Update: this is from the log on the NFS server:

        Sep 3 16:23:10 zfsguru mountd[962]: mount request from 192.168.10.217 for non existent path /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:10 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:11 zfsguru mountd[962]: mount request from 192.168.10.217 for non existent path /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:23:11 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/7c8d3f2f-e0e0-5263-ccad-1cd32a4139cf
        Sep 3 16:28:20 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/17922178-0dfb-edf3-0037-2eddd79b9d02
        Sep 3 16:28:43 zfsguru last message repeated 5 times
        Sep 3 16:35:00 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/b5735ccf-1997-8d77-83a0-2f34e37dda8d
        Sep 3 16:35:33 zfsguru last message repeated 4 times
        Sep 3 16:35:34 zfsguru mountd[962]: mount request denied from 192.168.10.217 for /temppool/share/b5735ccf-1997-8d77-83a0-2f34e37dda8d

    It seems XenServer is able to create the directories, but is unable to mount them afterwards.
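
    A likely culprit, going by the "mount request denied" lines: XenServer mounts a per-SR subdirectory beneath the export, and FreeBSD's mountd refuses subdirectory mounts unless the export allows them with -alldirs. A sketch of the fix, assuming ZFSguru passes sharenfs options through to mountd (the network values are examples):

        # Export with -alldirs so subdirectories of the share can be mounted:
        /sbin/zfs set sharenfs="-alldirs -network 192.168.10.0 -mask 255.255.255.0" temppool/share
        # Tell mountd to re-read the exports:
        kill -HUP `cat /var/run/mountd.pid`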

    Read the article

  • Proliant RAID 1 Rebuild Questions

    - by Nicholas
    I have an HP ProLiant ML350 G5 server that experienced a power-supply failure overnight. The power supply was replaced, but unfortunately the server was restarted with only one disk of the RAID 1 set plugged in. (The RAID controller is the built-in E200i.) The RAID BIOS then said on start-up that it had entered Interim Recovery Mode. I would have expected it to still boot with only the one drive, but the BIOS says it cannot find a C: drive and enters a reboot loop, polling the other boot devices. First question: is it normal behaviour not to start up on one disk? The second drive was then plugged in (all drives are OK) and the RAID BIOS started an automatic rebuild on that disk. This appears to be a background process, as no progress is shown; based on the flashing light it looks like it is working. My second question is: how long will this rebuild take (36GB 15K SAS drive)? I cannot see any error messages and it looks like it is rebuilding the drive OK, but the computer still will not start up; it still says during boot that the C: drive is not found. If I wait for the rebuild to finish, is it likely to fix itself and find the C: drive? Or is there some other problem here?
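
    For watching the rebuild, a sketch assuming HP's hpacucli command-line utility is installed (the slot number is an example; a 36GB 15K drive typically rebuilds in well under an hour when the controller is otherwise idle):

        hpacucli ctrl all show config      # array and logical-drive state at a glance
        hpacucli ctrl slot=0 ld all show   # logical drive status, e.g. "Recovering"
        hpacucli ctrl slot=0 pd all show   # per-disk status, including "Rebuilding"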

    Read the article

  • Running two wsgi applications on the same server gdal org exception with apache2/modwsgi

    - by monkut
    I'm trying to run two WSGI applications, one Django and the other TileStache, on the same server. The TileStache server accesses the DB via Django to query it. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hits the following error. The transform works without error for the specific bbox polygon when run manually from the Python shell:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__
            mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING'])
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler
            mimetype, content = getTile(layer, coord, extension)
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile
            tile = layer.render(coord, format)
          File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render
            tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom)
          File "/var/www/tileserver/providers.py", line 59, in renderArea
            bbox.transform(METERS_SRID)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform
            g = gdal.OGRGeometry(self.wkb, srid)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__
            self.__class__ = GEO_CLASSES[self.geom_type.num]
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type
            return OGRGeomType(capi.get_geom_type(self.ptr))
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__
            raise OGRException('Invalid OGR Integer Type: %d' % type_input)
        OGRException: Invalid OGR Integer Type: 1987180391

    I think I've hit the non-thread-safe issue with GDAL mentioned on the Django site. Is there a way I could configure this so that it would work?

    Apache version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured

    Apache apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1
            WSGIProcessGroup lbs
            WSGIScriptAlias / /var/www/bin/apache/django.wsgi
            Alias /static /var/www/lbs/static/
        </VirtualHost>

        <VirtualHost *:8080>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1
            WSGIProcessGroup tilestache
            WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi
        </VirtualHost>

    Django version: 1.4

    httpd.conf:

        Listen 8080
        NameVirtualHost *:8080

    UPDATE: I've added a test.wsgi script to determine if the GLOBAL interpreter setting is correct, as mentioned by Graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used It seems to show the expected result:

        [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations
        [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''.

    I've worked around the issue for now by changing the SRS used in the DB so that the transform is unnecessary in the TileStache app. I don't understand why the transform() method works when called in the Django app but fails in the TileStache app.

    tilestache.wsgi:

        #!/usr/bin/python
        import os
        import time
        import sys

        import TileStache

        current_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.realpath(os.path.join(current_dir, "..", ".."))
        sys.path.append(project_dir)
        sys.path.append(current_dir)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings'
        sys.stdout = sys.stderr

        # wait for the apache django lbs server to start up,
        # --> in order to retrieve the tilestache cfg
        time.sleep(2)

        tilestache_config_url = "http://127.0.0.1/tilestache/config/"
        application = TileStache.WSGITileServer(tilestache_config_url)

    UPDATE 2: It turned out I did need to use a projection other than the Google (900913) one in the DB, so my previous workaround failed. While I'd like to fix this issue, I decided to work around it this time by making a Django view that performs the needed transform. So now TileStache requests the data through the Django app and not internally.
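
    For the original error, the standard mod_wsgi remedy for C extensions that aren't sub-interpreter safe (GDAL is a known case) is to force the application to run in the main interpreter of each daemon process. A sketch, added to each VirtualHost alongside WSGIProcessGroup:

        # Run the app in the first (main) interpreter rather than a sub-interpreter;
        # GDAL and several other C extension modules misbehave in sub-interpreters.
        WSGIApplicationGroup %{GLOBAL}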

    Read the article

  • Is anyone using KVM in production?

    - by Andy Shellam
    I've been trying to set up a pair of servers utilising KVM on Ubuntu 9.10 to host 8 virtual machines between them, and ended up with various issues, from the VMs freezing to not powering on. I had one virtual server set up and running and was setting up a second when any operation involving OpenSSL would cause the VM to lock up in a weird way: all network traffic would cease and it wouldn't process logins on the console, but it wasn't taking any CPU time off the host. The first virtual server was identical and worked perfectly. Another VM I tried to set up installed Ubuntu fine, then refused to reboot, throwing kernel exceptions to do with XFS. I've now installed Citrix XenServer 5.5 on both hosts, and am now setting up my third VM with absolutely no issues. I also had the same experience when I tried VMware, but I preferred Xen as it appears to give more features on the free license. My question is: am I just unlucky with KVM, or is KVM as unstable as it appears? Are you using, or planning on using, KVM in production, and how successful have you been?

    Read the article

  • Running nph-script.cgi keeps outputting Server details at the end

    - by wgewweg
    I am running an nph-script.cgi on my server. The server keeps appending:

        HTTP/1.1 200 OK
        Date: Thu, 05 Nov 2009 02:28:53 GMT
        Server: Apache/2.2.8 (Ubuntu) PHP/5.2.8-1hardy~ppa1 with Suhosin-Patch mod_perl/2.0.3 Perl/v5.8.8
        Content-Length: 0
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
        X-Pad: avoid browser bug

    at the bottom of each page loaded via the .cgi script. Why is this the case? How do I remove this annoying block that is appended to all pages?
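
    For context, an nph- ("non-parsed headers") script is expected to emit the complete HTTP response itself, status line included, because the server passes its output through untouched; whether that resolves this particular mod_perl setup is an assumption, and dropping the nph- prefix and letting Apache manage the headers is the other common route. A minimal sketch of the nph contract:

        #!/bin/sh
        # An nph- script supplies the entire response: status line, headers,
        # a blank line, then the body.
        printf 'HTTP/1.0 200 OK\r\n'
        printf 'Content-Type: text/plain\r\n'
        printf '\r\n'
        echo 'body goes here'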

    Read the article

  • Missing NIC and USB devices

    - by MJ
    Coming into work today, I've found we have a few different computers (different companies/networks/OS versions, all Windows-based) that are all having the same issues: 1) The network NIC cannot be viewed from Network Connections; refreshing says the service is not started, yet Services shows the service as started and running. 2) USB devices are not recognized when plugged in, even after scanning for hardware changes, etc. We have managed AV that is kept up to date, and a managed patch policy that has all these machines at the most recent patch level. I'm just wondering if anyone else has experienced these same symptoms, and what they have done to resolve them.

    Read the article

  • Cannot delete old NFS directory: Device or resource busy

    - by Jakobud
    On server1, we had an NFS share from server2 mounted at /nfs/server2/share. Recently we took down server2 to install a new OS on it. Now we can't get NFS set up the way it was. When I do:

        ls -l /nfs

    I get:

        drwxr-xr-x 2 root root 0 2010-03-15 09:59 server2

    Notice how the directory size is 0 instead of the usual 4096? Anyway, I go into server2 expecting to see a share directory, but it's empty, so I cannot mount my share at /nfs/server2/share. When I try to create the /nfs/server2/share directory, I get:

        mkdir: cannot create directory `share': No such file or directory

    I think this is because it doesn't really think the /nfs/server2 directory exists. Even with the -p option, mkdir doesn't work. Next I tried to remove /nfs/server2 so I could just recreate it, but rm -r /nfs/server2 gives:

        rm: cannot remove directory `/nfs/server2': Device or resource busy

    So now I'm at a loss. I need to mount this NFS share in exactly the same place on server1 (at /nfs/server2/share) because other software on server1 depends on it. But if I can't create that share directory and I can't remove that directory, what do I do? Also, just for testing, I mounted the share at /nfs/testing/share and it mounted just fine. But like I said, I need to mount it back in the same location.
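
    "Device or resource busy" here usually means the old, now-dead NFS mount is still registered at that path, which would also explain the 0-byte directory listing. A sketch of the usual cleanup (the server2:/share export path is an assumption):

        umount -f /nfs/server2/share    # force unmount; NFS-aware
        umount -l /nfs/server2/share    # or lazy unmount: detach now, clean up later
        # once detached, the path behaves like a normal directory again:
        mkdir -p /nfs/server2/share
        mount server2:/share /nfs/server2/share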

    Read the article

  • Setting up a network e-mail server

    - by Jason
    Hello, My boss just asked me to buy a new server for our office network. I know next to nothing about servers and networking, so I need someone to point me in the right direction. He said he wants this to be our e-mail server with a network login. I have no idea how to set up an e-mail server, especially one that sends/receives e-mail using our domain name. We use a terrible piece of order/inventory software called Mail Order Manager (MOM). Our computers currently connect to the MOM database through a networked drive. My boss would like to move away from this peer-to-peer MOM setup. The software publisher offers a SQL version of MOM, but it's way overpriced. Is there a better way to connect to these databases without using the SQL version? Finally, the server needs to be running Windows. Does this question make sense, is it possible, and can someone help me get started? Thanks!

    Read the article

  • Linking Linux MIT Kerberos with a Windows 2003 Active Directory

    - by Beerdude26
    Greetings. I was wondering how one might link a Linux MIT Kerberos with a Windows 2003 Active Directory to achieve the following:

        1. A user, [email protected], attempts to log in at an Apache website, which runs on the same server as the Linux MIT Kerberos.
        2. The Apache module first asks the local MIT Kerberos whether it knows a user by that name and realm.
        3. The MIT Kerberos finds out it isn't responsible for that realm, and forwards the request to the Windows 2003 Active Directory.
        4. The Active Directory replies positively and gives this information to the MIT Kerberos, which in turn tells the Apache module, which grants the user access to its files.

    Here is an image of the situation: http://img179.imageshack.us/img179/5092/linux2k3.png (I'm not allowed to embed images just yet.) The documentation I have read concerning this issue often differs from this problem: some of it discusses linking an MIT Kerberos with an Active Directory to gain access to resources on the Active Directory server, while other material uses the link to authenticate Windows users to the MIT Kerberos through the Windows 2003 Active Directory. (My problem is the other way around.) So what my question boils down to is this: is it possible to have a Linux MIT Kerberos server pass through requests for an Active Directory realm, and then have it receive the reply and give it to the requesting service? (Although it's not a problem if the requesting service and the Windows 2003 Active Directory communicate directly.) Suggestions and constructive criticism are greatly appreciated. :)
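
    As far as I know, an MIT KDC won't proxy requests for a foreign realm; what usually stands in for this is client-side realm mapping plus a cross-realm trust (matching krbtgt/REALM-A@REALM-B principals on both sides), so the Apache module chases the referral itself. A krb5.conf sketch with placeholder realm and host names:

        [realms]
            LINUX.EXAMPLE.COM = {
                kdc = linux-kdc.example.com
            }
            AD.EXAMPLE.COM = {
                kdc = ad-dc.example.com
            }

        [domain_realm]
            .ad.example.com = AD.EXAMPLE.COM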

    Read the article

  • Lighttpd getting 403 forbidden page

    - by Ramesh
    I have newly installed lighttpd on Ubuntu 9.10. At first it showed the default page; then I changed the permissions of the /var/www/ directory to 777, and now it's saying 403 Forbidden.

        php-cgi -v
        PHP 5.2.10-2ubuntu6.4 with Suhosin-Patch 0.9.7 (cgi-fcgi) (built: Jan 6 2010 22:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies

        php -v
        PHP 5.2.10-2ubuntu6.4 with Suhosin-Patch 0.9.7 (cli) (built: J 6)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies

    I have added these lines to the lighttpd.conf file:

        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket"   => "/tmp/php.socket" )))

    Still getting the same error....
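
    One common cause of the 403 is that mod_fastcgi isn't loaded at all, so the .php handler never registers. A sketch of a known-good shape for the config (note the => arrows, which pasting often mangles):

        server.modules += ( "mod_fastcgi" )

        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket"   => "/tmp/php.socket" ))
        )

    On Ubuntu the packaged way is usually: sudo lighty-enable-mod fastcgi && sudo /etc/init.d/lighttpd force-reload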

    Read the article

  • Open vSwitch and Xen Private Networks

    - by Joe
    I've read about the possibility of using Open vSwitch with Xen to route traffic between domUs on multiple physical hosts. I'd like to be able to group the multiple domUs I have spread across multiple physical hosts into a number of private networks. However, I've found no documentation on how to integrate Open vSwitch with Xen (rather than XenServer), and I'm unsure how to go about doing so and then creating the private networks described. As you may have gathered, from my research I think Open vSwitch can do what I need; I just can't find anything giving me a push in the right direction on how to actually use it! This may well be because Open vSwitch is quite new (version 1.0 was released on May 17). Any pointers in the right direction would be much appreciated!
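
    A sketch of the general shape, assuming the Open vSwitch userspace tools are installed: substitute an OVS bridge for the Linux bridge Xen normally uses, attach each domU's vif to it, and use VLAN tags (or GRE tunnels between hosts) for the private groupings. Bridge, interface and tag names below are examples:

        ovs-vsctl add-br xenbr0                   # OVS bridge in place of the Linux bridge
        ovs-vsctl add-port xenbr0 eth0            # physical uplink between hosts
        ovs-vsctl add-port xenbr0 vif1.0 tag=10   # domU vif, private network 10
        ovs-vsctl add-port xenbr0 vif2.0 tag=20   # another domU, separate network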

    Read the article

  • Wire VMWare Player NIC to a VLAN in Ubuntu 8.04.3

    - by Sophie Charlesworth
    I've got VMware Player 2.5.x installed on an Ubuntu 8.04.3 host, with a CentOS 5.3 guest running Cobbler. The VM has two NICs (I actually took this image from an ESXi image and converted it to a Player 2.x image via VMware Standalone Converter). I've also set up a VLAN (vlan5) on the host with 10.0.0.x, and I'd like Cobbler to use that VLAN to serve any incoming requests. How do I wire up my VM to use the VLAN I've set up? Just one of the NICs. What I'm trying to do is to offer a laptop with a VM that our sysadmins can take, plug into a box (which does not connect to the interwebs) and install RHEL images via Cobbler. So essentially, it's a crossover cable from the network port on the laptop to the Dell server box: PXE boot the Dell box and install RHEL. I have Cobbler working fine under VMware ESXi but not on VMware Player, because of the VLAN issue, I think. Any ideas?
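
    One approach, sketched with assumptions: create an 802.1q subinterface for VLAN 5 on the host, then point Player's bridged vmnet at that interface when re-running VMware's network setup (interface names and addressing are examples):

        modprobe 8021q
        vconfig add eth0 5                                 # creates eth0.5 (VLAN 5)
        ifconfig eth0.5 10.0.0.1 netmask 255.255.255.0 up
        vmware-config.pl    # choose eth0.5 as the interface for the bridged vmnet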

    Read the article

  • Samba: session setup failed: NT_STATUS_LOGON_FAILURE

    - by stivlo
    I tried to set up Samba with "unix password sync", but I still get a logon failure. I am running Ubuntu Natty Narwhal.

        $ smbclient -L localhost
        Enter stivlo's password:
        session setup failed: NT_STATUS_LOGON_FAILURE

    Here is my /etc/samba/smb.conf:

        [global]
            workgroup = obliquid
            server string = %h server (Samba, Ubuntu)
            dns proxy = no
            log file = /var/log/samba/log.%m
            max log size = 1000
            syslog = 0
            panic action = /usr/share/samba/panic-action %d
            security = user
            encrypt passwords = true
            passdb backend = tdbsam
            obey pam restrictions = yes
            unix password sync = yes
            passwd program = /usr/bin/passwd %u
            passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
            pam password change = yes
            map to guest = bad user

        [www]
            path = /var/www
            browsable = yes
            read only = no
            create mask = 0755

    After modifying I restarted the servers:

        $ sudo restart smbd
        $ sudo restart nmbd

    However, I still can't log on with my Unix username and password. Can anyone please help? Thank you in advance!
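
    One thing "unix password sync" does not do is create the Samba account in the first place; it only keeps the two password databases in step when a password is changed. A likely fix, assuming the Unix user stivlo already exists:

        sudo smbpasswd -a stivlo     # add the user to the tdbsam passdb
        sudo smbpasswd -e stivlo     # make sure the account is enabled
        smbclient -L localhost -U stivlo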

    Read the article

  • Uploadify Flash Uploader and Random UPLOAD_ERR_CANT_WRITE errors

    - by dcneiner
    I am using Uploadify to provide progress-bar support for file uploads on a PHP app I built. It works perfectly for a few uploads, then every few uploads it fails, and the data from the $_FILES array reveals an UPLOAD_ERR_CANT_WRITE error (error code 7). I ran Paros proxy between my browser and the server to see the difference between a passing and a failing request. The only difference was the content separator for the multipart POST, which changes every time. I would conclude this was entirely a server error, except that with a plain-jane form I cannot reproduce the error. I am not a server guy, so please let me know what information is needed to troubleshoot this and I will update the question with those details. I did place these lines in the .htaccess, but to no avail. The site is hosted on Rackspace Cloud Sites, so my configuration options are limited:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>
        php_value upload_max_filesize 10M
        php_value post_max_size 10M
        php_value max_execution_time 200
        php_value max_input_time 200
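
    UPLOAD_ERR_CANT_WRITE means PHP accepted the upload but failed to write it to its temporary directory, so intermittent failures often point at a full or contended upload_tmp_dir on the shared host. A sketch of a workaround, assuming the host honours a per-site php.ini (the directory is an example and must exist and be writable):

        ; custom php.ini sketch
        upload_tmp_dir = /path/to/your/site/tmp
        upload_max_filesize = 10M
        post_max_size = 10M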

    Read the article

  • How do you organise the cables in your racks?

    - by Tim
    I'm migrating my current half-size rack to a full-size rack and want to take the opportunity to reorganize and sort our spaghetti-hell of ethernet cables. What system do you use for organising your cables? Do you use any tracking software? Do you physically label the cables? What are you identifying when you label each end? Mac address? Port number? Asset number? What do you use to label them? I was looking at a hand held labeler, but the wrap around laser printer sheets might work. The Brady ID PAL seems good, but it's pricey. Ideas?

    Read the article

  • Reverse ssh tunneling with Tomato

    - by Deivuh
    Since my ISP restricts some incoming connections, I can't access my home PC remotely. What I'm trying to do is make a reverse SSH connection from my home router (running the Tomato firmware) to the office computer, so I can access it remotely from the office through that open connection. What I'm doing is running the following from the router:

        ssh -R 12345:localhost:22 oUser@office

    Then I leave top running to keep the connection alive. From my office what I do is run the following:

        ssh hUser@localhost -p 12345

    but I get the following message with verbose on:

        OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to localhost [::1] port 19999.
        debug1: Connection established.
        debug1: identity file /home/oUser/.ssh/id_rsa type -1
        debug1: identity file /home/oUer/.ssh/id_rsa-cert type -1
        debug1: identity file /home/oUser/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/oUser/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    I have password remote access enabled in Tomato's configuration, so I should be able to get in without having the public key in authorized_keys, but I've even tried adding it, and still the same. I've done the same with my home computer, and it works perfectly, but it doesn't with the router. Am I doing something wrong? Thanks in advance.
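
    Two things stand out in the debug output: the office-side client is actually connecting to ::1 port 19999, not 127.0.0.1 port 12345 (the remote forward normally binds only to the IPv4 loopback), so forcing IPv4 and the intended port is worth a try; and the tunnel can keep itself alive without leaving top running. A sketch (OpenSSH flags; if the router only has dropbear's dbclient, the options differ slightly):

        # router side: no remote shell, built-in keepalives
        ssh -N -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
            -R 12345:localhost:22 oUser@office

        # office side: force IPv4 and the port the forward actually uses
        ssh -4 -p 12345 hUser@localhost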

    Read the article

  • Nagios returns "No output returned from plugin" running process

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the following NRPE configuration. The check_users, check_load, etc. metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" (No output returned from plugin). As far as I understand, Nagios simply runs the ps command and looks for either the argument strings or the name of the command to verify whether the service is running. Also, with the -c flag, one can give Nagios a threshold to determine the output (i.e. -c 1 returns OK if it finds at least 1 process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts=127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
            use             generic-host    ; Name of host template to use
            host_name       plum
            alias           plum
            address         10.0.2.88
            check_command   check-host-alive-by-ssh
        }
        ...
        # Check api-proxy-server
        define service{
            use                 generic-service
            host_name           plum
            service_description check api proxy service
            check_command       check_nrpe!check_server_proxy
        }
        define service{
            use                   generic-service   ; Name of service template to use
            host_name             plum
            service_description   CHECK_NGINX
            check_period          24x7
            max_check_attempts    3
            normal_check_interval 5
            retry_check_interval  3
            check_command         check_nrpe!check_nginx
            notifications_enabled 1
        }

    Also, when I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output:

        PROCS OK: 1 process with args 'api-v1/server.js'

    I would really appreciate any pointers that might help me figure out why the NRPE command does not return the desired output on the Nagios server panel.
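
    Since the plugin works locally, the next step is to exercise the NRPE path end to end; "No output returned from plugin" usually means the command name isn't defined on the client the server is actually querying, or nrpe wasn't restarted after the config change. A sketch:

        # from the Nagios server, call the client's nrpe daemon directly:
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_nginx
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy

        # on the client, restart nrpe after editing its config (path may differ):
        sudo /etc/init.d/nagios-nrpe-server restart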

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have two cPanel web servers, little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceed more than 10% CPU usage, and have plenty of RAM. The problem, though, is that they both have RAID 1 160GB SAS hard drives that are 75% full and growing by the day. I didn't think the amount of disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:

        - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
        - Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Anyone used a TOE key?
        - Anyone benchmarked or had any performance issues with SAN over NAS?

    Any help greatly appreciated. Scott
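
    For the mount itself, NFS is the usual route for /home in setups like this, since a SAN block device can only be mounted on one host at a time without a cluster filesystem. A client-side sketch (the hostname, export path and options are examples):

        # /etc/fstab on each web server
        storage1:/export/home  /home  nfs  rw,hard,intr,rsize=32768,wsize=32768,noatime  0 0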

    Read the article

  • We have no SW Firewall behind our office HW firewall, admin says its not req'd

    - by Makach
    I've recently changed jobs and been set up with a new workstation. At all the previous places where I've worked, they had some sort of local firewall installed on each and every workstation, but here I've been told not to activate it, because it is not necessary since we're already behind a hardware firewall. To me this seems a bit naïve, but I can't argue the point convincingly. I always thought a local firewall was good practice: if something managed to come through the hardware firewall, there might be a slight chance the other computers on the LAN would block the internal threat. We have free access to the internet, and a virus checker is installed.

    Read the article

  • Reverse Proxy FTP traffic

    - by TonyZ
    I was wondering if anyone knew of a reverse proxy server that can reverse proxy FTP traffic. I would like to present many servers on one IP address, passing the traffic through to internal servers with their own IP addresses. Thank you for any suggestions.
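
    Plain TCP reverse proxies struggle with FTP because of the separate data connections, so on Linux this is usually done with NAT plus the kernel's FTP connection-tracking helper rather than an application-level proxy. A sketch (addresses are examples):

        modprobe nf_conntrack_ftp
        modprobe nf_nat_ftp
        # forward control-channel traffic for one public IP to an internal server;
        # the ftp helper then tracks and translates the data connections:
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 21 \
                 -j DNAT --to-destination 192.168.1.10:21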

    Read the article

  • Why Hebrew letters in the address bar break the ARR gateway (Only With Explorer 8,9,10)?

    - by Noamway
    ARR is working great in all browsers except Internet Explorer 8, 9 and 10. When I paste a Hebrew URL directly into the address bar it works fine, but when I surf (click on a simple href link) from one Hebrew-URL page to another, ARR returns this error:

        502 - Web server received an invalid response while acting as a gateway or proxy server.
        There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.

    I checked it a number of times, including with HTTP Analyzer, and I saw that the Referer header is causing all the problems and triggering that error. For example, when I go to the page mydomain.com/somehebrewchars (mydomain.com/????, you will need Hebrew installed) and click a link to mydomain.com/somehebrewchars2 (mydomain.com/????????, you will need Hebrew installed), I get the error above, and when you look at the referrer you will see something like:

        mydomain.com/עמוד-× ×—×™×ª×”

    We use other proxy applications on other projects and we don't have an issue like this. For this example we used Windows Server 2008 and 2012 with ARR 2.5 and also the 3 beta. Any help is welcome :-) Thanks, Noam

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
    I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in the active state. Here is why:

        select name, recovery_model_desc, log_reuse_wait_desc
        from sys.databases where name in ('SimpleDB')

        name      recovery_model_desc  log_reuse_wait_desc
        SimpleDB  SIMPLE               ACTIVE_TRANSACTION

    When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noonish.

        select transaction_id, name, transaction_begin_time, transaction_type
        from sys.dm_tran_active_transactions

        transaction_id  name                          transaction_begin_time   transaction_type
        233             worktable                     2010-12-25 12:44:29.283  2
        236             worktable                     2010-12-25 12:44:29.283  2
        238             worktable                     2010-12-25 12:44:29.283  2
        240             worktable                     2010-12-25 12:44:29.283  2
        243             worktable                     2010-12-25 12:44:29.283  2
        245             worktable                     2010-12-25 12:44:29.283  2
        62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880  1
        55422856        user_transaction              2010-12-28 16:41:56.703  1
        55422889        SELECT                        2010-12-28 16:41:57.303  2
        470             LobStorageProviderSession     2010-12-25 12:44:30.510  2

    Note that according to the documentation a transaction_type of 1 means read/write, and 2 means read-only. So my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I erase that transaction so that my transaction log is truncated as usual?
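
    A sketch of how to pin down and clear the blocker: DBCC OPENTRAN reports the oldest active transaction in a database, and the name tran_sp_MScreate_peer_tables suggests peer-to-peer replication metadata, so checking for leftover replication settings before killing anything is prudent.

        -- oldest active transaction holding up log truncation:
        DBCC OPENTRAN ('SimpleDB');

        -- map the transaction to a session (id taken from the query above):
        SELECT session_id
        FROM sys.dm_tran_session_transactions
        WHERE transaction_id = 62210;

        -- KILL <session_id>;  -- only once confirmed safe to terminate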

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server... We have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers Linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like:

        BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1k file in BUE. Tools > Options > Backup has "Backup files and directories by following symbolic links/junction points" selected. These same checkboxes are selected under Job Setup > Job Properties > Edit Template > Advanced. Additionally, all the checkboxes are checked under Tools > Options > Linux, Unix, and Macintosh, and under Job Setup > Job Properties > Edit Template > Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.

    Read the article
