Search Results

Search found 12836 results on 514 pages for 'host mechanic'.

Page 102/514 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • opennebula 3.4 in debian squeeze

    - by Jin Splif
    I hope I can get some advice and help. I am installing OpenNebula 3.4 on Debian Squeeze. Everything has been successful so far: I can access the OpenNebula Sunstone web page at localhost:9869 and use the one commands, but when I try to create a host its status becomes ERROR. I hope someone can assist me with this, thanks. Sample log: Monitoring host abc (0) [InM][I]: Command execution fail: 'if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm 0 abc; else exit 42; fi' [InM][I]: ssh: Could not resolve hostname abc: Name or service not known [InM][I]: ExitCode: 255 [InM][E]: Error monitoring host 0 : MONITOR FAILURE 0 -
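
    The log shows the OpenNebula frontend cannot resolve the hostname abc when it tries to SSH to the node, so the monitor probes never run. A minimal check before re-adding the host, assuming the node's real address is 192.168.1.50 (a made-up value for illustration):

      # Does the name the frontend will use actually resolve?
      getent hosts abc || echo "abc does not resolve"

      # If there is no DNS entry, add one to /etc/hosts on the frontend (example IP)
      echo "192.168.1.50  abc" | sudo tee -a /etc/hosts

      # Monitoring runs over SSH as oneadmin, so this must work without a password prompt
      sudo -u oneadmin ssh abc hostname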

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen for HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run Ubuntu and also run an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration.

    On the server (Proxmox), /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }

    /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file; it's been done quickly): user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } }

    I've read many blog posts and answers before resorting to posting my own question... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double-checked my settings, and all seems fine. So I'm wondering whether I am expecting nginx's cache to do something it's not meant to do? Basically, I thought that if one of my static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it... But I now have the feeling I misunderstood many things. Above all, I now wonder: how can nginx on the server know that a file in the container has changed? I have seen a directive proxy_header_pass (or something alike); should I use this to let the nginx instance on the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
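
    For what it's worth, nginx's proxy cache does not watch the upstream for changes; a cached response is simply reused until proxy_cache_valid (12h here) expires, so edits on the container are invisible until then. A rough workaround under the configuration above, given only as a sketch: with proxy_cache_key "$scheme://$host$request_uri" and levels=1:2, each cached object lives in a file named after the MD5 of that key, so a specific URL can be purged by hand on the Proxmox host. The URL in the usage comment is a made-up example; shortening proxy_cache_valid or using proxy_cache_bypass with a special request header are other options.

      #!/bin/sh
      # purge-url.sh: remove one object from the proxy cache on the Proxmox host
      # Usage: ./purge-url.sh "http://www.my-domain.com/css/site.css"   (example URL)
      URL="$1"
      KEY=$(printf '%s' "$URL" | md5sum | cut -d' ' -f1)    # cache file name is the MD5 of the cache key
      L1=$(printf '%s' "$KEY" | tail -c 1)                  # levels=1:2: last hash character is the first directory
      L2=$(printf '%s' "$KEY" | tail -c 3 | head -c 2)      # the two characters before it form the second directory
      rm -f "/var/cache/nginx/$L1/$L2/$KEY" && echo "purged $URL"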

    Read the article

  • Bind dns server in Solaris 10 and win xp clients

    - by stevecomptech
    Hi. I added this to the zone DB file (I am running Solaris 10): _ldap._tcp.mydomain.com. SRV 0 0 389 dc.mydomain.com. _kerberos._tcp.mydomain.com. SRV 0 0 88 dc.mydomain.com. _ldap._tcp.dc._msdcs.mydomain.com. SRV 0 0 389 dc.mydomain.com. _kerberos._tcp.dc._msdcs.mydomain.com. SRV 0 0 88 host.mydomain.com. Now I get this error when I try to join Windows XP to the domain: The query was for the SRV record for _ldap._tcp.dc._msdcs.mydomain.com The following domain controllers were identified by the query: host.mydomain.com Common causes of this error include: Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses. Domain controllers registered in DNS are not connected to the network or are not running. What do I need to change so that my Windows XP clients can join the domain?
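
    A quick way to see what the XP client sees is to query the SRV record and then the A records of the targets it returns; note that the record set above points the _msdcs Kerberos entry at host.mydomain.com while the others point at dc.mydomain.com, so both names need to resolve. A sketch, where ns1.mydomain.com stands in for your BIND server's name or IP:

      # What does the _msdcs SRV record return?
      dig @ns1.mydomain.com _ldap._tcp.dc._msdcs.mydomain.com SRV +short

      # Do the SRV targets have A records?
      dig @ns1.mydomain.com dc.mydomain.com A +short
      dig @ns1.mydomain.com host.mydomain.com A +short

      # The same check from the XP client:  nslookup -type=SRV _ldap._tcp.dc._msdcs.mydomain.com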

    Read the article

  • XenServer/Center: Shared SRs for hosts not in same pool?

    - by 3molo
    I would like to use the same SRs on XenServer hosts that cannot be part of the same pool (because they do not have the exact same CPU feature set, if I understand it correctly), in order to share templates, be able to (manually) start a VM on another node, back up running VMs on other hardware, etc. The SR technology can be any of iSCSI, NFS or CIFS; iSCSI would obviously be preferred. Trying to add an iSCSI volume produces "This LUN is already in use as SR iSCSI - Shared Storage on pool xxxxxx.". Adding an NFS share on one XS host, creating a template there and then checking another XS host reveals they don't agree on used space, etc. Coming from a vSphere world this is quite baffling, but if these are limitations then I will have to rethink some of the concepts for this low-budget project.

    Read the article

  • Getting kernel errors when manually mounting VirtualBox shared folders

    - by Ross
    Updated: I've rephrased this problem as I understand it a bit more now (and have encountered another problem). I'm using a Fedora 15 guest on a Windows 7 host using VirtualBox. I am trying to mount a partition on the host PC as a shared folder for use in the guest. The folder appears in /media and is accessible when I use the auto-mount feature when setting up the shared folder, but when I attempt to mount without auto-mount I get the following error: $ sudo mount.vboxsf data /mnt/host_data /sbin/mount.vboxsf: Could not add an entry to the mount table.: Invalid argument In addition a popup appears (part of Fedora/GNOME) reporting a crash in the kernel package: WARNING: at lib/list_debug.c:26 __list_add+0x3e/0x81() However the shared folder seems to work, I can certainly browse it (although everything seems to be executable, probably down to a Windows host). Is there something wrong with what I'm doing or is this a bug (and in which case should it be reported to the Linux Kernel team or VirtualBox)?
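
    For reference, a manual mount of a VirtualBox shared folder goes through the vboxsf filesystem type, and the "Could not add an entry to the mount table" error often shows up when the share is already auto-mounted elsewhere or the ownership options are missing. A sketch, assuming the shared folder is named data and your user's uid/gid are 1000 (both assumptions):

      # If the auto-mounted copy is in the way, unmount it first
      sudo umount /media/sf_data 2>/dev/null

      # Mount manually with sane ownership instead of root-owned, everything-executable files
      sudo mount -t vboxsf -o uid=1000,gid=1000,fmode=0644,dmode=0755 data /mnt/host_data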

    Read the article

  • Hosting several domains on one server using IIS 7

    - by Øyvind Knobloch-Bråthen
    I have created several web sites inside IIS 7 on my server. All of them use the same IP and port, but different host names. Currently I have set the host name to www.mydomain.com. Now my question is: how do I get my actual domains to target the different sites on my server? Second question: can I set my host name to only mydomain.com to make sure that all requests to that domain are handled by the same application? Primarily, I want both www.mydomain.com and mydomain.com to work when the user types the address in their browser.
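
    A hedged sketch of the usual IIS 7 answer: point both DNS names at the server's IP, then give the site one HTTP binding per host name it should answer, so www.mydomain.com and mydomain.com land on the same application. "MySite" below is a placeholder for the site's name in IIS Manager:

      rem Add host-header bindings for both forms of the domain to the same site
      %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:www.mydomain.com']
      %windir%\system32\inetsrv\appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:mydomain.com']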

    Read the article

  • How do you set up DNS in Window Server 2008 in a Hyper-V environment?

    - by Nathan DeWitt
    I have a laptop running Server 2008 and Hyper-V. I have created a virtual machine that is also running Server 2008, that I used dcpromo to create as a domain controller. I disabled IPv6 because I had no idea how to enter a default address, and I just wanted to make a standalone MOSS dev environment. I have tried every combination of creating a virtual network on the host and then connecting to that in the VM, but I can't get the VM to communicate with the host and vice versa. No pinging, no copy and paste, nothing. Thanks. To update: My VM (which is its own DC) currently does not have a static IP. When I set the IP to static, I could not find anything that would let it talk to the host machine.
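
    One common pattern, offered as a sketch rather than the one true setup: create an Internal virtual network in Hyper-V Manager (which adds a matching virtual adapter on the host), attach the VM's NIC to it, then give both sides static addresses in the same private subnet and point the DC's DNS at itself. The adapter names and 192.168.137.x addresses below are assumptions to adjust:

      rem On the host, on the adapter created for the Internal virtual network
      netsh interface ip set address "Local Area Connection 2" static 192.168.137.1 255.255.255.0

      rem Inside the DC guest
      netsh interface ip set address "Local Area Connection" static 192.168.137.2 255.255.255.0
      netsh interface ip set dns "Local Area Connection" static 192.168.137.2

      rem Then test connectivity from either side
      ping 192.168.137.1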

    Read the article

  • DNS Replication issue

    - by BillN
    We host the DNS for our domain. Two weeks ago, the developer requested that we set up a new zone 'dev.ourdomain.com' and place two host records in it: my.dev.ourdomain.com and admin.dev.ourdomain.com. We added the zone to our DNS and added A records for the hosts. Now, a week later, some DNS servers like Google (8.8.8.8) and GTEI (4.2.2.2) will resolve the hosts, but others like OpenDNS (208.67.222.222) and AT&T Uverse (68.94.156.1) cannot resolve them. Any ideas?
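
    A frequent cause of this pattern is creating dev.ourdomain.com as a separate zone without delegating it from the parent (NS records for dev inside ourdomain.com); stricter resolvers then fail while lenient ones happen to work. A hedged diagnostic, where ns1.ourdomain.com stands in for your authoritative server:

      # Follow the delegation chain from the root; the parent zone should hand out NS records for dev
      dig +trace my.dev.ourdomain.com A

      # Compare a resolver that works with one that does not
      dig @8.8.8.8 my.dev.ourdomain.com A +short
      dig @208.67.222.222 my.dev.ourdomain.com A +short

      # Ask the parent zone directly whether it delegates the subzone
      dig @ns1.ourdomain.com dev.ourdomain.com NS +short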

    Read the article

  • Ping from windows 7 get no reply but sets errorlevel to 0

    - by Doron
    From a Windows 7 machine, I ping an IP address of a turned-off machine. C:\>ping 192.168.1.222 Pinging 192.168.1.222 with 32 bytes of data: Reply from 192.168.1.222: Destination host unreachable. Reply from 192.168.1.222: Destination host unreachable. Reply from 192.168.1.222: Destination host unreachable. Ping statistics for 192.168.1.222: Packets: Sent = 3, Received = 3, Lost = 0 (0% loss) Even though there is no reply, the errorlevel is set to 0. What I am trying to do, is figure out if a remote machine is replying to ping. One of my tests is to turn off the machine and ping it. For some reason, ping sets errorlevel to 0.
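
    This is expected behaviour: "Destination host unreachable" arrives as a reply packet, so ping counts it as received and exits with 0. A more reliable liveness check in a batch file is to look for a real echo reply (the "TTL=" text) rather than the exit code; a small sketch:

      @echo off
      rem up-or-down.cmd: report whether a host answered a real echo request
      set TARGET=192.168.1.222
      ping -n 2 -w 1000 %TARGET% | find "TTL=" >nul
      if errorlevel 1 (echo %TARGET% is DOWN) else (echo %TARGET% is UP)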

    Read the article

  • How to dynamically set HTTP Header in Apache 2.2?

    - by Michael
    Seems like this should be easy, but I cannot figure out the syntax. In Apache, I want to use the value of an existing request header to set a new request header. Some simple non-working code that illustrates what I'd like to do: RequestHeader set X-Custom-Host-Header "%{HTTP_HOST}e" Ideally, this would make a new HTTP header in the request called "X-Custom-Host-Header" that contains the value of the existing Host header. But it does not. Perhaps I need to copy the existing header into an environment variable first? (If so, I can't figure out how to do that either.) I feel like I'm missing something obvious, but I've gone over the Apache docs and I can't figure it out. Thanks for any help.
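
    In Apache 2.2 the RequestHeader value only expands %{VARNAME}e for environment variables (plus a few built-ins), and HTTP_HOST is not an environment variable at that point, which is why the attempt above stays empty. The copy-to-environment step guessed at in the question is indeed the usual workaround; a sketch for httpd.conf, with ORIGINAL_HOST as an arbitrary variable name (mod_setenvif and mod_headers must be loaded):

      # Copy the incoming Host header into an environment variable, then emit it as a new request header
      SetEnvIf Host "^(.*)$" ORIGINAL_HOST=$1
      RequestHeader set X-Custom-Host-Header "%{ORIGINAL_HOST}e" env=ORIGINAL_HOST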

    Read the article

  • Two mail servers, need help with dns configuration for the backup one

    - by user92231
    I need to run a redundant backup mail server in case the main one goes down. The settings in GoDaddy look something like the following:

    A (Host) records:
      @      points to the IP address of mail1 (41.x.x.x)
      mail1  points to the IP address of mail1 (41.x.x.x)
      mail2  points to the IP address of mail2 (196.x.x.x)

    MX records:
      priority 10, host @, points to mail1.mydomain.com
      priority 20, host @, points to mail2.mydomain.com

    When mail1 goes down, mail2 is able to get emails. I can access it through the browser with no problem, but I want my users to be able to use POP3/SMTP as well without changing anything in their Outlook. I don't want any impact on the users when mail1 is down. Also, I'm using Windows Server DFS to keep both mail folders in sync. Is this the right way, or should I be using something else?

    Read the article

  • Powershell if statement not working - help!

    - by Dmart
    I have a batch file that takes a computer name as its argument and installs the Java JRE on it remotely. Now I'm trying to run this PowerShell script to repeatedly call the batch file and install Java on any system it finds without the latest version. It seems to run error-free, but the statements inside the if block never seem to run, even when the if condition evaluates to true. Can anyone look at this script and point out what I'm possibly missing? I'm using the Quest AD cmdlets and the BSOnPosh module. Thank you.

    get-qadcomputer -sizelimit 0 -name mypc* -searchroot 'OU=MyComputers,DC=MyDomain,DC=lcl' | test-host -property name | ForEach-Object -process {
        $targnm = $_.name
        $tststr = reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion
        if(-not ($tststr | select-string -SimpleMatch '1.6.0_20')) {
            $mssg = "Updating to JRE 6u20 on $targnm"
            Out-Host $mssg
            Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
            cmd /c \\server\apps\java\installjreremote.cmd $targnm
        }
        else {
            $mssg = "JRE 6u20 found on $targnm"
            Out-Host $mssg
            Out-File -filepath c:\install_jre_log.txt -inputobject $mssg -Append
        }
    }
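
    A guess rather than a confirmed diagnosis: Out-Host does not take the message as a positional argument, and reg query returns an array of lines, both of which make the branch hard to reason about. A hedged rewrite of the loop body that is easier to debug (variable names kept from the question):

      $targnm = $_.name
      # Capture reg query output as one string; suppress the error shown when the key is missing
      $tststr = (reg query "\\$targnm\HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v Java6FamilyVersion 2>$null) | Out-String
      Write-Host "reg query for $targnm returned: $tststr"
      if ($tststr -notmatch '1\.6\.0_20') {
          $mssg = "Updating to JRE 6u20 on $targnm"
          Write-Host $mssg
          Out-File -FilePath c:\install_jre_log.txt -InputObject $mssg -Append
          cmd /c \\server\apps\java\installjreremote.cmd $targnm
      }
      else {
          $mssg = "JRE 6u20 found on $targnm"
          Write-Host $mssg
          Out-File -FilePath c:\install_jre_log.txt -InputObject $mssg -Append
      }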

    Read the article

  • using wbadmin to backup and recover

    - by g7rpo
    Hi. I am using wbadmin to perform backups of a specific folder, primarily to back up my VHD files. This is working fine, but I tried to recover the files today using a different machine from the one which created the backup and couldn't get the machine doing the recovery to 'see' the backups. Is there a way to do this? My worry is that if I have a failure on the host which is performing the backups, I need to be able to install Hyper-V on another host and recover the backed-up VMs there until I can rebuild the original host. It appears that this isn't possible; I am hoping I am missing something. Any help would be greatly appreciated.
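
    Recovery on a different machine generally works if you point wbadmin at the backup location and name the original computer explicitly; without -machine it only looks for backups made by the local host. A sketch, with \\backupserver\backups, HYPERV01, and the paths as placeholder values:

      rem List backup versions created by the original host
      wbadmin get versions -backupTarget:\\backupserver\backups -machine:HYPERV01

      rem Restore the VHD folder from a chosen version onto the new host
      wbadmin start recovery -version:07/31/2012-10:00 -itemType:File -items:D:\VHDs -backupTarget:\\backupserver\backups -machine:HYPERV01 -recoveryTarget:D:\Restore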

    Read the article

  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system covering multiple hosts. SEC seems interesting, but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. from multiple hosts at the same time, in real time? Examples: if 5 failed logins happened on host A in the last minute and firewall B has denied lots of access attempts on different ports of A, then we assume there is a potential attack in progress on A. If the Apache service on host A didn't receive any requests for the last N minutes and the Apache service on host B did, then the load balancing could be faulty.
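
    One common layout is to forward all the hosts' logs to a central syslog server and run the correlator there, so events from host A and firewall B arrive in one stream. For the failed-login half of the first example, a rough SEC rule, given only as a sketch (the pattern matches a typical sshd message and will need adapting to your logs):

      # 5 failed logins from the same source within 60 seconds raise a warning event
      type=SingleWithThreshold
      ptype=RegExp
      pattern=sshd\[\d+\]: Failed password for .* from ([\d.]+)
      desc=Failed logins from $1
      action=write - POSSIBLE ATTACK: 5 failed logins from $1 within 60s
      window=60
      thresh=5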

    Read the article

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp: service tftp { disable = no socket_type = dgram protocol = udp wait = yes user = root server = /usr/sbin/in.tftpd server_args = -c -s /var/lib/tftpboot per_source = 11 cps = 100 2 flags = IPv4 } If I try to PUT a file from a remote host to the host running the TFTP server, I get Transfer Timed Out - however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue; it just seems to be the PUT I have an issue with: [root@SVR01 TEST]# tftp 10.100.2.15 tftp> status Connected to 10.100.2.15. Mode: netascii Verbose: off Tracing: off Literal: off Rexmt-interval: 5 seconds, Max-timeout: 25 seconds tftp>
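
    Since the file gets created but stays empty, the initial write request reaches the server while the data packets (which go to a new ephemeral UDP port chosen by in.tftpd) apparently do not, which usually points at filtering somewhere between the two hosts. A hedged way to see where they stop is to capture on the server while retrying the PUT:

      # On the TFTP server: watch all UDP traffic to and from the client while the PUT runs
      tcpdump -n -i any udp and host <client-ip>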

    Read the article

  • Backup virtual hard disk

    - by Harshil Sharma
    I have a VM created in VMware Player. Its VHD is currently about 17 GB, split among multiple 2 GB files. The host OS is Windows 8. I use CrashPlan on the host OS for file backup. The problem is that whenever I use the VM, CrashPlan detects all parts of the VHD as altered and backs up the whole 17 GB again. What I want is software that can run on the host OS (Windows 8), treat the VHD as a physical hard disk, and create incremental backups of the VHD, including all files, programs and the OS.

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests and each is assigned a memory limit of 1 GB (and only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: two gigs for the two VMs plus an overhead for the host. 800 MB seems high, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the Configuration tab's Memory link I see a physical total of 4082.5 MB, System of 531.5 MB and VM of 3551.0 MB. I have only allocated my VMs a gig each, and with two VMs running they are taking up almost twice the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • cygWin connect by SSH using RSA key; ssh.exe couldn't create /home/user/.ssh

    - by Kirzilla
    I'm using Windows XP and I'm trying to connect by SSH to a remote host using an RSA key. I've found that Cygwin recognizes the Documents and Settings dir as the home directory: Z:\app\cwRsync\bin>cygpath -H /cygdrive/c/Documents and Settings I've created a .ssh directory at Documents and Settings/user/.ssh and moved known_hosts, id_rsa and id_rsa.pub there. Now I'm trying to connect via ssh.exe to the remote host: Z:\app\cwRsync\bin>ssh -p 22 [email protected] Could not create directory '/home/user/.ssh'. The authenticity of host '[remotehost.com]:22 ([remotehost.com]:22)' can't be established. RSA key fingerprint is f7:f4:2c:e0:c6:7e:d2:a4:45:70:63:df:bf:f2:84:46. Are you sure you want to continue connecting (yes/no)? What am I doing wrong? Why couldn't ssh.exe create the directory /home/user/.ssh?
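
    The bundled ssh.exe is deriving its home directory as /home/user from its own passwd data rather than from the profile path, so it never sees the .ssh directory you created. Two hedged workarounds, assuming the user name is user and the host taken from the error message:

      rem Option 1: point HOME at the profile directory for this session
      set HOME=C:\Documents and Settings\user
      ssh -p 22 user@remotehost.com

      rem Option 2: hand ssh the key and known_hosts locations explicitly
      ssh -p 22 -i "/cygdrive/c/Documents and Settings/user/.ssh/id_rsa" -o UserKnownHostsFile="/cygdrive/c/Documents and Settings/user/.ssh/known_hosts" user@remotehost.com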

    Read the article

  • HTTP Redirect from www.mydomain.com to my amazon ec2 account (instance)?

    - by fabius
    Hello! I have a domain that is registered with a service provider, but my site (a WordPress blog) is hosted in a shared account with a friend at another hosting service. I want to separate from this friend because I'm tired of bothering him with my blog's downtime. Now, my problem is that I signed up for the Amazon EC2 service and created an instance (a virtual machine) to host my WordPress blog, and now I'd like to point mydomain.com at this instance at Amazon EC2, but I don't know how to proceed to achieve that. The instance at Amazon EC2 is up and running (it's a 64-bit Linux machine), but I couldn't redirect mydomain.com to this instance from my hosting service's web panel. Could someone help me, please?
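
    The usual approach, described as general practice rather than anything specific to this registrar: associate an Elastic IP with the EC2 instance (so the address survives stops and starts), then create records at the company where the domain is registered pointing the bare domain and www at that address. The 203.0.113.10 value below is a documentation example IP:

      ; Records to create in the domain's DNS zone, assuming the Elastic IP is 203.0.113.10
      mydomain.com.        IN  A      203.0.113.10
      www.mydomain.com.    IN  CNAME  mydomain.com.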

    Read the article

  • Error mounting samba share, cannot mount block device xxxx read-only

    - by Jeff Ward
    After installing Ubuntu 12.04, I'm trying to mount a samba share from Windows under Linux, using a scripted command that's always worked, and the server hasn't changed. The error is as follows: $ mount -t cifs //<host>/<share> /media/<share> -o username=<user>,password=<pass> mount: block device //<host>/<share> is write-protected, mounting read-only mount: cannot mount block device //<host>/<share> read-only $ I've read a lot of discussions about permissions, but unfortunately, that wasn't the issue. I'm submitting my own answer below for reference, hope this helps someone else.
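
    One likely suspect for this exact pair of messages on a fresh 12.04 install, offered as a guess: without the CIFS mount helper, mount falls back to treating //host/share as a block device and fails with the write-protected / read-only errors. Checking for and installing the helper is quick:

      # Is the CIFS mount helper present?
      which mount.cifs || echo "mount.cifs missing"

      # On Ubuntu 12.04 it ships in the cifs-utils package
      sudo apt-get install cifs-utils

      # Then retry the mount (as root)
      sudo mount -t cifs //<host>/<share> /media/<share> -o username=<user>,password=<pass>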

    Read the article

  • Remote file copy util (like rsync) but that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files that are identical and quite large (e.g. 2 GB each), and I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum it can just copy the first file, then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copy?

    Read the article

  • How to connect 2 virtual hosts running on the same machine?

    - by Gabrielle
    I have 2 virtual machines running on my Windows XP laptop. One is Ubuntu running inside VMware Player. The other is in MS Virtual PC (so I can test with IE6). The Ubuntu guest is running my web application with Apache. I can point my browser on my laptop at the Ubuntu IP and view my web app. I read this post http://stackoverflow.com/questions/197792/how-to-connect-to-host-machine-from-within-virtual-pc-image and was able to get my Virtual PC to ping my physical machine using the loopback adapter. But I'm stuck on getting my Virtual PC to see my web application running in the Ubuntu VMware Player guest. I appreciate any suggestions.

    Read the article

  • How can I install/uninstall VMware Workstation silently in Linux?

    - by Landy
    I want to install VMware Workstation silently on Windows and Linux hosts. On a Windows host, I can use the silent installation features of the Microsoft Windows Installer, but I haven't found a way to perform a silent install/uninstall on a Linux host; I had to click "I agree" or "Close" to make the installation continue or complete. If Workstation is already installed, I can use vmware-installer -i "path to installation file" --console --required to update Workstation to a different version, but this command only exists after Workstation has been installed.
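
    For reference, the Linux .bundle installer itself accepts console-mode switches, so a first-time install can be scripted without the GUI. The file name below is a placeholder for whichever bundle was downloaded, and --eulas-agreed is the switch that skips the "I agree" prompt on the versions I have seen; treat the exact flags as something to verify against your installer's --help output:

      # Silent install from the downloaded bundle (no existing Workstation required)
      sudo sh VMware-Workstation-Full-x.x.x.bundle --console --required --eulas-agreed

      # Silent uninstall once vmware-installer is present
      sudo vmware-installer -u vmware-workstation --required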

    Read the article

  • Why do servers go down after a lot of traffic?

    - by mohabitar
    I'm working on an iOS app that makes extensive use of databases, where users will be able to sync their data to a server. However, I'm terrified that if too many users start using the app, the servers will no longer be able to handle the load. I'm not a server guy at all and am not too familiar with how this works, but my question is: why do servers get overloaded, and how can that be prevented? Does it have to do with who my server host is, or is it about the efficiency of my code? If my host is a reliable provider, such as Amazon AWS, am I still at risk of server problems? Bottom line: does it have to do with the way I implement my code, or does it have to do with who my host is?

    Read the article
